docs(opc): support request_ts#TS-5728

This commit is contained in:
zyyang 2025-03-13 21:04:08 +08:00
parent 5fb86e29e6
commit 32505dacec
4 changed files with 130 additions and 114 deletions


@@ -108,7 +108,7 @@ The header is the first line of the CSV file, with the following rules:
(1) The header of the CSV can configure the following columns:
| Number | Column Name | Description | Required | Default Behavior |
|--------|-------------|-------------|----------|------------------|
| 1 | point_id | The id of the data point on the OPC UA server | Yes | None |
| 2 | stable | The corresponding supertable for the data point in TDengine | Yes | None |
| 3 | tbname | The corresponding subtable for the data point in TDengine | Yes | None |
@@ -117,11 +117,13 @@ The header is the first line of the CSV file, with the following rules:
| 6 | value_transform | The transformation function executed in taosX for the collected value of the data point | No | Do not transform the collected value uniformly |
| 7 | type | The data type of the collected value of the data point | No | Use the original type of the collected value as the data type in TDengine |
| 8 | quality_col | The column name in TDengine corresponding to the quality of the collected value | No | Do not add a quality column in TDengine uniformly |
| 9 | ts_col | The original timestamp column of the data point in TDengine | No | If two or more of ts_col, request_ts_col, and received_ts_col are configured, the leftmost of them is used as the primary key column in TDengine. |
| 10 | request_ts_col | The timestamp column in TDengine for when the data point value was requested | No | Same as above |
| 11 | received_ts_col | The timestamp column in TDengine for when the data point value was received | No | Same as above |
| 12 | ts_transform | The transformation function executed in taosX for the original timestamp of the data point | No | Do not transform the original timestamp of the data point uniformly |
| 13 | request_ts_transform | The transformation function executed in taosX for the request timestamp of the data point | No | Do not transform the request timestamp of the data point uniformly |
| 14 | received_ts_transform | The transformation function executed in taosX for the received timestamp of the data point | No | Do not transform the received timestamp of the data point uniformly |
| 15 | tag::VARCHAR(200)::name | The Tag column corresponding to the data point in TDengine. Here `tag` is a reserved keyword indicating that this column is a tag; `VARCHAR(200)` indicates the type of the tag; `name` is the actual name of the tag. | No | If 1 or more tag columns are configured, use the configured tag columns; if no tag columns are configured and the stable exists in TDengine, use the tags of the stable in TDengine; if no tag columns are configured and the stable does not exist in TDengine, automatically add the following 2 tag columns: tag::VARCHAR(256)::point_id and tag::VARCHAR(256)::point_name |
(2) In the CSV Header, there cannot be duplicate columns;
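The leftmost-column rule for the timestamp columns above can be sketched as follows. This is an illustrative helper only, not taosX source code, and the function name is hypothetical:

```python
# Illustrative sketch, not taosX source code.
# Among ts_col, request_ts_col and received_ts_col, the leftmost column
# present in the CSV header supplies the TDengine primary key; if none is
# present, the data point's original timestamp is used under the default
# column name "ts".
TS_COLUMNS = {"ts_col", "request_ts_col", "received_ts_col"}

def primary_key_source(header):
    """Return the leftmost timestamp column in `header`, or None for the default."""
    for col in header:
        if col in TS_COLUMNS:
            return col
    return None  # fall back to the original timestamp, default column name "ts"

# request_ts_col appears before ts_col, so it wins:
print(primary_key_source(["point_id", "stable", "tbname", "request_ts_col", "ts_col"]))
# → request_ts_col
```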
@@ -138,7 +140,7 @@ Each Row in the CSV file configures an OPC data point. The rules for Rows are as
(1) Correspondence with columns in the Header
| Number | Column in Header | Type of Value | Value Range | Mandatory | Default Value |
|--------|------------------|---------------|-------------|-----------|---------------|
| 1 | point_id | String | Strings like `ns=3;i=1005`; must meet the OPC UA ID specification, i.e., include the ns and id parts | Yes | |
| 2 | enable | int | 0: Do not collect this point, and delete the corresponding subtable in TDengine before the OPC DataIn task starts; 1: Collect this point, and do not delete the subtable before the OPC DataIn task starts. | No | 1 |
| 3 | stable | String | Any string that meets the TDengine supertable naming convention; if the special character `.` exists, replace it with an underscore. If `{type}` exists: if type in the CSV file is not empty, replace it with the value of type; if type is empty, replace it with the original type of the collected value | Yes | |
@@ -148,10 +150,12 @@ Each Row in the CSV file configures an OPC data point. The rules for Rows are as
| 7 | type | String | Supported types include: b/bool/i8/tinyint/i16/smallint/i32/int/i64/bigint/u8/tinyint unsigned/u16/smallint unsigned/u32/int unsigned/u64/bigint unsigned/f32/float/f64/double/timestamp/timestamp(ms)/timestamp(us)/timestamp(ns)/json | No | Original type of the data point value |
| 8 | quality_col | String | Column name that meets the TDengine naming convention | No | None |
| 9 | ts_col | String | Column name that meets the TDengine naming convention | No | ts |
| 10 | request_ts_col | String | Column name that meets the TDengine naming convention | No | qts |
| 11 | received_ts_col | String | Column name that meets the TDengine naming convention | No | rts |
| 12 | ts_transform | String | Supports +, -, *, /, % operators, for example: ts / 1000 * 1000 sets the last 3 digits of a ms-precision timestamp to 0; ts + 8 * 3600 * 1000 adds 8 hours to a ms-precision timestamp; ts - 8 * 3600 * 1000 subtracts 8 hours from a ms-precision timestamp | No | None |
| 13 | request_ts_transform | String | Supports +, -, *, /, % operators, for example: qts / 1000 * 1000 sets the last 3 digits of a ms-precision timestamp to 0; qts + 8 * 3600 * 1000 adds 8 hours to a ms-precision timestamp; qts - 8 * 3600 * 1000 subtracts 8 hours from a ms-precision timestamp | No | None |
| 14 | received_ts_transform | String | Supports +, -, *, /, % operators, for example: rts / 1000 * 1000 sets the last 3 digits of a ms-precision timestamp to 0; rts + 8 * 3600 * 1000 adds 8 hours to a ms-precision timestamp; rts - 8 * 3600 * 1000 subtracts 8 hours from a ms-precision timestamp | No | None |
| 15 | tag::VARCHAR(200)::name | String | The value inside a tag; when the tag type is VARCHAR, it can be in Chinese | No | NULL |
(2) `point_id` is unique throughout the DataIn task, meaning: in an OPC DataIn task, a data point can only be written to one subtable in TDengine. If you need to write a data point to multiple subtables, you need to create multiple OPC DataIn tasks;
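The ts_transform expressions in the Row table above use only the +, -, *, / and % operators on a single timestamp variable. A minimal evaluator can be sketched as below; this is an illustration under stated assumptions, not the taosX implementation, and it uses integer division so that `ts / 1000 * 1000` zeroes the last three digits of a millisecond timestamp:

```python
import ast

# Sketch (assumption, not taosX source): evaluate a ts_transform expression
# such as "ts / 1000 * 1000" against a millisecond timestamp. Only the
# +, -, *, /, % operators and integer literals from the table are allowed.
# Division is assumed to be integer (floor) division so timestamps stay integral.
_ALLOWED = {
    ast.Add: lambda a, b: a + b,
    ast.Sub: lambda a, b: a - b,
    ast.Mult: lambda a, b: a * b,
    ast.Div: lambda a, b: a // b,
    ast.Mod: lambda a, b: a % b,
}

def apply_ts_transform(expr: str, ts: int) -> int:
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in _ALLOWED:
            return _ALLOWED[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Name) and node.id == "ts":
            return ts
        if isinstance(node, ast.Constant) and isinstance(node.value, int):
            return node.value
        raise ValueError(f"unsupported element in ts_transform: {expr!r}")
    return ev(ast.parse(expr, mode="eval").body)

print(apply_ts_transform("ts / 1000 * 1000", 1710337448123))   # → 1710337448000
print(apply_ts_transform("ts + 8 * 3600 * 1000", 1710337448123))
```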
@@ -171,7 +175,7 @@ Data points can be filtered by configuring **Root Node ID**, **Namespace**, **Re
Configure **Supertable Name**, **Table Name** to specify the supertable and subtable where the data will be written.
Configure **Primary Key Column**: choose `origin_ts` to use the original timestamp of the OPC data point as the primary key in TDengine; choose `request_ts` to use the data's request timestamp as the primary key; choose `received_ts` to use the data's reception timestamp as the primary key. Configure **Primary Key Alias** to specify the name of the TDengine timestamp column.
<figure>
<Image img={imgStep5} alt=""/>


@@ -82,7 +82,7 @@ The header is the first line of the CSV file, with the following rules:
(1) The header of the CSV can configure the following columns:
| No. | Column Name | Description | Required | Default Behavior |
|-----|-------------|-------------|----------|------------------|
| 1 | tag_name | The id of the data point on the OPC DA server | Yes | None |
| 2 | stable | The supertable in TDengine corresponding to the data point | Yes | None |
| 3 | tbname | The subtable in TDengine corresponding to the data point | Yes | None |
@@ -91,11 +91,13 @@ The header is the first line of the CSV file, with the following rules:
| 6 | value_transform | The transform function executed in taosX for the collected value of the data point | No | Do not perform a transform on the collected value |
| 7 | type | The data type of the collected value of the data point | No | Use the original type of the collected value as the data type in TDengine |
| 8 | quality_col | The column name in TDengine corresponding to the quality of the collected value | No | Do not add a quality column in TDengine |
| 9 | ts_col | The timestamp column in TDengine corresponding to the original timestamp of the data point | No | If two or more of ts_col, request_ts_col, and received_ts_col are configured, the leftmost of them is used as the primary key column in TDengine. |
| 10 | request_ts_col | The timestamp column in TDengine corresponding to the timestamp when the data point value was requested | No | Same as above |
| 11 | received_ts_col | The timestamp column in TDengine corresponding to the timestamp when the data point value was received | No | Same as above |
| 12 | ts_transform | The transform function executed in taosX for the original timestamp of the data point | No | Do not perform a transform on the original timestamp of the data point |
| 13 | request_ts_transform | The transform function executed in taosX for the request timestamp of the data point | No | Do not perform a transform on the request timestamp of the data point |
| 14 | received_ts_transform | The transform function executed in taosX for the received timestamp of the data point | No | Do not perform a transform on the received timestamp of the data point |
| 15 | tag::VARCHAR(200)::name | The Tag column in TDengine corresponding to the data point. Where `tag` is a reserved keyword, indicating that this column is a tag column; `VARCHAR(200)` indicates the type of this tag, which can also be another legal type; `name` is the actual name of this tag. | No | If one or more tag columns are configured, use the configured tag columns; if no tag columns are configured and the stable exists in TDengine, use the tags of the stable in TDengine; if no tag columns are configured and the stable does not exist in TDengine, automatically add the following two tag columns by default: tag::VARCHAR(256)::point_id and tag::VARCHAR(256)::point_name |
(2) In the CSV Header, there cannot be duplicate columns;
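A tag header such as `tag::VARCHAR(200)::name` packs three pieces of information into one column name: the reserved keyword, the tag type, and the tag name. A small helper can split it apart; this is an illustrative sketch, not taosX code:

```python
# Illustrative sketch (not taosX source): split a tag column header such as
# "tag::VARCHAR(200)::name" into the reserved keyword "tag", the TDengine
# tag type, and the actual tag name.
def parse_tag_column(header: str) -> tuple[str, str]:
    keyword, tag_type, tag_name = header.split("::", 2)
    if keyword != "tag":
        raise ValueError(f"not a tag column: {header!r}")
    return tag_type, tag_name

print(parse_tag_column("tag::VARCHAR(200)::name"))  # → ('VARCHAR(200)', 'name')
```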
@@ -112,7 +114,7 @@ Each Row in the CSV file configures an OPC data point. The rules for Rows are as
(1) Correspondence with columns in the Header
| Number | Column in Header | Type of Value | Range of Values | Mandatory | Default Value |
|--------|------------------|---------------|-----------------|-----------|---------------|
| 1 | tag_name | String | Strings like `root.parent.temperature`; must meet the OPC DA ID specification | Yes | |
| 2 | enable | int | 0: Do not collect this point, and delete the corresponding subtable in TDengine before the OPC DataIn task starts; 1: Collect this point, and do not delete the subtable before the OPC DataIn task starts. | No | 1 |
| 3 | stable | String | Any string that meets the TDengine supertable naming convention; if the special character `.` exists, replace it with an underscore. If `{type}` exists: if type in the CSV file is not empty, replace it with the value of type; if type is empty, replace it with the original type of the collected value | Yes | |
@@ -122,10 +124,12 @@ Each Row in the CSV file configures an OPC data point. The rules for Rows are as
| 7 | type | String | Supported types include: b/bool/i8/tinyint/i16/smallint/i32/int/i64/bigint/u8/tinyint unsigned/u16/smallint unsigned/u32/int unsigned/u64/bigint unsigned/f32/float/f64/double/timestamp/timestamp(ms)/timestamp(us)/timestamp(ns)/json | No | Original type of data point value |
| 8 | quality_col | String | Column name that meets the TDengine naming convention | No | None |
| 9 | ts_col | String | Column name that meets the TDengine naming convention | No | ts |
| 10 | request_ts_col | String | Column name that meets the TDengine naming convention | No | qts |
| 11 | received_ts_col | String | Column name that meets the TDengine naming convention | No | rts |
| 12 | ts_transform | String | Supports +, -, *, /, % operators, for example: ts / 1000 * 1000 sets the last 3 digits of a ms-precision timestamp to 0; ts + 8 * 3600 * 1000 adds 8 hours to a ms-precision timestamp; ts - 8 * 3600 * 1000 subtracts 8 hours from a ms-precision timestamp | No | None |
| 13 | request_ts_transform | String | Supports +, -, *, /, % operators, for example: qts / 1000 * 1000 sets the last 3 digits of a ms-precision timestamp to 0; qts + 8 * 3600 * 1000 adds 8 hours to a ms-precision timestamp; qts - 8 * 3600 * 1000 subtracts 8 hours from a ms-precision timestamp | No | None |
| 14 | received_ts_transform | String | Supports +, -, *, /, % operators, for example: rts / 1000 * 1000 sets the last 3 digits of a ms-precision timestamp to 0; rts + 8 * 3600 * 1000 adds 8 hours to a ms-precision timestamp; rts - 8 * 3600 * 1000 subtracts 8 hours from a ms-precision timestamp | No | None |
| 15 | tag::VARCHAR(200)::name | String | The value in a tag; when the tag type is VARCHAR, it can be in Chinese | No | NULL |
(2) `tag_name` is unique throughout the DataIn task, that is: in an OPC DataIn task, a data point can only be written to one subtable in TDengine. If you need to write a data point to multiple subtables, you need to create multiple OPC DataIn tasks;
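The `stable` substitution rules from the Row table above (replace `.` with an underscore, expand a `{type}` placeholder) can be sketched as follows; the function is hypothetical and only illustrates the stated rules:

```python
# Illustrative sketch (not taosX source): resolve a supertable-name template
# per the `stable` row above. `.` is replaced with an underscore, and a
# `{type}` placeholder is replaced with the CSV `type` value when it is
# non-empty, otherwise with the original type of the collected value.
def resolve_stable_name(template: str, csv_type: str, original_type: str) -> str:
    name = template.replace(".", "_")
    return name.replace("{type}", csv_type if csv_type else original_type)

print(resolve_stable_name("opc.da.{type}", "", "float"))  # → opc_da_float
```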
@@ -145,7 +149,7 @@ Data points can be filtered by configuring the **Root Node ID** and **Regular Ex
Configure **Supertable Name** and **Table Name** to specify the supertable and subtable where the data will be written.
Configure **Primary Key Column**: choosing `origin_ts` uses the original timestamp of the OPC data point as the primary key in TDengine; choosing `request_ts` uses the timestamp when the data was requested as the primary key; choosing `received_ts` uses the timestamp when the data was received as the primary key. Configure **Primary Key Alias** to specify the name of the TDengine timestamp column.
<figure>
<Image img={imgStep4} alt=""/>


@@ -90,7 +90,7 @@ The header is the first line of the CSV file, with the following rules:
| Number | Column Name | Description | Required | Default Behavior |
|--------|-------------|-------------|----------|------------------|
| 1 | point_id | The id of the data point on the OPC UA server | Yes | None |
| 2 | stable | The supertable in TDengine corresponding to the data point | Yes | None |
| 3 | tbname | The subtable in TDengine corresponding to the data point | Yes | None |
@@ -99,11 +99,13 @@ The header is the first line of the CSV file, with the following rules:
| 6 | value_transform | The transformation function executed in taosX for the collected value of the data point | No | Do not transform the collected value uniformly |
| 7 | type | The data type of the collected value of the data point | No | Use the original type of the collected value as the data type in TDengine uniformly |
| 8 | quality_col | The column name in TDengine corresponding to the quality of the collected value | No | Do not add a quality column in TDengine uniformly |
| 9 | ts_col | The timestamp column in TDengine corresponding to the original timestamp of the data point | No | If two or more of ts_col, request_ts_col, and received_ts_col are configured, the leftmost of them is used as the primary key column in TDengine. |
| 10 | request_ts_col | The timestamp column in TDengine corresponding to the timestamp when the data point value was requested | No | Same as above |
| 11 | received_ts_col | The timestamp column in TDengine corresponding to the timestamp when the data point value was received | No | Same as above |
| 12 | ts_transform | The transformation function executed in taosX for the original timestamp of the data point | No | Do not transform the original timestamp of the data point uniformly |
| 13 | request_ts_transform | The transformation function executed in taosX for the request timestamp of the data point | No | Do not transform the request timestamp of the data point uniformly |
| 14 | received_ts_transform | The transformation function executed in taosX for the received timestamp of the data point | No | Do not transform the received timestamp of the data point uniformly |
| 15 | tag::VARCHAR(200)::name | The Tag column corresponding to the data point in TDengine. Here `tag` is a reserved keyword indicating that this column is a tag column; `VARCHAR(200)` indicates the type of the tag, which can also be another legal type; `name` is the actual name of the tag. | No | If 1 or more tag columns are configured, use the configured tag columns; if no tag columns are configured and the stable exists in TDengine, use the tags of the stable in TDengine; if no tag columns are configured and the stable does not exist in TDengine, automatically add the following 2 tag columns by default: tag::VARCHAR(256)::point_id and tag::VARCHAR(256)::point_name |
(2) In the CSV Header, there cannot be duplicate columns;
@@ -121,7 +123,7 @@ Each Row in the CSV file configures an OPC data point. The rules for Rows are as follows:
| Number | Column in Header | Type of Value | Value Range | Required | Default Value |
|--------|------------------|---------------|-------------|----------|---------------|
| 1 | point_id | String | Strings like `ns=3;i=1005`; must meet the OPC UA ID specification, i.e., include the ns and id parts | Yes | |
| 2 | enable | int | 0: Do not collect this point, and delete the corresponding subtable in TDengine before the OPC DataIn task starts; 1: Collect this point, and do not delete the subtable before the OPC DataIn task starts. | No | 1 |
| 3 | stable | String | Any string that meets the TDengine supertable naming convention; if the special character `.` exists, replace it with an underscore. If `{type}` exists: if type in the CSV file is not empty, replace it with the value of type; if type is empty, replace it with the original type of the collected value | Yes | |
@ -131,10 +133,12 @@ CSV 文件中的每个 Row 配置一个 OPC 数据点位。Row 的规则如下
| 7 | type | String | Supported types include b/bool/i8/tinyint/i16/smallint/i32/int/i64/bigint/u8/tinyint unsigned/u16/smallint unsigned/u32/int unsigned/u64/bigint unsigned/f32/float/f64/double/timestamp/timestamp(ms)/timestamp(us)/timestamp(ns)/json | No | Original type of the collected value |
| 8 | quality_col | String | A column name that conforms to the TDengine naming convention | No | None |
| 9 | ts_col | String | A column name that conforms to the TDengine naming convention | No | ts |
| 10 | request_ts_col | String | A column name that conforms to the TDengine naming convention | No | qts |
| 11 | received_ts_col | String | A column name that conforms to the TDengine naming convention | No | rts |
| 12 | ts_transform | String | Supports the +, -, *, /, % operators, e.g.: ts / 1000 * 1000 sets the last 3 digits of a millisecond timestamp to 0; ts + 8 * 3600 * 1000 adds 8 hours to a millisecond-precision timestamp; ts - 8 * 3600 * 1000 subtracts 8 hours from a millisecond-precision timestamp | No | None |
| 13 | request_ts_transform | String | Supports the +, -, *, /, % operators, e.g.: qts / 1000 * 1000 sets the last 3 digits of a millisecond timestamp to 0; qts + 8 * 3600 * 1000 adds 8 hours to a millisecond-precision timestamp; qts - 8 * 3600 * 1000 subtracts 8 hours from a millisecond-precision timestamp | No | None |
| 14 | received_ts_transform | String | Supports the +, -, *, /, % operators, e.g.: rts / 1000 * 1000 sets the last 3 digits of a millisecond timestamp to 0; rts + 8 * 3600 * 1000 adds 8 hours to a millisecond-precision timestamp; rts - 8 * 3600 * 1000 subtracts 8 hours from a millisecond-precision timestamp | No | None |
| 15 | tag::VARCHAR(200)::name | String | The value of the tag; when the tag type is VARCHAR, it can be Chinese | No | NULL |
(2) point_id is unique within the entire DataIn task, i.e.: in one OPC DataIn task, a data point can only be written to one subtable in TDengine. If you need to write one data point to multiple subtables, you need to create multiple OPC DataIn tasks;
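For illustration, the rules above can be combined into a point-configuration row like the following sketch; the point id, supertable name, and tag value are hypothetical, and optional columns not shown in the table above are omitted. With type set to f32, the `{type}` placeholder in stable resolves to float:

```csv
point_id,enable,stable,type,quality_col,ts_col,request_ts_col,received_ts_col,ts_transform,tag::VARCHAR(200)::name
ns=3;i=1005,1,opc_point_{type},f32,quality,ts,qts,rts,ts / 1000 * 1000,device-01
```

Here the ts_transform truncates the original millisecond timestamp to whole seconds before it is written to the `ts` column.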
By configuring the **Supertable Name** and **Table Name**, specify the supertable and subtable the data is written to.
Configure the **Primary Key Column**: select origin_ts to use the original timestamp of the OPC point data as the primary key in TDengine; select request_ts to use the timestamp at which the data was requested as the primary key; select received_ts to use the timestamp at which the data was received as the primary key. Configure the **Primary Key Alias** to specify the name of the TDengine timestamp column.
![point.png](./pic/opcua-06-point.png)
The Header is the first line of the CSV file, with the following rules:
| Number | Column Name | Description | Required | Default Behavior |
| ------ | ----------------------- | ----------- | -------- | ---------------- |
| 1 | tag_name | The id of the data point on the OPC DA server | Yes | None |
| 2 | stable | The corresponding supertable for the data point in TDengine | Yes | None |
| 3 | tbname | The corresponding subtable for the data point in TDengine | Yes | None |
| 6 | value_transform | The transform function executed in taosX on the collected value of the data point | No | By default, no transform is applied to collected values |
| 7 | type | The data type of the collected value | No | By default, the original type of the collected value is used as the data type in TDengine |
| 8 | quality_col | The column in TDengine corresponding to the quality of the collected value | No | By default, no quality column is added in TDengine |
| 9 | ts_col | The timestamp column in TDengine corresponding to the original timestamp of the data point | No | Among the three columns ts_col, request_ts_col, and received_ts_col, when two or more are present, the leftmost column is used as the primary key in TDengine |
| 10 | request_ts_col | The timestamp column in TDengine corresponding to the timestamp at which the collected value was requested | No | Among the three columns ts_col, request_ts_col, and received_ts_col, when two or more are present, the leftmost column is used as the primary key in TDengine |
| 11 | received_ts_col | The timestamp column in TDengine corresponding to the timestamp at which the collected value was received | No | Among the three columns ts_col, request_ts_col, and received_ts_col, when two or more are present, the leftmost column is used as the primary key in TDengine |
| 12 | ts_transform | The transform function executed in taosX on the original timestamp of the data point | No | By default, no transform is applied to original timestamps |
| 13 | request_ts_transform | The transform function executed in taosX on the request timestamp of the data point | No | By default, no transform is applied to request timestamps |
| 14 | received_ts_transform | The transform function executed in taosX on the received timestamp of the data point | No | By default, no transform is applied to received timestamps |
| 15 | tag::VARCHAR(200)::name | The Tag column in TDengine corresponding to the data point, where `tag` is a reserved keyword indicating that the column is a tag column, `VARCHAR(200)` is the type of the tag (other valid types are also allowed), and `name` is the actual name of the tag | No | If one or more tag columns are configured, the configured tag columns are used; if no tag column is configured and the stable exists in TDengine, the tags of that stable in TDengine are used; if no tag column is configured and the stable does not exist in TDengine, the following two tag columns are added automatically by default: tag::VARCHAR(256)::point_id and tag::VARCHAR(256)::point_name |
(2) The CSV Header must not contain duplicate columns;
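As a sketch, a hypothetical Header that uses all three timestamp columns might look like this:

```csv
tag_name,stable,tbname,type,quality_col,ts_col,request_ts_col,received_ts_col,tag::VARCHAR(200)::name
```

Because ts_col is the leftmost of the three timestamp columns here, the original timestamp of each data point would become the primary key in TDengine.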
Each Row in the CSV file configures one OPC data point. The rules for a Row are as follows:
| Number | Column in Header | Value Type | Value Range | Required | Default |
| ------ | ----------------------- | ---------- | ----------- | -------- | ------- |
| 1 | tag_name | String | A string like `root.parent.temperature` that conforms to the OPC DA ID specification | Yes | |
| 2 | enable | int | 0: do not collect this point, and delete its corresponding subtable in TDengine before the OPC DataIn task starts; 1: collect this point, and do not delete the subtable before the OPC DataIn task starts | No | 1 |
| 3 | stable | String | Any string that conforms to the TDengine supertable naming convention; the special character `.`, if present, is replaced with an underscore; if `{type}` is present: when type in the CSV file is not empty, it is replaced with the value of type; when type is empty, it is replaced with the original type of the collected value | Yes | |
| 7 | type | String | Supported types include b/bool/i8/tinyint/i16/smallint/i32/int/i64/bigint/u8/tinyint unsigned/u16/smallint unsigned/u32/int unsigned/u64/bigint unsigned/f32/float/f64/double/timestamp/timestamp(ms)/timestamp(us)/timestamp(ns)/json | No | Original type of the collected value |
| 8 | quality_col | String | A column name that conforms to the TDengine naming convention | No | None |
| 9 | ts_col | String | A column name that conforms to the TDengine naming convention | No | ts |
| 10 | request_ts_col | String | A column name that conforms to the TDengine naming convention | No | qts |
| 11 | received_ts_col | String | A column name that conforms to the TDengine naming convention | No | rts |
| 12 | ts_transform | String | Supports the +, -, *, /, % operators, e.g.: ts / 1000 * 1000 sets the last 3 digits of a millisecond timestamp to 0; ts + 8 * 3600 * 1000 adds 8 hours to a millisecond-precision timestamp; ts - 8 * 3600 * 1000 subtracts 8 hours from a millisecond-precision timestamp | No | None |
| 13 | request_ts_transform | String | Supports the +, -, *, /, % operators, e.g.: qts / 1000 * 1000 sets the last 3 digits of a millisecond timestamp to 0; qts + 8 * 3600 * 1000 adds 8 hours to a millisecond-precision timestamp; qts - 8 * 3600 * 1000 subtracts 8 hours from a millisecond-precision timestamp | No | None |
| 14 | received_ts_transform | String | Supports the +, -, *, /, % operators, e.g.: rts / 1000 * 1000 sets the last 3 digits of a millisecond timestamp to 0; rts + 8 * 3600 * 1000 adds 8 hours to a millisecond-precision timestamp; rts - 8 * 3600 * 1000 subtracts 8 hours from a millisecond-precision timestamp | No | None |
| 15 | tag::VARCHAR(200)::name | String | The value of the tag; when the tag type is VARCHAR, it can be Chinese | No | NULL |
(2) tag_name is unique within the entire DataIn task, i.e.: in one OPC DataIn task, a data point can only be written to one subtable in TDengine. If you need to write one data point to multiple subtables, you need to create multiple OPC DataIn tasks;
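Putting Header and Row together, a minimal sketch for a Header of `tag_name,enable,stable,tbname,type,ts_col,request_ts_col,received_ts_col,request_ts_transform` might be the following; the tag id and table names are hypothetical:

```csv
root.parent.temperature,1,temperature_{type},temp_room1,f32,ts,qts,rts,qts + 8 * 3600 * 1000
```

Here the request_ts_transform shifts the request timestamp (millisecond precision) forward by 8 hours before it is written to the `qts` column.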
By configuring the **Supertable Name** and **Table Name**, specify the supertable and subtable the data is written to.
Configure the **Primary Key Column**: select origin_ts to use the original timestamp of the OPC point data as the primary key in TDengine; select request_ts to use the timestamp at which the data was requested as the primary key; select received_ts to use the timestamp at which the data was received as the primary key. Configure the **Primary Key Alias** to specify the name of the TDengine timestamp column.
![point.png](./pic/opcda-06-point.png)