Merge branch 'main' into docs/TD-33352-MAIN

Alex Duan 2024-12-30 14:47:44 +08:00
commit 44fdb38541
117 changed files with 2801 additions and 761 deletions

.github/workflows/taosd-ci-build.yml (new file)

@@ -0,0 +1,66 @@
name: TDengine Build

on:
  pull_request:
    branches:
      - 'main'
      - '3.0'
      - '3.1'

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  build:
    runs-on: ubuntu-latest
    name: Run unit tests
    steps:
      - name: Checkout the repository
        uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: 1.18

      - name: Install system dependencies
        run: |
          sudo apt update -y
          sudo apt install -y build-essential cmake \
            libgeos-dev libjansson-dev libsnappy-dev liblzma-dev libz-dev \
            zlib1g pkg-config libssl-dev gawk

      - name: Build and install TDengine
        run: |
          mkdir debug && cd debug
          cmake .. -DBUILD_HTTP=false -DBUILD_JDBC=false \
            -DBUILD_TOOLS=true -DBUILD_TEST=off \
            -DBUILD_KEEPER=true -DBUILD_DEPENDENCY_TESTS=false
          make -j 4
          sudo make install
          which taosd
          which taosadapter
          which taoskeeper

      - name: Start taosd
        run: |
          cp /etc/taos/taos.cfg ./
          sudo echo "supportVnodes 256" >> taos.cfg
          nohup sudo taosd -c taos.cfg &

      - name: Start taosadapter
        run: nohup sudo taosadapter &

      - name: Run tests with taosBenchmark
        run: |
          taosBenchmark -t 10 -n 10 -y
          taos -s "select count(*) from test.meters"

      - name: Clean up
        if: always()
        run: |
          if pgrep taosd; then sudo pkill taosd; fi
          if pgrep taosadapter; then sudo pkill taosadapter; fi
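The last two workflow steps write sample data with taosBenchmark and then verify it with `taos -s "select count(*) from test.meters"`. For readers who prefer to run the same check from application code, here is a minimal, hypothetical Java sketch; it assumes the taos-jdbcdriver dependency is available and that taosAdapter, started by the workflow, is listening on localhost:6041:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CiSmokeCheck {
    public static void main(String[] args) throws SQLException {
        // WebSocket connection through taosAdapter; credentials match the workflow's defaults.
        String url = "jdbc:TAOS-WS://localhost:6041";
        try (Connection conn = DriverManager.getConnection(url, "root", "taosdata");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM test.meters")) {
            if (rs.next()) {
                // taosBenchmark -t 10 -n 10 writes 100 rows in total.
                System.out.println("rows in test.meters: " + rs.getLong(1));
            }
        }
    }
}
```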


@@ -10,6 +10,7 @@
 </p>
 <p>
+[![Build Status](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml/badge.svg)](https://github.com/taosdata/TDengine/actions/workflows/taosd-ci-build.yml)
 [![Coverage Status](https://coveralls.io/repos/github/taosdata/TDengine/badge.svg?branch=3.0)](https://coveralls.io/github/taosdata/TDengine?branch=3.0)
 [![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/4201/badge)](https://bestpractices.coreinfrastructure.org/projects/4201)
 <br />


@@ -109,7 +109,7 @@ If you are using Maven to manage your project, simply add the following dependency
 <dependency>
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>3.4.0</version>
+<version>3.5.0</version>
 </dependency>
 ```


@@ -28,8 +28,15 @@ Next, we continue to use smart meters as an example to demonstrate the efficient
 <Tabs defaultValue="java" groupId="lang">
 <TabItem value="java" label="Java">
+There are two kinds of interfaces for parameter binding: one is the standard JDBC interface, and the other is an extended interface. The extended interface offers better performance.
+
 ```java
-{{#include docs/examples/java/src/main/java/com/taos/example/WSParameterBindingBasicDemo.java:para_bind}}
+{{#include docs/examples/java/src/main/java/com/taos/example/WSParameterBindingStdInterfaceDemo.java:para_bind}}
+```
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/WSParameterBindingExtendInterfaceDemo.java:para_bind}}
 ```

 This is a [more detailed parameter binding example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/WSParameterBindingFullDemo.java)
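Beyond the included insert demos, the standard JDBC interface can also bind query parameters. The sketch below is illustrative only; it assumes the `power` database created by the demos already exists and that taosAdapter is reachable on 127.0.0.1:6041:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class WSQueryBindSketch {
    public static void main(String[] args) throws SQLException {
        String jdbcUrl = "jdbc:TAOS-WS://127.0.0.1:6041";
        String sql = "SELECT tbname, ts, current FROM power.meters WHERE groupid = ? LIMIT 10";
        try (Connection conn = DriverManager.getConnection(jdbcUrl, "root", "taosdata");
             PreparedStatement pstmt = conn.prepareStatement(sql)) {
            // Bind the tag value used by the demos (groupId 1).
            pstmt.setInt(1, 1);
            try (ResultSet rs = pstmt.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%s %s %.2f%n",
                            rs.getString(1), rs.getTimestamp(2), rs.getFloat(3));
                }
            }
        }
    }
}
```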
@@ -96,9 +103,19 @@ This is a [more detailed parameter binding example](https://github.com/taosdata/
 </TabItem>
 <TabItem label="Go" value="go">
+The example code for binding parameters with stmt2 (Go connector v3.6.0 and above, TDengine v3.3.5.0 and above) is as follows:
+
+```go
+{{#include docs/examples/go/stmt2/native/main.go}}
+```
+
+The example code for binding parameters with stmt is as follows:
+
 ```go
 {{#include docs/examples/go/stmt/native/main.go}}
 ```
 </TabItem>
 <TabItem label="Rust" value="rust">


@@ -190,7 +190,8 @@ The effective value of charset is UTF-8.
 |Parameter Name |Supported Version |Dynamic Modification|Description|
 |-----------------------|-------------------------|--------------------|------------|
 |supportVnodes | |Supported, effective immediately |Maximum number of vnodes supported by a dnode, range 0-4096, default value is twice the number of CPU cores + 5|
-|numOfCommitThreads | |Supported, effective after restart|Maximum number of commit threads, range 0-1024, default value 4|
+|numOfCommitThreads | |Supported, effective after restart|Maximum number of commit threads, range 1-1024, default value 4|
+|numOfCompactThreads | |Supported, effective after restart|Maximum number of compact threads, range 1-16, default value 2|
 |numOfMnodeReadThreads | |Supported, effective after restart|Number of Read threads for mnode, range 0-1024, default value is one quarter of the CPU cores (not exceeding 4)|
 |numOfVnodeQueryThreads | |Supported, effective after restart|Number of Query threads for vnode, range 0-1024, default value is twice the number of CPU cores (not exceeding 16)|
 |numOfVnodeFetchThreads | |Supported, effective after restart|Number of Fetch threads for vnode, range 0-1024, default value is one quarter of the CPU cores (not exceeding 4)|
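For parameters marked as dynamically modifiable in the table above, the value can be changed at runtime with an `ALTER DNODE` statement. A minimal, hedged sketch via JDBC follows; the dnode id, the values, and the local taosAdapter endpoint are assumptions, and the exact ALTER form should be checked against your server version:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class AlterDnodeConfigSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS-WS://127.0.0.1:6041";
        try (Connection conn = DriverManager.getConnection(url, "root", "taosdata");
             Statement stmt = conn.createStatement()) {
            // supportVnodes is listed as "effective immediately"; 256 is an illustrative value.
            stmt.execute("ALTER DNODE 1 'supportVnodes' '256'");
            // numOfCommitThreads only takes effect after the dnode restarts.
            stmt.execute("ALTER DNODE 1 'numOfCommitThreads' '8'");
        }
    }
}
```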


@@ -148,6 +148,7 @@ When using time windows, note:
 - The window width of the aggregation period is specified by the keyword INTERVAL, with the shortest interval being 10 milliseconds (10a); it also supports an offset (the offset must be less than the interval), which is the offset of the time window division compared to "UTC moment 0". The SLIDING statement is used to specify the forward increment of the aggregation period, i.e., the duration of each window slide forward.
 - When using the INTERVAL statement, unless in very special cases, it is required to configure the timezone parameter in the taos.cfg configuration files of both the client and server to the same value to avoid frequent cross-time zone conversions by time processing functions, which can cause severe performance impacts.
 - The returned results have a strictly monotonically increasing time-series.
+- When using AUTO as the window offset, if the WHERE time condition is complex, such as multiple AND/OR/IN combinations, AUTO may not take effect. In such cases, you can manually specify the window offset to resolve the issue.
 - When using AUTO as the window offset, if the window width unit is d (day), n (month), w (week), y (year), such as: INTERVAL(1d, AUTO), INTERVAL(3w, AUTO), the TSMA optimization cannot take effect. If TSMA is manually created on the target table, the statement will report an error and exit; in this case, you can explicitly specify the Hint SKIP_TSMA or not use AUTO as the window offset.

 ### State Window
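To make the window-offset notes above concrete, here is a small, illustrative JDBC sketch that runs a daily aggregation with an AUTO offset over the test.meters data written by taosBenchmark; the time range and the endpoint are assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class IntervalAutoSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS-WS://127.0.0.1:6041";
        // A simple WHERE time range keeps the AUTO offset applicable (see the note above).
        String sql = "SELECT _wstart, AVG(current), MAX(voltage) FROM test.meters "
                + "WHERE ts >= '2024-01-01 00:00:00' AND ts < '2025-01-01 00:00:00' "
                + "INTERVAL(1d, AUTO)";
        try (Connection conn = DriverManager.getConnection(url, "root", "taosdata");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%s avg=%.2f max=%d%n",
                        rs.getTimestamp(1), rs.getDouble(2), rs.getInt(3));
            }
        }
    }
}
```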


@@ -30,33 +30,34 @@ The JDBC driver implementation for TDengine strives to be consistent with relational databases
 ## Version History

 | taos-jdbcdriver Version | Major Changes | TDengine Version |
 | ----------------------- | ------------- | ---------------- |
+| 3.5.0 | 1. Optimized the performance of WebSocket connection parameter binding, supporting parameter binding queries using binary data. <br/> 2. Optimized the performance of small queries in WebSocket connection. <br/> 3. Added support for setting time zone and app info on WebSocket connection. | 3.3.5.0 and higher |
 | 3.4.0 | 1. Replaced fastjson library with jackson. <br/> 2. WebSocket uses a separate protocol identifier. <br/> 3. Optimized background thread usage to avoid user misuse leading to timeouts. | - |
 | 3.3.4 | Fixed getInt error when data type is float. | - |
 | 3.3.3 | Fixed memory leak caused by closing WebSocket statement. | - |
 | 3.3.2 | 1. Optimized parameter binding performance under WebSocket connection. <br/> 2. Improved support for mybatis. | - |
 | 3.3.0 | 1. Optimized data transmission performance under WebSocket connection. <br/> 2. Supports skipping SSL verification, off by default. | 3.3.2.0 and higher |
 | 3.2.11 | Fixed a bug in closing result set in Native connection. | - |
 | 3.2.10 | 1. REST/WebSocket connections support data compression during transmission. <br/> 2. WebSocket automatic reconnection mechanism, off by default. <br/> 3. Connection class provides methods for schemaless writing. <br/> 4. Optimized data fetching performance for native connections. <br/> 5. Fixed some known issues. <br/> 6. Metadata retrieval functions can return a list of supported functions. | - |
 | 3.2.9 | Fixed bug in closing WebSocket prepareStatement. | - |
 | 3.2.8 | 1. Optimized auto-commit. <br/> 2. Fixed manual commit bug in WebSocket. <br/> 3. Optimized WebSocket prepareStatement using a single connection. <br/> 4. Metadata supports views. | - |
 | 3.2.7 | 1. Supports VARBINARY and GEOMETRY types. <br/> 2. Added timezone setting support for native connections. <br/> 3. Added WebSocket automatic reconnection feature. | 3.2.0.0 and higher |
 | 3.2.5 | Data subscription adds committed() and assignment() methods. | 3.1.0.3 and higher |
 | 3.2.4 | Data subscription adds enable.auto.commit parameter under WebSocket connection, as well as unsubscribe() method. | - |
 | 3.2.3 | Fixed ResultSet data parsing failure in some cases. | - |
 | 3.2.2 | New feature: Data subscription supports seek function. | 3.0.5.0 and higher |
 | 3.2.1 | 1. WebSocket connection supports schemaless and prepareStatement writing. <br/> 2. Consumer poll returns result set as ConsumerRecord, which can be accessed through value() method. | 3.0.3.0 and higher |
 | 3.2.0 | Connection issues, not recommended for use. | - |
 | 3.1.0 | WebSocket connection supports subscription function. | - |
 | 3.0.1 - 3.0.4 | Fixed data parsing errors in result sets under some conditions. 3.0.1 compiled in JDK 11 environment, other versions recommended for JDK 8. | - |
 | 3.0.0 | Supports TDengine 3.0 | 3.0.0.0 and higher |
 | 2.0.42 | Fixed wasNull interface return value in WebSocket connection. | - |
 | 2.0.41 | Fixed username and password encoding method in REST connection. | - |
 | 2.0.39 - 2.0.40 | Added REST connection/request timeout settings. | - |
 | 2.0.38 | JDBC REST connection adds batch fetching function. | - |
 | 2.0.37 | Added support for json tag. | - |
 | 2.0.36 | Added support for schemaless writing. | - |

 ## Exceptions and Error Codes
@@ -75,47 +76,47 @@ The error codes that the JDBC connector may report include 4 types:
 Please refer to the specific error codes:

 | Error Code | Description | Suggested Actions |
 | ---------- | ----------- | ----------------- |
 | 0x2301 | connection already closed | The connection is already closed, check the connection status, or recreate the connection to execute related commands. |
 | 0x2302 | this operation is NOT supported currently! | The current interface is not supported, consider switching to another connection method. |
 | 0x2303 | invalid variables | Invalid parameters, please check the interface specifications and adjust the parameter types and sizes. |
 | 0x2304 | statement is closed | The statement is already closed, check if the statement was used after being closed, or if the connection is normal. |
 | 0x2305 | resultSet is closed | The resultSet has been released, check if the resultSet was used after being released. |
 | 0x2306 | Batch is empty! | Add parameters to prepareStatement before executing executeBatch. |
 | 0x2307 | Can not issue data manipulation statements with executeQuery() | Use executeUpdate() for update operations, not executeQuery(). |
 | 0x2308 | Can not issue SELECT via executeUpdate() | Use executeQuery() for query operations, not executeUpdate(). |
 | 0x230d | parameter index out of range | Parameter out of bounds, check the reasonable range of parameters. |
 | 0x230e | connection already closed | The connection is already closed, check if the Connection was used after being closed, or if the connection is normal. |
 | 0x230f | unknown sql type in tdengine | Check the Data Type types supported by TDengine. |
 | 0x2310 | can't register JDBC-JNI driver | Cannot register JNI driver, check if the url is correctly filled. |
 | 0x2312 | url is not set | Check if the REST connection url is correctly filled. |
 | 0x2314 | numeric value out of range | Check if the correct interface was used for numeric types in the result set. |
 | 0x2315 | unknown taos type in tdengine | When converting TDengine data types to JDBC data types, check if the correct TDengine data type was specified. |
 | 0x2317 | | Incorrect request type used in REST connection. |
 | 0x2318 | | Data transmission error occurred in REST connection, check the network situation and retry. |
 | 0x2319 | user is required | Username information is missing when creating a connection. |
 | 0x231a | password is required | Password information is missing when creating a connection. |
 | 0x231c | httpEntity is null, sql: | An exception occurred in REST connection execution. |
 | 0x231d | can't create connection with server within | Increase the httpConnectTimeout parameter to extend the connection time, or check the connection with taosAdapter. |
 | 0x231e | failed to complete the task within the specified time | Increase the messageWaitTimeout parameter to extend the execution time, or check the connection with taosAdapter. |
 | 0x2350 | unknown error | Unknown exception, please provide feedback to the developers on github. |
 | 0x2352 | Unsupported encoding | An unsupported character encoding set was specified in the local connection. |
 | 0x2353 | internal error of database, please see taoslog for more details | An error occurred while executing prepareStatement in local connection, check taos log for troubleshooting. |
 | 0x2354 | JNI connection is NULL | The Connection was already closed when executing commands in local connection. Check the connection with TDengine. |
 | 0x2355 | JNI result set is NULL | The result set is abnormal in local connection, check the connection and retry. |
 | 0x2356 | invalid num of fields | The meta information of the result set obtained in local connection does not match. |
 | 0x2357 | empty sql string | Fill in the correct SQL for execution. |
 | 0x2359 | JNI alloc memory failed, please see taoslog for more details | Memory allocation error in local connection, check taos log for troubleshooting. |
 | 0x2371 | consumer properties must not be null! | Parameters are null when creating a subscription, fill in the correct parameters. |
 | 0x2372 | configs contain empty key, failed to set consumer property | The parameter key contains empty values, fill in the correct parameters. |
 | 0x2373 | failed to set consumer property, | The parameter value contains empty values, fill in the correct parameters. |
 | 0x2375 | topic reference has been destroyed | During the data subscription process, the topic reference was released. Check the connection with TDengine. |
 | 0x2376 | failed to set consumer topic, topic name is empty | During the data subscription process, the subscription topic name is empty. Check if the specified topic name is correctly filled. |
 | 0x2377 | consumer reference has been destroyed | The data transmission channel for the subscription has been closed, check the connection with TDengine. |
 | 0x2378 | consumer create error | Data subscription creation failed, check the error information and taos log for troubleshooting. |
 | 0x2379 | seek offset must not be a negative number | The seek interface parameter must not be negative, use the correct parameters. |
 | 0x237a | vGroup not found in result set | VGroup not assigned to the current consumer, due to the Rebalance mechanism causing the Consumer and VGroup to be unbound. |

 - [TDengine Java Connector Error Code](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java)

 <!-- - [TDengine_ERROR_CODE](../error-code) -->
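All of the codes above surface through `SQLException.getErrorCode()`. The sketch below shows one hedged way an application might branch on them; the handling policy itself is illustrative:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class JdbcErrorCodeSketch {
    public static void main(String[] args) {
        String url = "jdbc:TAOS-WS://127.0.0.1:6041";
        try (Connection conn = DriverManager.getConnection(url, "root", "taosdata");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT ts FROM test.meters LIMIT 1")) {
            while (rs.next()) {
                System.out.println("first ts: " + rs.getTimestamp(1));
            }
        } catch (SQLException e) {
            int code = e.getErrorCode();
            if (code == 0x2301 || code == 0x230e) {
                System.err.println("Connection already closed: recreate the connection and retry.");
            } else if (code == 0x231d) {
                System.err.println("Connect timeout: consider increasing httpConnectTimeout.");
            } else {
                System.err.printf("Unhandled error 0x%x: %s%n", code, e.getMessage());
            }
        }
    }
}
```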
@@ -244,13 +245,13 @@ For WebSocket connections, the configuration parameters in the URL are as follows:
 - user: Login username for TDengine, default value 'root'.
 - password: User login password, default value 'taosdata'.
-- charset: Specifies the character set for parsing string data when batch fetching is enabled.
 - batchErrorIgnore: true: Continues executing the following SQL if one SQL fails during the execution of Statement's executeBatch. false: Does not execute any statements after a failed SQL. Default value: false.
 - httpConnectTimeout: Connection timeout in ms, default value 60000.
 - messageWaitTimeout: Message timeout in ms, default value 60000.
 - useSSL: Whether SSL is used in the connection.
+- timezone: Client timezone, default is the system current timezone. Recommended not to set, using the system time zone provides better performance.

-**Note**: Some configuration items (such as: locale, timezone) do not take effect in WebSocket connections.
+**Note**: Some configuration items (such as: locale, charset) do not take effect in WebSocket connections.
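For reference, a minimal WebSocket URL that sets several of the parameters above; every value here is illustrative:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class WsUrlParamsSketch {
    public static void main(String[] args) throws SQLException {
        // Timeouts are in ms; timezone takes an IANA name, though leaving it unset performs better.
        String url = "jdbc:TAOS-WS://127.0.0.1:6041/?user=root&password=taosdata"
                + "&httpConnectTimeout=60000&messageWaitTimeout=60000"
                + "&timezone=Asia/Shanghai";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("WebSocket connection established: " + !conn.isClosed());
        }
    }
}
```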
 **REST Connection**

 Using JDBC REST connection does not depend on the client driver. Compared to native JDBC connections, you only need to:
@@ -263,14 +264,13 @@ For REST connections, the configuration parameters in the URL are as follows:
 - user: Login username for TDengine, default value 'root'.
 - password: User login password, default value 'taosdata'.
-- charset: Specifies the character set for parsing string data when batch fetching is enabled.
 - batchErrorIgnore: true: Continues executing the following SQL if one SQL fails during the execution of Statement's executeBatch. false: Does not execute any statements after a failed SQL. Default value: false.
 - httpConnectTimeout: Connection timeout in ms, default value 60000.
 - httpSocketTimeout: Socket timeout in ms, default value 60000.
 - useSSL: Whether SSL is used in the connection.
 - httpPoolSize: REST concurrent request size, default 20.

-**Note**: Some configuration items (such as: locale, timezone) do not take effect in REST connections.
+**Note**: Some configuration items (such as: locale, charset and timezone) do not take effect in REST connections.
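And the REST counterpart, again with illustrative values only:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class RestUrlParamsSketch {
    public static void main(String[] args) throws SQLException {
        // REST connections also go through taosAdapter; httpPoolSize bounds concurrent requests.
        String url = "jdbc:TAOS-RS://127.0.0.1:6041/?user=root&password=taosdata"
                + "&httpConnectTimeout=60000&httpSocketTimeout=60000"
                + "&httpPoolSize=20&useSSL=false";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("REST connection established: " + !conn.isClosed());
        }
    }
}
```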
 :::note
@@ -294,7 +294,9 @@ The configuration parameters in properties are as follows:
 - TSDBDriver.PROPERTY_KEY_CONFIG_DIR: Effective only when using native JDBC connections. Client configuration file directory path, default value on Linux OS is `/etc/taos`, on Windows OS is `C:/TDengine/cfg`.
 - TSDBDriver.PROPERTY_KEY_CHARSET: Character set used by the client, default value is the system character set.
 - TSDBDriver.PROPERTY_KEY_LOCALE: Effective only when using native JDBC connections. Client locale, default value is the current system locale.
-- TSDBDriver.PROPERTY_KEY_TIME_ZONE: Effective only when using native JDBC connections. Client time zone, default value is the current system time zone. Due to historical reasons, we only support part of the POSIX standard, such as UTC-8 (representing Shanghai, China), GMT-8, Asia/Shanghai.
+- TSDBDriver.PROPERTY_KEY_TIME_ZONE:
+  - Native connections: Client time zone, default value is the current system time zone. Effective globally. Due to historical reasons, we only support part of the POSIX standard, such as UTC-8 (representing Shanghai, China), GMT-8, Asia/Shanghai.
+  - WebSocket connections: Client time zone, default value is the current system time zone. Effective on the connection. Only IANA time zones are supported, such as Asia/Shanghai. It is recommended not to set this parameter, as using the system time zone provides better performance.
 - TSDBDriver.HTTP_CONNECT_TIMEOUT: Connection timeout, in ms, default value is 60000. Effective only in REST connections.
 - TSDBDriver.HTTP_SOCKET_TIMEOUT: Socket timeout, in ms, default value is 60000. Effective only in REST connections and when batchfetch is set to false.
 - TSDBDriver.PROPERTY_KEY_MESSAGE_WAIT_TIMEOUT: Message timeout, in ms, default value is 60000. Effective only under WebSocket connections.
@@ -303,12 +305,14 @@ The configuration parameters in properties are as follows:
 - TSDBDriver.PROPERTY_KEY_ENABLE_COMPRESSION: Whether to enable compression during transmission. Effective only when using REST/WebSocket connections. true: enabled, false: not enabled. Default is false.
 - TSDBDriver.PROPERTY_KEY_ENABLE_AUTO_RECONNECT: Whether to enable auto-reconnect. Effective only when using WebSocket connections. true: enabled, false: not enabled. Default is false.
 > **Note**: Enabling auto-reconnect is only effective for simple SQL execution, schema-less writing, and data subscription. It is ineffective for parameter binding. Auto-reconnect is only effective for connections established through parameters specifying the database, and ineffective for later `use db` statements to switch databases.
 - TSDBDriver.PROPERTY_KEY_RECONNECT_INTERVAL_MS: Auto-reconnect retry interval, in milliseconds, default value 2000. Effective only when PROPERTY_KEY_ENABLE_AUTO_RECONNECT is true.
 - TSDBDriver.PROPERTY_KEY_RECONNECT_RETRY_COUNT: Auto-reconnect retry count, default value 3, effective only when PROPERTY_KEY_ENABLE_AUTO_RECONNECT is true.
 - TSDBDriver.PROPERTY_KEY_DISABLE_SSL_CERT_VALIDATION: Disable SSL certificate validation. Effective only when using WebSocket connections. true: enabled, false: not enabled. Default is false.
+- TSDBDriver.PROPERTY_KEY_APP_NAME: App name, can be used for display in the `show connections` query result. Effective only when using WebSocket connections. Default value is java.
+- TSDBDriver.PROPERTY_KEY_APP_IP: App IP, can be used for display in the `show connections` query result. Effective only when using WebSocket connections. Default value is empty.

 Additionally, for native JDBC connections, other parameters such as log level and SQL length can be specified by specifying the URL and Properties.
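A short sketch of passing several of these keys through `Properties` on a WebSocket connection; the time zone, app name, and reconnect settings are illustrative assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Properties;

import com.taosdata.jdbc.TSDBDriver;

public class ConnectionPropertiesSketch {
    public static void main(String[] args) throws SQLException {
        Properties props = new Properties();
        props.setProperty(TSDBDriver.PROPERTY_KEY_USER, "root");
        props.setProperty(TSDBDriver.PROPERTY_KEY_PASSWORD, "taosdata");
        // WebSocket connections accept IANA time zones only.
        props.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "Asia/Shanghai");
        // Shown in the `show connections` output.
        props.setProperty(TSDBDriver.PROPERTY_KEY_APP_NAME, "demo-app");
        props.setProperty(TSDBDriver.PROPERTY_KEY_ENABLE_AUTO_RECONNECT, "true");
        try (Connection conn = DriverManager.getConnection("jdbc:TAOS-WS://127.0.0.1:6041/", props)) {
            System.out.println("Connected with custom properties.");
        }
    }
}
```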
 **Priority of Configuration Parameters**
@@ -489,16 +493,16 @@ For example: if the password is specified as taosdata in the URL and as taosdemo
 List of interface methods that return `true` for supported features, others not explicitly mentioned return `false`.

 | Interface Method | Description |
 | ---------------- | ----------- |
 | `boolean nullsAreSortedAtStart()` | Determines if `NULL` values are sorted at the start |
 | `boolean storesLowerCaseIdentifiers()` | Determines if the database stores identifiers in lowercase |
 | `boolean supportsAlterTableWithAddColumn()` | Determines if the database supports adding columns with `ALTER TABLE` |
 | `boolean supportsAlterTableWithDropColumn()` | Determines if the database supports dropping columns with `ALTER TABLE` |
 | `boolean supportsColumnAliasing()` | Determines if the database supports column aliasing |
 | `boolean supportsGroupBy()` | Determines if the database supports `GROUP BY` statements |
 | `boolean isCatalogAtStart()` | Determines if the catalog name appears at the start of the fully qualified name in the database |
 | `boolean supportsCatalogsInDataManipulation()` | Determines if the database supports catalog names in data manipulation statements |
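A minimal sketch of checking a few of these capability flags at runtime through `DatabaseMetaData`; the endpoint and credentials are assumptions:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MetaDataCapabilitiesSketch {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:TAOS-WS://127.0.0.1:6041/";
        try (Connection conn = DriverManager.getConnection(url, "root", "taosdata")) {
            DatabaseMetaData meta = conn.getMetaData();
            System.out.println("GROUP BY supported:        " + meta.supportsGroupBy());
            System.out.println("Column aliasing supported: " + meta.supportsColumnAliasing());
            System.out.println("ALTER TABLE ADD COLUMN:    " + meta.supportsAlterTableWithAddColumn());
        }
    }
}
```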
 ### Connection Features


@@ -21,24 +21,25 @@ Supports Go 1.14 and above.
 ## Version History

 | driver-go Version | Major Changes | TDengine Version |
 |-------------------|---------------|------------------|
+| v3.6.0 | stmt2 native interface, DSN supports passwords containing special characters (url.QueryEscape). | 3.3.5.0 and higher |
 | v3.5.8 | Fixed null pointer exception. | - |
 | v3.5.7 | taosWS and taosRestful support passing request id. | - |
 | v3.5.6 | Improved websocket query and insert performance. | 3.3.2.0 and higher |
 | v3.5.5 | Restful supports skipping SSL certificate check. | - |
 | v3.5.4 | Compatible with TDengine 3.3.0.0 tmq raw data. | - |
 | v3.5.3 | Refactored taosWS. | - |
 | v3.5.2 | Websocket compression and optimized tmq subscription performance. | 3.2.3.0 and higher |
 | v3.5.1 | Native stmt query and geometry type support. | 3.2.1.0 and higher |
 | v3.5.0 | Support tmq get assignment and seek offset. | 3.0.5.0 and higher |
 | v3.3.1 | Schemaless protocol insert based on websocket. | 3.0.4.1 and higher |
 | v3.1.0 | Provided Kafka-like subscription API. | - |
 | v3.0.4 | Added request id related interfaces. | 3.0.2.2 and higher |
 | v3.0.3 | Websocket-based statement insert. | - |
 | v3.0.2 | Websocket-based data query and insert. | 3.0.1.5 and higher |
 | v3.0.1 | Websocket-based message subscription. | - |
 | v3.0.0 | Adapted to TDengine 3.0 query and insert. | 3.0.0.0 and higher |

 ## Exceptions and Error Codes
@@ -136,6 +137,8 @@ Full form of DSN:
 username:password@protocol(address)/dbname?param=value
 ```

+When the password contains special characters, it needs to be escaped using url.QueryEscape.
+
 ##### Native Connection

 Import the driver:
@@ -493,6 +496,43 @@ The `af` package provides more interfaces using native connections for parameter binding
   * **Interface Description**: Closes the statement.
   * **Return Value**: Error information.

+From version 3.6.0, the `stmt2` interface for binding parameters is provided.
+
+* `func (conn *Connector) Stmt2(reqID int64, singleTableBindOnce bool) *Stmt2`
+  * **Interface Description**: Returns a Stmt2 object bound to this connection.
+  * **Parameter Description**:
+    * `reqID`: Request ID.
+    * `singleTableBindOnce`: Indicates whether a single child table is bound only once during a single execution.
+  * **Return Value**: Stmt2 object.
+* `func (s *Stmt2) Prepare(sql string) error`
+  * **Interface Description**: Prepares an SQL statement.
+  * **Parameter Description**:
+    * `sql`: The statement for parameter binding.
+  * **Return Value**: Error information.
+* `func (s *Stmt2) Bind(params []*stmt.TaosStmt2BindData) error`
+  * **Interface Description**: Binds data to the prepared statement.
+  * **Parameter Description**:
+    * `params`: The data to bind.
+  * **Return Value**: Error information.
+* `func (s *Stmt2) Execute() error`
+  * **Interface Description**: Executes the batch.
+  * **Return Value**: Error information.
+* `func (s *Stmt2) GetAffectedRows() int`
+  * **Interface Description**: Gets the number of affected rows (only valid for insert statements).
+  * **Return Value**: Number of affected rows.
+* `func (s *Stmt2) UseResult() (driver.Rows, error)`
+  * **Interface Description**: Retrieves the result set (only valid for query statements).
+  * **Return Value**: Result set Rows object, error information.
+* `func (s *Stmt2) Close() error`
+  * **Interface Description**: Closes the statement.
+  * **Return Value**: Error information.
+
 The `ws/stmt` package provides interfaces for parameter binding via WebSocket

 * `func (c *Connector) Init() (*Stmt, error)`


@@ -19,7 +19,7 @@
 <dependency>
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>3.4.0</version>
+<version>3.5.0</version>
 </dependency>
 <dependency>
 <groupId>org.locationtech.jts</groupId>


@@ -47,7 +47,7 @@
 <dependency>
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>3.4.0</version>
+<version>3.5.0</version>
 </dependency>
 </dependencies>


@@ -18,7 +18,7 @@
 <dependency>
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>3.4.0</version>
+<version>3.5.0</version>
 </dependency>
 <!-- druid -->
 <dependency>


@@ -17,7 +17,7 @@
 <dependency>
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>3.4.0</version>
+<version>3.5.0</version>
 </dependency>
 <dependency>
 <groupId>com.google.guava</groupId>


@@ -47,7 +47,7 @@
 <dependency>
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>3.4.0</version>
+<version>3.5.0</version>
 </dependency>
 <dependency>


@@ -70,7 +70,7 @@
 <dependency>
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>3.4.0</version>
+<version>3.5.0</version>
 </dependency>
 <dependency>


@@ -67,7 +67,7 @@
 <dependency>
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>3.4.0</version>
+<version>3.5.0</version>
 <!-- <scope>system</scope>-->
 <!-- <systemPath>${project.basedir}/src/main/resources/lib/taos-jdbcdriver-2.0.15-dist.jar</systemPath>-->
 </dependency>


@@ -2,7 +2,7 @@ module goexample

 go 1.17

-require github.com/taosdata/driver-go/v3 v3.5.6
+require github.com/taosdata/driver-go/v3 v3.6.0

 require (
 	github.com/google/uuid v1.3.0 // indirect


@@ -18,8 +18,8 @@ github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+
 github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
 github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/taosdata/driver-go/v3 v3.5.6 h1:LDVtMyT3B9p2VREsd5KKM91D4Y7P4kSdh2SQumXi8bk=
-github.com/taosdata/driver-go/v3 v3.5.6/go.mod h1:H2vo/At+rOPY1aMzUV9P49SVX7NlXb3LAbKw+MCLrmU=
+github.com/taosdata/driver-go/v3 v3.6.0 h1:4dRXMl01DhIS5xBXUvtkkB+MjL8g64zN674xKd+ojTE=
+github.com/taosdata/driver-go/v3 v3.6.0/go.mod h1:H2vo/At+rOPY1aMzUV9P49SVX7NlXb3LAbKw+MCLrmU=
 gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c h1:dUUwHk2QECo/6vqA44rthZ8ie2QXMNeKRTHCNY2nXvo=
 gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@@ -0,0 +1,84 @@
package main

import (
	"database/sql/driver"
	"fmt"
	"log"
	"math/rand"
	"time"

	"github.com/taosdata/driver-go/v3/af"
	"github.com/taosdata/driver-go/v3/common"
	"github.com/taosdata/driver-go/v3/common/stmt"
)

func main() {
	host := "127.0.0.1"
	numOfSubTable := 10
	numOfRow := 10
	db, err := af.Open(host, "root", "taosdata", "", 0)
	if err != nil {
		log.Fatalln("Failed to connect to " + host + "; ErrMessage: " + err.Error())
	}
	defer db.Close()
	// prepare database and table
	_, err = db.Exec("CREATE DATABASE IF NOT EXISTS power")
	if err != nil {
		log.Fatalln("Failed to create database power, ErrMessage: " + err.Error())
	}
	_, err = db.Exec("USE power")
	if err != nil {
		log.Fatalln("Failed to use database power, ErrMessage: " + err.Error())
	}
	_, err = db.Exec("CREATE STABLE IF NOT EXISTS meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))")
	if err != nil {
		log.Fatalln("Failed to create stable meters, ErrMessage: " + err.Error())
	}
	// prepare statement
	sql := "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)"
	reqID := common.GetReqID()
	stmt2 := db.Stmt2(reqID, false)
	err = stmt2.Prepare(sql)
	if err != nil {
		log.Fatalln("Failed to prepare sql, sql: " + sql + ", ErrMessage: " + err.Error())
	}
	for i := 1; i <= numOfSubTable; i++ {
		// generate column data
		current := time.Now()
		columns := make([][]driver.Value, 4)
		for j := 0; j < numOfRow; j++ {
			columns[0] = append(columns[0], current.Add(time.Millisecond*time.Duration(j)))
			columns[1] = append(columns[1], rand.Float32()*30)
			columns[2] = append(columns[2], rand.Int31n(300))
			columns[3] = append(columns[3], rand.Float32())
		}
		// generate bind data
		tableName := fmt.Sprintf("d_bind_%d", i)
		tags := []driver.Value{int32(i), []byte(fmt.Sprintf("location_%d", i))}
		bindData := []*stmt.TaosStmt2BindData{
			{
				TableName: tableName,
				Tags:      tags,
				Cols:      columns,
			},
		}
		// bind params
		err = stmt2.Bind(bindData)
		if err != nil {
			log.Fatalln("Failed to bind params, ErrMessage: " + err.Error())
		}
		// execute batch
		err = stmt2.Execute()
		if err != nil {
			log.Fatalln("Failed to exec, ErrMessage: " + err.Error())
		}
		// get affected rows
		affected := stmt2.GetAffectedRows()
		// you can check exeResult here
		fmt.Printf("Successfully inserted %d rows to %s.\n", affected, tableName)
	}
	err = stmt2.Close()
	if err != nil {
		log.Fatal("failed to close statement, err:", err)
	}
}


@@ -22,7 +22,7 @@
 <dependency>
 <groupId>com.taosdata.jdbc</groupId>
 <artifactId>taos-jdbcdriver</artifactId>
-<version>3.4.0</version>
+<version>3.5.0</version>
 </dependency>
 <!-- ANCHOR_END: dep-->


@@ -0,0 +1,87 @@
package com.taos.example;

import com.taosdata.jdbc.ws.TSWSPreparedStatement;

import java.sql.*;
import java.util.ArrayList;
import java.util.Random;

// ANCHOR: para_bind
public class WSParameterBindingExtendInterfaceDemo {

    // modify host to your own
    private static final String host = "127.0.0.1";
    private static final Random random = new Random(System.currentTimeMillis());
    private static final int numOfSubTable = 10, numOfRow = 10;

    public static void main(String[] args) throws SQLException {
        String jdbcUrl = "jdbc:TAOS-WS://" + host + ":6041";
        try (Connection conn = DriverManager.getConnection(jdbcUrl, "root", "taosdata")) {
            init(conn);

            String sql = "INSERT INTO ? USING power.meters TAGS(?,?) VALUES (?,?,?,?)";

            try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {

                for (int i = 1; i <= numOfSubTable; i++) {
                    // set table name
                    pstmt.setTableName("d_bind_" + i);

                    // set tags
                    pstmt.setTagInt(0, i);
                    pstmt.setTagString(1, "location_" + i);

                    // set column ts
                    ArrayList<Long> tsList = new ArrayList<>();
                    long current = System.currentTimeMillis();
                    for (int j = 0; j < numOfRow; j++)
                        tsList.add(current + j);
                    pstmt.setTimestamp(0, tsList);

                    // set column current
                    ArrayList<Float> currentList = new ArrayList<>();
                    for (int j = 0; j < numOfRow; j++)
                        currentList.add(random.nextFloat() * 30);
                    pstmt.setFloat(1, currentList);

                    // set column voltage
                    ArrayList<Integer> voltageList = new ArrayList<>();
                    for (int j = 0; j < numOfRow; j++)
                        voltageList.add(random.nextInt(300));
                    pstmt.setInt(2, voltageList);

                    // set column phase
                    ArrayList<Float> phaseList = new ArrayList<>();
                    for (int j = 0; j < numOfRow; j++)
                        phaseList.add(random.nextFloat());
                    pstmt.setFloat(3, phaseList);

                    // add column
                    pstmt.columnDataAddBatch();
                }
                // execute column
                pstmt.columnDataExecuteBatch();
                // you can check exeResult here
                System.out.println("Successfully inserted " + (numOfSubTable * numOfRow) + " rows to power.meters.");
            }
        } catch (Exception ex) {
            // please refer to the JDBC specifications for detailed exceptions info
            System.out.printf("Failed to insert to table meters using stmt, %sErrMessage: %s%n",
                    ex instanceof SQLException ? "ErrCode: " + ((SQLException) ex).getErrorCode() + ", " : "",
                    ex.getMessage());
            // Print stack trace for context in examples. Use logging in production.
            ex.printStackTrace();
            throw ex;
        }
    }

    private static void init(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE DATABASE IF NOT EXISTS power");
            stmt.execute("USE power");
            stmt.execute(
                    "CREATE STABLE IF NOT EXISTS power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))");
        }
    }
}
// ANCHOR_END: para_bind


@@ -1,12 +1,10 @@
 package com.taos.example;

-import com.taosdata.jdbc.ws.TSWSPreparedStatement;
-
 import java.sql.*;
 import java.util.Random;

 // ANCHOR: para_bind
-public class WSParameterBindingBasicDemo {
+public class WSParameterBindingStdInterfaceDemo {

     // modify host to your own
     private static final String host = "127.0.0.1";
@@ -19,31 +17,29 @@ public class WSParameterBindingBasicDemo {
         try (Connection conn = DriverManager.getConnection(jdbcUrl, "root", "taosdata")) {
             init(conn);

-            String sql = "INSERT INTO ? USING power.meters TAGS(?,?) VALUES (?,?,?,?)";
+            // If you are certain that the child table exists, you can avoid binding the tag column to improve performance.
+            String sql = "INSERT INTO power.meters (tbname, groupid, location, ts, current, voltage, phase) VALUES (?,?,?,?,?,?,?)";

-            try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
+            try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
+                long current = System.currentTimeMillis();
                 for (int i = 1; i <= numOfSubTable; i++) {
-                    // set table name
-                    pstmt.setTableName("d_bind_" + i);
-                    // set tags
-                    pstmt.setTagInt(0, i);
-                    pstmt.setTagString(1, "location_" + i);
-                    // set columns
-                    long current = System.currentTimeMillis();
                     for (int j = 0; j < numOfRow; j++) {
-                        pstmt.setTimestamp(1, new Timestamp(current + j));
-                        pstmt.setFloat(2, random.nextFloat() * 30);
-                        pstmt.setInt(3, random.nextInt(300));
-                        pstmt.setFloat(4, random.nextFloat());
+                        pstmt.setString(1, "d_bind_" + i);
+                        pstmt.setInt(2, i);
+                        pstmt.setString(3, "location_" + i);
+                        pstmt.setTimestamp(4, new Timestamp(current + j));
+                        pstmt.setFloat(5, random.nextFloat() * 30);
+                        pstmt.setInt(6, random.nextInt(300));
+                        pstmt.setFloat(7, random.nextFloat());
                         pstmt.addBatch();
                     }
-                    int[] exeResult = pstmt.executeBatch();
-                    // you can check exeResult here
-                    System.out.println("Successfully inserted " + exeResult.length + " rows to power.meters.");
                 }
+                int[] exeResult = pstmt.executeBatch();
+                // you can check exeResult here
+                System.out.println("Successfully inserted " + exeResult.length + " rows to power.meters.");
             }
         } catch (Exception ex) {
             // please refer to the JDBC specifications for detailed exceptions info

View File

@ -118,9 +118,14 @@ public class TestAll {
} }
@Test @Test
public void testWsStmtBasic() throws Exception { public void testWsStmtStd() throws Exception {
dropDB("power"); dropDB("power");
WSParameterBindingBasicDemo.main(args); WSParameterBindingStdInterfaceDemo.main(args);
}
@Test
public void testWsStmtExtend() throws Exception {
dropDB("power");
WSParameterBindingExtendInterfaceDemo.main(args);
} }
@Test @Test

View File

@ -89,7 +89,7 @@ TDengine provides a rich set of application development interfaces; to help users quickly
<dependency> <dependency>
<groupId>com.taosdata.jdbc</groupId> <groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId> <artifactId>taos-jdbcdriver</artifactId>
<version>3.4.0</version> <version>3.5.0</version>
</dependency> </dependency>
``` ```

View File

@ -26,10 +26,16 @@ import TabItem from "@theme/TabItem";
## WebSocket Connection ## WebSocket Connection
<Tabs defaultValue="java" groupId="lang"> <Tabs defaultValue="java" groupId="lang">
<TabItem value="java" label="Java"> <TabItem value="java" label="Java">
Parameter binding can be used through two kinds of interfaces: the standard JDBC interface and an extended interface; the extended interface offers better performance.
```java ```java
{{#include docs/examples/java/src/main/java/com/taos/example/WSParameterBindingBasicDemo.java:para_bind}} {{#include docs/examples/java/src/main/java/com/taos/example/WSParameterBindingStdInterfaceDemo.java:para_bind}}
``` ```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/WSParameterBindingExtendInterfaceDemo.java:para_bind}}
```
Here is a [more detailed parameter binding example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/WSParameterBindingFullDemo.java) Here is a [more detailed parameter binding example](https://github.com/taosdata/TDengine/blob/main/docs/examples/java/src/main/java/com/taos/example/WSParameterBindingFullDemo.java)
@ -91,9 +97,20 @@ import TabItem from "@theme/TabItem";
``` ```
</TabItem> </TabItem>
<TabItem label="Go" value="go"> <TabItem label="Go" value="go">
Sample code for binding parameters with stmt2 (Go connector v3.6.0 and above, TDengine v3.3.5.0 and above):
```go
{{#include docs/examples/go/stmt2/native/main.go}}
```
Sample code for binding parameters with stmt:
```go ```go
{{#include docs/examples/go/stmt/native/main.go}} {{#include docs/examples/go/stmt/native/main.go}}
``` ```
</TabItem> </TabItem>
<TabItem label="Rust" value="rust"> <TabItem label="Rust" value="rust">

View File

@ -185,7 +185,8 @@ The valid value of charset is UTF-8.
|Parameter|Supported Version|Dynamic Modification|Description| |Parameter|Supported Version|Dynamic Modification|Description|
|--------------------------|----------|-------------------------|-| |--------------------------|----------|-------------------------|-|
|supportVnodes | |Dynamically modifiable, effective immediately |Maximum number of vnodes supported by a dnode; range 0-4096; default: 2 × CPU cores + 5| |supportVnodes | |Dynamically modifiable, effective immediately |Maximum number of vnodes supported by a dnode; range 0-4096; default: 2 × CPU cores + 5|
|numOfCommitThreads | |Dynamically modifiable, effective after restart |Maximum number of commit (disk-flush) threads; range 0-1024; default 4| |numOfCommitThreads | |Dynamically modifiable, effective after restart |Maximum number of commit (disk-flush) threads; range 1-1024; default 4|
|numOfCompactThreads | |Dynamically modifiable, effective after restart |Maximum number of compaction threads; range 1-16; default 2|
|numOfMnodeReadThreads | |Dynamically modifiable, effective after restart |Number of mnode read threads; range 0-1024; default: one quarter of CPU cores (at most 4)| |numOfMnodeReadThreads | |Dynamically modifiable, effective after restart |Number of mnode read threads; range 0-1024; default: one quarter of CPU cores (at most 4)|
|numOfVnodeQueryThreads | |Dynamically modifiable, effective after restart |Number of vnode query threads; range 0-1024; default: twice the number of CPU cores (at most 16)| |numOfVnodeQueryThreads | |Dynamically modifiable, effective after restart |Number of vnode query threads; range 0-1024; default: twice the number of CPU cores (at most 16)|
|numOfVnodeFetchThreads | |Dynamically modifiable, effective after restart |Number of vnode fetch threads; range 0-1024; default: one quarter of CPU cores (at most 4)| |numOfVnodeFetchThreads | |Dynamically modifiable, effective after restart |Number of vnode fetch threads; range 0-1024; default: one quarter of CPU cores (at most 4)|
@ -250,7 +251,7 @@ The valid value of charset is UTF-8.
|minimalLogDirGB | |Not dynamically modifiable |Stop writing logs when the available space on the disk holding the log directory falls below this value, in GB; default 1| |minimalLogDirGB | |Not dynamically modifiable |Stop writing logs when the available space on the disk holding the log directory falls below this value, in GB; default 1|
|numOfLogLines | |Dynamically modifiable, effective immediately |Maximum number of lines in a single log file; default 10,000,000| |numOfLogLines | |Dynamically modifiable, effective immediately |Maximum number of lines in a single log file; default 10,000,000|
|asyncLog | |Dynamically modifiable, effective immediately |Log writing mode, 0: synchronous, 1: asynchronous; default 1| |asyncLog | |Dynamically modifiable, effective immediately |Log writing mode, 0: synchronous, 1: asynchronous; default 1|
|logKeepDays | |Dynamically modifiable, effective immediately |Maximum retention time of log files, in days; default 0, meaning unlimited retention: log files are not renamed and no new log file is rolled out, although the file contents may still roll over depending on the log file size setting; when set to a value greater than 0, a log file that reaches the size limit is renamed to taosdlog.yyy, where yyy is the timestamp of the file's last modification, and a new log file is rolled out| |logKeepDays | |Dynamically modifiable, effective immediately |Maximum retention time of log files, in days; default 0; a value less than or equal to 0 means only two log files are kept and rotated, and logs beyond these two files are deleted; when set to a value greater than 0, a log file that reaches the size limit is renamed to taosdlog.yyy, where yyy is the timestamp of the file's last modification, and a new log file is rolled out|
|slowLogThreshold|After 3.3.3.0|Dynamically modifiable, effective immediately |Slow query threshold; a query whose duration is greater than or equal to this value is considered slow; unit: seconds; default 3 | |slowLogThreshold|After 3.3.3.0|Dynamically modifiable, effective immediately |Slow query threshold; a query whose duration is greater than or equal to this value is considered slow; unit: seconds; default 3 |
|slowLogMaxLen |After 3.3.3.0|Dynamically modifiable, effective immediately |Maximum length of the slow query log; range 1-16384; default 4096| |slowLogMaxLen |After 3.3.3.0|Dynamically modifiable, effective immediately |Maximum length of the slow query log; range 1-16384; default 4096|
|slowLogScope |After 3.3.3.0|Dynamically modifiable, effective immediately |Type of slow queries to record; valid values ALL/QUERY/INSERT/OTHERS/NONE; default QUERY| |slowLogScope |After 3.3.3.0|Dynamically modifiable, effective immediately |Type of slow queries to record; valid values ALL/QUERY/INSERT/OTHERS/NONE; default QUERY|
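Parameters marked above as dynamically modifiable can typically be changed at runtime instead of editing taos.cfg and restarting. Below is a minimal, hedged sketch of doing so over JDBC; the ALTER ALL DNODES syntax, the connection URL, and the credentials are assumptions to verify against the SQL reference of your TDengine version.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AlterDnodeConfigSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and credentials; adjust for your deployment.
        String url = "jdbc:TAOS-WS://127.0.0.1:6041/?user=root&password=taosdata";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // Assumed syntax for applying a dynamically modifiable parameter to all dnodes.
            stmt.execute("ALTER ALL DNODES 'supportVnodes' '256'");
            System.out.println("supportVnodes updated; per the table above it takes effect immediately");
        }
    }
}
```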

View File

@ -14,7 +14,7 @@ taosAdapter is a companion tool for TDengine that bridges the TDengine cluster and applications
taosAdapter provides the following features: taosAdapter provides the following features:
- RESTful interface - WebSocket/RESTful interfaces
- Compatible with the InfluxDB v1 write interface - Compatible with the InfluxDB v1 write interface
- Compatible with OpenTSDB JSON and telnet format writes - Compatible with OpenTSDB JSON and telnet format writes
- Seamless connection to Telegraf - Seamless connection to Telegraf

View File

@ -138,6 +138,7 @@ SELECT COUNT(*) FROM meters WHERE _rowts - voltage > 1000000;
- The width of the aggregation window is specified by the INTERVAL keyword, with a minimum interval of 10 milliseconds (10a); an offset is also supported (the offset must be smaller than the interval), i.e. the offset of the time-window partitioning relative to "UTC time 0". The SLIDING clause specifies the forward increment of the aggregation window, i.e. how far the window slides forward each time. - The width of the aggregation window is specified by the INTERVAL keyword, with a minimum interval of 10 milliseconds (10a); an offset is also supported (the offset must be smaller than the interval), i.e. the offset of the time-window partitioning relative to "UTC time 0". The SLIDING clause specifies the forward increment of the aggregation window, i.e. how far the window slides forward each time.
- When using the INTERVAL clause, except in very special cases, the timezone parameter in taos.cfg should be set to the same value on both the client and the server, to avoid the severe performance impact of time-handling functions frequently converting across time zones. - When using the INTERVAL clause, except in very special cases, the timezone parameter in taos.cfg should be set to the same value on both the client and the server, to avoid the severe performance impact of time-handling functions frequently converting across time zones.
- The time series in the returned results is strictly monotonically increasing. - The time series in the returned results is strictly monotonically increasing.
- When AUTO is used as the window offset, AUTO may not take effect if the WHERE time condition is complex, for example several ANDs/ORs/INs combined; in that case the issue can be resolved by specifying the window offset manually.
- When AUTO is used as the window offset and the unit of the window width is d (day), n (month), w (week), or y (year), for example INTERVAL(1d, AUTO) or INTERVAL(3w, AUTO), the TSMA optimization cannot take effect. If a TSMA has been created manually on the target table, the statement exits with an error; in that case, explicitly specify the SKIP_TSMA hint or do not use AUTO as the window offset. - When AUTO is used as the window offset and the unit of the window width is d (day), n (month), w (week), or y (year), for example INTERVAL(1d, AUTO) or INTERVAL(3w, AUTO), the TSMA optimization cannot take effect. If a TSMA has been created manually on the target table, the statement exits with an error; in that case, explicitly specify the SKIP_TSMA hint or do not use AUTO as the window offset.
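As a brief illustration of the AUTO window offset discussed above, here is a hedged Java sketch that runs a daily INTERVAL query; the table, columns, JDBC URL, and credentials are assumptions borrowed from the smart-meter examples elsewhere in this commit.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IntervalAutoSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS-WS://127.0.0.1:6041/?user=root&password=taosdata"; // placeholder
        String sql = "SELECT _wstart, COUNT(*) FROM power.meters "
                + "WHERE ts >= '2024-12-01 00:00:00' AND ts < '2024-12-31 00:00:00' "
                + "INTERVAL(1d, AUTO)"; // AUTO lets the server derive the offset from the WHERE time range (see the note above)
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.println(rs.getTimestamp(1) + " -> " + rs.getLong(2));
            }
        }
    }
}
```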
### State Window ### State Window

View File

@ -37,6 +37,6 @@ description: Valid character sets and naming restrictions
- The number of databases, supertables, and tables is not limited by the system and is constrained only by system resources - The number of databases, supertables, and tables is not limited by the system and is constrained only by system resources
- The number of database replicas can only be set to 1 or 3 - The number of database replicas can only be set to 1 or 3
- The maximum length of a user name is 23 bytes - The maximum length of a user name is 23 bytes
- The maximum length of a user password is 31 bytes - The length of a user password must be 8-16 bytes
- The total number of data rows depends on available resources - The total number of data rows depends on available resources
- The maximum number of vnodes in a single database is 1024 - The maximum number of vnodes in a single database is 1024
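To illustrate the new 8-16 byte password rule, the following is a minimal, hedged sketch of creating a user from Java; the user name, password, JDBC URL, and credentials are placeholders, and any password-complexity settings of your deployment still apply.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateUserSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS-WS://127.0.0.1:6041/?user=root&password=taosdata"; // placeholder
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // The password below is 12 bytes long, inside the 8-16 byte range.
            stmt.execute("CREATE USER app_writer PASS 'Secret_12345'");
        }
    }
}
```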

View File

@ -33,6 +33,7 @@ The TDengine JDBC driver implementation stays as consistent as possible with relational database drivers
| taos-jdbcdriver Version | Major Changes | TDengine Version | | taos-jdbcdriver Version | Major Changes | TDengine Version |
| ------------------| ---------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- | | ------------------| ---------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------- |
| 3.5.0 | 1. Optimized parameter binding performance over WebSocket connections and support for binary data in parameter-binding queries <br/> 2. Improved WebSocket connection performance on small queries <br/> 3. Support for setting the time zone and application info on WebSocket connections | 3.3.5.0 and higher |
| 3.4.0 | 1. Replaced the fastjson library with jackson <br/> 2. WebSocket uses a dedicated protocol identifier <br/> 3. Optimized background fetch-thread usage to avoid timeouts caused by user misuse | - | | 3.4.0 | 1. Replaced the fastjson library with jackson <br/> 2. WebSocket uses a dedicated protocol identifier <br/> 3. Optimized background fetch-thread usage to avoid timeouts caused by user misuse | - |
| 3.3.4 | Fixed getInt throwing an error when the data type is float | - | | 3.3.4 | Fixed getInt throwing an error when the data type is float | - |
| 3.3.3 | Fixed a memory leak caused by closing WebSocket statements | - | | 3.3.3 | Fixed a memory leak caused by closing WebSocket statements | - |
@ -243,13 +244,13 @@ In TDengine, as long as one of the nodes in firstEp and secondEp is valid, the connection can
For WebSocket connections, the configuration parameters in the URL are as follows: For WebSocket connections, the configuration parameters in the URL are as follows:
- user: the user name for logging in to TDengine; default 'root'. - user: the user name for logging in to TDengine; default 'root'.
- password: the login password; default 'taosdata'. - password: the login password; default 'taosdata'.
- charset: the character set used to parse string data when batch fetching is enabled.
- batchErrorIgnore: true: when executing Statement.executeBatch, if one SQL statement in the middle fails, the following SQL statements are still executed; false: no statements after the failed SQL are executed. Default: false. - batchErrorIgnore: true: when executing Statement.executeBatch, if one SQL statement in the middle fails, the following SQL statements are still executed; false: no statements after the failed SQL are executed. Default: false.
- httpConnectTimeout: connection timeout in ms; default 60000. - httpConnectTimeout: connection timeout in ms; default 60000.
- messageWaitTimeout: message timeout in ms; default 60000. - messageWaitTimeout: message timeout in ms; default 60000.
- useSSL: whether SSL is used in the connection. - useSSL: whether SSL is used in the connection.
- timezone: the time zone used by the client, effective on the connection; default is the system time zone. It is recommended not to set it; using the system time zone gives better performance.
**Note**: some configuration items (e.g. locale, timezone) do not take effect in WebSocket connections. **Note**: some configuration items (e.g. locale, charset) do not take effect in WebSocket connections.
**REST connection** **REST connection**
A JDBC REST connection does not require the client driver. Compared with the JDBC native connection, you only need to: A JDBC REST connection does not require the client driver. Compared with the JDBC native connection, you only need to:
@ -261,14 +262,13 @@ In TDengine, as long as one of the nodes in firstEp and secondEp is valid, the connection can
For REST connections, the configuration parameters in the URL are as follows: For REST connections, the configuration parameters in the URL are as follows:
- user: the user name for logging in to TDengine; default 'root'. - user: the user name for logging in to TDengine; default 'root'.
- password: the login password; default 'taosdata'. - password: the login password; default 'taosdata'.
- charset: the character set used to parse string data when batch fetching is enabled.
- batchErrorIgnore: true: when executing Statement.executeBatch, if one SQL statement in the middle fails, the following SQL statements are still executed; false: no statements after the failed SQL are executed. Default: false. - batchErrorIgnore: true: when executing Statement.executeBatch, if one SQL statement in the middle fails, the following SQL statements are still executed; false: no statements after the failed SQL are executed. Default: false.
- httpConnectTimeout: connection timeout in ms; default 60000. - httpConnectTimeout: connection timeout in ms; default 60000.
- httpSocketTimeout: socket timeout in ms; default 60000. - httpSocketTimeout: socket timeout in ms; default 60000.
- useSSL: whether SSL is used in the connection. - useSSL: whether SSL is used in the connection.
- httpPoolSize: REST concurrent request pool size; default 20. - httpPoolSize: REST concurrent request pool size; default 20.
**Note**: some configuration items (e.g. locale, timezone) do not take effect in REST connections. **Note**: some configuration items (e.g. locale, charset, and timezone) do not take effect in REST connections.
:::note :::note
@ -291,7 +291,9 @@ The configuration parameters in properties are as follows:
- TSDBDriver.PROPERTY_KEY_CONFIG_DIR: effective only for JDBC native connections. Path of the client configuration directory; default `/etc/taos` on Linux and `C:/TDengine/cfg` on Windows. - TSDBDriver.PROPERTY_KEY_CONFIG_DIR: effective only for JDBC native connections. Path of the client configuration directory; default `/etc/taos` on Linux and `C:/TDengine/cfg` on Windows.
- TSDBDriver.PROPERTY_KEY_CHARSET: the character set used by the client; default is the system character set. - TSDBDriver.PROPERTY_KEY_CHARSET: the character set used by the client; default is the system character set.
- TSDBDriver.PROPERTY_KEY_LOCALE: effective only for JDBC native connections. The client locale; default is the current system locale. - TSDBDriver.PROPERTY_KEY_LOCALE: effective only for JDBC native connections. The client locale; default is the current system locale.
- TSDBDriver.PROPERTY_KEY_TIME_ZONE: effective only for JDBC native connections. The time zone used by the client; default is the current system time zone. For historical reasons, only part of the POSIX standard is supported, such as UTC-8 (for Shanghai, China), GMT-8, and Asia/Shanghai. - TSDBDriver.PROPERTY_KEY_TIME_ZONE:
- Native connection: the time zone used by the client; default is the current system time zone, effective globally. For historical reasons, only part of the POSIX standard is supported, such as UTC-8 (for Shanghai, China), GMT-8, and Asia/Shanghai.
- WebSocket connection: the time zone used by the client, effective on the connection; default is the system time zone. Only IANA time zones, i.e. the Asia/Shanghai form, are supported. It is recommended not to set it; using the system time zone gives better performance.
- TSDBDriver.HTTP_CONNECT_TIMEOUT: connection timeout in ms; default 60000. Effective only for REST connections. - TSDBDriver.HTTP_CONNECT_TIMEOUT: connection timeout in ms; default 60000. Effective only for REST connections.
- TSDBDriver.HTTP_SOCKET_TIMEOUT: socket timeout in ms; default 60000. Effective only for REST connections with batchfetch set to false. - TSDBDriver.HTTP_SOCKET_TIMEOUT: socket timeout in ms; default 60000. Effective only for REST connections with batchfetch set to false.
- TSDBDriver.PROPERTY_KEY_MESSAGE_WAIT_TIMEOUT: message timeout in ms; default 60000. Effective only for WebSocket connections. - TSDBDriver.PROPERTY_KEY_MESSAGE_WAIT_TIMEOUT: message timeout in ms; default 60000. Effective only for WebSocket connections.
@ -299,12 +301,15 @@ The configuration parameters in properties are as follows:
- TSDBDriver.HTTP_POOL_SIZE: REST concurrent request pool size; default 20. - TSDBDriver.HTTP_POOL_SIZE: REST concurrent request pool size; default 20.
- TSDBDriver.PROPERTY_KEY_ENABLE_COMPRESSION: whether compression is enabled during transmission. Effective only for REST/WebSocket connections. true: enabled, false: disabled. Default false. - TSDBDriver.PROPERTY_KEY_ENABLE_COMPRESSION: whether compression is enabled during transmission. Effective only for REST/WebSocket connections. true: enabled, false: disabled. Default false.
- TSDBDriver.PROPERTY_KEY_ENABLE_AUTO_RECONNECT: whether automatic reconnection is enabled. Effective only for WebSocket connections. true: enabled, false: disabled. Default false. - TSDBDriver.PROPERTY_KEY_ENABLE_AUTO_RECONNECT: whether automatic reconnection is enabled. Effective only for WebSocket connections. true: enabled, false: disabled. Default false.
> **Note**: automatic reconnection works only for plain SQL execution, schemaless writes, and data subscription; it does not work for parameter binding. It also only honors the database specified via parameters when the connection is established and does not apply to a database switched to later with `use db`. > **Note**: automatic reconnection works only for plain SQL execution, schemaless writes, and data subscription; it does not work for parameter binding. It also only honors the database specified via parameters when the connection is established and does not apply to a database switched to later with `use db`.
- TSDBDriver.PROPERTY_KEY_RECONNECT_INTERVAL_MS: retry interval for automatic reconnection, in milliseconds; default 2000. Effective only when PROPERTY_KEY_ENABLE_AUTO_RECONNECT is true. - TSDBDriver.PROPERTY_KEY_RECONNECT_INTERVAL_MS: retry interval for automatic reconnection, in milliseconds; default 2000. Effective only when PROPERTY_KEY_ENABLE_AUTO_RECONNECT is true.
- TSDBDriver.PROPERTY_KEY_RECONNECT_RETRY_COUNT: number of automatic reconnection retries; default 3. Effective only when PROPERTY_KEY_ENABLE_AUTO_RECONNECT is true. - TSDBDriver.PROPERTY_KEY_RECONNECT_RETRY_COUNT: number of automatic reconnection retries; default 3. Effective only when PROPERTY_KEY_ENABLE_AUTO_RECONNECT is true.
- TSDBDriver.PROPERTY_KEY_DISABLE_SSL_CERT_VALIDATION: disables SSL certificate validation. Effective only for WebSocket connections. true: enabled, false: disabled. Default false. - TSDBDriver.PROPERTY_KEY_DISABLE_SSL_CERT_VALIDATION: disables SSL certificate validation. Effective only for WebSocket connections. true: enabled, false: disabled. Default false.
- TSDBDriver.PROPERTY_KEY_APP_NAME: application name, shown in the results of `show connections`. Effective only for WebSocket connections. Default: java.
- TSDBDriver.PROPERTY_KEY_APP_IP: application IP, shown in the results of `show connections`. Effective only for WebSocket connections. Default: empty.
In addition, for JDBC native connections, other parameters such as the log level and SQL length can be specified via the URL and Properties. In addition, for JDBC native connections, other parameters such as the log level and SQL length can be specified via the URL and Properties.
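A minimal Java sketch of opening a WebSocket connection with the properties described above; the host, credentials, and property values are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;
import com.taosdata.jdbc.TSDBDriver;

public class WsConnectWithPropsSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and credentials; adjust for your deployment.
        String url = "jdbc:TAOS-WS://127.0.0.1:6041/?user=root&password=taosdata";
        Properties props = new Properties();
        // IANA time zone, applied on the connection (WebSocket only).
        props.setProperty(TSDBDriver.PROPERTY_KEY_TIME_ZONE, "Asia/Shanghai");
        // Shown in the results of `show connections` (WebSocket only).
        props.setProperty(TSDBDriver.PROPERTY_KEY_APP_NAME, "meter-ingest");
        props.setProperty(TSDBDriver.PROPERTY_KEY_ENABLE_AUTO_RECONNECT, "true");
        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("connected: " + !conn.isClosed());
        }
    }
}
```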
**Priority of configuration parameters** **Priority of configuration parameters**

View File

@ -23,24 +23,25 @@ import RequestId from "./_request_id.mdx";
## Version History ## Version History
| driver-go Version | Major Changes | TDengine Version | | driver-go Version | Major Changes | TDengine Version |
|-------------|-------------------------------------|---------------| |--------------|--------------------------------------------|---------------|
| v3.5.8 | Fixed a null pointer exception | - | | v3.6.0 | Native stmt2 interface; DSN supports passwords with special characters (url.QueryEscape) | 3.3.5.0 and higher |
| v3.5.7 | taosWS and taosRestful support passing a request id | - | | v3.5.8 | Fixed a null pointer exception | - |
| v3.5.6 | Improved websocket query and write performance | 3.3.2.0 and higher | | v3.5.7 | taosWS and taosRestful support passing a request id | - |
| v3.5.5 | restful supports skipping ssl certificate checks | - | | v3.5.6 | Improved websocket query and write performance | 3.3.2.0 and higher |
| v3.5.4 | Compatible with TDengine 3.3.0.0 tmq raw data | - | | v3.5.5 | restful supports skipping ssl certificate checks | - |
| v3.5.3 | Refactored taosWS | - | | v3.5.4 | Compatible with TDengine 3.3.0.0 tmq raw data | - |
| v3.5.2 | websocket compression and improved message subscription performance | 3.2.3.0 and higher | | v3.5.3 | Refactored taosWS | - |
| v3.5.1 | Native stmt queries and geometry type support | 3.2.1.0 and higher | | v3.5.2 | websocket compression and improved message subscription performance | 3.2.3.0 and higher |
| v3.5.0 | Get consumption progress and start consuming from a specified progress | 3.0.5.0 and higher | | v3.5.1 | Native stmt queries and geometry type support | 3.2.1.0 and higher |
| v3.3.1 | Schemaless protocol writes over websocket | 3.0.4.1 and higher | | v3.5.0 | Get consumption progress and start consuming from a specified progress | 3.0.5.0 and higher |
| v3.1.0 | Provides a kafka-like subscription API | - | | v3.3.1 | Schemaless protocol writes over websocket | 3.0.4.1 and higher |
| v3.0.4 | Added request id related interfaces | 3.0.2.2 and higher | | v3.1.0 | Provides a kafka-like subscription API | - |
| v3.0.3 | Statement writes over websocket | - | | v3.0.4 | Added request id related interfaces | 3.0.2.2 and higher |
| v3.0.2 | Data query and write over websocket | 3.0.1.5 and higher | | v3.0.3 | Statement writes over websocket | - |
| v3.0.1 | Message subscription over websocket | - | | v3.0.2 | Data query and write over websocket | 3.0.1.5 and higher |
| v3.0.0 | Adapted to TDengine 3.0 queries and writes | 3.0.0.0 and higher | | v3.0.1 | Message subscription over websocket | - |
| v3.0.0 | Adapted to TDengine 3.0 queries and writes | 3.0.0.0 and higher |
## Exceptions and Error Codes ## Exceptions and Error Codes
@ -137,6 +138,8 @@ For the WKB specification, refer to [Well-Known Binary (WKB)](https://libgeos.org/specifications/w
username:password@protocol(address)/dbname?param=value username:password@protocol(address)/dbname?param=value
``` ```
When the password contains special characters, it must be escaped with `url.QueryEscape`.
##### Native Connection ##### Native Connection
Import the driver: Import the driver:
@ -494,6 +497,37 @@ Prepare allows the use of precompiled SQL statements, which improves performance and provides parameter
- **Description**: closes the statement. - **Description**: closes the statement.
- **Return value**: error information. - **Return value**: error information.
Starting from version 3.6.0, interfaces for binding parameters with stmt2 are provided
- `func (conn *Connector) Stmt2(reqID int64, singleTableBindOnce bool) *Stmt2`
- **Description**: creates a stmt2 from the connection.
- **Parameters**
- `reqID`: request ID.
- `singleTableBindOnce`: whether a single child table is bound only once in a single execution.
- **Return value**: the stmt2 object.
- `func (s *Stmt2) Prepare(sql string) error`
- **Description**: binds the sql statement.
- **Parameters**
- `sql`: the sql statement to bind.
- **Return value**: error information.
- `func (s *Stmt2) Bind(params []*stmt.TaosStmt2BindData) error`
- **Description**: binds data.
- **Parameters**
- params: the data to bind.
- **Return value**: error information.
- `func (s *Stmt2) Execute() error`
- **Description**: executes the statement.
- **Return value**: error information.
- `func (s *Stmt2) GetAffectedRows() int`
- **Description**: gets the number of affected rows (valid only for insert statements).
- **Return value**: the number of affected rows.
- `func (s *Stmt2) UseResult() (driver.Rows, error)`
- **Description**: gets the result set (valid only for query statements).
- **Return value**: the result set Rows object and error information.
- `func (s *Stmt2) Close() error`
- **Description**: closes the stmt2.
- **Return value**: error information.
The `ws/stmt` package provides interfaces for parameter binding over WebSocket The `ws/stmt` package provides interfaces for parameter binding over WebSocket
- `func (c *Connector) Init() (*Stmt, error)` - `func (c *Connector) Init() (*Stmt, error)`

View File

@ -112,9 +112,8 @@ extern int32_t tsNumOfSnodeWriteThreads;
extern int64_t tsQueueMemoryAllowed; extern int64_t tsQueueMemoryAllowed;
extern int32_t tsRetentionSpeedLimitMB; extern int32_t tsRetentionSpeedLimitMB;
extern const char *tsAlterCompactTaskKeywords; extern int32_t tsNumOfCompactThreads;
extern int32_t tsNumOfCompactThreads; extern int32_t tsNumOfRetentionThreads;
extern int32_t tsNumOfRetentionThreads;
// sync raft // sync raft
extern int32_t tsElectInterval; extern int32_t tsElectInterval;
@ -291,7 +290,6 @@ extern int tsStreamAggCnt;
extern bool tsFilterScalarMode; extern bool tsFilterScalarMode;
extern int32_t tsMaxStreamBackendCache; extern int32_t tsMaxStreamBackendCache;
extern int32_t tsPQSortMemThreshold; extern int32_t tsPQSortMemThreshold;
extern int32_t tsResolveFQDNRetryTime;
extern bool tsStreamCoverage; extern bool tsStreamCoverage;
extern int8_t tsS3EpNum; extern int8_t tsS3EpNum;

View File

@ -389,7 +389,6 @@ typedef struct SStateStore {
int32_t (*streamStateFillGetGroupKVByCur)(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen); int32_t (*streamStateFillGetGroupKVByCur)(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen);
int32_t (*streamStateGetKVByCur)(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen); int32_t (*streamStateGetKVByCur)(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen);
void (*streamStateSetFillInfo)(SStreamState* pState);
void (*streamStateClearExpiredState)(SStreamState* pState); void (*streamStateClearExpiredState)(SStreamState* pState);
int32_t (*streamStateSessionAddIfNotExist)(SStreamState* pState, SSessionKey* key, TSKEY gap, void** pVal, int32_t (*streamStateSessionAddIfNotExist)(SStreamState* pState, SSessionKey* key, TSKEY gap, void** pVal,
@ -455,7 +454,6 @@ typedef struct SStateStore {
int32_t (*streamStateBegin)(SStreamState* pState); int32_t (*streamStateBegin)(SStreamState* pState);
void (*streamStateCommit)(SStreamState* pState); void (*streamStateCommit)(SStreamState* pState);
void (*streamStateDestroy)(SStreamState* pState, bool remove); void (*streamStateDestroy)(SStreamState* pState, bool remove);
int32_t (*streamStateDeleteCheckPoint)(SStreamState* pState, TSKEY mark);
void (*streamStateReloadInfo)(SStreamState* pState, TSKEY ts); void (*streamStateReloadInfo)(SStreamState* pState, TSKEY ts);
void (*streamStateCopyBackend)(SStreamState* src, SStreamState* dst); void (*streamStateCopyBackend)(SStreamState* src, SStreamState* dst);
} SStateStore; } SStateStore;

View File

@ -112,7 +112,7 @@ typedef struct SDatabaseOptions {
int8_t s3Compact; int8_t s3Compact;
int8_t withArbitrator; int8_t withArbitrator;
// for auto-compact // for auto-compact
int8_t compactTimeOffset; // hours int32_t compactTimeOffset; // hours
int32_t compactInterval; // minutes int32_t compactInterval; // minutes
int32_t compactStartTime; // minutes int32_t compactStartTime; // minutes
int32_t compactEndTime; // minutes int32_t compactEndTime; // minutes

View File

@ -34,7 +34,6 @@ void streamStateClose(SStreamState* pState, bool remove);
int32_t streamStateBegin(SStreamState* pState); int32_t streamStateBegin(SStreamState* pState);
void streamStateCommit(SStreamState* pState); void streamStateCommit(SStreamState* pState);
void streamStateDestroy(SStreamState* pState, bool remove); void streamStateDestroy(SStreamState* pState, bool remove);
int32_t streamStateDeleteCheckPoint(SStreamState* pState, TSKEY mark);
int32_t streamStateDelTaskDb(SStreamState* pState); int32_t streamStateDelTaskDb(SStreamState* pState);
int32_t streamStateFuncPut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen); int32_t streamStateFuncPut(SStreamState* pState, const SWinKey* key, const void* value, int32_t vLen);
@ -108,7 +107,6 @@ int32_t streamStateFillGetGroupKVByCur(SStreamStateCur* pCur, SWinKey* pKey, con
int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen); int32_t streamStateGetKVByCur(SStreamStateCur* pCur, SWinKey* pKey, const void** pVal, int32_t* pVLen);
// twa // twa
void streamStateSetFillInfo(SStreamState* pState);
void streamStateClearExpiredState(SStreamState* pState); void streamStateClearExpiredState(SStreamState* pState);
void streamStateCurNext(SStreamState* pState, SStreamStateCur* pCur); void streamStateCurNext(SStreamState* pState, SStreamStateCur* pCur);

View File

@ -67,7 +67,6 @@ SStreamSnapshot* getSnapshot(SStreamFileState* pFileState);
void flushSnapshot(SStreamFileState* pFileState, SStreamSnapshot* pSnapshot, bool flushState); void flushSnapshot(SStreamFileState* pFileState, SStreamSnapshot* pSnapshot, bool flushState);
int32_t recoverSnapshot(SStreamFileState* pFileState, int64_t ckId); int32_t recoverSnapshot(SStreamFileState* pFileState, int64_t ckId);
int32_t getSnapshotIdList(SStreamFileState* pFileState, SArray* list);
int32_t deleteExpiredCheckPoint(SStreamFileState* pFileState, TSKEY mark); int32_t deleteExpiredCheckPoint(SStreamFileState* pFileState, TSKEY mark);
int32_t streamFileStateGetSelectRowSize(SStreamFileState* pFileState); int32_t streamFileStateGetSelectRowSize(SStreamFileState* pFileState);
void streamFileStateReloadInfo(SStreamFileState* pFileState, TSKEY ts); void streamFileStateReloadInfo(SStreamFileState* pFileState, TSKEY ts);

View File

@ -148,7 +148,7 @@ int32_t tfsMkdirRecur(STfs *pTfs, const char *rname);
* @return int32_t 0 for success, -1 for failure. * @return int32_t 0 for success, -1 for failure.
*/ */
int32_t tfsMkdirRecurAt(STfs *pTfs, const char *rname, SDiskID diskId); int32_t tfsMkdirRecurAt(STfs *pTfs, const char *rname, SDiskID diskId);
#if 0
/** /**
* @brief check directories exist in tfs. * @brief check directories exist in tfs.
* *
@ -158,7 +158,7 @@ int32_t tfsMkdirRecurAt(STfs *pTfs, const char *rname, SDiskID diskId);
* @return true for exist, false for not exist. * @return true for exist, false for not exist.
*/ */
bool tfsDirExistAt(STfs *pTfs, const char *rname, SDiskID diskId); bool tfsDirExistAt(STfs *pTfs, const char *rname, SDiskID diskId);
#endif
/** /**
* @brief Remove directory at all levels in tfs. * @brief Remove directory at all levels in tfs.
* *
@ -241,7 +241,7 @@ void tfsBasename(const STfsFile *pFile, char *dest);
* @param dest The buffer where dirname will be saved. * @param dest The buffer where dirname will be saved.
*/ */
void tfsDirname(const STfsFile *pFile, char *dest); void tfsDirname(const STfsFile *pFile, char *dest);
#if 0
/** /**
* @brief Get the absolute file name of rname. * @brief Get the absolute file name of rname.
* *
@ -251,7 +251,7 @@ void tfsDirname(const STfsFile *pFile, char *dest);
* @param aname absolute file name * @param aname absolute file name
*/ */
void tfsAbsoluteName(STfs *pTfs, SDiskID diskId, const char *rname, char *aname); void tfsAbsoluteName(STfs *pTfs, SDiskID diskId, const char *rname, char *aname);
#endif
/** /**
* @brief Remove file in tfs. * @brief Remove file in tfs.
* *

View File

@ -126,6 +126,10 @@ void fetchCallback(void* param, void* res, int32_t numOfRow) {
void queryCallback(void* param, void* res, int32_t code) { void queryCallback(void* param, void* res, int32_t code) {
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
(void)printf("failed to execute, reason:%s\n", taos_errstr(res)); (void)printf("failed to execute, reason:%s\n", taos_errstr(res));
taos_free_result(res);
tsem_t *sem = (tsem_t *)param;
tsem_post(sem);
return;
} }
(void)printf("start to fetch data\n"); (void)printf("start to fetch data\n");
taos_fetch_raw_block_a(res, fetchCallback, param); taos_fetch_raw_block_a(res, fetchCallback, param);

View File

@ -66,9 +66,9 @@ int32_t s3Begin() {
void s3End() { S3_deinitialize(); } void s3End() { S3_deinitialize(); }
int32_t s3Init() { TAOS_RETURN(TSDB_CODE_SUCCESS); /*s3Begin();*/ } int32_t s3Init() { TAOS_RETURN(TSDB_CODE_SUCCESS); /*s3Begin();*/ }
#if 0
static int32_t s3ListBucket(char const *bucketname); static int32_t s3ListBucket(char const *bucketname);
#endif
static void s3DumpCfgByEp(int8_t epIndex) { static void s3DumpCfgByEp(int8_t epIndex) {
// clang-format off // clang-format off
(void)fprintf(stdout, (void)fprintf(stdout,
@ -291,7 +291,7 @@ static int32_t s3ListBucketByEp(char const *bucketname, int8_t epIndex) {
TAOS_RETURN(code); TAOS_RETURN(code);
} }
#if 0
static int32_t s3ListBucket(char const *bucketname) { static int32_t s3ListBucket(char const *bucketname) {
int32_t code = 0; int32_t code = 0;
@ -312,7 +312,7 @@ static int32_t s3ListBucket(char const *bucketname) {
TAOS_RETURN(code); TAOS_RETURN(code);
} }
#endif
typedef struct growbuffer { typedef struct growbuffer {
// The total number of bytes, and the start byte // The total number of bytes, and the start byte
int size; int size;

View File

@ -123,6 +123,9 @@ static const SSysDbTableSchema userDBSchema[] = {
{.name = "s3_compact", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = true}, {.name = "s3_compact", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = true},
{.name = "with_arbitrator", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = true}, {.name = "with_arbitrator", .bytes = 1, .type = TSDB_DATA_TYPE_TINYINT, .sysInfo = true},
{.name = "encrypt_algorithm", .bytes = TSDB_ENCRYPT_ALGO_STR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true}, {.name = "encrypt_algorithm", .bytes = TSDB_ENCRYPT_ALGO_STR_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
{.name = "compact_interval", .bytes = 12 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
{.name = "compact_time_range", .bytes = 24 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
{.name = "compact_time_offset", .bytes = 4 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
}; };
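These new columns surface database-level auto-compaction settings. As a hedged illustration (view and column names taken from the schema above; connection details are placeholders), they can be inspected through information_schema once this change is in place:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ShowCompactOptionsSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:TAOS-WS://127.0.0.1:6041/?user=root&password=taosdata"; // placeholder
        String sql = "SELECT name, compact_interval, compact_time_range, compact_time_offset "
                + "FROM information_schema.ins_databases";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%s interval=%s range=%s offset=%s%n",
                        rs.getString(1), rs.getString(2), rs.getString(3), rs.getString(4));
            }
        }
    }
}
```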
static const SSysDbTableSchema userFuncSchema[] = { static const SSysDbTableSchema userFuncSchema[] = {

View File

@ -102,9 +102,8 @@ int32_t tsMaxStreamBackendCache = 128; // M
int32_t tsPQSortMemThreshold = 16; // M int32_t tsPQSortMemThreshold = 16; // M
int32_t tsRetentionSpeedLimitMB = 0; // unlimited int32_t tsRetentionSpeedLimitMB = 0; // unlimited
const char *tsAlterCompactTaskKeywords = "max_compact_tasks"; int32_t tsNumOfCompactThreads = 2;
int32_t tsNumOfCompactThreads = 2; int32_t tsNumOfRetentionThreads = 1;
int32_t tsNumOfRetentionThreads = 1;
// sync raft // sync raft
int32_t tsElectInterval = 25 * 1000; int32_t tsElectInterval = 25 * 1000;
@ -327,7 +326,6 @@ char tsUdfdLdLibPath[512] = "";
bool tsDisableStream = false; bool tsDisableStream = false;
int64_t tsStreamBufferSize = 128 * 1024 * 1024; int64_t tsStreamBufferSize = 128 * 1024 * 1024;
bool tsFilterScalarMode = false; bool tsFilterScalarMode = false;
int tsResolveFQDNRetryTime = 100; // seconds
int tsStreamAggCnt = 100000; int tsStreamAggCnt = 100000;
bool tsStreamCoverage = false; bool tsStreamCoverage = false;
@ -745,8 +743,9 @@ static int32_t taosAddClientCfg(SConfig *pCfg) {
CFG_DYN_CLIENT, CFG_CATEGORY_LOCAL)); CFG_DYN_CLIENT, CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "tsmaDataDeleteMark", tsmaDataDeleteMark, 60 * 60 * 1000, INT64_MAX, TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "tsmaDataDeleteMark", tsmaDataDeleteMark, 60 * 60 * 1000, INT64_MAX,
CFG_SCOPE_CLIENT, CFG_DYN_CLIENT, CFG_CATEGORY_LOCAL)); CFG_SCOPE_CLIENT, CFG_DYN_CLIENT, CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddBool(pCfg, "streamCoverage", tsStreamCoverage, CFG_DYN_CLIENT, CFG_DYN_CLIENT, CFG_CATEGORY_LOCAL)); TAOS_CHECK_RETURN(
cfgAddBool(pCfg, "streamCoverage", tsStreamCoverage, CFG_DYN_CLIENT, CFG_DYN_CLIENT, CFG_CATEGORY_LOCAL));
TAOS_RETURN(TSDB_CODE_SUCCESS); TAOS_RETURN(TSDB_CODE_SUCCESS);
} }
@ -795,9 +794,6 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
tsNumOfCommitThreads = tsNumOfCores / 2; tsNumOfCommitThreads = tsNumOfCores / 2;
tsNumOfCommitThreads = TRANGE(tsNumOfCommitThreads, 2, 4); tsNumOfCommitThreads = TRANGE(tsNumOfCommitThreads, 2, 4);
tsNumOfCompactThreads = tsNumOfCommitThreads;
tsNumOfCompactThreads = TRANGE(tsNumOfCompactThreads, 2, 4);
tsNumOfSupportVnodes = tsNumOfCores * 2 + 5; tsNumOfSupportVnodes = tsNumOfCores * 2 + 5;
tsNumOfSupportVnodes = TMAX(tsNumOfSupportVnodes, 2); tsNumOfSupportVnodes = TMAX(tsNumOfSupportVnodes, 2);
@ -842,7 +838,7 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "queryBufferSize", tsQueryBufferSize, -1, 500000000000, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY, CFG_CATEGORY_LOCAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "queryBufferSize", tsQueryBufferSize, -1, 500000000000, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY, CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "queryRspPolicy", tsQueryRspPolicy, 0, 1, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_GLOBAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "queryRspPolicy", tsQueryRspPolicy, 0, 1, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "numOfCommitThreads", tsNumOfCommitThreads, 1, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY,CFG_CATEGORY_LOCAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "numOfCommitThreads", tsNumOfCommitThreads, 1, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY,CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "maxCompactConcurrency", tsNumOfCompactThreads, 1, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER_LAZY,CFG_CATEGORY_LOCAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "numOfCompactThreads", tsNumOfCompactThreads, 1, 16, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "retentionSpeedLimitMB", tsRetentionSpeedLimitMB, 0, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_GLOBAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "retentionSpeedLimitMB", tsRetentionSpeedLimitMB, 0, 1024, CFG_SCOPE_SERVER, CFG_DYN_SERVER,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddBool(pCfg, "queryUseMemoryPool", tsQueryUseMemoryPool, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_LOCAL) != 0); TAOS_CHECK_RETURN(cfgAddBool(pCfg, "queryUseMemoryPool", tsQueryUseMemoryPool, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_LOCAL) != 0);
TAOS_CHECK_RETURN(cfgAddBool(pCfg, "memPoolFullFunc", tsMemPoolFullFunc, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_LOCAL) != 0); TAOS_CHECK_RETURN(cfgAddBool(pCfg, "memPoolFullFunc", tsMemPoolFullFunc, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_LOCAL) != 0);
@ -956,7 +952,6 @@ static int32_t taosAddServerCfg(SConfig *pCfg) {
TAOS_CHECK_RETURN(cfgAddBool(pCfg, "filterScalarMode", tsFilterScalarMode, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_LOCAL)); TAOS_CHECK_RETURN(cfgAddBool(pCfg, "filterScalarMode", tsFilterScalarMode, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "maxStreamBackendCache", tsMaxStreamBackendCache, 16, 1024, CFG_SCOPE_SERVER, CFG_DYN_ENT_SERVER_LAZY,CFG_CATEGORY_LOCAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "maxStreamBackendCache", tsMaxStreamBackendCache, 16, 1024, CFG_SCOPE_SERVER, CFG_DYN_ENT_SERVER_LAZY,CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "pqSortMemThreshold", tsPQSortMemThreshold, 1, 10240, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_LOCAL)); TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "pqSortMemThreshold", tsPQSortMemThreshold, 1, 10240, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_LOCAL));
TAOS_CHECK_RETURN(cfgAddInt32(pCfg, "resolveFQDNRetryTime", tsResolveFQDNRetryTime, 1, 10240, CFG_SCOPE_SERVER, CFG_DYN_NONE,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddString(pCfg, "s3Accesskey", tsS3AccessKey[0], CFG_SCOPE_SERVER, CFG_DYN_ENT_SERVER_LAZY,CFG_CATEGORY_GLOBAL)); TAOS_CHECK_RETURN(cfgAddString(pCfg, "s3Accesskey", tsS3AccessKey[0], CFG_SCOPE_SERVER, CFG_DYN_ENT_SERVER_LAZY,CFG_CATEGORY_GLOBAL));
TAOS_CHECK_RETURN(cfgAddString(pCfg, "s3Endpoint", tsS3Endpoint[0], CFG_SCOPE_SERVER, CFG_DYN_ENT_SERVER_LAZY,CFG_CATEGORY_GLOBAL)); TAOS_CHECK_RETURN(cfgAddString(pCfg, "s3Endpoint", tsS3Endpoint[0], CFG_SCOPE_SERVER, CFG_DYN_ENT_SERVER_LAZY,CFG_CATEGORY_GLOBAL));
@ -1040,10 +1035,8 @@ static int32_t taosUpdateServerCfg(SConfig *pCfg) {
pItem->stype = stype; pItem->stype = stype;
} }
pItem = cfgGetItem(pCfg, "maxCompactConcurrency"); pItem = cfgGetItem(pCfg, "numOfCompactThreads");
if (pItem != NULL && pItem->stype == CFG_STYPE_DEFAULT) { if (pItem != NULL && pItem->stype == CFG_STYPE_DEFAULT) {
tsNumOfCompactThreads = numOfCores / 2;
tsNumOfCompactThreads = TRANGE(tsNumOfCompactThreads, 2, 4);
pItem->i32 = tsNumOfCompactThreads; pItem->i32 = tsNumOfCompactThreads;
pItem->stype = stype; pItem->stype = stype;
} }
@ -1548,7 +1541,7 @@ static int32_t taosSetServerCfg(SConfig *pCfg) {
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "numOfCommitThreads"); TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "numOfCommitThreads");
tsNumOfCommitThreads = pItem->i32; tsNumOfCommitThreads = pItem->i32;
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "maxCompactConcurrency"); TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "numOfCompactThreads");
tsNumOfCompactThreads = pItem->i32; tsNumOfCompactThreads = pItem->i32;
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "retentionSpeedLimitMB"); TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "retentionSpeedLimitMB");
@ -1822,9 +1815,6 @@ static int32_t taosSetServerCfg(SConfig *pCfg) {
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "pqSortMemThreshold"); TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "pqSortMemThreshold");
tsPQSortMemThreshold = pItem->i32; tsPQSortMemThreshold = pItem->i32;
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "resolveFQDNRetryTime");
tsResolveFQDNRetryTime = pItem->i32;
TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "minDiskFreeSize"); TAOS_CHECK_GET_CFG_ITEM(pCfg, pItem, "minDiskFreeSize");
tsMinDiskFreeSize = pItem->i64; tsMinDiskFreeSize = pItem->i64;
@ -2354,6 +2344,8 @@ static int32_t taosCfgSetOption(OptionNameAndVar *pOptions, int32_t optionSize,
TAOS_RETURN(code); TAOS_RETURN(code);
} }
extern void tsdbAlterNumCompactThreads();
static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) { static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) {
int32_t code = TSDB_CODE_SUCCESS; int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = -1; int32_t lino = -1;
@ -2404,6 +2396,17 @@ static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) {
goto _exit; goto _exit;
} }
if (strcasecmp(name, "numOfCompactThreads") == 0) {
#ifdef TD_ENTERPRISE
tsNumOfCompactThreads = pItem->i32;
code = TSDB_CODE_SUCCESS;
// tsdbAlterNumCompactThreads();
#else
code = TSDB_CODE_INVALID_CFG;
#endif
goto _exit;
}
{ // 'bool/int32_t/int64_t/float/double' variables with general modification function { // 'bool/int32_t/int64_t/float/double' variables with general modification function
static OptionNameAndVar debugOptions[] = { static OptionNameAndVar debugOptions[] = {
{"dDebugFlag", &dDebugFlag}, {"vDebugFlag", &vDebugFlag}, {"dDebugFlag", &dDebugFlag}, {"vDebugFlag", &vDebugFlag},
@ -2453,7 +2456,6 @@ static int32_t taosCfgDynamicOptionsForServer(SConfig *pCfg, const char *name) {
{"randErrorDivisor", &tsRandErrDivisor}, {"randErrorDivisor", &tsRandErrDivisor},
{"randErrorScope", &tsRandErrScope}, {"randErrorScope", &tsRandErrScope},
{"syncLogBufferMemoryAllowed", &tsLogBufferMemoryAllowed}, {"syncLogBufferMemoryAllowed", &tsLogBufferMemoryAllowed},
{"resolveFQDNRetryTime", &tsResolveFQDNRetryTime},
{"syncHeartbeatInterval", &tsHeartbeatInterval}, {"syncHeartbeatInterval", &tsHeartbeatInterval},
{"syncHeartbeatTimeout", &tsHeartbeatTimeout}, {"syncHeartbeatTimeout", &tsHeartbeatTimeout},
{"syncSnapReplMaxWaitN", &tsSnapReplMaxWaitN}, {"syncSnapReplMaxWaitN", &tsSnapReplMaxWaitN},

View File

@ -40,6 +40,46 @@ add_test(
COMMAND dataformatTest COMMAND dataformatTest
) )
# cosCpTest.cpp
add_executable(cosCpTest "")
target_sources(
cosCpTest
PRIVATE
"cosCpTest.cpp"
)
target_link_libraries(cosCpTest gtest gtest_main util common)
target_include_directories(
cosCpTest
PUBLIC "${TD_SOURCE_DIR}/include/common"
PUBLIC "${TD_SOURCE_DIR}/include/util"
)
add_test(
NAME cosCpTest
COMMAND cosCpTest
)
if(TD_LINUX)
# cosTest.cpp
add_executable(cosTest "")
target_sources(
cosTest
PRIVATE
"cosTest.cpp"
)
target_link_libraries(cosTest gtest gtest_main util common)
target_include_directories(
cosTest
PUBLIC "${TD_SOURCE_DIR}/include/common"
PUBLIC "${TD_SOURCE_DIR}/include/util"
)
add_test(
NAME cosTest
COMMAND cosTest
)
endif()
if (${TD_LINUX}) if (${TD_LINUX})
# tmsg test # tmsg test
add_executable(tmsgTest "") add_executable(tmsgTest "")
@ -60,4 +100,4 @@ if (${TD_LINUX})
add_custom_command(TARGET tmsgTest POST_BUILD add_custom_command(TARGET tmsgTest POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different ${MSG_TBL_FILE} $<TARGET_FILE_DIR:tmsgTest> COMMAND ${CMAKE_COMMAND} -E copy_if_different ${MSG_TBL_FILE} $<TARGET_FILE_DIR:tmsgTest>
) )
endif () endif ()

View File

@ -0,0 +1,305 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <gtest/gtest.h>
#include <cos_cp.h>
#include <taoserror.h>
#include <tglobal.h>
#include <iostream>
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wwrite-strings"
#pragma GCC diagnostic ignored "-Wunused-function"
#pragma GCC diagnostic ignored "-Wunused-variable"
#pragma GCC diagnostic ignored "-Wsign-compare"
int main(int argc, char **argv) {
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
TEST(testCase, cpOpenCloseRemove) {
int32_t code = 0, lino = 0;
int64_t contentLength = 1024;
const int64_t MULTIPART_CHUNK_SIZE = 64 << 20;  // 64 MiB multipart chunk size
uint64_t chunk_size = MULTIPART_CHUNK_SIZE >> 3;
int totalSeq = (contentLength + chunk_size - 1) / chunk_size;
const int max_part_num = 10000;
if (totalSeq > max_part_num) {
chunk_size = (contentLength + max_part_num - contentLength % max_part_num) / max_part_num;
totalSeq = (contentLength + chunk_size - 1) / chunk_size;
}
SCheckpoint cp;
char const *file = "./afile";
char file_cp_path[TSDB_FILENAME_LEN];
(void)snprintf(file_cp_path, TSDB_FILENAME_LEN, "%s.cp", file);
cp.parts = (SCheckpointPart *)taosMemoryCalloc(max_part_num, sizeof(SCheckpointPart));
if (!cp.parts) {
TAOS_CHECK_EXIT(terrno);
}
EXPECT_EQ(cos_cp_open(file_cp_path, &cp), TSDB_CODE_SUCCESS);
if (cp.thefile) {
EXPECT_EQ(cos_cp_close(cp.thefile), TSDB_CODE_SUCCESS);
}
if (cp.parts) {
taosMemoryFree(cp.parts);
}
EXPECT_EQ(cos_cp_remove(file_cp_path), TSDB_CODE_SUCCESS);
return;
_exit:
std::cout << "code: " << code << std::endl;
}
TEST(testCase, cpBuild) {
int32_t code = 0, lino = 0;
int64_t contentLength = 1024;
const int64_t MULTIPART_CHUNK_SIZE = 64 << 20;  // 64 MiB multipart chunk size
uint64_t chunk_size = MULTIPART_CHUNK_SIZE >> 3;
int totalSeq = (contentLength + chunk_size - 1) / chunk_size;
const int max_part_num = 10000;
if (totalSeq > max_part_num) {
chunk_size = (contentLength + max_part_num - contentLength % max_part_num) / max_part_num;
totalSeq = (contentLength + chunk_size - 1) / chunk_size;
}
SCheckpoint cp;
char const *file = "./afile";
char file_cp_path[TSDB_FILENAME_LEN];
int64_t lmtime = 20241220141705;
char const *upload_id = "upload-id-xxx";
(void)snprintf(file_cp_path, TSDB_FILENAME_LEN, "%s.cp", file);
(void)memset(&cp, 0, sizeof(cp));
cp.parts = (SCheckpointPart *)taosMemoryCalloc(max_part_num, sizeof(SCheckpointPart));
if (!cp.parts) {
TAOS_CHECK_EXIT(terrno);
}
EXPECT_EQ(cos_cp_open(file_cp_path, &cp), TSDB_CODE_SUCCESS);
cos_cp_build_upload(&cp, file, contentLength, lmtime, upload_id, chunk_size);
EXPECT_EQ(cos_cp_dump(&cp), TSDB_CODE_SUCCESS);
if (cp.thefile) {
EXPECT_EQ(cos_cp_close(cp.thefile), TSDB_CODE_SUCCESS);
}
if (cp.parts) {
taosMemoryFree(cp.parts);
}
return;
_exit:
std::cout << "code: " << code << std::endl;
}
TEST(testCase, cpLoad) {
int32_t code = 0, lino = 0;
int64_t contentLength = 1024;
const int64_t MULTIPART_CHUNK_SIZE = 64 << 20;  // 64 MiB multipart chunk size
uint64_t chunk_size = MULTIPART_CHUNK_SIZE >> 3;
int totalSeq = (contentLength + chunk_size - 1) / chunk_size;
const int max_part_num = 10000;
if (totalSeq > max_part_num) {
chunk_size = (contentLength + max_part_num - contentLength % max_part_num) / max_part_num;
totalSeq = (contentLength + chunk_size - 1) / chunk_size;
}
SCheckpoint cp;
char const *file = "./afile";
char file_cp_path[TSDB_FILENAME_LEN];
int64_t lmtime = 20241220141705;
char const *upload_id = "upload-id-xxx";
(void)snprintf(file_cp_path, TSDB_FILENAME_LEN, "%s.cp", file);
(void)memset(&cp, 0, sizeof(cp));
cp.parts = (SCheckpointPart *)taosMemoryCalloc(max_part_num, sizeof(SCheckpointPart));
if (!cp.parts) {
TAOS_CHECK_EXIT(terrno);
}
if (taosCheckExistFile(file_cp_path)) {
EXPECT_EQ(cos_cp_load(file_cp_path, &cp), TSDB_CODE_SUCCESS);
EXPECT_EQ(cos_cp_is_valid_upload(&cp, contentLength, lmtime), true);
EXPECT_EQ(cp.cp_type, COS_CP_TYPE_UPLOAD);
EXPECT_EQ(cp.md5, std::string(""));
EXPECT_EQ(cp.thefile, nullptr);
EXPECT_EQ(std::string(cp.file_path), "./afile");
EXPECT_EQ(cp.file_size, 1024);
EXPECT_EQ(cp.file_last_modified, 20241220141705);
EXPECT_EQ(cp.file_md5, std::string(""));
EXPECT_EQ(cp.object_name, std::string(""));
EXPECT_EQ(cp.object_size, 0);
EXPECT_EQ(cp.object_last_modified, std::string(""));
EXPECT_EQ(cp.object_etag, std::string(""));
EXPECT_EQ(cp.upload_id, std::string("upload-id-xxx"));
EXPECT_EQ(cp.part_num, 1);
EXPECT_EQ(cp.part_size, 8388608);
EXPECT_EQ(cp.parts[0].index, 0);
EXPECT_EQ(cp.parts[0].offset, 0);
EXPECT_EQ(cp.parts[0].size, 1024);
EXPECT_EQ(cp.parts[0].completed, 0);
EXPECT_EQ(cp.parts[0].etag, std::string(""));
EXPECT_EQ(cp.parts[0].crc64, 0);
}
if (cp.thefile) {
EXPECT_EQ(cos_cp_close(cp.thefile), TSDB_CODE_SUCCESS);
}
if (cp.parts) {
taosMemoryFree(cp.parts);
}
EXPECT_EQ(cos_cp_remove(file_cp_path), TSDB_CODE_SUCCESS);
return;
_exit:
std::cout << "code: " << code << std::endl;
}
TEST(testCase, cpBuildUpdate) {
int32_t code = 0, lino = 0;
int64_t contentLength = 1024;
const int64_t MULTIPART_CHUNK_SIZE = 64 << 20;  // 64 MiB multipart chunk size
uint64_t chunk_size = MULTIPART_CHUNK_SIZE >> 3;
int totalSeq = (contentLength + chunk_size - 1) / chunk_size;
const int max_part_num = 10000;
if (totalSeq > max_part_num) {
chunk_size = (contentLength + max_part_num - contentLength % max_part_num) / max_part_num;
totalSeq = (contentLength + chunk_size - 1) / chunk_size;
}
SCheckpoint cp;
char const *file = "./afile";
char file_cp_path[TSDB_FILENAME_LEN];
int64_t lmtime = 20241220141705;
char const *upload_id = "upload-id-xxx";
int seq = 1;
char *etags[1] = {"etags-1-xxx"};
(void)snprintf(file_cp_path, TSDB_FILENAME_LEN, "%s.cp", file);
(void)memset(&cp, 0, sizeof(cp));
cp.parts = (SCheckpointPart *)taosMemoryCalloc(max_part_num, sizeof(SCheckpointPart));
if (!cp.parts) {
TAOS_CHECK_EXIT(terrno);
}
EXPECT_EQ(cos_cp_open(file_cp_path, &cp), TSDB_CODE_SUCCESS);
cos_cp_build_upload(&cp, file, contentLength, lmtime, upload_id, chunk_size);
cos_cp_update(&cp, cp.parts[seq - 1].index, etags[seq - 1], 0);
EXPECT_EQ(cos_cp_dump(&cp), TSDB_CODE_SUCCESS);
if (cp.thefile) {
EXPECT_EQ(cos_cp_close(cp.thefile), TSDB_CODE_SUCCESS);
}
if (cp.parts) {
taosMemoryFree(cp.parts);
}
return;
_exit:
std::cout << "code: " << code << std::endl;
}
TEST(testCase, cpLoadUpdate) {
int32_t code = 0, lino = 0;
int64_t contentLength = 1024;
const int64_t MULTIPART_CHUNK_SIZE = 64 << 20;  // 64 MiB multipart chunk size
uint64_t chunk_size = MULTIPART_CHUNK_SIZE >> 3;
int totalSeq = (contentLength + chunk_size - 1) / chunk_size;
const int max_part_num = 10000;
if (totalSeq > max_part_num) {
chunk_size = (contentLength + max_part_num - contentLength % max_part_num) / max_part_num;
totalSeq = (contentLength + chunk_size - 1) / chunk_size;
}
SCheckpoint cp;
char const *file = "./afile";
char file_cp_path[TSDB_FILENAME_LEN];
int64_t lmtime = 20241220141705;
char const *upload_id = "upload-id-xxx";
(void)snprintf(file_cp_path, TSDB_FILENAME_LEN, "%s.cp", file);
(void)memset(&cp, 0, sizeof(cp));
cp.parts = (SCheckpointPart *)taosMemoryCalloc(max_part_num, sizeof(SCheckpointPart));
if (!cp.parts) {
TAOS_CHECK_EXIT(terrno);
}
if (taosCheckExistFile(file_cp_path)) {
EXPECT_EQ(cos_cp_load(file_cp_path, &cp), TSDB_CODE_SUCCESS);
EXPECT_EQ(cos_cp_is_valid_upload(&cp, contentLength, lmtime), true);
EXPECT_EQ(cp.cp_type, COS_CP_TYPE_UPLOAD);
EXPECT_EQ(cp.md5, std::string(""));
EXPECT_EQ(cp.thefile, nullptr);
EXPECT_EQ(std::string(cp.file_path), "./afile");
EXPECT_EQ(cp.file_size, 1024);
EXPECT_EQ(cp.file_last_modified, 20241220141705);
EXPECT_EQ(cp.file_md5, std::string(""));
EXPECT_EQ(cp.object_name, std::string(""));
EXPECT_EQ(cp.object_size, 0);
EXPECT_EQ(cp.object_last_modified, std::string(""));
EXPECT_EQ(cp.object_etag, std::string(""));
EXPECT_EQ(cp.upload_id, std::string("upload-id-xxx"));
EXPECT_EQ(cp.part_num, 1);
EXPECT_EQ(cp.part_size, 8388608);
EXPECT_EQ(cp.parts[0].index, 0);
EXPECT_EQ(cp.parts[0].offset, 0);
EXPECT_EQ(cp.parts[0].size, 1024);
EXPECT_EQ(cp.parts[0].completed, 1);
EXPECT_EQ(cp.parts[0].etag, std::string("etags-1-xxx"));
EXPECT_EQ(cp.parts[0].crc64, 0);
}
if (cp.thefile) {
EXPECT_EQ(cos_cp_close(cp.thefile), TSDB_CODE_SUCCESS);
}
if (cp.parts) {
taosMemoryFree(cp.parts);
}
EXPECT_EQ(cos_cp_remove(file_cp_path), TSDB_CODE_SUCCESS);
return;
_exit:
std::cout << "code: " << code << std::endl;
}

View File

@ -0,0 +1,185 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <gtest/gtest.h>
#include <cos.h>
#include <taoserror.h>
#include <tglobal.h>
#include <iostream>
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wwrite-strings"
#pragma GCC diagnostic ignored "-Wunused-function"
#pragma GCC diagnostic ignored "-Wunused-variable"
#pragma GCC diagnostic ignored "-Wsign-compare"
int main(int argc, char **argv) {
testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
int32_t cosInitEnv() {
int32_t code = 0;
bool isBlob = false;
extern int8_t tsS3Ablob;
extern char tsS3Hostname[][TSDB_FQDN_LEN];
extern char tsS3AccessKeyId[][TSDB_FQDN_LEN];
extern char tsS3AccessKeySecret[][TSDB_FQDN_LEN];
extern char tsS3BucketName[TSDB_FQDN_LEN];
tsS3Ablob = isBlob;
/*
const char *hostname = "endpoint/<account-name>.blob.core.windows.net";
const char *accessKeyId = "<access-key-id/account-name>";
const char *accessKeySecret = "<access-key-secret/account-key>";
const char *bucketName = "<bucket/container-name>";
*/
// const char *hostname = "http://192.168.1.52:9000";
// const char *accessKeyId = "zOgllR6bSnw2Ah3mCNel";
// const char *accessKeySecret = "cdO7oXAu3Cqdb1rUdevFgJMi0LtRwCXdWKQx4bhX";
// const char *bucketName = "test-bucket";
const char *hostname = "192.168.1.52:9000";
const char *accessKeyId = "zOgllR6bSnw2Ah3mCNel";
const char *accessKeySecret = "cdO7oXAu3Cqdb1rUdevFgJMi0LtRwCXdWKQx4bhX";
const char *bucketName = "ci-bucket19";
tstrncpy(&tsS3Hostname[0][0], hostname, TSDB_FQDN_LEN);
tstrncpy(&tsS3AccessKeyId[0][0], accessKeyId, TSDB_FQDN_LEN);
tstrncpy(&tsS3AccessKeySecret[0][0], accessKeySecret, TSDB_FQDN_LEN);
tstrncpy(tsS3BucketName, bucketName, TSDB_FQDN_LEN);
// setup s3 env
extern int8_t tsS3EpNum;
extern int8_t tsS3Https[TSDB_MAX_EP_NUM];
tsS3EpNum = 1;
tsS3Https[0] = false;
tstrncpy(tsTempDir, "/tmp/", PATH_MAX);
tsS3Enabled = true;
return code;
}
TEST(testCase, cosCpPutError) {
int32_t code = 0, lino = 0;
char const *objectName = "testObject";
EXPECT_EQ(cosInitEnv(), TSDB_CODE_SUCCESS);
EXPECT_EQ(s3Begin(), TSDB_CODE_SUCCESS);
#if defined(USE_S3)
EXPECT_EQ(s3Size(objectName), -1);
#else
EXPECT_EQ(s3Size(objectName), 0);
#endif
s3EvictCache("", 0);
s3End();
return;
_exit:
std::cout << "code: " << code << std::endl;
}
TEST(testCase, cosCpPut) {
int32_t code = 0, lino = 0;
int8_t with_cp = 0;
char *data = nullptr;
const long objectSize = 65 * 1024 * 1024;
char const *objectName = "cosut.bin";
const char object_name[] = "cosut.bin";
EXPECT_EQ(std::string(object_name), objectName);
EXPECT_EQ(cosInitEnv(), TSDB_CODE_SUCCESS);
EXPECT_EQ(s3Begin(), TSDB_CODE_SUCCESS);
{
data = (char *)taosMemoryCalloc(1, objectSize);
if (!data) {
TAOS_CHECK_EXIT(terrno);
}
for (int i = 0; i < objectSize / 2; ++i) {
data[i * 2 + 1] = 1;
}
char path[PATH_MAX] = {0};
char path_download[PATH_MAX] = {0};
int ds_len = strlen(TD_DIRSEP);
int tmp_len = strlen(tsTempDir);
(void)snprintf(path, PATH_MAX, "%s", tsTempDir);
if (strncmp(tsTempDir + tmp_len - ds_len, TD_DIRSEP, ds_len) != 0) {
(void)snprintf(path + tmp_len, PATH_MAX - tmp_len, "%s", TD_DIRSEP);
(void)snprintf(path + tmp_len + ds_len, PATH_MAX - tmp_len - ds_len, "%s", object_name);
} else {
(void)snprintf(path + tmp_len, PATH_MAX - tmp_len, "%s", object_name);
}
tstrncpy(path_download, path, strlen(path) + 1);
tstrncpy(path_download + strlen(path), ".download", strlen(".download") + 1);
TdFilePtr fp = taosOpenFile(path, TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_WRITE_THROUGH);
GTEST_ASSERT_NE(fp, nullptr);
int n = taosWriteFile(fp, data, objectSize);
GTEST_ASSERT_EQ(n, objectSize);
code = taosCloseFile(&fp);
GTEST_ASSERT_EQ(code, 0);
code = s3PutObjectFromFile2(path, objectName, with_cp);
GTEST_ASSERT_EQ(code, 0);
with_cp = 1;
code = s3PutObjectFromFile2(path, objectName, with_cp);
GTEST_ASSERT_EQ(code, 0);
#if defined(USE_S3)
EXPECT_EQ(s3Size(objectName), objectSize);
#else
EXPECT_EQ(s3Size(objectName), 0);
#endif
s3End();
s3EvictCache("", 0);
taosMemoryFree(data);
EXPECT_EQ(taosRemoveFile(path), TSDB_CODE_SUCCESS);
}
return;
_exit:
if (data) {
taosMemoryFree(data);
s3End();
}
std::cout << "code: " << code << std::endl;
}

View File

@ -475,27 +475,6 @@ int32_t dmProcessGrantRsp(SDnodeMgmt *pMgmt, SRpcMsg *pMsg) {
return 0; return 0;
} }
extern void tsdbAlterNumCompactThreads();
static int32_t dmAlterMaxCompactTask(const char *value) {
int32_t max_compact_tasks;
char *endptr = NULL;
max_compact_tasks = taosStr2Int32(value, &endptr, 10);
if (endptr == value || endptr[0] != '\0') {
return TSDB_CODE_INVALID_MSG;
}
if (max_compact_tasks != tsNumOfCompactThreads) {
dInfo("alter max compact tasks from %d to %d", tsNumOfCompactThreads, max_compact_tasks);
tsNumOfCompactThreads = max_compact_tasks;
#ifdef TD_ENTERPRISE
(void)tsdbAlterNumCompactThreads();
#endif
}
return TSDB_CODE_SUCCESS;
}
int32_t dmProcessConfigReq(SDnodeMgmt *pMgmt, SRpcMsg *pMsg) { int32_t dmProcessConfigReq(SDnodeMgmt *pMgmt, SRpcMsg *pMsg) {
int32_t code = 0; int32_t code = 0;
SDCfgDnodeReq cfgReq = {0}; SDCfgDnodeReq cfgReq = {0};
@ -509,10 +488,6 @@ int32_t dmProcessConfigReq(SDnodeMgmt *pMgmt, SRpcMsg *pMsg) {
return taosUpdateTfsItemDisable(pCfg, cfgReq.value, pMgmt->pTfs); return taosUpdateTfsItemDisable(pCfg, cfgReq.value, pMgmt->pTfs);
} }
if (strncmp(cfgReq.config, tsAlterCompactTaskKeywords, strlen(tsAlterCompactTaskKeywords) + 1) == 0) {
return dmAlterMaxCompactTask(cfgReq.value);
}
dInfo("start to config, option:%s, value:%s", cfgReq.config, cfgReq.value); dInfo("start to config, option:%s, value:%s", cfgReq.config, cfgReq.value);
code = cfgGetAndSetItem(pCfg, &pItem, cfgReq.config, cfgReq.value, CFG_STYPE_ALTER_SERVER_CMD, true); code = cfgGetAndSetItem(pCfg, &pItem, cfgReq.config, cfgReq.value, CFG_STYPE_ALTER_SERVER_CMD, true);

View File

@ -129,7 +129,7 @@ SArray *mmGetMsgHandles() {
if (dmSetMgmtHandle(pArray, TDMT_MND_DROP_USER, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_MND_DROP_USER, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_GET_USER_AUTH, mmPutMsgToReadQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_MND_GET_USER_AUTH, mmPutMsgToReadQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_CREATE_DNODE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_MND_CREATE_DNODE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_CONFIG_DNODE, mmPutMsgToReadQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_MND_CONFIG_DNODE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_DROP_DNODE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_MND_DROP_DNODE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_CREATE_MNODE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_MND_CREATE_MNODE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_MND_ALTER_MNODE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER; if (dmSetMgmtHandle(pArray, TDMT_MND_ALTER_MNODE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;

View File

@ -499,6 +499,20 @@ static int32_t mndCheckDbCfg(SMnode *pMnode, SDbCfg *pCfg) {
if (pCfg->s3KeepLocal < TSDB_MIN_S3_KEEP_LOCAL || pCfg->s3KeepLocal > TSDB_MAX_S3_KEEP_LOCAL) return code; if (pCfg->s3KeepLocal < TSDB_MIN_S3_KEEP_LOCAL || pCfg->s3KeepLocal > TSDB_MAX_S3_KEEP_LOCAL) return code;
if (pCfg->s3Compact < TSDB_MIN_S3_COMPACT || pCfg->s3Compact > TSDB_MAX_S3_COMPACT) return code; if (pCfg->s3Compact < TSDB_MIN_S3_COMPACT || pCfg->s3Compact > TSDB_MAX_S3_COMPACT) return code;
if (pCfg->compactInterval != 0 &&
(pCfg->compactInterval < TSDB_MIN_COMPACT_INTERVAL || pCfg->compactInterval > pCfg->daysToKeep2))
return code;
if (pCfg->compactStartTime != 0 &&
(pCfg->compactStartTime < -pCfg->daysToKeep2 || pCfg->compactStartTime > -pCfg->daysPerFile))
return code;
if (pCfg->compactEndTime != 0 &&
(pCfg->compactEndTime < -pCfg->daysToKeep2 || pCfg->compactEndTime > -pCfg->daysPerFile))
return code;
if (pCfg->compactStartTime != 0 && pCfg->compactEndTime != 0 && pCfg->compactStartTime >= pCfg->compactEndTime)
return code;
if (pCfg->compactTimeOffset < TSDB_MIN_COMPACT_TIME_OFFSET || pCfg->compactTimeOffset > TSDB_MAX_COMPACT_TIME_OFFSET)
return code;
code = 0; code = 0;
TAOS_RETURN(code); TAOS_RETURN(code);
} }
@ -564,6 +578,22 @@ static int32_t mndCheckInChangeDbCfg(SMnode *pMnode, SDbCfg *pOldCfg, SDbCfg *pN
if (pNewCfg->s3KeepLocal < TSDB_MIN_S3_KEEP_LOCAL || pNewCfg->s3KeepLocal > TSDB_MAX_S3_KEEP_LOCAL) return code; if (pNewCfg->s3KeepLocal < TSDB_MIN_S3_KEEP_LOCAL || pNewCfg->s3KeepLocal > TSDB_MAX_S3_KEEP_LOCAL) return code;
if (pNewCfg->s3Compact < TSDB_MIN_S3_COMPACT || pNewCfg->s3Compact > TSDB_MAX_S3_COMPACT) return code; if (pNewCfg->s3Compact < TSDB_MIN_S3_COMPACT || pNewCfg->s3Compact > TSDB_MAX_S3_COMPACT) return code;
if (pNewCfg->compactInterval != 0 &&
(pNewCfg->compactInterval < TSDB_MIN_COMPACT_INTERVAL || pNewCfg->compactInterval > pNewCfg->daysToKeep2))
return code;
if (pNewCfg->compactStartTime != 0 &&
(pNewCfg->compactStartTime < -pNewCfg->daysToKeep2 || pNewCfg->compactStartTime > -pNewCfg->daysPerFile))
return code;
if (pNewCfg->compactEndTime != 0 &&
(pNewCfg->compactEndTime < -pNewCfg->daysToKeep2 || pNewCfg->compactEndTime > -pNewCfg->daysPerFile))
return code;
if (pNewCfg->compactStartTime != 0 && pNewCfg->compactEndTime != 0 &&
pNewCfg->compactStartTime >= pNewCfg->compactEndTime)
return code;
if (pNewCfg->compactTimeOffset < TSDB_MIN_COMPACT_TIME_OFFSET ||
pNewCfg->compactTimeOffset > TSDB_MAX_COMPACT_TIME_OFFSET)
return code;
code = 0; code = 0;
TAOS_RETURN(code); TAOS_RETURN(code);
} }
@ -1150,20 +1180,24 @@ static int32_t mndSetDbCfgFromAlterDbReq(SDbObj *pDb, SAlterDbReq *pAlter) {
code = 0; code = 0;
} }
bool compactTimeRangeChanged = false;
if (pAlter->compactStartTime != pDb->cfg.compactStartTime && if (pAlter->compactStartTime != pDb->cfg.compactStartTime &&
(pAlter->compactStartTime == TSDB_DEFAULT_COMPACT_START_TIME || (pAlter->compactStartTime == TSDB_DEFAULT_COMPACT_START_TIME ||
pAlter->compactStartTime <= -pDb->cfg.daysPerFile)) { pAlter->compactStartTime <= -pDb->cfg.daysPerFile)) {
pDb->cfg.compactStartTime = pAlter->compactStartTime; pDb->cfg.compactStartTime = pAlter->compactStartTime;
pDb->vgVersion++; compactTimeRangeChanged = true;
code = 0; code = 0;
} }
if (pAlter->compactEndTime != pDb->cfg.compactEndTime && if (pAlter->compactEndTime != pDb->cfg.compactEndTime &&
(pAlter->compactEndTime == TSDB_DEFAULT_COMPACT_END_TIME || pAlter->compactEndTime <= -pDb->cfg.daysPerFile)) { (pAlter->compactEndTime == TSDB_DEFAULT_COMPACT_END_TIME || pAlter->compactEndTime <= -pDb->cfg.daysPerFile)) {
pDb->cfg.compactEndTime = pAlter->compactEndTime; pDb->cfg.compactEndTime = pAlter->compactEndTime;
pDb->vgVersion++; compactTimeRangeChanged = true;
code = 0; code = 0;
} }
if(compactTimeRangeChanged) {
pDb->vgVersion++;
}
if (pAlter->compactTimeOffset >= TSDB_MIN_COMPACT_TIME_OFFSET && if (pAlter->compactTimeOffset >= TSDB_MIN_COMPACT_TIME_OFFSET &&
pAlter->compactTimeOffset != pDb->cfg.compactTimeOffset) { pAlter->compactTimeOffset != pDb->cfg.compactTimeOffset) {
@ -1408,14 +1442,6 @@ static void mndDumpDbCfgInfo(SDbCfgRsp *cfgRsp, SDbObj *pDb) {
cfgRsp->compactInterval = pDb->cfg.compactInterval; cfgRsp->compactInterval = pDb->cfg.compactInterval;
cfgRsp->compactStartTime = pDb->cfg.compactStartTime; cfgRsp->compactStartTime = pDb->cfg.compactStartTime;
cfgRsp->compactEndTime = pDb->cfg.compactEndTime; cfgRsp->compactEndTime = pDb->cfg.compactEndTime;
if (cfgRsp->compactInterval > 0) {
if (cfgRsp->compactStartTime == 0) {
cfgRsp->compactStartTime = -cfgRsp->daysToKeep2;
}
if (cfgRsp->compactEndTime == 0) {
cfgRsp->compactEndTime = -cfgRsp->daysPerFile;
}
}
cfgRsp->compactTimeOffset = pDb->cfg.compactTimeOffset; cfgRsp->compactTimeOffset = pDb->cfg.compactTimeOffset;
} }
@ -2432,6 +2458,7 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb,
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
TAOS_CHECK_GOTO(colDataSetVal(pColInfo, rows, (const char *)strictVstr, false), &lino, _OVER); TAOS_CHECK_GOTO(colDataSetVal(pColInfo, rows, (const char *)strictVstr, false), &lino, _OVER);
char durationStr[128] = {0};
char durationVstr[128] = {0}; char durationVstr[128] = {0};
int32_t len = formatDurationOrKeep(&durationVstr[VARSTR_HEADER_SIZE], sizeof(durationVstr) - VARSTR_HEADER_SIZE, int32_t len = formatDurationOrKeep(&durationVstr[VARSTR_HEADER_SIZE], sizeof(durationVstr) - VARSTR_HEADER_SIZE,
pDb->cfg.daysPerFile); pDb->cfg.daysPerFile);
@ -2440,10 +2467,10 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb,
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
TAOS_CHECK_GOTO(colDataSetVal(pColInfo, rows, (const char *)durationVstr, false), &lino, _OVER); TAOS_CHECK_GOTO(colDataSetVal(pColInfo, rows, (const char *)durationVstr, false), &lino, _OVER);
char keepVstr[512] = {0}; char keepVstr[128] = {0};
char keep0Str[128] = {0}; char keep0Str[32] = {0};
char keep1Str[128] = {0}; char keep1Str[32] = {0};
char keep2Str[128] = {0}; char keep2Str[32] = {0};
int32_t lenKeep0 = formatDurationOrKeep(keep0Str, sizeof(keep0Str), pDb->cfg.daysToKeep0); int32_t lenKeep0 = formatDurationOrKeep(keep0Str, sizeof(keep0Str), pDb->cfg.daysToKeep0);
int32_t lenKeep1 = formatDurationOrKeep(keep1Str, sizeof(keep1Str), pDb->cfg.daysToKeep1); int32_t lenKeep1 = formatDurationOrKeep(keep1Str, sizeof(keep1Str), pDb->cfg.daysToKeep1);
@ -2556,6 +2583,26 @@ static void mndDumpDbInfoData(SMnode *pMnode, SSDataBlock *pBlock, SDbObj *pDb,
STR_WITH_MAXSIZE_TO_VARSTR(encryptAlgorithmVStr, encryptAlgorithmStr, 24); STR_WITH_MAXSIZE_TO_VARSTR(encryptAlgorithmVStr, encryptAlgorithmStr, 24);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++); pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
TAOS_CHECK_GOTO(colDataSetVal(pColInfo, rows, (const char *)encryptAlgorithmVStr, false), &lino, _OVER); TAOS_CHECK_GOTO(colDataSetVal(pColInfo, rows, (const char *)encryptAlgorithmVStr, false), &lino, _OVER);
TAOS_UNUSED(formatDurationOrKeep(durationStr, sizeof(durationStr), pDb->cfg.compactInterval));
STR_WITH_MAXSIZE_TO_VARSTR(durationVstr, durationStr, sizeof(durationVstr));
if ((pColInfo = taosArrayGet(pBlock->pDataBlock, cols++))) {
TAOS_CHECK_GOTO(colDataSetVal(pColInfo, rows, (const char *)durationVstr, false), &lino, _OVER);
}
len = formatDurationOrKeep(durationStr, sizeof(durationStr), pDb->cfg.compactStartTime);
TAOS_UNUSED(formatDurationOrKeep(durationVstr, sizeof(durationVstr), pDb->cfg.compactEndTime));
TAOS_UNUSED(snprintf(durationStr + len, sizeof(durationStr) - len, ",%s", durationVstr));
STR_WITH_MAXSIZE_TO_VARSTR(durationVstr, durationStr, sizeof(durationVstr));
if ((pColInfo = taosArrayGet(pBlock->pDataBlock, cols++))) {
TAOS_CHECK_GOTO(colDataSetVal(pColInfo, rows, (const char *)durationVstr, false), &lino, _OVER);
}
TAOS_UNUSED(snprintf(durationStr, sizeof(durationStr), "%dh", pDb->cfg.compactTimeOffset));
STR_WITH_MAXSIZE_TO_VARSTR(durationVstr, durationStr, sizeof(durationVstr));
if ((pColInfo = taosArrayGet(pBlock->pDataBlock, cols++))) {
TAOS_CHECK_GOTO(colDataSetVal(pColInfo, rows, (const char *)durationVstr, false), &lino, _OVER);
}
} }
_OVER: _OVER:
if (code != 0) mError("failed to retrieve at line:%d, since %s", lino, tstrerror(code)); if (code != 0) mError("failed to retrieve at line:%d, since %s", lino, tstrerror(code));
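Note: the added checks in mndCheckDbCfg/mndCheckInChangeDbCfg tie the compact options to the retention settings: compact_interval must not exceed keep2, the compact time range must sit inside [-keep2, -duration] with start < end, and the offset must stay within its own bounds. A minimal standalone sketch of those constraints; the limit constants and function name here are illustrative stand-ins, not the real TSDB_* macros:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-ins for the TSDB_MIN/MAX_* macros used in mndCheckDbCfg. */
#define MIN_COMPACT_INTERVAL    10  /* minutes */
#define MIN_COMPACT_TIME_OFFSET 0   /* hours   */
#define MAX_COMPACT_TIME_OFFSET 23  /* hours   */

/* Returns true when the compact options are consistent with keep2/duration,
 * mirroring the range checks added above (0 means "option not set"). */
static bool compactCfgValid(int32_t interval, int32_t start, int32_t end,
                            int32_t offset, int32_t keep2, int32_t duration) {
  if (interval != 0 && (interval < MIN_COMPACT_INTERVAL || interval > keep2)) return false;
  if (start != 0 && (start < -keep2 || start > -duration)) return false;
  if (end != 0 && (end < -keep2 || end > -duration)) return false;
  if (start != 0 && end != 0 && start >= end) return false;
  if (offset < MIN_COMPACT_TIME_OFFSET || offset > MAX_COMPACT_TIME_OFFSET) return false;
  return true;
}

int main(void) {
  /* keep2 = 3650 days, duration = 10 days: start/end must fall in [-3650, -10]. */
  printf("%d\n", compactCfgValid(1440, -3650, -10, 2, 3650, 10)); /* 1: valid            */
  printf("%d\n", compactCfgValid(1440, -5, -10, 2, 3650, 10));    /* 0: start too recent */
  return 0;
}
```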

View File

@ -68,7 +68,6 @@ void initStateStoreAPI(SStateStore* pStore) {
pStore->streamStateFillGetGroupKVByCur = streamStateFillGetGroupKVByCur; pStore->streamStateFillGetGroupKVByCur = streamStateFillGetGroupKVByCur;
pStore->streamStateGetKVByCur = streamStateGetKVByCur; pStore->streamStateGetKVByCur = streamStateGetKVByCur;
pStore->streamStateSetFillInfo = streamStateSetFillInfo;
pStore->streamStateClearExpiredState = streamStateClearExpiredState; pStore->streamStateClearExpiredState = streamStateClearExpiredState;
pStore->streamStateSessionAddIfNotExist = streamStateSessionAddIfNotExist; pStore->streamStateSessionAddIfNotExist = streamStateSessionAddIfNotExist;
@ -117,7 +116,6 @@ void initStateStoreAPI(SStateStore* pStore) {
pStore->streamStateBegin = streamStateBegin; pStore->streamStateBegin = streamStateBegin;
pStore->streamStateCommit = streamStateCommit; pStore->streamStateCommit = streamStateCommit;
pStore->streamStateDestroy = streamStateDestroy; pStore->streamStateDestroy = streamStateDestroy;
pStore->streamStateDeleteCheckPoint = streamStateDeleteCheckPoint;
pStore->streamStateReloadInfo = streamStateReloadInfo; pStore->streamStateReloadInfo = streamStateReloadInfo;
pStore->streamStateCopyBackend = streamStateCopyBackend; pStore->streamStateCopyBackend = streamStateCopyBackend;
} }

View File

@ -680,7 +680,7 @@ static int32_t fset_cmpr_fn(const struct STFileSet *pSet1, const struct STFileSe
return 0; return 0;
} }
static int32_t edit_fs(STFileSystem *fs, const TFileOpArray *opArray) { static int32_t edit_fs(STFileSystem *fs, const TFileOpArray *opArray, EFEditT etype) {
int32_t code = 0; int32_t code = 0;
int32_t lino = 0; int32_t lino = 0;
@ -690,6 +690,8 @@ static int32_t edit_fs(STFileSystem *fs, const TFileOpArray *opArray) {
TFileSetArray *fsetArray = fs->fSetArrTmp; TFileSetArray *fsetArray = fs->fSetArrTmp;
STFileSet *fset = NULL; STFileSet *fset = NULL;
const STFileOp *op; const STFileOp *op;
int32_t fid = INT32_MIN;
TSKEY now = taosGetTimestampMs();
TARRAY2_FOREACH_PTR(opArray, op) { TARRAY2_FOREACH_PTR(opArray, op) {
if (!fset || fset->fid != op->fid) { if (!fset || fset->fid != op->fid) {
STFileSet tfset = {.fid = op->fid}; STFileSet tfset = {.fid = op->fid};
@ -708,6 +710,15 @@ static int32_t edit_fs(STFileSystem *fs, const TFileOpArray *opArray) {
code = tsdbTFileSetEdit(fs->tsdb, fset, op); code = tsdbTFileSetEdit(fs->tsdb, fset, op);
TSDB_CHECK_CODE(code, lino, _exit); TSDB_CHECK_CODE(code, lino, _exit);
if (fid != op->fid) {
fid = op->fid;
if (etype == TSDB_FEDIT_COMMIT) {
fset->lastCommit = now;
} else if (etype == TSDB_FEDIT_COMPACT) {
fset->lastCompact = now;
}
}
} }
// remove empty stt level and empty file set // remove empty stt level and empty file set
@ -864,7 +875,7 @@ int32_t tsdbFSEditBegin(STFileSystem *fs, const TFileOpArray *opArray, EFEditT e
fs->etype = etype; fs->etype = etype;
// edit // edit
code = edit_fs(fs, opArray); code = edit_fs(fs, opArray, etype);
TSDB_CHECK_CODE(code, lino, _exit); TSDB_CHECK_CODE(code, lino, _exit);
// save fs // save fs
@ -1288,6 +1299,12 @@ int32_t tsdbFileSetReaderOpen(void *pVnode, struct SFileSetReader **ppReader) {
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
} }
extern bool tsdbShouldCompact(const STFileSet *pFileSet);
#ifndef TD_ENTERPRISE
bool tsdbShouldCompact(const STFileSet *pFileSet) { return false; }
#endif
static int32_t tsdbFileSetReaderNextNoLock(struct SFileSetReader *pReader) { static int32_t tsdbFileSetReaderNextNoLock(struct SFileSetReader *pReader) {
STsdb *pTsdb = pReader->pTsdb; STsdb *pTsdb = pReader->pTsdb;
int32_t code = TSDB_CODE_SUCCESS; int32_t code = TSDB_CODE_SUCCESS;
@ -1311,7 +1328,7 @@ static int32_t tsdbFileSetReaderNextNoLock(struct SFileSetReader *pReader) {
// get file set details // get file set details
pReader->fid = pReader->pFileSet->fid; pReader->fid = pReader->pFileSet->fid;
tsdbFidKeyRange(pReader->fid, pTsdb->keepCfg.days, pTsdb->keepCfg.precision, &pReader->startTime, &pReader->endTime); tsdbFidKeyRange(pReader->fid, pTsdb->keepCfg.days, pTsdb->keepCfg.precision, &pReader->startTime, &pReader->endTime);
pReader->lastCompactTime = 0; // TODO pReader->lastCompactTime = pReader->pFileSet->lastCompact;
pReader->totalSize = 0; pReader->totalSize = 0;
for (int32_t i = 0; i < TSDB_FTYPE_MAX; i++) { for (int32_t i = 0; i < TSDB_FTYPE_MAX; i++) {
STFileObj *fobj = pReader->pFileSet->farr[i]; STFileObj *fobj = pReader->pFileSet->farr[i];
@ -1375,7 +1392,7 @@ int32_t tsdbFileSetGetEntryField(struct SFileSetReader *pReader, const char *fie
fieldName = "should_compact"; fieldName = "should_compact";
if (strncmp(field, fieldName, strlen(fieldName) + 1) == 0) { if (strncmp(field, fieldName, strlen(fieldName) + 1) == 0) {
*(char *)value = 0; // TODO *(char *)value = tsdbShouldCompact(pReader->pFileSet);
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
} }
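Note: edit_fs now takes the edit type and stamps lastCommit or lastCompact once per touched fid. A simplified sketch of that stamping pass; the types and function names below are stand-ins for STFileSystem/TFileOpArray, not the real ones:

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

typedef enum { FEDIT_COMMIT = 1, FEDIT_COMPACT = 2 } EditType;  /* simplified EFEditT analogue */

typedef struct { int32_t fid; int64_t lastCommit; int64_t lastCompact; } FileSet;
typedef struct { int32_t fid; } FileOp;

/* Stamp each file set at most once per fid, mirroring the fid != op->fid guard in edit_fs. */
static void stampFileSets(FileSet *fsets, int nFsets, const FileOp *ops, int nOps, EditType etype) {
  int64_t now = (int64_t)time(NULL) * 1000;  /* taosGetTimestampMs() analogue */
  int32_t lastFid = INT32_MIN;
  for (int i = 0; i < nOps; i++) {
    if (ops[i].fid == lastFid) continue;     /* already stamped for this fid */
    lastFid = ops[i].fid;
    for (int j = 0; j < nFsets; j++) {
      if (fsets[j].fid != ops[i].fid) continue;
      if (etype == FEDIT_COMMIT)  fsets[j].lastCommit  = now;
      if (etype == FEDIT_COMPACT) fsets[j].lastCompact = now;
    }
  }
}

int main(void) {
  FileSet fsets[2] = {{.fid = 1}, {.fid = 2}};
  FileOp  ops[3]   = {{.fid = 1}, {.fid = 1}, {.fid = 2}};
  stampFileSets(fsets, 2, ops, 3, FEDIT_COMPACT);
  printf("fid 1 lastCompact=%lld\n", (long long)fsets[0].lastCompact);
  return 0;
}
```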

View File

@ -273,6 +273,15 @@ int32_t tsdbTFileSetToJson(const STFileSet *fset, cJSON *json) {
if (code) return code; if (code) return code;
} }
// about compact and commit
if (cJSON_AddNumberToObject(json, "last compact", fset->lastCompact) == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
if (cJSON_AddNumberToObject(json, "last commit", fset->lastCommit) == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
return 0; return 0;
} }
@ -324,6 +333,20 @@ int32_t tsdbJsonToTFileSet(STsdb *pTsdb, const cJSON *json, STFileSet **fset) {
} else { } else {
return TSDB_CODE_FILE_CORRUPTED; return TSDB_CODE_FILE_CORRUPTED;
} }
// about compact and commit
item1 = cJSON_GetObjectItem(json, "last compact");
if (cJSON_IsNumber(item1)) {
(*fset)->lastCompact = item1->valuedouble;
} else {
(*fset)->lastCompact = 0;
}
item1 = cJSON_GetObjectItem(json, "last commit");
if (cJSON_IsNumber(item1)) {
(*fset)->lastCommit = item1->valuedouble;
} else {
(*fset)->lastCommit = 0;
}
return 0; return 0;
} }
@ -467,6 +490,9 @@ int32_t tsdbTFileSetApplyEdit(STsdb *pTsdb, const STFileSet *fset1, STFileSet *f
} }
} }
fset2->lastCompact = fset1->lastCompact;
fset2->lastCommit = fset1->lastCommit;
return 0; return 0;
} }
@ -522,6 +548,9 @@ int32_t tsdbTFileSetInitCopy(STsdb *pTsdb, const STFileSet *fset1, STFileSet **f
if (code) return code; if (code) return code;
} }
(*fset)->lastCompact = fset1->lastCompact;
(*fset)->lastCommit = fset1->lastCommit;
return 0; return 0;
} }
@ -617,6 +646,9 @@ int32_t tsdbTFileSetInitRef(STsdb *pTsdb, const STFileSet *fset1, STFileSet **fs
} }
} }
(*fset)->lastCompact = fset1->lastCompact;
(*fset)->lastCommit = fset1->lastCommit;
return 0; return 0;
} }
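Note: the two timestamps are written into the file-set manifest JSON and default to 0 when the keys are missing, so manifests produced before this change still load. A hedged round-trip sketch using cJSON directly (assuming, as the patched code does, a cJSON build where cJSON_AddNumberToObject returns the created item):

```c
#include <stdint.h>
#include <stdio.h>
#include "cJSON.h"  /* same library the tsdb manifest code uses */

/* Write the two timestamps into a file-set JSON object. */
static int fsetToJson(cJSON *json, int64_t lastCompact, int64_t lastCommit) {
  if (cJSON_AddNumberToObject(json, "last compact", (double)lastCompact) == NULL) return -1;
  if (cJSON_AddNumberToObject(json, "last commit", (double)lastCommit) == NULL) return -1;
  return 0;
}

/* Read them back; missing keys (older manifests) fall back to 0, as in tsdbJsonToTFileSet. */
static void jsonToFset(const cJSON *json, int64_t *lastCompact, int64_t *lastCommit) {
  const cJSON *item = cJSON_GetObjectItem(json, "last compact");
  *lastCompact = cJSON_IsNumber(item) ? (int64_t)item->valuedouble : 0;
  item = cJSON_GetObjectItem(json, "last commit");
  *lastCommit = cJSON_IsNumber(item) ? (int64_t)item->valuedouble : 0;
}

int main(void) {
  cJSON *json = cJSON_CreateObject();
  fsetToJson(json, 1735534800000LL, 1735538400000LL);
  int64_t compact = 0, commit = 0;
  jsonToFset(json, &compact, &commit);
  printf("last compact=%lld last commit=%lld\n", (long long)compact, (long long)commit);
  cJSON_Delete(json);
  return 0;
}
```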

View File

@ -92,6 +92,8 @@ struct STFileSet {
int64_t maxVerValid; int64_t maxVerValid;
STFileObj *farr[TSDB_FTYPE_MAX]; // file array STFileObj *farr[TSDB_FTYPE_MAX]; // file array
TSttLvlArray lvlArr[1]; // level array TSttLvlArray lvlArr[1]; // level array
TSKEY lastCompact;
TSKEY lastCommit;
bool mergeScheduled; bool mergeScheduled;
SVATaskID mergeTask; SVATaskID mergeTask;

View File

@ -191,7 +191,6 @@ void initStateStoreAPI(SStateStore* pStore) {
pStore->streamStateFillGetGroupKVByCur = streamStateFillGetGroupKVByCur; pStore->streamStateFillGetGroupKVByCur = streamStateFillGetGroupKVByCur;
pStore->streamStateGetKVByCur = streamStateGetKVByCur; pStore->streamStateGetKVByCur = streamStateGetKVByCur;
pStore->streamStateSetFillInfo = streamStateSetFillInfo;
pStore->streamStateClearExpiredState = streamStateClearExpiredState; pStore->streamStateClearExpiredState = streamStateClearExpiredState;
pStore->streamStateSessionAddIfNotExist = streamStateSessionAddIfNotExist; pStore->streamStateSessionAddIfNotExist = streamStateSessionAddIfNotExist;
@ -243,7 +242,6 @@ void initStateStoreAPI(SStateStore* pStore) {
pStore->streamStateBegin = streamStateBegin; pStore->streamStateBegin = streamStateBegin;
pStore->streamStateCommit = streamStateCommit; pStore->streamStateCommit = streamStateCommit;
pStore->streamStateDestroy = streamStateDestroy; pStore->streamStateDestroy = streamStateDestroy;
pStore->streamStateDeleteCheckPoint = streamStateDeleteCheckPoint;
pStore->streamStateReloadInfo = streamStateReloadInfo; pStore->streamStateReloadInfo = streamStateReloadInfo;
pStore->streamStateCopyBackend = streamStateCopyBackend; pStore->streamStateCopyBackend = streamStateCopyBackend;
} }

View File

@ -1415,7 +1415,8 @@ static int32_t vnodeProcessAlterTbReq(SVnode *pVnode, int64_t ver, void *pReq, i
SVAlterTbReq vAlterTbReq = {0}; SVAlterTbReq vAlterTbReq = {0};
SVAlterTbRsp vAlterTbRsp = {0}; SVAlterTbRsp vAlterTbRsp = {0};
SDecoder dc = {0}; SDecoder dc = {0};
int32_t rcode = 0; int32_t code = 0;
int32_t lino = 0;
int32_t ret; int32_t ret;
SEncoder ec = {0}; SEncoder ec = {0};
STableMetaRsp vMetaRsp = {0}; STableMetaRsp vMetaRsp = {0};
@ -1431,7 +1432,6 @@ static int32_t vnodeProcessAlterTbReq(SVnode *pVnode, int64_t ver, void *pReq, i
if (tDecodeSVAlterTbReq(&dc, &vAlterTbReq) < 0) { if (tDecodeSVAlterTbReq(&dc, &vAlterTbReq) < 0) {
vAlterTbRsp.code = TSDB_CODE_INVALID_MSG; vAlterTbRsp.code = TSDB_CODE_INVALID_MSG;
tDecoderClear(&dc); tDecoderClear(&dc);
rcode = -1;
goto _exit; goto _exit;
} }
@ -1439,7 +1439,6 @@ static int32_t vnodeProcessAlterTbReq(SVnode *pVnode, int64_t ver, void *pReq, i
if (metaAlterTable(pVnode->pMeta, ver, &vAlterTbReq, &vMetaRsp) < 0) { if (metaAlterTable(pVnode->pMeta, ver, &vAlterTbReq, &vMetaRsp) < 0) {
vAlterTbRsp.code = terrno; vAlterTbRsp.code = terrno;
tDecoderClear(&dc); tDecoderClear(&dc);
rcode = -1;
goto _exit; goto _exit;
} }
tDecoderClear(&dc); tDecoderClear(&dc);
@ -1449,6 +1448,31 @@ static int32_t vnodeProcessAlterTbReq(SVnode *pVnode, int64_t ver, void *pReq, i
vAlterTbRsp.pMeta = &vMetaRsp; vAlterTbRsp.pMeta = &vMetaRsp;
} }
if (vAlterTbReq.action == TSDB_ALTER_TABLE_UPDATE_TAG_VAL || vAlterTbReq.action == TSDB_ALTER_TABLE_UPDATE_MULTI_TAG_VAL) {
int64_t uid = metaGetTableEntryUidByName(pVnode->pMeta, vAlterTbReq.tbName);
if (uid == 0) {
vError("vgId:%d, %s failed at %s:%d since table %s not found", TD_VID(pVnode), __func__, __FILE__, __LINE__,
vAlterTbReq.tbName);
goto _exit;
}
SArray* tbUids = taosArrayInit(4, sizeof(int64_t));
void* p = taosArrayPush(tbUids, &uid);
TSDB_CHECK_NULL(p, code, lino, _exit, terrno);
vDebug("vgId:%d, remove tags value altered table:%s from query table list", TD_VID(pVnode), vAlterTbReq.tbName);
if ((code = tqUpdateTbUidList(pVnode->pTq, tbUids, false)) < 0) {
vError("vgId:%d, failed to remove tbUid list since %s", TD_VID(pVnode), tstrerror(code));
}
vDebug("vgId:%d, try to add table:%s in query table list", TD_VID(pVnode), vAlterTbReq.tbName);
if ((code = tqUpdateTbUidList(pVnode->pTq, tbUids, true)) < 0) {
vError("vgId:%d, failed to add tbUid list since %s", TD_VID(pVnode), tstrerror(code));
}
taosArrayDestroy(tbUids);
}
_exit: _exit:
taosArrayDestroy(vAlterTbReq.pMultiTag); taosArrayDestroy(vAlterTbReq.pMultiTag);
tEncodeSize(tEncodeSVAlterTbRsp, &vAlterTbRsp, pRsp->contLen, ret); tEncodeSize(tEncodeSVAlterTbRsp, &vAlterTbRsp, pRsp->contLen, ret);
@ -1457,6 +1481,7 @@ _exit:
if (tEncodeSVAlterTbRsp(&ec, &vAlterTbRsp) != 0) { if (tEncodeSVAlterTbRsp(&ec, &vAlterTbRsp) != 0) {
vError("vgId:%d, failed to encode alter table response", TD_VID(pVnode)); vError("vgId:%d, failed to encode alter table response", TD_VID(pVnode));
} }
tEncoderClear(&ec); tEncoderClear(&ec);
if (vMetaRsp.pSchemas) { if (vMetaRsp.pSchemas) {
taosMemoryFree(vMetaRsp.pSchemas); taosMemoryFree(vMetaRsp.pSchemas);
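Note: when a tag value is updated, the table uid is removed from and immediately re-added to the tq query table list so subscriptions re-evaluate the new tag values. A minimal sketch of that remove-then-add refresh; UidList and its helpers are hypothetical stand-ins for tqUpdateTbUidList and its backing structure:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the tq query-table uid list. */
typedef struct {
  int64_t uids[64];
  int     n;
} UidList;

static void uidListRemove(UidList *l, int64_t uid) {
  for (int i = 0; i < l->n; i++) {
    if (l->uids[i] == uid) { l->uids[i] = l->uids[--l->n]; return; }
  }
}

static void uidListAdd(UidList *l, int64_t uid) {
  if (l->n < 64) l->uids[l->n++] = uid;
}

/* Mirrors the new vnodeProcessAlterTbReq branch: on a tag-value update the uid is
 * dropped and re-inserted so downstream readers pick up the altered tags. */
static void refreshAfterTagUpdate(UidList *l, int64_t uid) {
  uidListRemove(l, uid);  /* tqUpdateTbUidList(..., false) analogue */
  uidListAdd(l, uid);     /* tqUpdateTbUidList(..., true)  analogue */
}

int main(void) {
  UidList l = {.uids = {1001, 1002}, .n = 2};
  refreshAfterTagUpdate(&l, 1002);
  printf("entries=%d\n", l.n);  /* still 2, but 1002 was re-registered */
  return 0;
}
```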

View File

@ -38,6 +38,8 @@ TDBlockBlobClient TDBlockBlobClient::CreateFromConnectionString(const std::strin
return newClient; return newClient;
} }
TDBlockBlobClient::TDBlockBlobClient(BlobClient blobClient) : BlobClient(std::move(blobClient)) {}
#if 0
TDBlockBlobClient::TDBlockBlobClient(const std::string& blobUrl, std::shared_ptr<StorageSharedKeyCredential> credential, TDBlockBlobClient::TDBlockBlobClient(const std::string& blobUrl, std::shared_ptr<StorageSharedKeyCredential> credential,
const BlobClientOptions& options) const BlobClientOptions& options)
: BlobClient(blobUrl, std::move(credential), options) {} : BlobClient(blobUrl, std::move(credential), options) {}
@ -50,8 +52,6 @@ TDBlockBlobClient::TDBlockBlobClient(const std::string&
TDBlockBlobClient::TDBlockBlobClient(const std::string& blobUrl, const BlobClientOptions& options) TDBlockBlobClient::TDBlockBlobClient(const std::string& blobUrl, const BlobClientOptions& options)
: BlobClient(blobUrl, options) {} : BlobClient(blobUrl, options) {}
TDBlockBlobClient::TDBlockBlobClient(BlobClient blobClient) : BlobClient(std::move(blobClient)) {}
TDBlockBlobClient TDBlockBlobClient::WithSnapshot(const std::string& snapshot) const { TDBlockBlobClient TDBlockBlobClient::WithSnapshot(const std::string& snapshot) const {
TDBlockBlobClient newClient(*this); TDBlockBlobClient newClient(*this);
if (snapshot.empty()) { if (snapshot.empty()) {
@ -74,47 +74,6 @@ TDBlockBlobClient TDBlockBlobClient::WithVersionId(const std::string& versionId)
return newClient; return newClient;
} }
Azure::Response<Models::UploadBlockBlobResult> TDBlockBlobClient::Upload(Azure::Core::IO::BodyStream& content,
const UploadBlockBlobOptions& options,
const Azure::Core::Context& context) const {
_detail::BlockBlobClient::UploadBlockBlobOptions protocolLayerOptions;
if (options.TransactionalContentHash.HasValue()) {
if (options.TransactionalContentHash.Value().Algorithm == HashAlgorithm::Md5) {
protocolLayerOptions.TransactionalContentMD5 = options.TransactionalContentHash.Value().Value;
} else if (options.TransactionalContentHash.Value().Algorithm == HashAlgorithm::Crc64) {
protocolLayerOptions.TransactionalContentCrc64 = options.TransactionalContentHash.Value().Value;
}
}
protocolLayerOptions.BlobContentType = options.HttpHeaders.ContentType;
protocolLayerOptions.BlobContentEncoding = options.HttpHeaders.ContentEncoding;
protocolLayerOptions.BlobContentLanguage = options.HttpHeaders.ContentLanguage;
protocolLayerOptions.BlobContentMD5 = options.HttpHeaders.ContentHash.Value;
protocolLayerOptions.BlobContentDisposition = options.HttpHeaders.ContentDisposition;
protocolLayerOptions.BlobCacheControl = options.HttpHeaders.CacheControl;
protocolLayerOptions.Metadata = std::map<std::string, std::string>(options.Metadata.begin(), options.Metadata.end());
protocolLayerOptions.BlobTagsString = _detail::TagsToString(options.Tags);
protocolLayerOptions.Tier = options.AccessTier;
protocolLayerOptions.LeaseId = options.AccessConditions.LeaseId;
protocolLayerOptions.IfModifiedSince = options.AccessConditions.IfModifiedSince;
protocolLayerOptions.IfUnmodifiedSince = options.AccessConditions.IfUnmodifiedSince;
protocolLayerOptions.IfMatch = options.AccessConditions.IfMatch;
protocolLayerOptions.IfNoneMatch = options.AccessConditions.IfNoneMatch;
protocolLayerOptions.IfTags = options.AccessConditions.TagConditions;
if (m_customerProvidedKey.HasValue()) {
protocolLayerOptions.EncryptionKey = m_customerProvidedKey.Value().Key;
protocolLayerOptions.EncryptionKeySha256 = m_customerProvidedKey.Value().KeyHash;
protocolLayerOptions.EncryptionAlgorithm = m_customerProvidedKey.Value().Algorithm.ToString();
}
protocolLayerOptions.EncryptionScope = m_encryptionScope;
if (options.ImmutabilityPolicy.HasValue()) {
protocolLayerOptions.ImmutabilityPolicyExpiry = options.ImmutabilityPolicy.Value().ExpiresOn;
protocolLayerOptions.ImmutabilityPolicyMode = options.ImmutabilityPolicy.Value().PolicyMode;
}
protocolLayerOptions.LegalHold = options.HasLegalHold;
return _detail::BlockBlobClient::Upload(*m_pipeline, m_blobUrl, content, protocolLayerOptions, context);
}
Azure::Response<Models::UploadBlockBlobFromResult> TDBlockBlobClient::UploadFrom( Azure::Response<Models::UploadBlockBlobFromResult> TDBlockBlobClient::UploadFrom(
const uint8_t* buffer, size_t bufferSize, const UploadBlockBlobFromOptions& options, const uint8_t* buffer, size_t bufferSize, const UploadBlockBlobFromOptions& options,
const Azure::Core::Context& context) const { const Azure::Core::Context& context) const {
@ -270,6 +229,47 @@ Azure::Response<Models::UploadBlockBlobFromResult> TDBlockBlobClient::UploadFrom
return Azure::Response<Models::UploadBlockBlobFromResult>(std::move(result), return Azure::Response<Models::UploadBlockBlobFromResult>(std::move(result),
std::move(commitBlockListResponse.RawResponse)); std::move(commitBlockListResponse.RawResponse));
} }
#endif
Azure::Response<Models::UploadBlockBlobResult> TDBlockBlobClient::Upload(Azure::Core::IO::BodyStream& content,
const UploadBlockBlobOptions& options,
const Azure::Core::Context& context) const {
_detail::BlockBlobClient::UploadBlockBlobOptions protocolLayerOptions;
if (options.TransactionalContentHash.HasValue()) {
if (options.TransactionalContentHash.Value().Algorithm == HashAlgorithm::Md5) {
protocolLayerOptions.TransactionalContentMD5 = options.TransactionalContentHash.Value().Value;
} else if (options.TransactionalContentHash.Value().Algorithm == HashAlgorithm::Crc64) {
protocolLayerOptions.TransactionalContentCrc64 = options.TransactionalContentHash.Value().Value;
}
}
protocolLayerOptions.BlobContentType = options.HttpHeaders.ContentType;
protocolLayerOptions.BlobContentEncoding = options.HttpHeaders.ContentEncoding;
protocolLayerOptions.BlobContentLanguage = options.HttpHeaders.ContentLanguage;
protocolLayerOptions.BlobContentMD5 = options.HttpHeaders.ContentHash.Value;
protocolLayerOptions.BlobContentDisposition = options.HttpHeaders.ContentDisposition;
protocolLayerOptions.BlobCacheControl = options.HttpHeaders.CacheControl;
protocolLayerOptions.Metadata = std::map<std::string, std::string>(options.Metadata.begin(), options.Metadata.end());
protocolLayerOptions.BlobTagsString = _detail::TagsToString(options.Tags);
protocolLayerOptions.Tier = options.AccessTier;
protocolLayerOptions.LeaseId = options.AccessConditions.LeaseId;
protocolLayerOptions.IfModifiedSince = options.AccessConditions.IfModifiedSince;
protocolLayerOptions.IfUnmodifiedSince = options.AccessConditions.IfUnmodifiedSince;
protocolLayerOptions.IfMatch = options.AccessConditions.IfMatch;
protocolLayerOptions.IfNoneMatch = options.AccessConditions.IfNoneMatch;
protocolLayerOptions.IfTags = options.AccessConditions.TagConditions;
if (m_customerProvidedKey.HasValue()) {
protocolLayerOptions.EncryptionKey = m_customerProvidedKey.Value().Key;
protocolLayerOptions.EncryptionKeySha256 = m_customerProvidedKey.Value().KeyHash;
protocolLayerOptions.EncryptionAlgorithm = m_customerProvidedKey.Value().Algorithm.ToString();
}
protocolLayerOptions.EncryptionScope = m_encryptionScope;
if (options.ImmutabilityPolicy.HasValue()) {
protocolLayerOptions.ImmutabilityPolicyExpiry = options.ImmutabilityPolicy.Value().ExpiresOn;
protocolLayerOptions.ImmutabilityPolicyMode = options.ImmutabilityPolicy.Value().PolicyMode;
}
protocolLayerOptions.LegalHold = options.HasLegalHold;
return _detail::BlockBlobClient::Upload(*m_pipeline, m_blobUrl, content, protocolLayerOptions, context);
}
Azure::Response<Models::UploadBlockBlobFromResult> TDBlockBlobClient::UploadFrom( Azure::Response<Models::UploadBlockBlobFromResult> TDBlockBlobClient::UploadFrom(
const std::string& fileName, int64_t offset, int64_t size, const UploadBlockBlobFromOptions& options, const std::string& fileName, int64_t offset, int64_t size, const UploadBlockBlobFromOptions& options,
@ -349,7 +349,7 @@ Azure::Response<Models::UploadBlockBlobFromResult> TDBlockBlobClient::UploadFrom
return Azure::Response<Models::UploadBlockBlobFromResult>(std::move(result), return Azure::Response<Models::UploadBlockBlobFromResult>(std::move(result),
std::move(commitBlockListResponse.RawResponse)); std::move(commitBlockListResponse.RawResponse));
} }
#if 0
Azure::Response<Models::UploadBlockBlobFromUriResult> TDBlockBlobClient::UploadFromUri( Azure::Response<Models::UploadBlockBlobFromUriResult> TDBlockBlobClient::UploadFromUri(
const std::string& sourceUri, const UploadBlockBlobFromUriOptions& options, const std::string& sourceUri, const UploadBlockBlobFromUriOptions& options,
const Azure::Core::Context& context) const { const Azure::Core::Context& context) const {
@ -396,7 +396,7 @@ Azure::Response<Models::UploadBlockBlobFromUriResult> TDBlockBlobClient::UploadF
return _detail::BlockBlobClient::UploadFromUri(*m_pipeline, m_blobUrl, protocolLayerOptions, context); return _detail::BlockBlobClient::UploadFromUri(*m_pipeline, m_blobUrl, protocolLayerOptions, context);
} }
#endif
Azure::Response<Models::StageBlockResult> TDBlockBlobClient::StageBlock(const std::string& blockId, Azure::Response<Models::StageBlockResult> TDBlockBlobClient::StageBlock(const std::string& blockId,
Azure::Core::IO::BodyStream& content, Azure::Core::IO::BodyStream& content,
const StageBlockOptions& options, const StageBlockOptions& options,
@ -419,7 +419,7 @@ Azure::Response<Models::StageBlockResult> TDBlockBlobClient::StageBlock(const st
protocolLayerOptions.EncryptionScope = m_encryptionScope; protocolLayerOptions.EncryptionScope = m_encryptionScope;
return _detail::BlockBlobClient::StageBlock(*m_pipeline, m_blobUrl, content, protocolLayerOptions, context); return _detail::BlockBlobClient::StageBlock(*m_pipeline, m_blobUrl, content, protocolLayerOptions, context);
} }
#if 0
Azure::Response<Models::StageBlockFromUriResult> TDBlockBlobClient::StageBlockFromUri( Azure::Response<Models::StageBlockFromUriResult> TDBlockBlobClient::StageBlockFromUri(
const std::string& blockId, const std::string& sourceUri, const StageBlockFromUriOptions& options, const std::string& blockId, const std::string& sourceUri, const StageBlockFromUriOptions& options,
const Azure::Core::Context& context) const { const Azure::Core::Context& context) const {
@ -457,7 +457,7 @@ Azure::Response<Models::StageBlockFromUriResult> TDBlockBlobClient::StageBlockFr
return _detail::BlockBlobClient::StageBlockFromUri(*m_pipeline, m_blobUrl, protocolLayerOptions, context); return _detail::BlockBlobClient::StageBlockFromUri(*m_pipeline, m_blobUrl, protocolLayerOptions, context);
} }
#endif
Azure::Response<Models::CommitBlockListResult> TDBlockBlobClient::CommitBlockList( Azure::Response<Models::CommitBlockListResult> TDBlockBlobClient::CommitBlockList(
const std::vector<std::string>& blockIds, const CommitBlockListOptions& options, const std::vector<std::string>& blockIds, const CommitBlockListOptions& options,
const Azure::Core::Context& context) const { const Azure::Core::Context& context) const {
@ -492,7 +492,7 @@ Azure::Response<Models::CommitBlockListResult> TDBlockBlobClient::CommitBlockLis
return _detail::BlockBlobClient::CommitBlockList(*m_pipeline, m_blobUrl, protocolLayerOptions, context); return _detail::BlockBlobClient::CommitBlockList(*m_pipeline, m_blobUrl, protocolLayerOptions, context);
} }
#if 0
Azure::Response<Models::GetBlockListResult> TDBlockBlobClient::GetBlockList(const GetBlockListOptions& options, Azure::Response<Models::GetBlockListResult> TDBlockBlobClient::GetBlockList(const GetBlockListOptions& options,
const Azure::Core::Context& context) const { const Azure::Core::Context& context) const {
_detail::BlockBlobClient::GetBlockBlobBlockListOptions protocolLayerOptions; _detail::BlockBlobClient::GetBlockBlobBlockListOptions protocolLayerOptions;
@ -502,6 +502,7 @@ Azure::Response<Models::GetBlockListResult> TDBlockBlobClient::GetBlockList(cons
return _detail::BlockBlobClient::GetBlockList(*m_pipeline, m_blobUrl, protocolLayerOptions, return _detail::BlockBlobClient::GetBlockList(*m_pipeline, m_blobUrl, protocolLayerOptions,
_internal::WithReplicaStatus(context)); _internal::WithReplicaStatus(context));
} }
#endif
/* /*
Azure::Response<Models::QueryBlobResult> TDBlockBlobClient::Query(const std::string& querySqlExpression, Azure::Response<Models::QueryBlobResult> TDBlockBlobClient::Query(const std::string& querySqlExpression,
const QueryBlobOptions& options, const QueryBlobOptions& options,

View File

@ -0,0 +1,210 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include <gtest/gtest.h>
#include <cstring>
#include <iostream>
#include <queue>
// clang-format off
#include "td_block_blob_client.hpp"
#include "az.h"
// clang-format on
using namespace Azure::Storage;
using namespace Azure::Storage::Blobs;
extern int8_t tsS3Enabled;
extern char tsS3BucketName[TSDB_FQDN_LEN];
static int32_t azInitEnv() {
int32_t code = 0;
extern int8_t tsS3EpNum;
extern char tsS3Hostname[][TSDB_FQDN_LEN];
extern char tsS3AccessKeyId[][TSDB_FQDN_LEN];
extern char tsS3AccessKeySecret[][TSDB_FQDN_LEN];
/* TCS parameter format
tsS3Hostname[0] = "<endpoint>/<account-name>.blob.core.windows.net";
tsS3AccessKeyId[0] = "<access-key-id/account-name>";
tsS3AccessKeySecret[0] = "<access-key-secret/account-key>";
tsS3BucketName = "<bucket/container-name>";
*/
const char *hostname = "<endpoint>/<account-name>.blob.core.windows.net";
const char *accessKeyId = "<access-key-id/account-name>";
const char *accessKeySecret = "<access-key-secret/account-key>";
const char *bucketName = "<bucket/container-name>";
if (hostname[0] != '<') {
tstrncpy(&tsS3Hostname[0][0], hostname, TSDB_FQDN_LEN);
tstrncpy(&tsS3AccessKeyId[0][0], accessKeyId, TSDB_FQDN_LEN);
tstrncpy(&tsS3AccessKeySecret[0][0], accessKeySecret, TSDB_FQDN_LEN);
tstrncpy(tsS3BucketName, bucketName, TSDB_FQDN_LEN);
} else {
const char *accountId = getenv("ablob_account_id");
if (!accountId) {
return -1;
}
const char *accountSecret = getenv("ablob_account_secret");
if (!accountSecret) {
return -1;
}
const char *containerName = getenv("ablob_container");
if (!containerName) {
return -1;
}
TAOS_STRCPY(&tsS3Hostname[0][0], accountId);
TAOS_STRCAT(&tsS3Hostname[0][0], ".blob.core.windows.net");
TAOS_STRCPY(&tsS3AccessKeyId[0][0], accountId);
TAOS_STRCPY(&tsS3AccessKeySecret[0][0], accountSecret);
TAOS_STRCPY(tsS3BucketName, containerName);
}
tstrncpy(tsTempDir, "/tmp/", PATH_MAX);
tsS3Enabled = true;
return code;
}
// TEST(AzTest, DISABLED_InterfaceTest) {
TEST(AzETest, InterfaceTest) {
int code = 0;
bool check = false;
bool withcp = false;
code = azInitEnv();
if (code) {
std::cout << "ablob env init failed with: " << code << std::endl;
return;
}
GTEST_ASSERT_EQ(code, 0);
GTEST_ASSERT_EQ(tsS3Enabled, 1);
code = azBegin();
GTEST_ASSERT_EQ(code, 0);
code = azCheckCfg();
GTEST_ASSERT_EQ(code, 0);
const int size = 4096;
char data[size] = {0};
for (int i = 0; i < size / 2; ++i) {
data[i * 2 + 1] = 1;
}
const char object_name[] = "azut.bin";
char path[PATH_MAX] = {0};
char path_download[PATH_MAX] = {0};
int ds_len = strlen(TD_DIRSEP);
int tmp_len = strlen(tsTempDir);
(void)snprintf(path, PATH_MAX, "%s", tsTempDir);
if (strncmp(tsTempDir + tmp_len - ds_len, TD_DIRSEP, ds_len) != 0) {
(void)snprintf(path + tmp_len, PATH_MAX - tmp_len, "%s", TD_DIRSEP);
(void)snprintf(path + tmp_len + ds_len, PATH_MAX - tmp_len - ds_len, "%s", object_name);
} else {
(void)snprintf(path + tmp_len, PATH_MAX - tmp_len, "%s", object_name);
}
tstrncpy(path_download, path, strlen(path) + 1);
tstrncpy(path_download + strlen(path), ".download", strlen(".download") + 1);
TdFilePtr fp = taosOpenFile(path, TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_WRITE_THROUGH);
GTEST_ASSERT_NE(fp, nullptr);
int n = taosWriteFile(fp, data, size);
GTEST_ASSERT_EQ(n, size);
code = taosCloseFile(&fp);
GTEST_ASSERT_EQ(code, 0);
code = azPutObjectFromFileOffset(path, object_name, 0, size);
GTEST_ASSERT_EQ(code, 0);
uint8_t *pBlock = NULL;
code = azGetObjectBlock(object_name, 0, size, check, &pBlock);
GTEST_ASSERT_EQ(code, 0);
for (int i = 0; i < size / 2; ++i) {
GTEST_ASSERT_EQ(pBlock[i * 2], 0);
GTEST_ASSERT_EQ(pBlock[i * 2 + 1], 1);
}
taosMemoryFree(pBlock);
code = azGetObjectToFile(object_name, path_download);
GTEST_ASSERT_EQ(code, 0);
{
TdFilePtr fp = taosOpenFile(path, TD_FILE_READ);
GTEST_ASSERT_NE(fp, nullptr);
(void)memset(data, 0, size);
int64_t n = taosReadFile(fp, data, size);
GTEST_ASSERT_EQ(n, size);
code = taosCloseFile(&fp);
GTEST_ASSERT_EQ(code, 0);
for (int i = 0; i < size / 2; ++i) {
GTEST_ASSERT_EQ(data[i * 2], 0);
GTEST_ASSERT_EQ(data[i * 2 + 1], 1);
}
}
azDeleteObjectsByPrefix(object_name);
// list object to check
code = azPutObjectFromFile2(path, object_name, withcp);
GTEST_ASSERT_EQ(code, 0);
code = azGetObjectsByPrefix(object_name, tsTempDir);
GTEST_ASSERT_EQ(code, 0);
{
TdFilePtr fp = taosOpenFile(path, TD_FILE_READ);
GTEST_ASSERT_NE(fp, nullptr);
(void)memset(data, 0, size);
int64_t n = taosReadFile(fp, data, size);
GTEST_ASSERT_EQ(n, size);
code = taosCloseFile(&fp);
GTEST_ASSERT_EQ(code, 0);
for (int i = 0; i < size / 2; ++i) {
GTEST_ASSERT_EQ(data[i * 2], 0);
GTEST_ASSERT_EQ(data[i * 2 + 1], 1);
}
}
TDBlockBlobClient blobClient =
TDBlockBlobClient::CreateFromConnectionString(std::getenv("ablob_cs"), std::string(tsS3BucketName), object_name);
const char *object_name_arr[] = {object_name};
code = azDeleteObjects(object_name_arr, 1);
GTEST_ASSERT_EQ(code, 0);
azEnd();
}

View File

@ -199,3 +199,132 @@ TEST(AzTest, InterfaceTest) {
azEnd(); azEnd();
} }
// TEST(AzTest, DISABLED_InterfaceTestBig) {
TEST(AzTest, InterfaceTestBig) {
int code = 0;
bool check = false;
bool withcp = false;
code = azInitEnv();
if (code) {
std::cout << "ablob env init failed with: " << code << std::endl;
return;
}
GTEST_ASSERT_EQ(code, 0);
GTEST_ASSERT_EQ(tsS3Enabled, 1);
code = azBegin();
GTEST_ASSERT_EQ(code, 0);
code = azCheckCfg();
GTEST_ASSERT_EQ(code, 0);
const int size = 256 * 1024 * 1024 + 1;
char *data = (char *)taosMemoryCalloc(1, size);
if (!data) {
std::cout << "code: " << code << "terrno: " << terrno << std::endl;
return;
}
for (int i = 0; i < size / 2; ++i) {
data[i * 2 + 1] = 1;
}
const char object_name[] = "azut.bin";
char path[PATH_MAX] = {0};
char path_download[PATH_MAX] = {0};
int ds_len = strlen(TD_DIRSEP);
int tmp_len = strlen(tsTempDir);
(void)snprintf(path, PATH_MAX, "%s", tsTempDir);
if (strncmp(tsTempDir + tmp_len - ds_len, TD_DIRSEP, ds_len) != 0) {
(void)snprintf(path + tmp_len, PATH_MAX - tmp_len, "%s", TD_DIRSEP);
(void)snprintf(path + tmp_len + ds_len, PATH_MAX - tmp_len - ds_len, "%s", object_name);
} else {
(void)snprintf(path + tmp_len, PATH_MAX - tmp_len, "%s", object_name);
}
tstrncpy(path_download, path, strlen(path) + 1);
tstrncpy(path_download + strlen(path), ".download", strlen(".download") + 1);
TdFilePtr fp = taosOpenFile(path, TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_WRITE_THROUGH);
GTEST_ASSERT_NE(fp, nullptr);
int n = taosWriteFile(fp, data, size);
GTEST_ASSERT_EQ(n, size);
code = taosCloseFile(&fp);
GTEST_ASSERT_EQ(code, 0);
code = azPutObjectFromFileOffset(path, object_name, 0, size);
GTEST_ASSERT_EQ(code, 0);
uint8_t *pBlock = NULL;
code = azGetObjectBlock(object_name, 0, size, check, &pBlock);
GTEST_ASSERT_EQ(code, 0);
for (int i = 0; i < size / 2; ++i) {
GTEST_ASSERT_EQ(pBlock[i * 2], 0);
GTEST_ASSERT_EQ(pBlock[i * 2 + 1], 1);
}
taosMemoryFree(pBlock);
code = azGetObjectToFile(object_name, path_download);
GTEST_ASSERT_EQ(code, 0);
{
TdFilePtr fp = taosOpenFile(path, TD_FILE_READ);
GTEST_ASSERT_NE(fp, nullptr);
(void)memset(data, 0, size);
int64_t n = taosReadFile(fp, data, size);
GTEST_ASSERT_EQ(n, size);
code = taosCloseFile(&fp);
GTEST_ASSERT_EQ(code, 0);
for (int i = 0; i < size / 2; ++i) {
GTEST_ASSERT_EQ(data[i * 2], 0);
GTEST_ASSERT_EQ(data[i * 2 + 1], 1);
}
}
azDeleteObjectsByPrefix(object_name);
// list object to check
code = azPutObjectFromFile2(path, object_name, withcp);
GTEST_ASSERT_EQ(code, 0);
code = azGetObjectsByPrefix(object_name, tsTempDir);
GTEST_ASSERT_EQ(code, 0);
{
TdFilePtr fp = taosOpenFile(path, TD_FILE_READ);
GTEST_ASSERT_NE(fp, nullptr);
(void)memset(data, 0, size);
int64_t n = taosReadFile(fp, data, size);
GTEST_ASSERT_EQ(n, size);
code = taosCloseFile(&fp);
GTEST_ASSERT_EQ(code, 0);
for (int i = 0; i < size / 2; ++i) {
GTEST_ASSERT_EQ(data[i * 2], 0);
GTEST_ASSERT_EQ(data[i * 2 + 1], 1);
}
}
const char *object_name_arr[] = {object_name};
code = azDeleteObjects(object_name_arr, 1);
GTEST_ASSERT_EQ(code, 0);
taosMemoryFree(data);
azEnd();
}

View File

@ -1986,10 +1986,19 @@ void catalogDestroy(void) {
} }
if (gCtgMgmt.cacheTimer) { if (gCtgMgmt.cacheTimer) {
if (taosTmrStop(gCtgMgmt.cacheTimer)) { if (!taosTmrStop(gCtgMgmt.cacheTimer)) {
qTrace("stop catalog cache timer may failed"); /*
qDebug("catalog cacheTimer %" PRIuPTR " not stopped", (uintptr_t)gCtgMgmt.cacheTimer);
while (!taosTmrIsStopped(&gCtgMgmt.cacheTimer)) {
taosMsleep(1);
}
*/
} }
qDebug("catalog cacheTimer %" PRIuPTR " is stopped", (uintptr_t)gCtgMgmt.cacheTimer);
gCtgMgmt.cacheTimer = NULL; gCtgMgmt.cacheTimer = NULL;
taosTmrCleanUp(gCtgMgmt.timer); taosTmrCleanUp(gCtgMgmt.timer);
gCtgMgmt.timer = NULL; gCtgMgmt.timer = NULL;
} }

View File

@ -480,6 +480,8 @@ void ctgdShowDBCache(SCatalog *pCtg, SHashObj *dbHash) {
dbCache = (SCtgDBCache *)pIter; dbCache = (SCtgDBCache *)pIter;
CTG_LOCK(CTG_READ, &dbCache->dbLock);
dbFName = taosHashGetKey(pIter, &len); dbFName = taosHashGetKey(pIter, &len);
int32_t metaNum = dbCache->tbCache ? taosHashGetSize(dbCache->tbCache) : 0; int32_t metaNum = dbCache->tbCache ? taosHashGetSize(dbCache->tbCache) : 0;
@ -509,6 +511,8 @@ void ctgdShowDBCache(SCatalog *pCtg, SHashObj *dbHash) {
hashMethod, hashPrefix, hashSuffix, vgNum); hashMethod, hashPrefix, hashSuffix, vgNum);
if (dbCache->vgCache.vgInfo) { if (dbCache->vgCache.vgInfo) {
CTG_LOCK(CTG_READ, &dbCache->vgCache.vgLock);
int32_t i = 0; int32_t i = 0;
void *pVgIter = taosHashIterate(dbCache->vgCache.vgInfo->vgHash, NULL); void *pVgIter = taosHashIterate(dbCache->vgCache.vgInfo->vgHash, NULL);
while (pVgIter) { while (pVgIter) {
@ -524,6 +528,8 @@ void ctgdShowDBCache(SCatalog *pCtg, SHashObj *dbHash) {
pVgIter = taosHashIterate(dbCache->vgCache.vgInfo->vgHash, pVgIter); pVgIter = taosHashIterate(dbCache->vgCache.vgInfo->vgHash, pVgIter);
} }
CTG_UNLOCK(CTG_READ, &dbCache->vgCache.vgLock);
} }
if (dbCache->cfgCache.cfgInfo) { if (dbCache->cfgCache.cfgInfo) {
@ -544,6 +550,8 @@ void ctgdShowDBCache(SCatalog *pCtg, SHashObj *dbHash) {
pCfg->schemaless, pCfg->sstTrigger); pCfg->schemaless, pCfg->sstTrigger);
} }
CTG_UNLOCK(CTG_READ, &dbCache->dbLock);
++i; ++i;
pIter = taosHashIterate(dbHash, pIter); pIter = taosHashIterate(dbHash, pIter);
} }
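Note: the cache dump now holds the db-cache and vgroup read locks for the whole traversal, so it cannot race with writers updating the cache. A small sketch of the same pattern with a pthread rwlock standing in for CTG_LOCK/CTG_UNLOCK:

```c
#include <pthread.h>
#include <stdio.h>

typedef struct {
  pthread_rwlock_t lock;
  int              nEntries;
} DbCache;

/* Read-side dump: hold the read lock across the whole iteration, as the patched
 * ctgdShowDBCache does with CTG_LOCK(CTG_READ, &dbCache->dbLock). */
static void dumpCache(DbCache *cache) {
  pthread_rwlock_rdlock(&cache->lock);
  for (int i = 0; i < cache->nEntries; i++) {
    printf("entry %d\n", i);
  }
  pthread_rwlock_unlock(&cache->lock);
}

int main(void) {
  DbCache cache = {.nEntries = 3};
  pthread_rwlock_init(&cache.lock, NULL);
  dumpCache(&cache);
  pthread_rwlock_destroy(&cache.lock);
  return 0;
}
```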

View File

@ -149,12 +149,13 @@ void ctgTestInitLogFile() {
return; return;
} }
const char *defaultLogFileNamePrefix = "taoslog"; const char *defaultLogFileNamePrefix = "catalogTest";
const int32_t maxLogFileNum = 10; const int32_t maxLogFileNum = 10;
tsAsyncLog = 0; tsAsyncLog = 0;
qDebugFlag = 159; qDebugFlag = 159;
tmrDebugFlag = 159; tmrDebugFlag = 159;
tsNumOfLogLines = 1000000000;
TAOS_STRCPY(tsLogDir, TD_LOG_DIR_PATH); TAOS_STRCPY(tsLogDir, TD_LOG_DIR_PATH);
(void)ctgdEnableDebug("api", true); (void)ctgdEnableDebug("api", true);
@ -1839,7 +1840,7 @@ TEST(tableMeta, updateStbMeta) {
while (true) { while (true) {
uint64_t n = 0; uint64_t n = 0;
ASSERT(0 == ctgdGetStatNum("runtime.numOfOpDequeue", (void *)&n)); ASSERT(0 == ctgdGetStatNum("runtime.numOfOpDequeue", (void *)&n));
if (n != 3) { if (n < 3) {
taosMsleep(50); taosMsleep(50);
} else { } else {
break; break;
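Note: the wait loop now polls while the dequeue counter is below 3 instead of requiring it to equal 3, so an extra cache operation can no longer make the test spin forever. A generic poll-until-at-least-N sketch; getCounter is a hypothetical stand-in for ctgdGetStatNum:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical counter source standing in for ctgdGetStatNum("runtime.numOfOpDequeue", ...). */
static uint64_t getCounter(void) {
  static uint64_t n = 0;
  return ++n;  /* pretend one operation is dequeued per poll */
}

/* Wait until at least `want` operations were observed; an "== want" test would hang
 * whenever the counter jumps past the target between two polls. */
static bool waitForAtLeast(uint64_t want, int maxPolls) {
  for (int i = 0; i < maxPolls; i++) {
    if (getCounter() >= want) return true;
    usleep(50 * 1000);  /* taosMsleep(50) analogue */
  }
  return false;
}

int main(void) {
  printf("reached: %d\n", waitForAtLeast(3, 100));
  return 0;
}
```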

View File

@ -61,9 +61,13 @@ static int32_t anomalyCacheBlock(SAnomalyWindowOperatorInfo* pInfo, SSDataBlock*
int32_t createAnomalywindowOperatorInfo(SOperatorInfo* downstream, SPhysiNode* physiNode, SExecTaskInfo* pTaskInfo, int32_t createAnomalywindowOperatorInfo(SOperatorInfo* downstream, SPhysiNode* physiNode, SExecTaskInfo* pTaskInfo,
SOperatorInfo** pOptrInfo) { SOperatorInfo** pOptrInfo) {
QRY_PARAM_CHECK(pOptrInfo); QRY_PARAM_CHECK(pOptrInfo);
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
size_t keyBufSize = 0;
int32_t num = 0;
SExprInfo* pExprInfo = NULL;
const char* id = GET_TASKID(pTaskInfo);
int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0;
SAnomalyWindowOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SAnomalyWindowOperatorInfo)); SAnomalyWindowOperatorInfo* pInfo = taosMemoryCalloc(1, sizeof(SAnomalyWindowOperatorInfo));
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo)); SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
SAnomalyWindowPhysiNode* pAnomalyNode = (SAnomalyWindowPhysiNode*)physiNode; SAnomalyWindowPhysiNode* pAnomalyNode = (SAnomalyWindowPhysiNode*)physiNode;
@ -74,13 +78,13 @@ int32_t createAnomalywindowOperatorInfo(SOperatorInfo* downstream, SPhysiNode* p
} }
if (!taosAnalGetOptStr(pAnomalyNode->anomalyOpt, "algo", pInfo->algoName, sizeof(pInfo->algoName))) { if (!taosAnalGetOptStr(pAnomalyNode->anomalyOpt, "algo", pInfo->algoName, sizeof(pInfo->algoName))) {
qError("failed to get anomaly_window algorithm name from %s", pAnomalyNode->anomalyOpt); qError("%s failed to get anomaly_window algorithm name from %s", id, pAnomalyNode->anomalyOpt);
code = TSDB_CODE_ANA_ALGO_NOT_FOUND; code = TSDB_CODE_ANA_ALGO_NOT_FOUND;
goto _error; goto _error;
} }
if (taosAnalGetAlgoUrl(pInfo->algoName, ANAL_ALGO_TYPE_ANOMALY_DETECT, pInfo->algoUrl, sizeof(pInfo->algoUrl)) != 0) { if (taosAnalGetAlgoUrl(pInfo->algoName, ANAL_ALGO_TYPE_ANOMALY_DETECT, pInfo->algoUrl, sizeof(pInfo->algoUrl)) != 0) {
qError("failed to get anomaly_window algorithm url from %s", pInfo->algoName); qError("%s failed to get anomaly_window algorithm url from %s", id, pInfo->algoName);
code = TSDB_CODE_ANA_ALGO_NOT_LOAD; code = TSDB_CODE_ANA_ALGO_NOT_LOAD;
goto _error; goto _error;
} }
@ -94,20 +98,18 @@ int32_t createAnomalywindowOperatorInfo(SOperatorInfo* downstream, SPhysiNode* p
SExprInfo* pScalarExprInfo = NULL; SExprInfo* pScalarExprInfo = NULL;
code = createExprInfo(pAnomalyNode->window.pExprs, NULL, &pScalarExprInfo, &numOfScalarExpr); code = createExprInfo(pAnomalyNode->window.pExprs, NULL, &pScalarExprInfo, &numOfScalarExpr);
QUERY_CHECK_CODE(code, lino, _error); QUERY_CHECK_CODE(code, lino, _error);
code = initExprSupp(&pInfo->scalarSup, pScalarExprInfo, numOfScalarExpr, &pTaskInfo->storageAPI.functionStore); code = initExprSupp(&pInfo->scalarSup, pScalarExprInfo, numOfScalarExpr, &pTaskInfo->storageAPI.functionStore);
QUERY_CHECK_CODE(code, lino, _error); QUERY_CHECK_CODE(code, lino, _error);
} }
size_t keyBufSize = 0;
int32_t num = 0;
SExprInfo* pExprInfo = NULL;
code = createExprInfo(pAnomalyNode->window.pFuncs, NULL, &pExprInfo, &num); code = createExprInfo(pAnomalyNode->window.pFuncs, NULL, &pExprInfo, &num);
QUERY_CHECK_CODE(code, lino, _error); QUERY_CHECK_CODE(code, lino, _error);
initResultSizeInfo(&pOperator->resultInfo, 4096); initResultSizeInfo(&pOperator->resultInfo, 4096);
code = initAggSup(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, num, keyBufSize, pTaskInfo->id.str, code = initAggSup(&pOperator->exprSupp, &pInfo->aggSup, pExprInfo, num, keyBufSize, id, pTaskInfo->streamInfo.pState,
pTaskInfo->streamInfo.pState, &pTaskInfo->storageAPI.functionStore); &pTaskInfo->storageAPI.functionStore);
QUERY_CHECK_CODE(code, lino, _error); QUERY_CHECK_CODE(code, lino, _error);
SSDataBlock* pResBlock = createDataBlockFromDescNode(pAnomalyNode->window.node.pOutputDataBlockDesc); SSDataBlock* pResBlock = createDataBlockFromDescNode(pAnomalyNode->window.node.pOutputDataBlockDesc);
@ -124,27 +126,19 @@ int32_t createAnomalywindowOperatorInfo(SOperatorInfo* downstream, SPhysiNode* p
pInfo->anomalyCol = extractColumnFromColumnNode(pColNode); pInfo->anomalyCol = extractColumnFromColumnNode(pColNode);
pInfo->anomalyKey.type = pInfo->anomalyCol.type; pInfo->anomalyKey.type = pInfo->anomalyCol.type;
pInfo->anomalyKey.bytes = pInfo->anomalyCol.bytes; pInfo->anomalyKey.bytes = pInfo->anomalyCol.bytes;
pInfo->anomalyKey.pData = taosMemoryCalloc(1, pInfo->anomalyCol.bytes); pInfo->anomalyKey.pData = taosMemoryCalloc(1, pInfo->anomalyCol.bytes);
if (pInfo->anomalyKey.pData == NULL) { QUERY_CHECK_NULL(pInfo->anomalyKey.pData, code, lino, _error, terrno)
goto _error;
}
int32_t itemSize = sizeof(int32_t) + pInfo->aggSup.resultRowSize + pInfo->anomalyKey.bytes; int32_t itemSize = sizeof(int32_t) + pInfo->aggSup.resultRowSize + pInfo->anomalyKey.bytes;
pInfo->anomalySup.pResultRow = taosMemoryCalloc(1, itemSize); pInfo->anomalySup.pResultRow = taosMemoryCalloc(1, itemSize);
if (pInfo->anomalySup.pResultRow == NULL) { QUERY_CHECK_NULL(pInfo->anomalySup.pResultRow, code, lino, _error, terrno)
code = terrno;
goto _error;
}
pInfo->anomalySup.blocks = taosArrayInit(16, sizeof(SSDataBlock*)); pInfo->anomalySup.blocks = taosArrayInit(16, sizeof(SSDataBlock*));
if (pInfo->anomalySup.blocks == NULL) { QUERY_CHECK_NULL(pInfo->anomalySup.blocks, code, lino, _error, terrno)
code = terrno;
goto _error;
}
pInfo->anomalySup.windows = taosArrayInit(16, sizeof(STimeWindow)); pInfo->anomalySup.windows = taosArrayInit(16, sizeof(STimeWindow));
if (pInfo->anomalySup.windows == NULL) { QUERY_CHECK_NULL(pInfo->anomalySup.windows, code, lino, _error, terrno)
code = terrno;
goto _error;
}
code = filterInitFromNode((SNode*)pAnomalyNode->window.node.pConditions, &pOperator->exprSupp.pFilterInfo, 0); code = filterInitFromNode((SNode*)pAnomalyNode->window.node.pConditions, &pOperator->exprSupp.pFilterInfo, 0);
QUERY_CHECK_CODE(code, lino, _error); QUERY_CHECK_CODE(code, lino, _error);
@ -162,18 +156,21 @@ int32_t createAnomalywindowOperatorInfo(SOperatorInfo* downstream, SPhysiNode* p
*pOptrInfo = pOperator; *pOptrInfo = pOperator;
qDebug("anomaly_window operator is created, algo:%s url:%s opt:%s", pInfo->algoName, pInfo->algoUrl, qDebug("%s anomaly_window operator is created, algo:%s url:%s opt:%s", id, pInfo->algoName, pInfo->algoUrl,
pInfo->anomalyOpt); pInfo->anomalyOpt);
return TSDB_CODE_SUCCESS; return TSDB_CODE_SUCCESS;
_error: _error:
qError("%s failed to create anomaly_window operator, line:%d algo:%s code:%s", id, lino, pAnomalyNode->anomalyOpt,
tstrerror(code));
if (pInfo != NULL) { if (pInfo != NULL) {
anomalyDestroyOperatorInfo(pInfo); anomalyDestroyOperatorInfo(pInfo);
} }
destroyOperatorAndDownstreams(pOperator, &downstream, 1); destroyOperatorAndDownstreams(pOperator, &downstream, 1);
pTaskInfo->code = code; pTaskInfo->code = code;
qError("failed to create anomaly_window operator, algo:%s code:0x%x", pInfo->algoName, code);
return code; return code;
} }
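Note: the ad-hoc NULL checks in createAnomalywindowOperatorInfo are replaced with QUERY_CHECK_NULL, which records the failing line and error code and jumps to the shared _error label, and the error log now carries the task id and line number. A generic sketch of that check-and-goto convention; CHECK_NULL below is an illustrative analogue, not the real macro:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical analogue of QUERY_CHECK_NULL: remember the failing line,
 * set the error code, and jump to the cleanup label. */
#define CHECK_NULL(ptr, code, lino, label, rc) \
  do { if ((ptr) == NULL) { (code) = (rc); (lino) = __LINE__; goto label; } } while (0)

static int createState(int n, int **ppOut) {
  int  code = 0;
  int  lino = 0;
  int *buf  = NULL;
  int *aux  = NULL;

  buf = calloc((size_t)n, sizeof(int));
  CHECK_NULL(buf, code, lino, _error, -1);
  aux = calloc((size_t)n, sizeof(int));
  CHECK_NULL(aux, code, lino, _error, -1);

  free(aux);
  *ppOut = buf;
  return 0;

_error:
  fprintf(stderr, "failed at line %d, code %d\n", lino, code);
  free(buf);
  free(aux);
  return code;
}

int main(void) {
  int *p = NULL;
  if (createState(16, &p) == 0) { printf("ok\n"); free(p); }
  return 0;
}
```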

View File

@ -4466,6 +4466,19 @@ _end:
return code; return code;
} }
static bool isWinResult(SSessionKey* pKey, SSHashObj* pSeUpdate, SSHashObj* pResults) {
SSessionKey checkKey = {0};
getSessionHashKey(pKey, &checkKey);
if (tSimpleHashGet(pSeUpdate, &checkKey, sizeof(SSessionKey)) != NULL) {
return true;
}
if (tSimpleHashGet(pResults, &checkKey, sizeof(SSessionKey)) != NULL) {
return true;
}
return false;
}
static void doStreamStateAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDataBlock, SSHashObj* pSeUpdated, static void doStreamStateAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDataBlock, SSHashObj* pSeUpdated,
SSHashObj* pStDeleted) { SSHashObj* pStDeleted) {
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo; SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
@ -4518,7 +4531,9 @@ static void doStreamStateAggImpl(SOperatorInfo* pOperator, SSDataBlock* pSDataBl
code = setStateOutputBuf(pAggSup, tsCols[i], groupId, pKeyData, &curWin, &nextWin); code = setStateOutputBuf(pAggSup, tsCols[i], groupId, pKeyData, &curWin, &nextWin);
QUERY_CHECK_CODE(code, lino, _end); QUERY_CHECK_CODE(code, lino, _end);
releaseOutputBuf(pAggSup->pState, nextWin.winInfo.pStatePos, &pAPI->stateStore); if (isWinResult(&nextWin.winInfo.sessionWin, pSeUpdated, pAggSup->pResultRows) == false) {
releaseOutputBuf(pAggSup->pState, nextWin.winInfo.pStatePos, &pAPI->stateStore);
}
setSessionWinOutputInfo(pSeUpdated, &curWin.winInfo); setSessionWinOutputInfo(pSeUpdated, &curWin.winInfo);
code = updateStateWindowInfo(pAggSup, &curWin, &nextWin, tsCols, groupId, pKeyColInfo, rows, i, &allEqual, code = updateStateWindowInfo(pAggSup, &curWin, &nextWin, tsCols, groupId, pKeyColInfo, rows, i, &allEqual,
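Note: releaseOutputBuf is now skipped when the neighbouring window is already tracked in pSeUpdated or pResultRows, so a buffer that pending results still reference is not released early. A small sketch of the guard-before-release idea with plain arrays standing in for the session-key hash lookups:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Tiny stand-in for the session-key hash lookups done by isWinResult(). */
static bool setContains(const int64_t *set, int n, int64_t key) {
  for (int i = 0; i < n; i++) {
    if (set[i] == key) return true;
  }
  return false;
}

/* Only release the buffer when no pending result still references the window,
 * mirroring the new isWinResult() guard around releaseOutputBuf(). */
static void maybeRelease(void *buf, int64_t winKey,
                         const int64_t *updated, int nUpdated,
                         const int64_t *results, int nResults) {
  if (setContains(updated, nUpdated, winKey) || setContains(results, nResults, winKey)) {
    return;  /* still referenced: keep the buffer alive */
  }
  free(buf);
}

int main(void) {
  int64_t updated[] = {42};
  int64_t results[] = {7};
  void   *buf = malloc(16);
  maybeRelease(buf, 42, updated, 1, results, 1);  /* kept: key 42 is pending   */
  maybeRelease(buf, 99, updated, 1, results, 1);  /* freed: key 99 not tracked */
  return 0;
}
```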

View File

@ -1441,7 +1441,7 @@ static int32_t doSetUserTableMetaInfo(SStoreMetaReader* pMetaReaderFn, SStoreMet
SMetaReader mr1 = {0}; SMetaReader mr1 = {0};
pMetaReaderFn->initReader(&mr1, pVnode, META_READER_NOLOCK, pMetaFn); pMetaReaderFn->initReader(&mr1, pVnode, META_READER_NOLOCK, pMetaFn);
int64_t suid = pMReader->me.ctbEntry.suid; int64_t suid = pMReader->me.ctbEntry.suid;
code = pMetaReaderFn->getTableEntryByUid(&mr1, suid); code = pMetaReaderFn->getTableEntryByUid(&mr1, suid);
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
@ -1752,7 +1752,7 @@ static SSDataBlock* sysTableBuildUserTables(SOperatorInfo* pOperator) {
SMetaReader mr = {0}; SMetaReader mr = {0};
pAPI->metaReaderFn.initReader(&mr, pInfo->readHandle.vnode, META_READER_NOLOCK, &pAPI->metaFn); pAPI->metaReaderFn.initReader(&mr, pInfo->readHandle.vnode, META_READER_NOLOCK, &pAPI->metaFn);
uint64_t suid = pInfo->pCur->mr.me.ctbEntry.suid; uint64_t suid = pInfo->pCur->mr.me.ctbEntry.suid;
code = pAPI->metaReaderFn.getTableEntryByUid(&mr, suid); code = pAPI->metaReaderFn.getTableEntryByUid(&mr, suid);
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
@ -2269,6 +2269,8 @@ static SSDataBlock* sysTableBuildUserFileSets(SOperatorInfo* pOperator) {
if (ret) { if (ret) {
if (ret == TSDB_CODE_NOT_FOUND) { if (ret == TSDB_CODE_NOT_FOUND) {
// no more scan entry // no more scan entry
setOperatorCompleted(pOperator);
pAPI->tsdReader.fileSetReaderClose(&pInfo->pFileSetReader);
break; break;
} else { } else {
code = ret; code = ret;
@ -2284,7 +2286,7 @@ static SSDataBlock* sysTableBuildUserFileSets(SOperatorInfo* pOperator) {
// db_name // db_name
pColInfoData = taosArrayGet(p->pDataBlock, index++); pColInfoData = taosArrayGet(p->pDataBlock, index++);
QUERY_CHECK_NULL(pColInfoData, code, lino, _end, terrno); QUERY_CHECK_NULL(pColInfoData, code, lino, _end, terrno);
code = colDataSetVal(pColInfoData, numOfRows, db, false); code = colDataSetVal(pColInfoData, numOfRows, dbname, false);
QUERY_CHECK_CODE(code, lino, _end); QUERY_CHECK_CODE(code, lino, _end);
// vgroup_id // vgroup_id

View File

@ -6429,7 +6429,7 @@ int32_t blockDistFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
return code; return code;
} }
len = tsnprintf(varDataVal(st), sizeof(st) - VARSTR_HEADER_SIZE, "Inmem_Rows=[%d] Stt_Rows=[%d] ", len = tsnprintf(varDataVal(st), sizeof(st) - VARSTR_HEADER_SIZE, "Inmem_Rows=[%u] Stt_Rows=[%u] ",
pData->numOfInmemRows, pData->numOfSttRows); pData->numOfInmemRows, pData->numOfSttRows);
varDataSetLen(st, len); varDataSetLen(st, len);
code = colDataSetVal(pColInfo, row++, st, false); code = colDataSetVal(pColInfo, row++, st, false);
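Note: numOfInmemRows and numOfSttRows are unsigned, so they are now formatted with %u; with %d any value above INT32_MAX would print as a negative number. A one-file illustration:

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t rows = 3000000000u;              /* > INT32_MAX */
  printf("with %%d: %d\n", (int32_t)rows);  /* wraps to a negative number */
  printf("with %%u: %u\n", rows);           /* prints 3000000000 */
  return 0;
}
```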

View File

@ -28,7 +28,7 @@
extern "C" { extern "C" {
#endif #endif
typedef enum { MATCH, JUMP, SPLIT, RANGE } InstType; typedef enum { INS_MATCH, INS_JUMP, INS_SPLIT, INS_RANGE } InstType;
typedef struct MatchValue { typedef struct MatchValue {
#ifdef WINDOWS #ifdef WINDOWS

View File

@ -159,14 +159,14 @@ bool dfaBuilderCacheState(FstDfaBuilder *builder, FstSparseSet *set, uint32_t *r
if (false == sparSetGet(set, i, &ip)) continue; if (false == sparSetGet(set, i, &ip)) continue;
Inst *inst = taosArrayGet(builder->dfa->insts, ip); Inst *inst = taosArrayGet(builder->dfa->insts, ip);
if (inst->ty == JUMP || inst->ty == SPLIT) { if (inst->ty == INS_JUMP || inst->ty == INS_SPLIT) {
continue; continue;
} else if (inst->ty == RANGE) { } else if (inst->ty == INS_RANGE) {
if (taosArrayPush(tinsts, &ip) == NULL) { if (taosArrayPush(tinsts, &ip) == NULL) {
code = terrno; code = terrno;
goto _exception; goto _exception;
} }
} else if (inst->ty == MATCH) { } else if (inst->ty == INS_MATCH) {
isMatch = true; isMatch = true;
if (taosArrayPush(tinsts, &ip) == NULL) { if (taosArrayPush(tinsts, &ip) == NULL) {
code = terrno; code = terrno;
@ -234,11 +234,11 @@ void dfaAdd(FstDfa *dfa, FstSparseSet *set, uint32_t ip) {
} }
bool succ = sparSetAdd(set, ip, NULL); bool succ = sparSetAdd(set, ip, NULL);
Inst *inst = taosArrayGet(dfa->insts, ip); Inst *inst = taosArrayGet(dfa->insts, ip);
if (inst->ty == MATCH || inst->ty == RANGE) { if (inst->ty == INS_MATCH || inst->ty == INS_RANGE) {
// do nothing // do nothing
} else if (inst->ty == JUMP) { } else if (inst->ty == INS_JUMP) {
dfaAdd(dfa, set, inst->jv.step); dfaAdd(dfa, set, inst->jv.step);
} else if (inst->ty == SPLIT) { } else if (inst->ty == INS_SPLIT) {
dfaAdd(dfa, set, inst->sv.len1); dfaAdd(dfa, set, inst->sv.len1);
dfaAdd(dfa, set, inst->sv.len2); dfaAdd(dfa, set, inst->sv.len2);
} }
@ -253,11 +253,11 @@ bool dfaRun(FstDfa *dfa, FstSparseSet *from, FstSparseSet *to, uint8_t byte) {
if (false == sparSetGet(from, i, &ip)) continue; if (false == sparSetGet(from, i, &ip)) continue;
Inst *inst = taosArrayGet(dfa->insts, ip); Inst *inst = taosArrayGet(dfa->insts, ip);
if (inst->ty == JUMP || inst->ty == SPLIT) { if (inst->ty == INS_JUMP || inst->ty == INS_SPLIT) {
continue; continue;
} else if (inst->ty == MATCH) { } else if (inst->ty == INS_MATCH) {
isMatch = true; isMatch = true;
} else if (inst->ty == RANGE) { } else if (inst->ty == INS_RANGE) {
if (inst->rv.start <= byte && byte <= inst->rv.end) { if (inst->rv.start <= byte && byte <= inst->rv.end) {
dfaAdd(dfa, to, ip + 1); dfaAdd(dfa, to, ip + 1);
} }
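Note: the InstType constants gain an INS_ prefix (MATCH becomes INS_MATCH, and so on), which keeps these very generic names from colliding with macros or enums defined elsewhere in the build. A minimal sketch of the prefixed enum in use:

```c
#include <stdio.h>

/* Prefixed constants, as in the renamed InstType: bare names like MATCH or
 * RANGE are easy to clash with identifiers pulled in from other headers. */
typedef enum { INS_MATCH, INS_JUMP, INS_SPLIT, INS_RANGE } InstType;

static const char *instName(InstType ty) {
  switch (ty) {
    case INS_MATCH: return "match";
    case INS_JUMP:  return "jump";
    case INS_SPLIT: return "split";
    case INS_RANGE: return "range";
  }
  return "unknown";
}

int main(void) {
  printf("%s\n", instName(INS_RANGE));
  return 0;
}
```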

View File

@ -17,6 +17,7 @@
#include "tglobal.h" #include "tglobal.h"
#include "tskiplist.h" #include "tskiplist.h"
#include "tutil.h" #include "tutil.h"
#include "indexFstDfa.h"
class UtilEnv : public ::testing::Test { class UtilEnv : public ::testing::Test {
protected: protected:
@ -41,6 +42,29 @@ class UtilEnv : public ::testing::Test {
SArray *rslt; SArray *rslt;
}; };
class UtilComm : public ::testing::Test {
protected:
virtual void SetUp() {
// src = (SArray *)taosArrayInit(2, sizeof(void *));
// for (int i = 0; i < 3; i++) {
// SArray *m = taosArrayInit(10, sizeof(uint64_t));
// taosArrayPush(src, &m);
// }
// rslt = (SArray *)taosArrayInit(10, sizeof(uint64_t));
}
virtual void TearDown() {
// for (int i = 0; i < taosArrayGetSize(src); i++) {
// SArray *m = (SArray *)taosArrayGetP(src, i);
// taosArrayDestroy(m);
// }
// taosArrayDestroy(src);
}
// SArray *src;
// SArray *rslt;
};
static void clearSourceArray(SArray *p) { static void clearSourceArray(SArray *p) {
for (int i = 0; i < taosArrayGetSize(p); i++) { for (int i = 0; i < taosArrayGetSize(p); i++) {
SArray *m = (SArray *)taosArrayGetP(p, i); SArray *m = (SArray *)taosArrayGetP(p, i);
@ -369,3 +393,35 @@ TEST_F(UtilEnv, testDictComm) {
EXPECT_EQ(COMMON_INPUTS[v], i); EXPECT_EQ(COMMON_INPUTS[v], i);
} }
} }
TEST_F(UtilComm, testCompress) {
for (int32_t i = 0; i < 6; i++) {
_cache_range_compare cmpFunc = idxGetCompare((RangeType)(i));
// char a[32] = {0}; char b[32] = {1};
char a[32] = {0};
char b[32] = {1};
for (int32_t j = 0; j < TSDB_DATA_TYPE_MAX; j++) {
cmpFunc(a, b, j);
}
}
}
TEST_F(UtilComm, testfstDfa) {
{
FstDfaBuilder *builder = dfaBuilderCreate(NULL);
ASSERT_TRUE(builder != NULL);
dfaBuilderDestroy(builder);
}
{
SArray *pInst = taosArrayInit(32, sizeof(uint8_t));
for (int32_t i = 0; i < 26; i++) {
uint8_t v = 'a' + i;
taosArrayPush(pInst, &v);
}
FstDfaBuilder *builder = dfaBuilderCreate(pInst);
FstDfa *dfa = dfaBuilderBuild(builder);
dfaBuilderDestroy(builder);
}
}

View File

@ -290,8 +290,7 @@ db_options(A) ::= db_options(B) ENCRYPT_ALGORITHM NK_STRING(C).
db_options(A) ::= db_options(B) DNODES NK_STRING(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_DNODES, &C); } db_options(A) ::= db_options(B) DNODES NK_STRING(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_DNODES, &C); }
db_options(A) ::= db_options(B) COMPACT_INTERVAL NK_INTEGER (C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_INTERVAL, &C); } db_options(A) ::= db_options(B) COMPACT_INTERVAL NK_INTEGER (C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_INTERVAL, &C); }
db_options(A) ::= db_options(B) COMPACT_INTERVAL NK_VARIABLE(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_INTERVAL, &C); } db_options(A) ::= db_options(B) COMPACT_INTERVAL NK_VARIABLE(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_INTERVAL, &C); }
db_options(A) ::= db_options(B) COMPACT_TIME_RANGE signed_integer_list(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_TIME_RANGE, C); } db_options(A) ::= db_options(B) COMPACT_TIME_RANGE signed_duration_list(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_TIME_RANGE, C); }
db_options(A) ::= db_options(B) COMPACT_TIME_RANGE signed_variable_list(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_TIME_RANGE, C); }
db_options(A) ::= db_options(B) COMPACT_TIME_OFFSET NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_TIME_OFFSET, &C); } db_options(A) ::= db_options(B) COMPACT_TIME_OFFSET NK_INTEGER(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_TIME_OFFSET, &C); }
db_options(A) ::= db_options(B) COMPACT_TIME_OFFSET NK_VARIABLE(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_TIME_OFFSET, &C); } db_options(A) ::= db_options(B) COMPACT_TIME_OFFSET NK_VARIABLE(C). { A = setDatabaseOption(pCxt, B, DB_OPTION_COMPACT_TIME_OFFSET, &C); }
@ -331,8 +330,7 @@ alter_db_option(A) ::= KEEP_TIME_OFFSET NK_INTEGER(B).
alter_db_option(A) ::= ENCRYPT_ALGORITHM NK_STRING(B). { A.type = DB_OPTION_ENCRYPT_ALGORITHM; A.val = B; } alter_db_option(A) ::= ENCRYPT_ALGORITHM NK_STRING(B). { A.type = DB_OPTION_ENCRYPT_ALGORITHM; A.val = B; }
alter_db_option(A) ::= COMPACT_INTERVAL NK_INTEGER(B). { A.type = DB_OPTION_COMPACT_INTERVAL; A.val = B; } alter_db_option(A) ::= COMPACT_INTERVAL NK_INTEGER(B). { A.type = DB_OPTION_COMPACT_INTERVAL; A.val = B; }
alter_db_option(A) ::= COMPACT_INTERVAL NK_VARIABLE(B). { A.type = DB_OPTION_COMPACT_INTERVAL; A.val = B; } alter_db_option(A) ::= COMPACT_INTERVAL NK_VARIABLE(B). { A.type = DB_OPTION_COMPACT_INTERVAL; A.val = B; }
alter_db_option(A) ::= COMPACT_TIME_RANGE signed_integer_list(B). { A.type = DB_OPTION_COMPACT_TIME_RANGE; A.pList = B; } alter_db_option(A) ::= COMPACT_TIME_RANGE signed_duration_list(B). { A.type = DB_OPTION_COMPACT_TIME_RANGE; A.pList = B; }
alter_db_option(A) ::= COMPACT_TIME_RANGE signed_variable_list(B). { A.type = DB_OPTION_COMPACT_TIME_RANGE; A.pList = B; }
alter_db_option(A) ::= COMPACT_TIME_OFFSET NK_INTEGER(B). { A.type = DB_OPTION_COMPACT_TIME_OFFSET; A.val = B; } alter_db_option(A) ::= COMPACT_TIME_OFFSET NK_INTEGER(B). { A.type = DB_OPTION_COMPACT_TIME_OFFSET; A.val = B; }
alter_db_option(A) ::= COMPACT_TIME_OFFSET NK_VARIABLE(B). { A.type = DB_OPTION_COMPACT_TIME_OFFSET; A.val = B; } alter_db_option(A) ::= COMPACT_TIME_OFFSET NK_VARIABLE(B). { A.type = DB_OPTION_COMPACT_TIME_OFFSET; A.val = B; }
@ -341,20 +339,17 @@ alter_db_option(A) ::= COMPACT_TIME_OFFSET NK_VARIABLE(B).
integer_list(A) ::= NK_INTEGER(B). { A = createNodeList(pCxt, createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &B)); } integer_list(A) ::= NK_INTEGER(B). { A = createNodeList(pCxt, createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &B)); }
integer_list(A) ::= integer_list(B) NK_COMMA NK_INTEGER(C). { A = addNodeToList(pCxt, B, createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &C)); } integer_list(A) ::= integer_list(B) NK_COMMA NK_INTEGER(C). { A = addNodeToList(pCxt, B, createValueNode(pCxt, TSDB_DATA_TYPE_BIGINT, &C)); }
%type signed_integer_list { SNodeList* }
%destructor signed_integer_list { nodesDestroyList($$); }
signed_integer_list(A) ::= signed_integer(B). { A = createNodeList(pCxt, B); }
signed_integer_list(A) ::= signed_integer_list(B) NK_COMMA signed_integer(C). { A = addNodeToList(pCxt, B, C); }
%type variable_list { SNodeList* } %type variable_list { SNodeList* }
%destructor variable_list { nodesDestroyList($$); } %destructor variable_list { nodesDestroyList($$); }
variable_list(A) ::= NK_VARIABLE(B). { A = createNodeList(pCxt, createDurationValueNode(pCxt, &B)); } variable_list(A) ::= NK_VARIABLE(B). { A = createNodeList(pCxt, createDurationValueNode(pCxt, &B)); }
variable_list(A) ::= variable_list(B) NK_COMMA NK_VARIABLE(C). { A = addNodeToList(pCxt, B, createDurationValueNode(pCxt, &C)); } variable_list(A) ::= variable_list(B) NK_COMMA NK_VARIABLE(C). { A = addNodeToList(pCxt, B, createDurationValueNode(pCxt, &C)); }
%type signed_variable_list { SNodeList* } %type signed_duration_list { SNodeList* }
%destructor signed_variable_list { nodesDestroyList($$); } %destructor signed_duration_list { nodesDestroyList($$); }
signed_variable_list(A) ::= signed_variable(B). { A = createNodeList(pCxt, releaseRawExprNode(pCxt, B)); } signed_duration_list(A) ::= signed_variable(B). { A = createNodeList(pCxt, releaseRawExprNode(pCxt, B)); }
signed_variable_list(A) ::= signed_variable_list(B) NK_COMMA signed_variable(C). { A = addNodeToList(pCxt, B, releaseRawExprNode(pCxt, C)); } signed_duration_list(A) ::= signed_integer(B). { A = createNodeList(pCxt, B); }
signed_duration_list(A) ::= signed_duration_list(B) NK_COMMA signed_integer(C). { A = addNodeToList(pCxt, B, C); }
signed_duration_list(A) ::= signed_duration_list(B) NK_COMMA signed_variable(C). { A = addNodeToList(pCxt, B, releaseRawExprNode(pCxt, C)); }
%type retention_list { SNodeList* } %type retention_list { SNodeList* }
%destructor retention_list { nodesDestroyList($$); } %destructor retention_list { nodesDestroyList($$); }
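Collapsing signed_integer_list and signed_variable_list into a single signed_duration_list lets COMPACT_TIME_RANGE accept plain signed integers and duration literals mixed in one comma-separated list. A hedged client-side sketch is below; the database name and values are illustrative, and the unit handling of a bare integer follows the server-side translation shown later in this diff (the range check there reports minutes). The parTranslater.c hunk further down also allows a 0,0 pair to pass through, effectively clearing the range.

```c
/* Hedged client-side sketch; db name and values are illustrative only. */
#include <stdio.h>
#include "taos.h"

static void setCompactTimeRange(TAOS *conn) {
  /* After this grammar change one list may mix a duration literal (-30d)
   * and a bare signed integer (-1). */
  TAOS_RES *res = taos_query(conn, "ALTER DATABASE power COMPACT_TIME_RANGE -30d,-1");
  if (taos_errno(res) != 0) {
    fprintf(stderr, "alter failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
}
```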

View File

@ -1841,6 +1841,9 @@ static int32_t doGetStbRowValues(SInsertParseContext* pCxt, SVnodeModifyOpStmt*
} }
if (TK_NK_QUESTION == pToken->type) { if (TK_NK_QUESTION == pToken->type) {
if (!pCxt->pComCxt->isStmtBind && i != 0) {
return buildInvalidOperationMsg(&pCxt->msg, "not support mixed bind and non-bind values");
}
pCxt->isStmtBind = true; pCxt->isStmtBind = true;
pStmt->usingTableProcessing = true; pStmt->usingTableProcessing = true;
if (pCols->pColIndex[i] == tbnameIdx) { if (pCols->pColIndex[i] == tbnameIdx) {
@ -1874,6 +1877,9 @@ static int32_t doGetStbRowValues(SInsertParseContext* pCxt, SVnodeModifyOpStmt*
return buildInvalidOperationMsg(&pCxt->msg, "not expected numOfBound"); return buildInvalidOperationMsg(&pCxt->msg, "not expected numOfBound");
} }
} else { } else {
if (pCxt->pComCxt->isStmtBind) {
return buildInvalidOperationMsg(&pCxt->msg, "not support mixed bind and non-bind values");
}
if (pCols->pColIndex[i] < numOfCols) { if (pCols->pColIndex[i] < numOfCols) {
const SSchema* pSchema = &pSchemas[pCols->pColIndex[i]]; const SSchema* pSchema = &pSchemas[pCols->pColIndex[i]];
SColVal* pVal = taosArrayGet(pStbRowsCxt->aColVals, pCols->pColIndex[i]); SColVal* pVal = taosArrayGet(pStbRowsCxt->aColVals, pCols->pColIndex[i]);
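The two new guards reject rows that mix `?` placeholders with literal values when a super-table INSERT is parsed on the stmt path. Below is a hedged sketch of the distinction; the SQL strings and table/column names are illustrative, and the calls are the standard taos_stmt entry points.

```c
/* Hedged sketch of the SQL shapes the new guards distinguish. */
#include "taos.h"

static int prepareInsert(TAOS *conn) {
  /* Accepted: every value in the row is bound. */
  const char *allBound = "INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)";

  /* Now rejected with "not support mixed bind and non-bind values":
   * literal tag/column values mixed with ? in the same row, e.g.
   *   INSERT INTO d0 USING meters TAGS(?, 'SanFrancisco') VALUES(?, 10.3, ?, ?)  */

  TAOS_STMT *stmt = taos_stmt_init(conn);
  if (stmt == NULL) return -1;
  int code = taos_stmt_prepare(stmt, allBound, 0);
  if (code != 0) {
    taos_stmt_close(stmt);
    return code;
  }
  /* ... bind table name, tags and columns, then taos_stmt_execute() ... */
  taos_stmt_close(stmt);
  return 0;
}
```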

View File

@ -2005,7 +2005,7 @@ static EDealRes translateDurationValue(STranslateContext* pCxt, SValueNode* pVal
pVal->datum.i = AUTO_DURATION_VALUE; pVal->datum.i = AUTO_DURATION_VALUE;
pVal->unit = getPrecisionUnit(pVal->node.resType.precision); pVal->unit = getPrecisionUnit(pVal->node.resType.precision);
} else if (parseNatualDuration(pVal->literal, strlen(pVal->literal), &pVal->datum.i, &pVal->unit, } else if (parseNatualDuration(pVal->literal, strlen(pVal->literal), &pVal->datum.i, &pVal->unit,
pVal->node.resType.precision, false) != TSDB_CODE_SUCCESS) { pVal->node.resType.precision, true) != TSDB_CODE_SUCCESS) {
return generateDealNodeErrMsg(pCxt, TSDB_CODE_PAR_WRONG_VALUE_TYPE, pVal->literal); return generateDealNodeErrMsg(pCxt, TSDB_CODE_PAR_WRONG_VALUE_TYPE, pVal->literal);
} }
*(int64_t*)&pVal->typeData = pVal->datum.i; *(int64_t*)&pVal->typeData = pVal->datum.i;
@ -7782,7 +7782,7 @@ static int32_t buildCreateDbReq(STranslateContext* pCxt, SCreateDatabaseStmt* pS
static int32_t checkRangeOption(STranslateContext* pCxt, int32_t code, const char* pName, int64_t val, int64_t minVal, static int32_t checkRangeOption(STranslateContext* pCxt, int32_t code, const char* pName, int64_t val, int64_t minVal,
int64_t maxVal, bool skipUndef) { int64_t maxVal, bool skipUndef) {
if (skipUndef ? ((val >= 0) && (val < minVal || val > maxVal)) : (val < minVal || val > maxVal)) { if (skipUndef ? ((val >= 0 || val < -2) && (val < minVal || val > maxVal)) : (val < minVal || val > maxVal)) {
return generateSyntaxErrMsgExt(&pCxt->msgBuf, code, return generateSyntaxErrMsgExt(&pCxt->msgBuf, code,
"Invalid option %s: %" PRId64 ", valid range: [%" PRId64 ", %" PRId64 "]", pName, "Invalid option %s: %" PRId64 ", valid range: [%" PRId64 ", %" PRId64 "]", pName,
val, minVal, maxVal); val, minVal, maxVal);
@ -8148,7 +8148,6 @@ static int32_t checkOptionsDependency(STranslateContext* pCxt, const char* pDbNa
static int32_t checkDbCompactIntervalOption(STranslateContext* pCxt, const char* pDbName, SDatabaseOptions* pOptions) { static int32_t checkDbCompactIntervalOption(STranslateContext* pCxt, const char* pDbName, SDatabaseOptions* pOptions) {
int32_t code = 0; int32_t code = 0;
int64_t interval = 0;
int32_t keep2 = pOptions->keep[2]; int32_t keep2 = pOptions->keep[2];
if (NULL != pOptions->pCompactIntervalNode) { if (NULL != pOptions->pCompactIntervalNode) {
@ -8163,23 +8162,26 @@ static int32_t checkDbCompactIntervalOption(STranslateContext* pCxt, const char*
pOptions->pCompactIntervalNode->unit, TIME_UNIT_MINUTE, TIME_UNIT_HOUR, pOptions->pCompactIntervalNode->unit, TIME_UNIT_MINUTE, TIME_UNIT_HOUR,
TIME_UNIT_DAY); TIME_UNIT_DAY);
} }
interval = getBigintFromValueNode(pOptions->pCompactIntervalNode); int64_t interval = getBigintFromValueNode(pOptions->pCompactIntervalNode);
if (interval != 0) { if (interval != 0) {
if (keep2 == -1) { // alter db if (keep2 == -1) { // alter db
TAOS_CHECK_RETURN(translateGetDbCfg(pCxt, pDbName, &pOptions->pDbCfg)); TAOS_CHECK_RETURN(translateGetDbCfg(pCxt, pDbName, &pOptions->pDbCfg));
keep2 = pOptions->pDbCfg->daysToKeep2; keep2 = pOptions->pDbCfg->daysToKeep2;
} }
code = checkDbRangeOption(pCxt, "compact_interval", interval, TSDB_MIN_COMPACT_INTERVAL, keep2); code = checkDbRangeOption(pCxt, "compact_interval", interval, TSDB_MIN_COMPACT_INTERVAL, keep2);
TAOS_CHECK_RETURN(code);
} }
pOptions->compactInterval = (int32_t)interval;
} else if (pOptions->compactInterval > 0) { } else if (pOptions->compactInterval > 0) {
interval = pOptions->compactInterval * 1440; // convert to minutes int64_t interval = (int64_t)pOptions->compactInterval * 1440; // convert to minutes
if (keep2 == -1) { // alter db if (keep2 == -1) { // alter db
TAOS_CHECK_RETURN(translateGetDbCfg(pCxt, pDbName, &pOptions->pDbCfg)); TAOS_CHECK_RETURN(translateGetDbCfg(pCxt, pDbName, &pOptions->pDbCfg));
keep2 = pOptions->pDbCfg->daysToKeep2; keep2 = pOptions->pDbCfg->daysToKeep2;
} }
code = checkDbRangeOption(pCxt, "compact_interval", interval, TSDB_MIN_COMPACT_INTERVAL, keep2); code = checkDbRangeOption(pCxt, "compact_interval", interval, TSDB_MIN_COMPACT_INTERVAL, keep2);
TAOS_CHECK_RETURN(code);
pOptions->compactInterval = (int32_t)interval;
} }
if (code == 0) pOptions->compactInterval = interval;
return code; return code;
} }
@ -8222,6 +8224,10 @@ static int32_t checkDbCompactTimeRangeOption(STranslateContext* pCxt, const char
pOptions->compactStartTime = getBigintFromValueNode(pStart); pOptions->compactStartTime = getBigintFromValueNode(pStart);
pOptions->compactEndTime = getBigintFromValueNode(pEnd); pOptions->compactEndTime = getBigintFromValueNode(pEnd);
if (pOptions->compactStartTime == 0 && pOptions->compactEndTime == 0) {
return TSDB_CODE_SUCCESS;
}
if (pOptions->compactStartTime >= pOptions->compactEndTime) { if (pOptions->compactStartTime >= pOptions->compactEndTime) {
return generateSyntaxErrMsgExt( return generateSyntaxErrMsgExt(
&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_DB_OPTION, &pCxt->msgBuf, TSDB_CODE_PAR_INVALID_DB_OPTION,
@ -8238,7 +8244,7 @@ static int32_t checkDbCompactTimeRangeOption(STranslateContext* pCxt, const char
} }
if (pOptions->compactStartTime < -keep2 || pOptions->compactStartTime > -days) { if (pOptions->compactStartTime < -keep2 || pOptions->compactStartTime > -days) {
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_DB_OPTION, return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_DB_OPTION,
"Invalid option compact_time_range: %dm, start_time should be in range: [%dm, %dm]", "Invalid option compact_time_range: %dm, start time should be in range: [%dm, %dm]",
pOptions->compactStartTime, -keep2, -days); pOptions->compactStartTime, -keep2, -days);
} }
if (pOptions->compactEndTime < -keep2 || pOptions->compactEndTime > -days) { if (pOptions->compactEndTime < -keep2 || pOptions->compactEndTime > -days) {
@ -10086,7 +10092,6 @@ static int32_t translateAlterDnode(STranslateContext* pCxt, SAlterDnodeStmt* pSt
const char* validConfigs[] = { const char* validConfigs[] = {
"encrypt_key", "encrypt_key",
tsAlterCompactTaskKeywords,
}; };
if (0 == strncasecmp(cfgReq.config, validConfigs[0], strlen(validConfigs[0]) + 1)) { if (0 == strncasecmp(cfgReq.config, validConfigs[0], strlen(validConfigs[0]) + 1)) {
int32_t klen = strlen(cfgReq.value); int32_t klen = strlen(cfgReq.value);
@ -10097,28 +10102,6 @@ static int32_t translateAlterDnode(STranslateContext* pCxt, SAlterDnodeStmt* pSt
ENCRYPT_KEY_LEN_MIN, ENCRYPT_KEY_LEN); ENCRYPT_KEY_LEN_MIN, ENCRYPT_KEY_LEN);
} }
code = buildCmdMsg(pCxt, TDMT_MND_CREATE_ENCRYPT_KEY, (FSerializeFunc)tSerializeSMCfgDnodeReq, &cfgReq); code = buildCmdMsg(pCxt, TDMT_MND_CREATE_ENCRYPT_KEY, (FSerializeFunc)tSerializeSMCfgDnodeReq, &cfgReq);
} else if (0 == strncasecmp(cfgReq.config, validConfigs[1], strlen(validConfigs[1]) + 1)) {
char* endptr = NULL;
int32_t maxCompactTasks = taosStr2Int32(cfgReq.value, &endptr, 10);
int32_t minMaxCompactTasks = MIN_MAX_COMPACT_TASKS;
int32_t maxMaxCompactTasks = MAX_MAX_COMPACT_TASKS;
// check format
if (endptr == cfgReq.value || endptr[0] != '\0') {
tFreeSMCfgDnodeReq(&cfgReq);
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_DNODE_INVALID_COMPACT_TASKS,
"Invalid max compact tasks: %s", cfgReq.value);
}
// check range
if (maxCompactTasks < minMaxCompactTasks || maxCompactTasks > maxMaxCompactTasks) {
tFreeSMCfgDnodeReq(&cfgReq);
return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_DNODE_INVALID_COMPACT_TASKS,
"Invalid max compact tasks: %d, valid range [%d,%d]", maxCompactTasks,
minMaxCompactTasks, maxMaxCompactTasks);
}
code = buildCmdMsg(pCxt, TDMT_MND_CONFIG_DNODE, (FSerializeFunc)tSerializeSMCfgDnodeReq, &cfgReq);
} else { } else {
code = buildCmdMsg(pCxt, TDMT_MND_CONFIG_DNODE, (FSerializeFunc)tSerializeSMCfgDnodeReq, &cfgReq); code = buildCmdMsg(pCxt, TDMT_MND_CONFIG_DNODE, (FSerializeFunc)tSerializeSMCfgDnodeReq, &cfgReq);
} }
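The reworked checkRangeOption condition treats -1 and -2 as "not set / automatic" sentinels when skipUndef is true, while any other value, including negatives below -2, is now range-checked instead of being skipped. A standalone sketch of that predicate is below; the sentinel meanings are inferred from this diff rather than from a header.

```c
/* Hedged sketch of the skipUndef predicate after this change; sentinel meanings
 * (-1 "unset", -2 "auto") are inferred from the diff. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool outOfRange(int64_t val, int64_t minVal, int64_t maxVal, bool skipUndef) {
  if (skipUndef) {
    /* -1 and -2 pass through untouched; everything else must sit inside [minVal, maxVal]. */
    return (val >= 0 || val < -2) && (val < minVal || val > maxVal);
  }
  return val < minVal || val > maxVal;
}

int main(void) {
  printf("%d\n", outOfRange(-1, 10, 100, true));  /* 0: sentinel skipped              */
  printf("%d\n", outOfRange(-5, 10, 100, true));  /* 1: negative but not a sentinel   */
  printf("%d\n", outOfRange(50, 10, 100, true));  /* 0: in range                      */
  printf("%d\n", outOfRange(5, 10, 100, false));  /* 1: checked unconditionally       */
  return 0;
}
```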

View File

@ -5128,8 +5128,8 @@ static int32_t fltSclCollectOperatorFromNode(SNode *pNode, SArray *sclOpList) {
SOperatorNode *pOper = (SOperatorNode *)pNode; SOperatorNode *pOper = (SOperatorNode *)pNode;
SValueNode *valNode = (SValueNode *)pOper->pRight; SExprNode* pLeft = (SExprNode*)pOper->pLeft;
if (IS_NUMERIC_TYPE(valNode->node.resType.type) || valNode->node.resType.type == TSDB_DATA_TYPE_TIMESTAMP) { if (IS_NUMERIC_TYPE(pLeft->resType.type) || pLeft->resType.type == TSDB_DATA_TYPE_TIMESTAMP) {
SNode* pLeft = NULL, *pRight = NULL; SNode* pLeft = NULL, *pRight = NULL;
int32_t code = nodesCloneNode(pOper->pLeft, &pLeft); int32_t code = nodesCloneNode(pOper->pLeft, &pLeft);
if (TSDB_CODE_SUCCESS != code) { if (TSDB_CODE_SUCCESS != code) {
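The collection check now looks at the left-hand expression's result type instead of casting pOper->pRight to SValueNode, which is only valid when the right operand really is a literal. A hedged sketch of the safer classification is below; the struct shapes are stand-ins, not the real SNode definitions.

```c
/* Hedged sketch: classify by the left expression's type instead of assuming the
 * right operand is a value node. Types are illustrative stand-ins. */
#include <stdbool.h>

typedef enum { T_INT, T_DOUBLE, T_TIMESTAMP, T_VARCHAR } ResType;
typedef struct { ResType type; } SResType;
typedef struct { SResType resType; } ExprNode;
typedef struct { ExprNode *pLeft; ExprNode *pRight; } OperNode;

static bool isNumericOrTs(ResType t) { return t == T_INT || t == T_DOUBLE || t == T_TIMESTAMP; }

/* Works even when pRight is another column, a function call, or anything that
 * is not a plain literal. */
static bool collectForScalarFilter(const OperNode *op) {
  return isNumericOrTs(op->pLeft->resType.type);
}
```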

View File

@ -254,6 +254,9 @@ int32_t uploadCheckpointData(SStreamTask* pTask, int64_t checkpointId, int64_t d
int32_t chkptTriggerRecvMonitorHelper(SStreamTask* pTask, void* param, SArray** ppNotSendList); int32_t chkptTriggerRecvMonitorHelper(SStreamTask* pTask, void* param, SArray** ppNotSendList);
int32_t downloadCheckpointByNameS3(const char* id, const char* fname, const char* dstName); int32_t downloadCheckpointByNameS3(const char* id, const char* fname, const char* dstName);
int32_t uploadCheckpointToS3(const char* id, const char* path); int32_t uploadCheckpointToS3(const char* id, const char* path);
int32_t deleteCheckpointFile(const char* id, const char* name);
int32_t doCheckBeforeHandleChkptTrigger(SStreamTask* pTask, int64_t checkpointId, SStreamDataBlock* pBlock,
int32_t transId);
#ifdef __cplusplus #ifdef __cplusplus
} }

View File

@ -18,7 +18,7 @@
#include "streamBackendRocksdb.h" #include "streamBackendRocksdb.h"
#include "streamInt.h" #include "streamInt.h"
#define CHECK_NOT_RSP_DURATION 10 * 1000 // 10 sec #define CHECK_NOT_RSP_DURATION 60 * 1000 // 60 sec
static void processDownstreamReadyRsp(SStreamTask* pTask); static void processDownstreamReadyRsp(SStreamTask* pTask);
static void rspMonitorFn(void* param, void* tmrId); static void rspMonitorFn(void* param, void* tmrId);
@ -660,7 +660,7 @@ void handleTimeoutDownstreamTasks(SStreamTask* pTask, SArray* pTimeoutList) {
pInfo->timeoutRetryCount += 1; pInfo->timeoutRetryCount += 1;
// timeout more than 100 sec, add into node update list // timeout more than 600 sec, add into node update list
if (pInfo->timeoutRetryCount > 10) { if (pInfo->timeoutRetryCount > 10) {
pInfo->timeoutRetryCount = 0; pInfo->timeoutRetryCount = 0;
@ -674,7 +674,7 @@ void handleTimeoutDownstreamTasks(SStreamTask* pTask, SArray* pTimeoutList) {
findCheckRspStatus(pInfo, *pTaskId, &p); findCheckRspStatus(pInfo, *pTaskId, &p);
if (p != NULL) { if (p != NULL) {
code = streamTaskAddIntoNodeUpdateList(pTask, p->vgId); code = streamTaskAddIntoNodeUpdateList(pTask, p->vgId);
stDebug("s-task:%s vgId:%d downstream task:0x%x (vgId:%d) timeout more than 100sec, add into nodeUpdate list", stDebug("s-task:%s vgId:%d downstream task:0x%x (vgId:%d) timeout more than 600sec, add into nodeUpdate list",
id, vgId, p->taskId, p->vgId); id, vgId, p->taskId, p->vgId);
} }
} }
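Raising CHECK_NOT_RSP_DURATION from 10 s to 60 s while keeping the 10-retry threshold moves the point at which a silent downstream task is added to the nodeUpdate list from roughly 100 s to roughly 600 s, which is what the updated comment and log text say. A one-line arithmetic sketch:

```c
/* Hedged sketch of the timeout arithmetic behind the updated comments. */
#include <stdio.h>

#define CHECK_NOT_RSP_DURATION (60 * 1000) /* ms per retry round, was 10 * 1000 */
#define TIMEOUT_RETRY_THRESHOLD 10         /* unchanged in this diff            */

int main(void) {
  int totalSec = CHECK_NOT_RSP_DURATION / 1000 * TIMEOUT_RETRY_THRESHOLD;
  printf("downstream added to nodeUpdate list after ~%d sec without rsp\n", totalSec); /* ~600 */
  return 0;
}
```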

View File

@ -19,7 +19,6 @@
#include "tcs.h" #include "tcs.h"
static int32_t downloadCheckpointDataByName(const char* id, const char* fname, const char* dstName); static int32_t downloadCheckpointDataByName(const char* id, const char* fname, const char* dstName);
static int32_t deleteCheckpointFile(const char* id, const char* name);
static int32_t streamTaskUploadCheckpoint(const char* id, const char* path, int64_t checkpointId); static int32_t streamTaskUploadCheckpoint(const char* id, const char* path, int64_t checkpointId);
#ifdef BUILD_NO_CALL #ifdef BUILD_NO_CALL
static int32_t deleteCheckpoint(const char* id); static int32_t deleteCheckpoint(const char* id);
@ -230,8 +229,8 @@ int32_t continueDispatchCheckpointTriggerBlock(SStreamDataBlock* pBlock, SStream
return code; return code;
} }
static int32_t doCheckBeforeHandleChkptTrigger(SStreamTask* pTask, int64_t checkpointId, SStreamDataBlock* pBlock, int32_t doCheckBeforeHandleChkptTrigger(SStreamTask* pTask, int64_t checkpointId, SStreamDataBlock* pBlock,
int32_t transId) { int32_t transId) {
int32_t code = 0; int32_t code = 0;
int32_t vgId = pTask->pMeta->vgId; int32_t vgId = pTask->pMeta->vgId;
int32_t taskLevel = pTask->info.taskLevel; int32_t taskLevel = pTask->info.taskLevel;

View File

@ -954,7 +954,6 @@ static int32_t doTaskChkptStatusCheck(SStreamTask* pTask, void* param, int32_t n
int32_t vgId = pTask->pMeta->vgId; int32_t vgId = pTask->pMeta->vgId;
if (pTmrInfo->launchChkptId != pActiveInfo->activeId) { if (pTmrInfo->launchChkptId != pActiveInfo->activeId) {
streamCleanBeforeQuitTmr(pTmrInfo, param);
stWarn("s-task:%s vgId:%d ready-msg send tmr launched by previous checkpoint procedure, checkpointId:%" PRId64 stWarn("s-task:%s vgId:%d ready-msg send tmr launched by previous checkpoint procedure, checkpointId:%" PRId64
", quit", ", quit",
id, vgId, pTmrInfo->launchChkptId); id, vgId, pTmrInfo->launchChkptId);
@ -963,13 +962,11 @@ static int32_t doTaskChkptStatusCheck(SStreamTask* pTask, void* param, int32_t n
// active checkpoint info is cleared for now // active checkpoint info is cleared for now
if ((pActiveInfo->activeId == 0) || (pActiveInfo->transId == 0) || (num == 0) || (pTask->chkInfo.startTs == 0)) { if ((pActiveInfo->activeId == 0) || (pActiveInfo->transId == 0) || (num == 0) || (pTask->chkInfo.startTs == 0)) {
streamCleanBeforeQuitTmr(pTmrInfo, param);
stWarn("s-task:%s vgId:%d active checkpoint may be cleared, quit from readyMsg send tmr", id, vgId); stWarn("s-task:%s vgId:%d active checkpoint may be cleared, quit from readyMsg send tmr", id, vgId);
return -1; return -1;
} }
if (taosArrayGetSize(pTask->upstreamInfo.pList) != num) { if (taosArrayGetSize(pTask->upstreamInfo.pList) != num) {
streamCleanBeforeQuitTmr(pTmrInfo, param);
stWarn("s-task:%s vgId:%d upstream number:%d not equals sent readyMsg:%d, quit from readyMsg send tmr", id, stWarn("s-task:%s vgId:%d upstream number:%d not equals sent readyMsg:%d, quit from readyMsg send tmr", id,
vgId, (int32_t)taosArrayGetSize(pTask->upstreamInfo.pList), num); vgId, (int32_t)taosArrayGetSize(pTask->upstreamInfo.pList), num);
return -1; return -1;
@ -998,6 +995,7 @@ static int32_t doFindNotConfirmUpstream(SArray** ppNotRspList, SArray* pList, in
void* p = taosArrayPush(pTmp, &pInfo->upstreamTaskId); void* p = taosArrayPush(pTmp, &pInfo->upstreamTaskId);
if (p == NULL) { if (p == NULL) {
stError("s-task:%s vgId:%d failed to record not rsp task, code: out of memory", id, vgId); stError("s-task:%s vgId:%d failed to record not rsp task, code: out of memory", id, vgId);
taosArrayDestroy(pTmp);
return terrno; return terrno;
} else { } else {
stDebug("s-task:%s vgId:%d level:%d checkpoint-ready rsp from upstream:0x%x not confirmed yet", id, vgId, level, stDebug("s-task:%s vgId:%d level:%d checkpoint-ready rsp from upstream:0x%x not confirmed yet", id, vgId, level,
@ -1047,13 +1045,13 @@ static void doSendChkptReadyMsg(SStreamTask* pTask, SArray* pNotRspList, int64_t
} }
} }
static int32_t chkptReadyMsgSendHelper(SStreamTask* pTask, void* param, SArray* pNotRspList) { static int32_t chkptReadyMsgSendHelper(SStreamTask* pTask, void* param, SArray** pNotRspList) {
SActiveCheckpointInfo* pActiveInfo = pTask->chkInfo.pActiveInfo; SActiveCheckpointInfo* pActiveInfo = pTask->chkInfo.pActiveInfo;
SStreamTmrInfo* pTmrInfo = &pActiveInfo->chkptReadyMsgTmr; SStreamTmrInfo* pTmrInfo = &pActiveInfo->chkptReadyMsgTmr;
SArray* pList = pActiveInfo->pReadyMsgList; SArray* pList = pActiveInfo->pReadyMsgList;
int32_t num = taosArrayGetSize(pList); int32_t num = taosArrayGetSize(pList);
int32_t vgId = pTask->pMeta->vgId; int32_t vgId = pTask->pMeta->vgId;
int32_t checkpointId = pActiveInfo->activeId; int64_t checkpointId = pActiveInfo->activeId;
const char* id = pTask->id.idStr; const char* id = pTask->id.idStr;
int32_t notRsp = 0; int32_t notRsp = 0;
@ -1062,18 +1060,17 @@ static int32_t chkptReadyMsgSendHelper(SStreamTask* pTask, void* param, SArray*
return code; return code;
} }
code = doFindNotConfirmUpstream(&pNotRspList, pList, num, vgId, pTask->info.taskLevel, id); code = doFindNotConfirmUpstream(pNotRspList, pList, num, vgId, pTask->info.taskLevel, id);
if (code) { if (code) {
streamCleanBeforeQuitTmr(pTmrInfo, param);
stError("s-task:%s failed to find not rsp checkpoint-ready downstream, code:%s, out of tmr", id, tstrerror(code)); stError("s-task:%s failed to find not rsp checkpoint-ready downstream, code:%s, out of tmr", id, tstrerror(code));
return code; return code;
} }
notRsp = taosArrayGetSize(pNotRspList); notRsp = taosArrayGetSize(*pNotRspList);
if (notRsp == 0) { if (notRsp == 0) {
streamClearChkptReadyMsg(pActiveInfo); streamClearChkptReadyMsg(pActiveInfo);
} else { } else {
doSendChkptReadyMsg(pTask, pNotRspList, checkpointId, pList); doSendChkptReadyMsg(pTask, *pNotRspList, checkpointId, pList);
} }
return code; return code;
@ -1137,10 +1134,12 @@ static void chkptReadyMsgSendMonitorFn(void* param, void* tmrId) {
} }
streamMutexLock(&pActiveInfo->lock); streamMutexLock(&pActiveInfo->lock);
code = chkptReadyMsgSendHelper(pTask, param, pNotRspList); code = chkptReadyMsgSendHelper(pTask, param, &pNotRspList);
streamMutexUnlock(&pActiveInfo->lock); streamMutexUnlock(&pActiveInfo->lock);
if (code != TSDB_CODE_SUCCESS) { if (code != TSDB_CODE_SUCCESS) {
streamCleanBeforeQuitTmr(pTmrInfo, param);
streamMetaReleaseTask(pTask->pMeta, pTask); streamMetaReleaseTask(pTask->pMeta, pTask);
taosArrayDestroy(pNotRspList); taosArrayDestroy(pNotRspList);
return; return;
@ -1176,7 +1175,7 @@ int32_t streamTaskSendCheckpointReadyMsg(SStreamTask* pTask) {
int32_t num = taosArrayGetSize(pList); int32_t num = taosArrayGetSize(pList);
if (taosArrayGetSize(pTask->upstreamInfo.pList) != num) { if (taosArrayGetSize(pTask->upstreamInfo.pList) != num) {
stError("s-task:%s invalid number of sent readyMsg:%d to upstream:%d", id, num, stError("s-task:%s invalid number of sent readyMsg:%d to upstream:%d not send chkpt-ready msg", id, num,
(int32_t)taosArrayGetSize(pTask->upstreamInfo.pList)); (int32_t)taosArrayGetSize(pTask->upstreamInfo.pList));
streamMutexUnlock(&pActiveInfo->lock); streamMutexUnlock(&pActiveInfo->lock);
return TSDB_CODE_STREAM_INTERNAL_ERROR; return TSDB_CODE_STREAM_INTERNAL_ERROR;
@ -1200,7 +1199,7 @@ int32_t streamTaskSendCheckpointReadyMsg(SStreamTask* pTask) {
stError("s-task:%s failed to send checkpoint-ready msg, try nex time in 10s", id); stError("s-task:%s failed to send checkpoint-ready msg, try nex time in 10s", id);
} }
} else { } else {
stError("s-task:%s failed to prepare the checkpoint-ready msg, try nex time in 10s", id); stError("s-task:%s failed to prepare the checkpoint-ready msg, try next time in 10s", id);
} }
} }
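chkptReadyMsgSendHelper now takes SArray** so the list allocated inside doFindNotConfirmUpstream is handed back to its single caller, which destroys it on every exit path (the monitor callback already calls taosArrayDestroy(pNotRspList) after the helper returns); the helper also stores checkpointId as int64_t to match the active id it copies. A hedged sketch of the out-parameter ownership pattern, with a small stand-in for SArray:

```c
/* Hedged sketch of the out-parameter ownership pattern adopted here:
 * the helper fills *ppList, the caller frees it on every path. */
#include <stdlib.h>

typedef struct { int *items; int len; } IntArray; /* stand-in for SArray */

static int findNotConfirmed(IntArray **ppList) {
  *ppList = calloc(1, sizeof(IntArray));
  if (*ppList == NULL) return -1;
  /* ... fill the list ... */
  return 0;
}

static int sendHelper(IntArray **ppNotRsp) {
  /* allocation happens behind the out parameter, never in a local the caller cannot see */
  return findNotConfirmed(ppNotRsp);
}

static void monitorCallback(void) {
  IntArray *pNotRsp = NULL;
  int code = sendHelper(&pNotRsp);
  (void)code;
  /* ... use pNotRsp ... */
  if (pNotRsp) { free(pNotRsp->items); free(pNotRsp); } /* freed on every exit path */
}
```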

View File

@ -915,8 +915,7 @@ int32_t streamResumeTask(SStreamTask* pTask) {
while (1) { while (1) {
code = doStreamExecTask(pTask); code = doStreamExecTask(pTask);
if (code) { if (code) {
stError("s-task:%s failed to exec stream task, code:%s", id, tstrerror(code)); stError("s-task:%s failed to exec stream task, code:%s, continue", id, tstrerror(code));
return code;
} }
// check if continue // check if continue
streamMutexLock(&pTask->lock); streamMutexLock(&pTask->lock);

View File

@ -546,10 +546,6 @@ void streamStateDestroy(SStreamState* pState, bool remove) {
taosMemoryFreeClear(pState); taosMemoryFreeClear(pState);
} }
int32_t streamStateDeleteCheckPoint(SStreamState* pState, TSKEY mark) {
return deleteExpiredCheckPoint(pState->pFileState, mark);
}
void streamStateReloadInfo(SStreamState* pState, TSKEY ts) { streamFileStateReloadInfo(pState->pFileState, ts); } void streamStateReloadInfo(SStreamState* pState, TSKEY ts) { streamFileStateReloadInfo(pState->pFileState, ts); }
void streamStateCopyBackend(SStreamState* src, SStreamState* dst) { void streamStateCopyBackend(SStreamState* src, SStreamState* dst) {
@ -617,8 +613,6 @@ int32_t streamStateGroupGetKVByCur(SStreamStateCur* pCur, int64_t* pKey, void**
void streamStateClearExpiredState(SStreamState* pState) { clearExpiredState(pState->pFileState); } void streamStateClearExpiredState(SStreamState* pState) { clearExpiredState(pState->pFileState); }
void streamStateSetFillInfo(SStreamState* pState) { setFillInfo(pState->pFileState); }
int32_t streamStateGetPrev(SStreamState* pState, const SWinKey* pKey, SWinKey* pResKey, void** pVal, int32_t* pVLen, int32_t streamStateGetPrev(SStreamState* pState, const SWinKey* pKey, SWinKey* pResKey, void** pVal, int32_t* pVLen,
int32_t* pWinCode) { int32_t* pWinCode) {
return getRowStatePrevRow(pState->pFileState, pKey, pResKey, pVal, pVLen, pWinCode); return getRowStatePrevRow(pState->pFileState, pKey, pResKey, pVal, pVLen, pWinCode);

View File

@ -667,18 +667,6 @@ void deleteRowBuff(SStreamFileState* pFileState, const void* pKey, int32_t keyLe
} }
} }
int32_t resetRowBuff(SStreamFileState* pFileState, const void* pKey, int32_t keyLen) {
int32_t code_buff = pFileState->stateBuffRemoveFn(pFileState->rowStateBuff, pKey, keyLen);
int32_t code_file = pFileState->stateFileRemoveFn(pFileState, pKey);
if (pFileState->searchBuff != NULL) {
deleteHashSortRowBuff(pFileState, pKey);
}
if (code_buff == TSDB_CODE_SUCCESS || code_file == TSDB_CODE_SUCCESS) {
return TSDB_CODE_SUCCESS;
}
return TSDB_CODE_FAILED;
}
static int32_t recoverSessionRowBuff(SStreamFileState* pFileState, SRowBuffPos* pPos) { static int32_t recoverSessionRowBuff(SStreamFileState* pFileState, SRowBuffPos* pPos) {
int32_t code = TSDB_CODE_SUCCESS; int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0; int32_t lino = 0;
@ -868,10 +856,6 @@ int32_t forceRemoveCheckpoint(SStreamFileState* pFileState, int64_t checkpointId
return streamDefaultDel_rocksdb(pFileState->pFileStore, keyBuf); return streamDefaultDel_rocksdb(pFileState->pFileStore, keyBuf);
} }
int32_t getSnapshotIdList(SStreamFileState* pFileState, SArray* list) {
return streamDefaultIterGet_rocksdb(pFileState->pFileStore, TASK_KEY, NULL, list);
}
int32_t deleteExpiredCheckPoint(SStreamFileState* pFileState, TSKEY mark) { int32_t deleteExpiredCheckPoint(SStreamFileState* pFileState, TSKEY mark) {
int32_t code = TSDB_CODE_SUCCESS; int32_t code = TSDB_CODE_SUCCESS;
int64_t maxCheckPointId = 0; int64_t maxCheckPointId = 0;
@ -1227,10 +1211,6 @@ SSHashObj* getGroupIdCache(SStreamFileState* pFileState) {
return pFileState->pGroupIdMap; return pFileState->pGroupIdMap;
} }
void setFillInfo(SStreamFileState* pFileState) {
pFileState->hasFillCatch = false;
}
void clearExpiredState(SStreamFileState* pFileState) { void clearExpiredState(SStreamFileState* pFileState) {
int32_t code = TSDB_CODE_SUCCESS; int32_t code = TSDB_CODE_SUCCESS;
int32_t lino = 0; int32_t lino = 0;
@ -1261,6 +1241,7 @@ _end:
} }
} }
#ifdef BUILD_NO_CALL
int32_t getStateSearchRowBuff(SStreamFileState* pFileState, const SWinKey* pKey, void** pVal, int32_t* pVLen, int32_t getStateSearchRowBuff(SStreamFileState* pFileState, const SWinKey* pKey, void** pVal, int32_t* pVLen,
int32_t* pWinCode) { int32_t* pWinCode) {
int32_t code = TSDB_CODE_SUCCESS; int32_t code = TSDB_CODE_SUCCESS;
@ -1328,6 +1309,7 @@ _end:
} }
return code; return code;
} }
#endif
int32_t getRowStatePrevRow(SStreamFileState* pFileState, const SWinKey* pKey, SWinKey* pResKey, void** ppVal, int32_t getRowStatePrevRow(SStreamFileState* pFileState, const SWinKey* pKey, SWinKey* pResKey, void** ppVal,
int32_t* pVLen, int32_t* pWinCode) { int32_t* pVLen, int32_t* pWinCode) {

View File

@ -390,9 +390,78 @@ TEST(sstreamTaskGetTriggerRecvStatusTest, streamTaskGetTriggerRecvStatusFnTest)
extern int8_t tsS3EpNum; extern int8_t tsS3EpNum;
tsS3EpNum = 1; tsS3EpNum = 1;
code = uploadCheckpointToS3("123", "/tmp/backend5/stream"); code = uploadCheckpointToS3("123", "/tmp/backend5/stream/stream");
EXPECT_EQ(code, TSDB_CODE_SUCCESS); EXPECT_NE(code, TSDB_CODE_OUT_OF_RANGE);
code = downloadCheckpointByNameS3("123", "/root/download", ""); code = downloadCheckpointByNameS3("123", "/root/download", "");
EXPECT_NE(code, TSDB_CODE_OUT_OF_RANGE); EXPECT_NE(code, TSDB_CODE_OUT_OF_RANGE);
code = deleteCheckpointFile("aaa123", "bbb");
EXPECT_NE(code, TSDB_CODE_OUT_OF_RANGE);
} }
TEST(doCheckBeforeHandleChkptTriggerTest, doCheckBeforeHandleChkptTriggerFnTest) {
SStreamTask* pTask = NULL;
int64_t uid = 2222222222222;
SArray* array = taosArrayInit(4, POINTER_BYTES);
int32_t code = tNewStreamTask(uid, TASK_LEVEL__SINK, NULL, false, 0, 0, array,
false, 1, &pTask);
ASSERT_EQ(code, TSDB_CODE_SUCCESS);
initTaskLock(pTask);
const char *path = "/tmp/doCheckBeforeHandleChkptTriggerTest/stream";
code = streamMetaOpen((path), NULL, NULL, NULL, 0, 0, NULL, &pTask->pMeta);
ASSERT_EQ(code, TSDB_CODE_SUCCESS);
SStreamState *pState = streamStateOpen((char *)path, pTask, 0, 0);
ASSERT(pState != NULL);
pTask->pBackend = pState->pTdbState->pOwner->pBackend;
code = streamTaskCreateActiveChkptInfo(&pTask->chkInfo.pActiveInfo);
ASSERT_EQ(code, TSDB_CODE_SUCCESS);
pTask->chkInfo.checkpointId = 123;
code = doCheckBeforeHandleChkptTrigger(pTask, 100, NULL, 0);
ASSERT_EQ(code, TSDB_CODE_STREAM_INVLD_CHKPT);
pTask->chkInfo.pActiveInfo->failedId = 223;
code = doCheckBeforeHandleChkptTrigger(pTask, 200, NULL, 0);
ASSERT_EQ(code, TSDB_CODE_STREAM_INVLD_CHKPT);
SStreamDataBlock block;
block.srcTaskId = 456;
SStreamTask upTask;
upTask = *pTask;
upTask.id.taskId = 456;
streamTaskSetUpstreamInfo(pTask, &upTask);
pTask->chkInfo.pActiveInfo->failedId = 23;
code = doCheckBeforeHandleChkptTrigger(pTask, 123, &block, 0);
ASSERT_EQ(code, TSDB_CODE_STREAM_INVLD_CHKPT);
streamTaskSetUpstreamInfo(pTask, &upTask);
streamTaskSetStatusReady(pTask);
code = streamTaskHandleEvent(pTask->status.pSM, TASK_EVENT_GEN_CHECKPOINT);
ASSERT_EQ(code, TSDB_CODE_SUCCESS);
pTask->chkInfo.pActiveInfo->activeId = 223;
STaskCheckpointReadyInfo readyInfo;
readyInfo.upstreamTaskId = 4567;
block.srcTaskId = 4567;
void* pBuf = rpcMallocCont(sizeof(SMsgHead) + 1);
initRpcMsg(&readyInfo.msg, 0, pBuf, sizeof(SMsgHead) + 1);
taosArrayPush(pTask->chkInfo.pActiveInfo->pReadyMsgList, &readyInfo);
code = doCheckBeforeHandleChkptTrigger(pTask, 223, &block, 0);
ASSERT_NE(code, TSDB_CODE_SUCCESS);
pTask->chkInfo.pActiveInfo->allUpstreamTriggerRecv = 1;
code = doCheckBeforeHandleChkptTrigger(pTask, 223, &block, 0);
ASSERT_NE(code, TSDB_CODE_SUCCESS);
pTask->chkInfo.pActiveInfo->activeId = 1111;
code = doCheckBeforeHandleChkptTrigger(pTask, 223, &block, 0);
ASSERT_EQ(code, TSDB_CODE_STREAM_INVLD_CHKPT);
}

View File

@ -24,6 +24,8 @@
#include "tglobal.h" #include "tglobal.h"
#include "ttime.h" #include "ttime.h"
#define FQDNRETRYTIMES 100
static void syncCfg2SimpleStr(const SSyncCfg* pCfg, char* buf, int32_t bufLen) { static void syncCfg2SimpleStr(const SSyncCfg* pCfg, char* buf, int32_t bufLen) {
int32_t len = tsnprintf(buf, bufLen, "{num:%d, as:%d, [", pCfg->replicaNum, pCfg->myIndex); int32_t len = tsnprintf(buf, bufLen, "{num:%d, as:%d, [", pCfg->replicaNum, pCfg->myIndex);
for (int32_t i = 0; i < pCfg->replicaNum; ++i) { for (int32_t i = 0; i < pCfg->replicaNum; ++i) {
@ -45,7 +47,8 @@ void syncUtilNodeInfo2EpSet(const SNodeInfo* pInfo, SEpSet* pEpSet) {
bool syncUtilNodeInfo2RaftId(const SNodeInfo* pInfo, SyncGroupId vgId, SRaftId* raftId) { bool syncUtilNodeInfo2RaftId(const SNodeInfo* pInfo, SyncGroupId vgId, SRaftId* raftId) {
uint32_t ipv4 = 0xFFFFFFFF; uint32_t ipv4 = 0xFFFFFFFF;
sDebug("vgId:%d, resolve sync addr from fqdn, ep:%s:%u", vgId, pInfo->nodeFqdn, pInfo->nodePort); sDebug("vgId:%d, resolve sync addr from fqdn, ep:%s:%u", vgId, pInfo->nodeFqdn, pInfo->nodePort);
for (int32_t i = 0; i < tsResolveFQDNRetryTime; i++) {
for (int32_t i = 0; i < FQDNRETRYTIMES; i++) {
int32_t code = taosGetIpv4FromFqdn(pInfo->nodeFqdn, &ipv4); int32_t code = taosGetIpv4FromFqdn(pInfo->nodeFqdn, &ipv4);
if (code) { if (code) {
sError("vgId:%d, failed to resolve sync addr, dnode:%d fqdn:%s, retry", vgId, pInfo->nodeId, pInfo->nodeFqdn); sError("vgId:%d, failed to resolve sync addr, dnode:%d fqdn:%s, retry", vgId, pInfo->nodeId, pInfo->nodeFqdn);

View File

@ -2023,16 +2023,29 @@ int tdbBtreePrev(SBTC *pBtc, void **ppKey, int *kLen, void **ppVal, int *vLen) {
memcpy(pKey, cd.pKey, (size_t)cd.kLen); memcpy(pKey, cd.pKey, (size_t)cd.kLen);
if (ppVal) { if (ppVal) {
// TODO: vLen may be zero if (cd.vLen > 0) {
pVal = tdbRealloc(*ppVal, cd.vLen); pVal = tdbRealloc(*ppVal, cd.vLen);
if (pVal == NULL) { if (pVal == NULL) {
tdbFree(pKey); tdbFree(pKey);
return terrno; return terrno;
}
memcpy(pVal, cd.pVal, (size_t)cd.vLen);
if (TDB_CELLDECODER_FREE_VAL(&cd)) {
tdbTrace("tdb/btree-next decoder: %p pVal free: %p", &cd, cd.pVal);
tdbFree(cd.pVal);
}
} else {
pVal = NULL;
} }
*ppVal = pVal; *ppVal = pVal;
*vLen = cd.vLen; *vLen = cd.vLen;
memcpy(pVal, cd.pVal, (size_t)cd.vLen); } else {
if (TDB_CELLDECODER_FREE_VAL(&cd)) {
tdbTrace("tdb/btree-next2 decoder: %p pVal free: %p", &cd, cd.pVal);
tdbFree(cd.pVal);
}
} }
ret = tdbBtcMoveToPrev(pBtc); ret = tdbBtcMoveToPrev(pBtc);
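The old code unconditionally called tdbRealloc(*ppVal, cd.vLen) and memcpy'd the value, which the removed TODO flagged as unsafe when vLen is zero; the new code only allocates and copies for a positive length, otherwise returns *ppVal = NULL with *vLen = 0, and still releases the decoder's buffer when the caller did not ask for the value at all. A hedged sketch of the zero-length guard, with libc realloc/free standing in for the tdb wrappers:

```c
/* Hedged sketch of the zero-length value guard; realloc/free stand in for
 * tdbRealloc/tdbFree. */
#include <stdlib.h>
#include <string.h>

static int copyOutValue(void **ppVal, int *vLen, const void *src, int srcLen) {
  if (srcLen > 0) {
    void *p = realloc(*ppVal, (size_t)srcLen);
    if (p == NULL) return -1; /* caller frees its key and bails out */
    memcpy(p, src, (size_t)srcLen);
    *ppVal = p;
  } else {
    *ppVal = NULL; /* no allocation and no memcpy for an empty value */
  }
  *vLen = srcLen;
  return 0;
}
```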

View File

@ -66,3 +66,10 @@ int tdbGetFileSize(tdb_fd_t fd, int szPage, SPgno *size) {
*size = szBytes / szPage; *size = szBytes / szPage;
return 0; return 0;
} }
void tdbCloseDir(TdDirPtr *ppDir) {
int32_t ret = taosCloseDir(ppDir);
if (ret) {
tdbError("failed to close directory, reason:%s", tstrerror(ret));
}
}

View File

@ -71,12 +71,7 @@ typedef TdFilePtr tdb_fd_t;
#define tdbGetDirEntryName taosGetDirEntryName #define tdbGetDirEntryName taosGetDirEntryName
#define tdbDirEntryBaseName taosDirEntryBaseName #define tdbDirEntryBaseName taosDirEntryBaseName
static FORCE_INLINE void tdbCloseDir(TdDirPtr *ppDir) { void tdbCloseDir(TdDirPtr *ppDir);
int32_t ret = taosCloseDir(ppDir);
if (ret) {
tdbError("failed to close directory, reason:%s", tstrerror(ret));
}
}
#define tdbOsRemove remove #define tdbOsRemove remove
#define tdbOsFileSize(FD, PSIZE) taosFStatFile(FD, PSIZE, NULL) #define tdbOsFileSize(FD, PSIZE) taosFStatFile(FD, PSIZE, NULL)
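tdbCloseDir moves from a static FORCE_INLINE in tdbOs.h to a plain declaration, with its single definition in tdbOsFile.c, so the header no longer drags in tdbError/tstrerror and every translation unit shares one copy. A self-contained hedged sketch of the same header/source split, using POSIX stand-ins for TdDirPtr and the logging macro:

```c
/* Hedged sketch: moving an inline helper out of a shared header.
 * DIR/closedir/perror stand in for TdDirPtr/taosCloseDir/tdbError. */
#include <dirent.h>
#include <stdio.h>

/* header: declaration only, no logging dependency */
void closeDirLogged(DIR **ppDir);

/* one .c file: the single out-of-line definition */
void closeDirLogged(DIR **ppDir) {
  if (ppDir == NULL || *ppDir == NULL) return;
  if (closedir(*ppDir) != 0) {
    perror("failed to close directory"); /* stand-in for tdbError(... tstrerror(ret)) */
  }
  *ppDir = NULL;
}
```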

View File

@ -16,6 +16,10 @@
#ifndef _TD_TFS_INT_H_ #ifndef _TD_TFS_INT_H_
#define _TD_TFS_INT_H_ #define _TD_TFS_INT_H_
#ifdef __cplusplus
extern "C" {
#endif
#include "os.h" #include "os.h"
#include "taosdef.h" #include "taosdef.h"
@ -74,6 +78,7 @@ typedef struct STfs {
SHashObj *hash; // name to did map SHashObj *hash; // name to did map
} STfs; } STfs;
int32_t tfsCheckAndFormatCfg(STfs *pTfs, SDiskCfg *pCfg);
int32_t tfsNewDisk(int32_t level, int32_t id, int8_t disable, const char *dir, STfsDisk **ppDisk); int32_t tfsNewDisk(int32_t level, int32_t id, int8_t disable, const char *dir, STfsDisk **ppDisk);
STfsDisk *tfsFreeDisk(STfsDisk *pDisk); STfsDisk *tfsFreeDisk(STfsDisk *pDisk);
int32_t tfsUpdateDiskSize(STfsDisk *pDisk); int32_t tfsUpdateDiskSize(STfsDisk *pDisk);
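tfsInt.h gains an extern "C" guard because the C++ unit test below now includes it directly; without the guard the C++ compiler would mangle internal symbols such as tfsCheckAndFormatCfg and the link against the C objects would fail. A minimal hedged sketch of the guard pattern (the closing brace for tfsInt.h itself is outside this hunk):

```c
/* Hedged sketch of the extern "C" guard pattern for a C header used by C++ tests. */
#ifndef MY_INTERNAL_H_
#define MY_INTERNAL_H_

#include <stdint.h>

#ifdef __cplusplus
extern "C" {
#endif

int32_t myInternalCheck(void); /* keeps C linkage when included from a C++ test */

#ifdef __cplusplus
}
#endif

#endif /* MY_INTERNAL_H_ */
```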

View File

@ -19,7 +19,6 @@
static int32_t tfsMount(STfs *pTfs, SDiskCfg *pCfg); static int32_t tfsMount(STfs *pTfs, SDiskCfg *pCfg);
static int32_t tfsCheck(STfs *pTfs); static int32_t tfsCheck(STfs *pTfs);
static int32_t tfsCheckAndFormatCfg(STfs *pTfs, SDiskCfg *pCfg);
static int32_t tfsFormatDir(char *idir, char *odir); static int32_t tfsFormatDir(char *idir, char *odir);
static int32_t tfsGetDiskByName(STfs *pTfs, const char *dir, STfsDisk **ppDisk); static int32_t tfsGetDiskByName(STfs *pTfs, const char *dir, STfsDisk **ppDisk);
static int32_t tfsOpendirImpl(STfs *pTfs, STfsDir *pDir); static int32_t tfsOpendirImpl(STfs *pTfs, STfsDir *pDir);
@ -245,13 +244,13 @@ void tfsDirname(const STfsFile *pFile, char *dest) {
tstrncpy(tname, pFile->aname, TSDB_FILENAME_LEN); tstrncpy(tname, pFile->aname, TSDB_FILENAME_LEN);
tstrncpy(dest, taosDirName(tname), TSDB_FILENAME_LEN); tstrncpy(dest, taosDirName(tname), TSDB_FILENAME_LEN);
} }
#if 0
void tfsAbsoluteName(STfs *pTfs, SDiskID diskId, const char *rname, char *aname) { void tfsAbsoluteName(STfs *pTfs, SDiskID diskId, const char *rname, char *aname) {
STfsDisk *pDisk = TFS_DISK_AT(pTfs, diskId); STfsDisk *pDisk = TFS_DISK_AT(pTfs, diskId);
(void)snprintf(aname, TSDB_FILENAME_LEN, "%s%s%s", pDisk->path, TD_DIRSEP, rname); (void)snprintf(aname, TSDB_FILENAME_LEN, "%s%s%s", pDisk->path, TD_DIRSEP, rname);
} }
#endif
int32_t tfsRemoveFile(const STfsFile *pFile) { return taosRemoveFile(pFile->aname); } int32_t tfsRemoveFile(const STfsFile *pFile) { return taosRemoveFile(pFile->aname); }
int32_t tfsCopyFile(const STfsFile *pFile1, const STfsFile *pFile2) { int32_t tfsCopyFile(const STfsFile *pFile1, const STfsFile *pFile2) {
@ -340,7 +339,7 @@ int32_t tfsMkdir(STfs *pTfs, const char *rname) {
TAOS_RETURN(0); TAOS_RETURN(0);
} }
#if 0
bool tfsDirExistAt(STfs *pTfs, const char *rname, SDiskID diskId) { bool tfsDirExistAt(STfs *pTfs, const char *rname, SDiskID diskId) {
STfsDisk *pDisk = TFS_DISK_AT(pTfs, diskId); STfsDisk *pDisk = TFS_DISK_AT(pTfs, diskId);
char aname[TMPNAME_LEN]; char aname[TMPNAME_LEN];
@ -348,7 +347,7 @@ bool tfsDirExistAt(STfs *pTfs, const char *rname, SDiskID diskId) {
(void)snprintf(aname, TMPNAME_LEN, "%s%s%s", pDisk->path, TD_DIRSEP, rname); (void)snprintf(aname, TMPNAME_LEN, "%s%s%s", pDisk->path, TD_DIRSEP, rname);
return taosDirExist(aname); return taosDirExist(aname);
} }
#endif
int32_t tfsRmdir(STfs *pTfs, const char *rname) { int32_t tfsRmdir(STfs *pTfs, const char *rname) {
if (rname[0] == 0) { if (rname[0] == 0) {
TAOS_RETURN(0); TAOS_RETURN(0);
@ -515,7 +514,7 @@ _exit:
TAOS_RETURN(code); TAOS_RETURN(code);
} }
static int32_t tfsCheckAndFormatCfg(STfs *pTfs, SDiskCfg *pCfg) { int32_t tfsCheckAndFormatCfg(STfs *pTfs, SDiskCfg *pCfg) {
int32_t code = 0; int32_t code = 0;
char dirName[TSDB_FILENAME_LEN] = "\0"; char dirName[TSDB_FILENAME_LEN] = "\0";
@ -577,32 +576,32 @@ static int32_t tfsCheckAndFormatCfg(STfs *pTfs, SDiskCfg *pCfg) {
} }
static int32_t tfsFormatDir(char *idir, char *odir) { static int32_t tfsFormatDir(char *idir, char *odir) {
int32_t code = 0, lino = 0;
wordexp_t wep = {0}; wordexp_t wep = {0};
int32_t dirLen = 0;
char tmp[PATH_MAX] = {0};
int32_t code = wordexp(idir, &wep, 0); code = wordexp(idir, &wep, 0);
if (code != 0) { if (code != 0) {
TAOS_RETURN(TAOS_SYSTEM_ERROR(code)); TAOS_CHECK_EXIT(TAOS_SYSTEM_ERROR(code));
} }
char tmp[PATH_MAX] = {0}; TAOS_CHECK_EXIT(taosRealPath(wep.we_wordv[0], tmp, PATH_MAX));
if (taosRealPath(wep.we_wordv[0], tmp, PATH_MAX) != 0) {
code = TAOS_SYSTEM_ERROR(errno);
wordfree(&wep);
TAOS_RETURN(code);
}
int32_t dirLen = strlen(tmp); dirLen = strlen(tmp);
if (dirLen < 0 || dirLen >= TSDB_FILENAME_LEN) { if (dirLen < 0 || dirLen >= TSDB_FILENAME_LEN) {
wordfree(&wep); TAOS_CHECK_EXIT(TSDB_CODE_OUT_OF_RANGE);
code = TSDB_CODE_OUT_OF_RANGE;
fError("failed to mount %s to FS since %s, real path:%s, len:%d", idir, tstrerror(code), tmp, dirLen);
TAOS_RETURN(code);
} }
tstrncpy(odir, tmp, TSDB_FILENAME_LEN); tstrncpy(odir, tmp, TSDB_FILENAME_LEN);
_exit:
wordfree(&wep); wordfree(&wep);
TAOS_RETURN(0); if (code != 0) {
fError("failed to mount %s to FS at line %d since %s, real path:%s, len:%d", idir, lino, tstrerror(code), tmp,
dirLen);
}
TAOS_RETURN(code);
} }
static int32_t tfsCheck(STfs *pTfs) { static int32_t tfsCheck(STfs *pTfs) {
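tfsFormatDir is restructured around TAOS_CHECK_EXIT/goto _exit so wordfree() runs exactly once on every path and a single fError line reports the failing step, where the previous version freed wep in three separate branches. A hedged sketch of the same single-exit cleanup shape using plain POSIX calls instead of the repo's macros:

```c
/* Hedged sketch of the single-exit cleanup shape; wep is zero-initialized so the
 * unconditional wordfree() mirrors the repo code. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <wordexp.h>

static int formatDir(const char *idir, char *odir, size_t odirLen) {
  int       code = 0;
  char      tmp[PATH_MAX] = {0};
  wordexp_t wep = {0};

  code = wordexp(idir, &wep, 0);
  if (code != 0) goto _exit;

  if (realpath(wep.we_wordv[0], tmp) == NULL) { code = -1; goto _exit; }
  if (strlen(tmp) >= odirLen)                 { code = -2; goto _exit; }

  snprintf(odir, odirLen, "%s", tmp);

_exit:
  wordfree(&wep); /* exactly one cleanup point */
  if (code != 0) fprintf(stderr, "failed to format dir %s: code %d\n", idir, code);
  return code;
}
```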

View File

@ -41,13 +41,13 @@ void tfsDestroyTier(STfsTier *pTier) {
int32_t tfsMountDiskToTier(STfsTier *pTier, SDiskCfg *pCfg, STfsDisk **ppDisk) { int32_t tfsMountDiskToTier(STfsTier *pTier, SDiskCfg *pCfg, STfsDisk **ppDisk) {
int32_t code = 0; int32_t code = 0;
int32_t lino = 0; int32_t lino = 0;
int32_t id = 0;
STfsDisk *pDisk = NULL; STfsDisk *pDisk = NULL;
if (pTier->ndisk >= TFS_MAX_DISKS_PER_TIER) { if (pTier->ndisk >= TFS_MAX_DISKS_PER_TIER) {
TAOS_CHECK_GOTO(TSDB_CODE_FS_TOO_MANY_MOUNT, &lino, _exit); TAOS_CHECK_GOTO(TSDB_CODE_FS_TOO_MANY_MOUNT, &lino, _exit);
} }
int32_t id = 0;
if (pTier->level == 0) { if (pTier->level == 0) {
if (pTier->disks[0] != NULL) { if (pTier->disks[0] != NULL) {
id = pTier->ndisk; id = pTier->ndisk;

View File

@ -7,8 +7,13 @@ target_link_libraries(
PUBLIC tfs PUBLIC tfs
PUBLIC gtest_main PUBLIC gtest_main
) )
target_include_directories(
tfs_test
PUBLIC "${TD_SOURCE_DIR}/include/libs/tfs"
PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/../inc"
)
# add_test( add_test(
# NAME tfs_test NAME tfs_test
# COMMAND tfs_test COMMAND tfs_test
# ) )

View File

@ -13,6 +13,7 @@
#include "os.h" #include "os.h"
#include "tfs.h" #include "tfs.h"
#include "tfsInt.h"
class TfsTest : public ::testing::Test { class TfsTest : public ::testing::Test {
protected: protected:
@ -280,6 +281,9 @@ TEST_F(TfsTest, 04_File) {
const STfsFile *pf2 = tfsReaddir(pDir); const STfsFile *pf2 = tfsReaddir(pDir);
EXPECT_EQ(pf2, nullptr); EXPECT_EQ(pf2, nullptr);
pDir->pDir = taosOpenDir(fulldir);
EXPECT_NE(pDir->pDir, nullptr);
tfsClosedir(pDir); tfsClosedir(pDir);
} }
@ -744,3 +748,116 @@ TEST_F(TfsTest, 05_MultiDisk) {
tfsClose(pTfs); tfsClose(pTfs);
} }
TEST_F(TfsTest, 06_Misc) {
// tfsDisk.c
STfsDisk *pDisk = NULL;
EXPECT_EQ(tfsNewDisk(0, 0, 0, NULL, &pDisk), TSDB_CODE_INVALID_PARA);
EXPECT_NE(tfsNewDisk(0, 0, 0, "", &pDisk), 0);
STfsDisk disk = {0};
EXPECT_EQ(tfsUpdateDiskSize(&disk), TSDB_CODE_INVALID_PARA);
// tfsTier.c
STfsTier tfsTier = {0};
EXPECT_EQ(taosThreadSpinInit(&tfsTier.lock, 0), 0);
EXPECT_EQ(tfsAllocDiskOnTier(&tfsTier), TSDB_CODE_FS_NO_VALID_DISK);
tfsTier.ndisk = 3;
tfsTier.nAvailDisks = 1;
tfsTier.disks[1] = &disk;
disk.disable = 1;
EXPECT_EQ(tfsAllocDiskOnTier(&tfsTier), TSDB_CODE_FS_NO_VALID_DISK);
disk.disable = 0;
disk.size.avail = 0;
EXPECT_EQ(tfsAllocDiskOnTier(&tfsTier), TSDB_CODE_FS_NO_VALID_DISK);
tfsTier.ndisk = TFS_MAX_DISKS_PER_TIER;
SDiskCfg diskCfg = {0};
tstrncpy(diskCfg.dir, "testDataDir", TSDB_FILENAME_LEN);
EXPECT_EQ(tfsMountDiskToTier(&tfsTier, &diskCfg, 0), TSDB_CODE_FS_TOO_MANY_MOUNT);
EXPECT_EQ(taosThreadSpinDestroy(&tfsTier.lock), 0);
// tfs.c
STfs *pTfs = NULL;
EXPECT_EQ(tfsOpen(0, -1, &pTfs), TSDB_CODE_INVALID_PARA);
EXPECT_EQ(tfsOpen(0, 0, &pTfs), TSDB_CODE_INVALID_PARA);
EXPECT_EQ(tfsOpen(0, TFS_MAX_DISKS + 1, &pTfs), TSDB_CODE_INVALID_PARA);
taosMemoryFreeClear(pTfs);
STfs tfs = {0};
STfsTier *pTier = &tfs.tiers[0];
EXPECT_EQ(tfsDiskSpaceAvailable(&tfs, -1), false);
tfs.nlevel = 2;
pTier->ndisk = 3;
pTier->nAvailDisks = 1;
EXPECT_EQ(tfsDiskSpaceAvailable(&tfs, 0), false);
pTier->disks[0] = &disk;
EXPECT_EQ(tfsDiskSpaceAvailable(&tfs, 0), false);
EXPECT_EQ(tfsDiskSpaceSufficient(&tfs, -1, 0), false);
EXPECT_EQ(tfsDiskSpaceSufficient(&tfs, tfs.nlevel + 1, 0), false);
EXPECT_EQ(tfsDiskSpaceSufficient(&tfs, 0, -1), false);
EXPECT_EQ(tfsDiskSpaceSufficient(&tfs, 0, pTier->ndisk), false);
EXPECT_EQ(tfsGetDisksAtLevel(&tfs, -1), 0);
EXPECT_EQ(tfsGetDisksAtLevel(&tfs, tfs.nlevel), 0);
EXPECT_EQ(tfsGetLevel(&tfs), tfs.nlevel);
for (int32_t l = 0; l < tfs.nlevel; ++l) {
EXPECT_EQ(taosThreadSpinInit(&tfs.tiers[l].lock, 0), 0);
}
SDiskID diskID = {0};
disk.size.avail = TFS_MIN_DISK_FREE_SIZE;
EXPECT_EQ(tfsAllocDisk(&tfs, tfs.nlevel, &diskID), 0);
tfs.nlevel = 0;
diskID.level = 0;
EXPECT_EQ(tfsAllocDisk(&tfs, 0, &diskID), 0);
tfs.nlevel = 2;
diskID.id = 10;
EXPECT_EQ(tfsMkdirAt(&tfs, NULL, diskID), TSDB_CODE_FS_INVLD_CFG);
EXPECT_NE(tfsMkdirRecurAt(&tfs, NULL, diskID), 0);
const char *rname = "";
EXPECT_EQ(tfsRmdir(&tfs, rname), 0);
EXPECT_EQ(tfsSearch(&tfs, -1, NULL), -1);
EXPECT_EQ(tfsSearch(&tfs, tfs.nlevel, NULL), -1);
diskCfg.level = -1;
EXPECT_EQ(tfsCheckAndFormatCfg(&tfs, &diskCfg), TSDB_CODE_FS_INVLD_CFG);
diskCfg.level = TFS_MAX_TIERS;
EXPECT_EQ(tfsCheckAndFormatCfg(&tfs, &diskCfg), TSDB_CODE_FS_INVLD_CFG);
diskCfg.level = 0;
diskCfg.primary = -1;
EXPECT_EQ(tfsCheckAndFormatCfg(&tfs, &diskCfg), TSDB_CODE_FS_INVLD_CFG);
diskCfg.primary = 2;
EXPECT_EQ(tfsCheckAndFormatCfg(&tfs, &diskCfg), TSDB_CODE_FS_INVLD_CFG);
diskCfg.primary = 1;
diskCfg.disable = -1;
EXPECT_EQ(tfsCheckAndFormatCfg(&tfs, &diskCfg), TSDB_CODE_FS_INVLD_CFG);
diskCfg.disable = 2;
EXPECT_EQ(tfsCheckAndFormatCfg(&tfs, &diskCfg), TSDB_CODE_FS_INVLD_CFG);
diskCfg.disable = 0;
diskCfg.level = 1;
EXPECT_EQ(tfsCheckAndFormatCfg(&tfs, &diskCfg), TSDB_CODE_FS_INVLD_CFG);
diskCfg.level = 0;
diskCfg.primary = 0;
tstrncpy(diskCfg.dir, "testDataDir1", TSDB_FILENAME_LEN);
EXPECT_NE(tfsCheckAndFormatCfg(&tfs, &diskCfg), 0);
TdFilePtr pFile = taosCreateFile("testDataDir1", TD_FILE_CREATE);
EXPECT_NE(pFile, nullptr);
EXPECT_EQ(tfsCheckAndFormatCfg(&tfs, &diskCfg), TSDB_CODE_FS_INVLD_CFG);
EXPECT_EQ(taosCloseFile(&pFile), 0);
EXPECT_EQ(taosRemoveFile("testDataDir1"), 0);
for (int32_t l = 0; l < tfs.nlevel; ++l) {
EXPECT_EQ(taosThreadSpinDestroy(&tfs.tiers[l].lock), 0);
}
}

View File

@ -151,7 +151,6 @@ typedef struct SCliThrd {
TdThreadMutex msgMtx; TdThreadMutex msgMtx;
SDelayQueue* delayQueue; SDelayQueue* delayQueue;
SDelayQueue* timeoutQueue; SDelayQueue* timeoutQueue;
SDelayQueue* waitConnQueue;
uint64_t nextTimeout; // next timeout uint64_t nextTimeout; // next timeout
STrans* pInst; // STrans* pInst; //
@ -159,8 +158,6 @@ typedef struct SCliThrd {
SHashObj* fqdn2ipCache; SHashObj* fqdn2ipCache;
SCvtAddr* pCvtAddr; SCvtAddr* pCvtAddr;
SHashObj* failFastCache;
SHashObj* batchCache;
SHashObj* connHeapCache; SHashObj* connHeapCache;
SCliReq* stopMsg; SCliReq* stopMsg;
@ -224,8 +221,6 @@ static void cliRecvCb(uv_stream_t* cli, ssize_t nread, const uv_buf_t* buf);
static void cliConnCb(uv_connect_t* req, int status); static void cliConnCb(uv_connect_t* req, int status);
static void cliAsyncCb(uv_async_t* handle); static void cliAsyncCb(uv_async_t* handle);
SCliBatch* cliGetHeadFromList(SCliBatchList* pList);
static void destroyCliConnQTable(SCliConn* conn); static void destroyCliConnQTable(SCliConn* conn);
static void cliHandleException(SCliConn* conn); static void cliHandleException(SCliConn* conn);
@ -1299,8 +1294,8 @@ static void cliHandleException(SCliConn* conn) {
if (conn->registered) { if (conn->registered) {
int8_t ref = transGetRefCount(conn); int8_t ref = transGetRefCount(conn);
if (ref == 0 && !uv_is_closing((uv_handle_t*)conn->stream)) { if (ref == 0 && !uv_is_closing((uv_handle_t*)conn->stream)) {
// tTrace("%s conn %p fd %d,%d,%d,%p uv_closed", CONN_GET_INST_LABEL(conn), conn, conn->stream->u.fd, // tTrace("%s conn %p fd %d,%d,%d,%p uv_closed", CONN_GET_INST_LABEL(conn), conn, conn->stream->u.fd,
// conn->stream->io_watcher.fd, conn->stream->accepted_fd, conn->stream->queued_fds); // conn->stream->io_watcher.fd, conn->stream->accepted_fd, conn->stream->queued_fds);
uv_close((uv_handle_t*)conn->stream, cliDestroy); uv_close((uv_handle_t*)conn->stream, cliDestroy);
} }
} }
@ -2124,144 +2119,7 @@ static void cliDoReq(queue* wq, SCliThrd* pThrd) {
tTrace("cli process batch size:%d", count); tTrace("cli process batch size:%d", count);
} }
} }
SCliBatch* cliGetHeadFromList(SCliBatchList* pList) {
if (QUEUE_IS_EMPTY(&pList->wq) || pList->connCnt > pList->connMax || pList->sending > pList->connMax) {
return NULL;
}
queue* hr = QUEUE_HEAD(&pList->wq);
QUEUE_REMOVE(hr);
pList->sending += 1;
pList->len -= 1;
SCliBatch* batch = QUEUE_DATA(hr, SCliBatch, listq);
return batch;
}
static int32_t createBatch(SCliBatch** ppBatch, SCliBatchList* pList, SCliReq* pReq);
static int32_t createBatchList(SCliBatchList** ppBatchList, char* key, char* ip, uint32_t port);
static void destroyBatchList(SCliBatchList* pList);
static void cliBuildBatch(SCliReq* pReq, queue* h, SCliThrd* pThrd) {
int32_t code = 0;
STrans* pInst = pThrd->pInst;
SReqCtx* pCtx = pReq->ctx;
char* ip = EPSET_GET_INUSE_IP(pCtx->epSet);
uint32_t port = EPSET_GET_INUSE_PORT(pCtx->epSet);
char key[TSDB_FQDN_LEN + 64] = {0};
CONN_CONSTRUCT_HASH_KEY(key, ip, port);
size_t klen = strlen(key);
SCliBatchList** ppBatchList = taosHashGet(pThrd->batchCache, key, klen);
if (ppBatchList == NULL || *ppBatchList == NULL) {
SCliBatchList* pBatchList = NULL;
code = createBatchList(&pBatchList, key, ip, port);
if (code != 0) {
destroyReq(pReq);
return;
}
pBatchList->batchLenLimit = pInst->shareConnLimit;
SCliBatch* pBatch = NULL;
code = createBatch(&pBatch, pBatchList, pReq);
if (code != 0) {
destroyBatchList(pBatchList);
destroyReq(pReq);
return;
}
code = taosHashPut(pThrd->batchCache, key, klen, &pBatchList, sizeof(void*));
if (code != 0) {
destroyBatchList(pBatchList);
}
} else {
if (QUEUE_IS_EMPTY(&(*ppBatchList)->wq)) {
SCliBatch* pBatch = NULL;
code = createBatch(&pBatch, *ppBatchList, pReq);
if (code != 0) {
destroyReq(pReq);
cliDestroyBatch(pBatch);
}
} else {
queue* hdr = QUEUE_TAIL(&((*ppBatchList)->wq));
SCliBatch* pBatch = QUEUE_DATA(hdr, SCliBatch, listq);
if ((pBatch->shareConnLimit + pReq->msg.contLen) < (*ppBatchList)->batchLenLimit) {
QUEUE_PUSH(&pBatch->wq, h);
pBatch->shareConnLimit += pReq->msg.contLen;
pBatch->wLen += 1;
} else {
SCliBatch* tBatch = NULL;
code = createBatch(&tBatch, *ppBatchList, pReq);
if (code != 0) {
destroyReq(pReq);
}
}
}
}
return;
}
static int32_t createBatchList(SCliBatchList** ppBatchList, char* key, char* ip, uint32_t port) {
SCliBatchList* pBatchList = taosMemoryCalloc(1, sizeof(SCliBatchList));
if (pBatchList == NULL) {
tError("failed to create batch list since %s", tstrerror(TSDB_CODE_OUT_OF_MEMORY));
return terrno;
}
QUEUE_INIT(&pBatchList->wq);
pBatchList->port = port;
pBatchList->connMax = 1;
pBatchList->connCnt = 0;
pBatchList->batchLenLimit = 0;
pBatchList->len += 1;
pBatchList->ip = taosStrdup(ip);
pBatchList->dst = taosStrdup(key);
if (pBatchList->ip == NULL || pBatchList->dst == NULL) {
taosMemoryFree(pBatchList->ip);
taosMemoryFree(pBatchList->dst);
taosMemoryFree(pBatchList);
tError("failed to create batch list since %s", tstrerror(TSDB_CODE_OUT_OF_MEMORY));
return terrno;
}
*ppBatchList = pBatchList;
return 0;
}
static void destroyBatchList(SCliBatchList* pList) {
if (pList == NULL) {
return;
}
while (!QUEUE_IS_EMPTY(&pList->wq)) {
queue* h = QUEUE_HEAD(&pList->wq);
QUEUE_REMOVE(h);
SCliBatch* pBatch = QUEUE_DATA(h, SCliBatch, listq);
cliDestroyBatch(pBatch);
}
taosMemoryFree(pList->ip);
taosMemoryFree(pList->dst);
taosMemoryFree(pList);
}
static int32_t createBatch(SCliBatch** ppBatch, SCliBatchList* pList, SCliReq* pReq) {
SCliBatch* pBatch = taosMemoryCalloc(1, sizeof(SCliBatch));
if (pBatch == NULL) {
tError("failed to create batch since %s", tstrerror(TSDB_CODE_OUT_OF_MEMORY));
return terrno;
}
QUEUE_INIT(&pBatch->wq);
QUEUE_INIT(&pBatch->listq);
QUEUE_PUSH(&pBatch->wq, &pReq->q);
pBatch->wLen += 1;
pBatch->shareConnLimit = pReq->msg.contLen;
pBatch->pList = pList;
QUEUE_PUSH(&pList->wq, &pBatch->listq);
pList->len += 1;
*ppBatch = pBatch;
return 0;
}
static void cliDoBatchReq(queue* wq, SCliThrd* pThrd) { return cliDoReq(wq, pThrd); } static void cliDoBatchReq(queue* wq, SCliThrd* pThrd) { return cliDoReq(wq, pThrd); }
static void cliAsyncCb(uv_async_t* handle) { static void cliAsyncCb(uv_async_t* handle) {
@ -2494,10 +2352,6 @@ static int32_t createThrdObj(void* trans, SCliThrd** ppThrd) {
TAOS_CHECK_GOTO(code, NULL, _end); TAOS_CHECK_GOTO(code, NULL, _end);
} }
if ((code = transDQCreate(pThrd->loop, &pThrd->waitConnQueue)) != 0) {
TAOS_CHECK_GOTO(code, NULL, _end);
}
pThrd->destroyAhandleFp = pInst->destroyFp; pThrd->destroyAhandleFp = pInst->destroyFp;
pThrd->fqdn2ipCache = taosHashInit(1024, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); pThrd->fqdn2ipCache = taosHashInit(1024, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
@ -2505,11 +2359,6 @@ static int32_t createThrdObj(void* trans, SCliThrd** ppThrd) {
TAOS_CHECK_GOTO(terrno, NULL, _end); TAOS_CHECK_GOTO(terrno, NULL, _end);
} }
pThrd->batchCache = taosHashInit(1024, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
if (pThrd->batchCache == NULL) {
TAOS_CHECK_GOTO(terrno, NULL, _end);
}
pThrd->connHeapCache = taosHashInit(1024, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK); pThrd->connHeapCache = taosHashInit(1024, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
if (pThrd->connHeapCache == NULL) { if (pThrd->connHeapCache == NULL) {
TAOS_CHECK_GOTO(TSDB_CODE_OUT_OF_MEMORY, NULL, _end); TAOS_CHECK_GOTO(TSDB_CODE_OUT_OF_MEMORY, NULL, _end);
@ -2553,10 +2402,7 @@ _end:
transDQDestroy(pThrd->delayQueue, NULL); transDQDestroy(pThrd->delayQueue, NULL);
transDQDestroy(pThrd->timeoutQueue, NULL); transDQDestroy(pThrd->timeoutQueue, NULL);
transDQDestroy(pThrd->waitConnQueue, NULL);
taosHashCleanup(pThrd->fqdn2ipCache); taosHashCleanup(pThrd->fqdn2ipCache);
taosHashCleanup(pThrd->failFastCache);
taosHashCleanup(pThrd->batchCache);
taosHashCleanup(pThrd->pIdConnTable); taosHashCleanup(pThrd->pIdConnTable);
taosArrayDestroy(pThrd->pQIdBuf); taosArrayDestroy(pThrd->pQIdBuf);
@ -2580,7 +2426,6 @@ static void destroyThrdObj(SCliThrd* pThrd) {
transDQDestroy(pThrd->delayQueue, destroyReqAndAhanlde); transDQDestroy(pThrd->delayQueue, destroyReqAndAhanlde);
transDQDestroy(pThrd->timeoutQueue, NULL); transDQDestroy(pThrd->timeoutQueue, NULL);
transDQDestroy(pThrd->waitConnQueue, NULL);
tDebug("thread destroy %" PRId64, pThrd->pid); tDebug("thread destroy %" PRId64, pThrd->pid);
for (int i = 0; i < taosArrayGetSize(pThrd->timerList); i++) { for (int i = 0; i < taosArrayGetSize(pThrd->timerList); i++) {
@ -2592,24 +2437,6 @@ static void destroyThrdObj(SCliThrd* pThrd) {
taosMemoryFree(pThrd->loop); taosMemoryFree(pThrd->loop);
taosHashCleanup(pThrd->fqdn2ipCache); taosHashCleanup(pThrd->fqdn2ipCache);
void** pIter = taosHashIterate(pThrd->batchCache, NULL);
while (pIter != NULL) {
SCliBatchList* pBatchList = (SCliBatchList*)(*pIter);
while (!QUEUE_IS_EMPTY(&pBatchList->wq)) {
queue* h = QUEUE_HEAD(&pBatchList->wq);
QUEUE_REMOVE(h);
SCliBatch* pBatch = QUEUE_DATA(h, SCliBatch, listq);
cliDestroyBatch(pBatch);
}
taosMemoryFree(pBatchList->ip);
taosMemoryFree(pBatchList->dst);
taosMemoryFree(pBatchList);
pIter = (void**)taosHashIterate(pThrd->batchCache, pIter);
}
taosHashCleanup(pThrd->batchCache);
void* pIter2 = taosHashIterate(pThrd->connHeapCache, NULL); void* pIter2 = taosHashIterate(pThrd->connHeapCache, NULL);
while (pIter2 != NULL) { while (pIter2 != NULL) {
SHeap* heap = (SHeap*)(pIter2); SHeap* heap = (SHeap*)(pIter2);


@@ -615,6 +615,21 @@ TEST_F(TransEnv, http) {
 #endif
 }
+#if 1
+  STelemAddrMgmt mgt;
+  taosTelemetryMgtInit(&mgt, "telemetry.taosdata.com");
+  int32_t code = taosSendTelemReport(&mgt,tsTelemUri, tsTelemPort, "test", strlen("test"),HTTP_FLAT);
+  printf("old addr:%s new addr:%s\n",mgt.defaultAddr, mgt.cachedAddr);
+  taosMsleep(2000);
+  code = taosSendTelemReport(&mgt,tsTelemUri, tsTelemPort, pCont, len,HTTP_FLAT);
+  for (int32_t i = 0; i < 1; i++) {
+    code = taosSendTelemReport(&mgt,tsTelemUri, tsTelemPort, pCont, len,HTTP_FLAT);
+    printf("old addr:%s new addr:%s\n",mgt.defaultAddr, mgt.cachedAddr);
+    taosMsleep(2000);
+  }
+  taosTelemetryDestroy(&mgt);
+#endif
   {
     STelemAddrMgmt mgt;
     taosTelemetryMgtInit(&mgt, "error");


@@ -208,28 +208,22 @@ static int32_t walReadSeekVerImpl(SWalReader *pReader, int64_t ver) {
   SWalFileInfo tmpInfo;
   tmpInfo.firstVer = ver;
   TAOS_UNUSED(taosThreadRwlockRdlock(&pWal->mutex));
-  SWalFileInfo *gloablPRet = taosArraySearch(pWal->fileInfoSet, &tmpInfo, compareWalFileInfo, TD_LE);
-  if (gloablPRet == NULL) {
+  SWalFileInfo *globalRet = taosArraySearch(pWal->fileInfoSet, &tmpInfo, compareWalFileInfo, TD_LE);
+  if (globalRet == NULL) {
     wError("failed to find WAL log file with ver:%" PRId64, ver);
     TAOS_UNUSED(taosThreadRwlockUnlock(&pWal->mutex));
     TAOS_RETURN(TSDB_CODE_WAL_INVALID_VER);
   }
-  SWalFileInfo *pRet = taosMemoryMalloc(sizeof(SWalFileInfo));
-  if (pRet == NULL) {
-    wError("failed to allocate memory for localRet");
-    TAOS_UNUSED(taosThreadRwlockUnlock(&pWal->mutex));
-    TAOS_RETURN(terrno);
-  }
-  TAOS_MEMCPY(pRet, gloablPRet, sizeof(SWalFileInfo));
+  SWalFileInfo ret;
+  TAOS_MEMCPY(&ret, globalRet, sizeof(SWalFileInfo));
   TAOS_UNUSED(taosThreadRwlockUnlock(&pWal->mutex));
-  if (pReader->curFileFirstVer != pRet->firstVer) {
+  if (pReader->curFileFirstVer != ret.firstVer) {
     // error code was set inner
-    TAOS_CHECK_RETURN_WITH_FREE(walReadChangeFile(pReader, pRet->firstVer), pRet);
+    TAOS_CHECK_RETURN(walReadChangeFile(pReader, ret.firstVer));
   }
   // error code was set inner
-  TAOS_CHECK_RETURN_WITH_FREE(walReadSeekFilePos(pReader, pRet->firstVer, ver), pRet);
-  taosMemoryFree(pRet);
+  TAOS_CHECK_RETURN(walReadSeekFilePos(pReader, ret.firstVer, ver));
   wDebug("vgId:%d, wal version reset from %" PRId64 " to %" PRId64, pReader->pWal->cfg.vgId, pReader->curVersion, ver);
   pReader->curVersion = ver;
@@ -437,15 +431,15 @@ int32_t walReadVer(SWalReader *pReader, int64_t ver) {
       seeked = true;
       continue;
     } else {
-      wError("vgId:%d, failed to read WAL record head, index:%" PRId64 ", from log file since %s",
-             pReader->pWal->cfg.vgId, ver, terrstr());
-      TAOS_UNUSED(taosThreadMutexUnlock(&pReader->mutex));
       if (contLen < 0) {
-        TAOS_RETURN(terrno);
+        code = terrno;
       } else {
-        TAOS_RETURN(TSDB_CODE_WAL_FILE_CORRUPTED);
+        code = TSDB_CODE_WAL_FILE_CORRUPTED;
       }
+      wError("vgId:%d, failed to read WAL record head, index:%" PRId64 ", from log file since %s",
+             pReader->pWal->cfg.vgId, ver, tstrerror(code));
+      TAOS_UNUSED(taosThreadMutexUnlock(&pReader->mutex));
+      TAOS_RETURN(code);
     }
   }
@@ -478,15 +472,15 @@ int32_t walReadVer(SWalReader *pReader, int64_t ver) {
   }
   if ((contLen = taosReadFile(pReader->pLogFile, pReader->pHead->head.body, cryptedBodyLen)) != cryptedBodyLen) {
-    wError("vgId:%d, failed to read WAL record body, index:%" PRId64 ", from log file since %s",
-           pReader->pWal->cfg.vgId, ver, terrstr());
-    TAOS_UNUSED(taosThreadMutexUnlock(&pReader->mutex));
     if (contLen < 0) {
-      TAOS_RETURN(terrno);
+      code = terrno;
     } else {
-      TAOS_RETURN(TSDB_CODE_WAL_FILE_CORRUPTED);
+      code = TSDB_CODE_WAL_FILE_CORRUPTED;
     }
+    wError("vgId:%d, failed to read WAL record body, index:%" PRId64 ", from log file since %s",
+           pReader->pWal->cfg.vgId, ver, tstrerror(code));
+    TAOS_UNUSED(taosThreadMutexUnlock(&pReader->mutex));
+    TAOS_RETURN(code);
   }
   if (pReader->pHead->head.version != ver) {

source/util/inc/tlogInt.h

@@ -0,0 +1,32 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef _TD_UTIL_LOG_INT_H_
#define _TD_UTIL_LOG_INT_H_
#ifdef __cplusplus
extern "C" {
#endif
#include "tlog.h"
void taosOpenNewSlowLogFile();
void taosLogObjSetToday(int64_t ts);
#ifdef __cplusplus
}
#endif
#endif /*_TD_UTIL_LOG_INT_H_*/


@@ -20,7 +20,7 @@
 #ifdef USE_ANALYTICS
 #include <curl/curl.h>
-#define ANAL_ALGO_SPLIT ","
+#define ANALYTICS_ALOG_SPLIT_CHAR ","
 typedef struct {
   int64_t ver;
@@ -136,7 +136,7 @@ bool taosAnalGetOptStr(const char *option, const char *optName, char *optValue,
     return false;
   }
-  pEnd = strstr(pStart, ANAL_ALGO_SPLIT);
+  pEnd = strstr(pStart, ANALYTICS_ALOG_SPLIT_CHAR);
   if (optMaxLen > 0) {
     if (pEnd > pStart) {
       int32_t len = (int32_t)(pEnd - pStart);
@@ -168,7 +168,7 @@ bool taosAnalGetOptInt(const char *option, const char *optName, int64_t *optValu
   int32_t bufLen = tsnprintf(buf, sizeof(buf), "%s=", optName);
   char *pos1 = strstr(option, buf);
-  char *pos2 = strstr(option, ANAL_ALGO_SPLIT);
+  char *pos2 = strstr(option, ANALYTICS_ALOG_SPLIT_CHAR);
   if (pos1 != NULL) {
     *optValue = taosStr2Int64(pos1 + bufLen, NULL, 10);
     return true;


@@ -483,7 +483,7 @@ static int32_t taosOpenNewLogFile() {
   return 0;
 }
-static void taosOpenNewSlowLogFile() {
+void taosOpenNewSlowLogFile() {
   (void)taosThreadMutexLock(&tsLogObj.logMutex);
   int64_t delta = taosGetTimestampSec() - tsLogObj.timestampToday;
   if (delta >= 0 && delta < 86400) {
@@ -539,6 +539,8 @@ void taosResetLog() {
   }
 }
+void taosLogObjSetToday(int64_t ts) { tsLogObj.timestampToday = ts; }
 static bool taosCheckFileIsOpen(char *logFileName) {
   TdFilePtr pFile = taosOpenFile(logFileName, TD_FILE_WRITE);
   if (pFile == NULL) {
@@ -619,6 +621,7 @@ static void processLogFileName(const char *logName, int32_t maxFileNum) {
 }
 static int32_t taosInitNormalLog(const char *logName, int32_t maxFileNum) {
+  int32_t code = 0, lino = 0;
 #ifdef WINDOWS_STASH
   /*
    * always set maxFileNum to 1
@@ -653,39 +656,28 @@ static int32_t taosInitNormalLog(const char *logName, int32_t maxFileNum) {
   // only an estimate for number of lines
   int64_t filesize = 0;
-  if (taosFStatFile(tsLogObj.logHandle->pFile, &filesize, NULL) != 0) {
-    (void)printf("\nfailed to fstat log file:%s, reason:%s\n", name, strerror(errno));
-    taosUnLockLogFile(tsLogObj.logHandle->pFile);
-    return terrno;
-  }
+  TAOS_CHECK_EXIT(taosFStatFile(tsLogObj.logHandle->pFile, &filesize, NULL));
   tsLogObj.lines = (int32_t)(filesize / 60);
   if (taosLSeekFile(tsLogObj.logHandle->pFile, 0, SEEK_END) < 0) {
-    TAOS_UNUSED(printf("failed to seek to the end of log file:%s, reason:%s\n", name, tstrerror(terrno)));
-    taosUnLockLogFile(tsLogObj.logHandle->pFile);
-    return terrno;
+    TAOS_CHECK_EXIT(terrno);
   }
-  (void)snprintf(name, sizeof(name), "==================================================\n");
+  (void)snprintf(name, sizeof(name),
+                 "==================================================\n"
+                 " new log file\n"
+                 "==================================================\n");
   if (taosWriteFile(tsLogObj.logHandle->pFile, name, (uint32_t)strlen(name)) <= 0) {
-    TAOS_UNUSED(printf("failed to write to log file:%s, reason:%s\n", name, tstrerror(terrno)));
-    taosUnLockLogFile(tsLogObj.logHandle->pFile);
-    return terrno;
-  }
-  (void)snprintf(name, sizeof(name), " new log file \n");
-  if (taosWriteFile(tsLogObj.logHandle->pFile, name, (uint32_t)strlen(name)) <= 0) {
-    TAOS_UNUSED(printf("failed to write to log file:%s, reason:%s\n", name, tstrerror(terrno)));
-    taosUnLockLogFile(tsLogObj.logHandle->pFile);
-    return terrno;
-  }
-  (void)snprintf(name, sizeof(name), "==================================================\n");
-  if (taosWriteFile(tsLogObj.logHandle->pFile, name, (uint32_t)strlen(name)) <= 0) {
-    TAOS_UNUSED(printf("failed to write to log file:%s, reason:%s\n", name, tstrerror(terrno)));
-    taosUnLockLogFile(tsLogObj.logHandle->pFile);
-    return terrno;
+    TAOS_CHECK_EXIT(terrno);
   }
-  return 0;
+_exit:
+  if (code != 0) {
+    taosUnLockLogFile(tsLogObj.logHandle->pFile);
+    TAOS_UNUSED(printf("failed to init normal log file:%s at line %d, reason:%s\n", name, lino, tstrerror(code)));
+  }
+  return code;
 }
 static void taosUpdateLogNums(ELogLevel level) {
static void taosUpdateLogNums(ELogLevel level) { static void taosUpdateLogNums(ELogLevel level) {


@@ -137,6 +137,10 @@ add_test(
   NAME logTest
   COMMAND logTest
 )
+target_include_directories(
+  logTest
+  PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/../inc"
+)
 IF(COMPILER_SUPPORT_AVX2)
   MESSAGE(STATUS "AVX2 instructions is ACTIVATED")


@@ -2,7 +2,9 @@
 #include <stdlib.h>
 #include <time.h>
 #include <random>
+#include <tdef.h>
 #include <tlog.h>
+#include <tlogInt.h>
 #include <iostream>
 using namespace std;
@@ -44,3 +46,96 @@ TEST(log, check_log_refactor) {
   }
   taosCloseLog();
 }
extern char *tsLogOutput;
TEST(log, misc) {
// taosInitLog
const char *path = TD_TMP_DIR_PATH "td";
taosRemoveDir(path);
taosMkDir(path);
tstrncpy(tsLogDir, path, PATH_MAX);
EXPECT_EQ(taosInitLog("taoslog", 1, true), 0);
taosOpenNewSlowLogFile();
taosLogObjSetToday(INT64_MIN);
taosPrintSlowLog("slow log test");
// test taosInitLogOutput
const char *pLogName = NULL;
tsLogOutput = (char *)taosMemCalloc(1, TSDB_FILENAME_LEN);
EXPECT_EQ(taosInitLogOutput(&pLogName), TSDB_CODE_INVALID_CFG);
tstrncpy(tsLogOutput, "stdout", TSDB_FILENAME_LEN);
EXPECT_EQ(taosInitLogOutput(&pLogName), 0);
tstrncpy(tsLogOutput, "stderr", TSDB_FILENAME_LEN);
EXPECT_EQ(taosInitLogOutput(&pLogName), 0);
tstrncpy(tsLogOutput, "/dev/null", TSDB_FILENAME_LEN);
EXPECT_EQ(taosInitLogOutput(&pLogName), 0);
tsLogOutput[0] = '#';
EXPECT_EQ(taosInitLogOutput(&pLogName), TSDB_CODE_INVALID_CFG);
tstrncpy(tsLogOutput, "/", TSDB_FILENAME_LEN);
EXPECT_EQ(taosInitLogOutput(&pLogName), 0);
tstrncpy(tsLogOutput, "\\", TSDB_FILENAME_LEN);
EXPECT_EQ(taosInitLogOutput(&pLogName), 0);
tstrncpy(tsLogOutput, "testLogOutput", TSDB_FILENAME_LEN);
EXPECT_EQ(taosInitLogOutput(&pLogName), 0);
tstrncpy(tsLogOutput, "testLogOutputDir/testLogOutput", TSDB_FILENAME_LEN);
EXPECT_EQ(taosInitLogOutput(&pLogName), 0);
tstrncpy(tsLogOutput, ".", TSDB_FILENAME_LEN);
EXPECT_EQ(taosInitLogOutput(&pLogName), TSDB_CODE_INVALID_CFG);
tstrncpy(tsLogOutput, "/..", TSDB_FILENAME_LEN);
EXPECT_EQ(taosInitLogOutput(&pLogName), TSDB_CODE_INVALID_CFG);
tsLogOutput[0] = 0;
// test taosAssertDebug
tsAssert = false;
taosAssertDebug(true, __FILE__, __LINE__, 0, "test_assert_true_without_core");
taosAssertDebug(false, __FILE__, __LINE__, 0, "test_assert_false_with_core");
tsAssert = true;
// test taosLogCrashInfo, taosReadCrashInfo and taosReleaseCrashLogFile
char nodeType[16] = "nodeType";
char *pCrashMsg = (char *)taosMemoryCalloc(1, 16);
EXPECT_NE(pCrashMsg, nullptr);
tstrncpy(pCrashMsg, "crashMsg", 16);
#if !defined(_TD_DARWIN_64) && !defined(WINDOWS)
pid_t pid = taosGetPId();
EXPECT_EQ(pid > 0, true);
siginfo_t sigInfo = {0};
sigInfo.si_pid = pid;
taosLogCrashInfo(nodeType, pCrashMsg, strlen(pCrashMsg), 0, &sigInfo);
#else
taosLogCrashInfo(nodeType, pCrashMsg, strlen(pCrashMsg), 0, nullptr);
#endif
char crashInfo[PATH_MAX] = {0};
snprintf(crashInfo, sizeof(crashInfo), "%s%s.%sCrashLog", tsLogDir, TD_DIRSEP, nodeType);
char *pReadMsg = NULL;
int64_t readMsgLen = 0;
TdFilePtr pFile = NULL;
taosReadCrashInfo(crashInfo, &pReadMsg, &readMsgLen, &pFile);
EXPECT_NE(pReadMsg, nullptr);
EXPECT_NE(pFile, nullptr);
EXPECT_EQ(strncasecmp(pReadMsg, "crashMsg", strlen("crashMsg")), 0);
EXPECT_EQ(taosCloseFile(&pFile), 0);
taosMemoryFreeClear(pReadMsg);
pFile = taosOpenFile(crashInfo, TD_FILE_WRITE);
EXPECT_NE(pFile, nullptr);
EXPECT_EQ(taosWriteFile(pFile, "00000", 1), 1);
EXPECT_EQ(taosCloseFile(&pFile), 0);
taosReadCrashInfo(crashInfo, &pReadMsg, &readMsgLen, &pFile);
EXPECT_EQ(pReadMsg, nullptr);
EXPECT_EQ(pFile, nullptr);
pFile = taosOpenFile(crashInfo, TD_FILE_WRITE);
EXPECT_NE(pFile, nullptr);
taosReleaseCrashLogFile(pFile, true);
// clean up
taosRemoveDir(path);
taosCloseLog();
}


@@ -310,11 +310,6 @@ class TDTestCase:
                 "value": 1024 * 1024 * 20 * 10,
                 "category": "global"
             },
-            {
-                "name": "resolveFQDNRetryTime",
-                "value": 500,
-                "category": "global"
-            },
             {
                 "name": "syncHeartbeatInterval",
                 "value": 3000,


@@ -47,7 +47,7 @@ class TDTestCase:
         tdSql.query('show create database scd2;')
         tdSql.checkRows(1)
         tdSql.checkData(0, 0, 'scd2')
-        tdSql.checkData(0, 1, "CREATE DATABASE `scd2` BUFFER 256 CACHESIZE 1 CACHEMODEL 'none' COMP 2 DURATION 10d WAL_FSYNC_PERIOD 3000 MAXROWS 4096 MINROWS 100 STT_TRIGGER 3 KEEP 3650d,3650d,3650d PAGES 256 PAGESIZE 4 PRECISION 'ms' REPLICA 1 WAL_LEVEL 1 VGROUPS 2 SINGLE_STABLE 0 TABLE_PREFIX 0 TABLE_SUFFIX 0 TSDB_PAGESIZE 4 WAL_RETENTION_PERIOD 3600 WAL_RETENTION_SIZE 0 KEEP_TIME_OFFSET 0h ENCRYPT_ALGORITHM 'none' S3_CHUNKPAGES 131072 S3_KEEPLOCAL 525600m S3_COMPACT 1 COMPACT_INTERVAL 1d COMPACT_TIME_RANGE -3650d,-10d COMPACT_TIME_OFFSET 0h")
+        tdSql.checkData(0, 1, "CREATE DATABASE `scd2` BUFFER 256 CACHESIZE 1 CACHEMODEL 'none' COMP 2 DURATION 10d WAL_FSYNC_PERIOD 3000 MAXROWS 4096 MINROWS 100 STT_TRIGGER 3 KEEP 3650d,3650d,3650d PAGES 256 PAGESIZE 4 PRECISION 'ms' REPLICA 1 WAL_LEVEL 1 VGROUPS 2 SINGLE_STABLE 0 TABLE_PREFIX 0 TABLE_SUFFIX 0 TSDB_PAGESIZE 4 WAL_RETENTION_PERIOD 3600 WAL_RETENTION_SIZE 0 KEEP_TIME_OFFSET 0h ENCRYPT_ALGORITHM 'none' S3_CHUNKPAGES 131072 S3_KEEPLOCAL 525600m S3_COMPACT 1 COMPACT_INTERVAL 1d COMPACT_TIME_RANGE 0d,0d COMPACT_TIME_OFFSET 0h")
         tdSql.query('show create database scd4')
         tdSql.checkRows(1)
@@ -65,7 +65,7 @@ class TDTestCase:
         tdSql.query('show create database scd2;')
         tdSql.checkRows(1)
         tdSql.checkData(0, 0, 'scd2')
-        tdSql.checkData(0, 1, "CREATE DATABASE `scd2` BUFFER 256 CACHESIZE 1 CACHEMODEL 'none' COMP 2 DURATION 10d WAL_FSYNC_PERIOD 3000 MAXROWS 4096 MINROWS 100 STT_TRIGGER 3 KEEP 3650d,3650d,3650d PAGES 256 PAGESIZE 4 PRECISION 'ms' REPLICA 1 WAL_LEVEL 1 VGROUPS 2 SINGLE_STABLE 0 TABLE_PREFIX 0 TABLE_SUFFIX 0 TSDB_PAGESIZE 4 WAL_RETENTION_PERIOD 3600 WAL_RETENTION_SIZE 0 KEEP_TIME_OFFSET 0h ENCRYPT_ALGORITHM 'none' S3_CHUNKPAGES 131072 S3_KEEPLOCAL 525600m S3_COMPACT 1 COMPACT_INTERVAL 1d COMPACT_TIME_RANGE -3650d,-10d COMPACT_TIME_OFFSET 0h")
+        tdSql.checkData(0, 1, "CREATE DATABASE `scd2` BUFFER 256 CACHESIZE 1 CACHEMODEL 'none' COMP 2 DURATION 10d WAL_FSYNC_PERIOD 3000 MAXROWS 4096 MINROWS 100 STT_TRIGGER 3 KEEP 3650d,3650d,3650d PAGES 256 PAGESIZE 4 PRECISION 'ms' REPLICA 1 WAL_LEVEL 1 VGROUPS 2 SINGLE_STABLE 0 TABLE_PREFIX 0 TABLE_SUFFIX 0 TSDB_PAGESIZE 4 WAL_RETENTION_PERIOD 3600 WAL_RETENTION_SIZE 0 KEEP_TIME_OFFSET 0h ENCRYPT_ALGORITHM 'none' S3_CHUNKPAGES 131072 S3_KEEPLOCAL 525600m S3_COMPACT 1 COMPACT_INTERVAL 1d COMPACT_TIME_RANGE 0d,0d COMPACT_TIME_OFFSET 0h")
         tdSql.query('show create database scd4')
         tdSql.checkRows(1)
         tdSql.checkData(0, 0, 'scd4')


@@ -36,7 +36,7 @@
     "insert_mode": "taosc",
     "line_protocol": "line",
     "childtable_limit": -10,
-    "childtable_offset": 10,
+    "childtable_offset": 0,
     "insert_rows": 20,
     "insert_interval": 0,
     "interlace_rows": 0,


@@ -36,7 +36,7 @@
     "insert_mode": "taosc",
     "line_protocol": "line",
     "childtable_limit": -10,
-    "childtable_offset": 10,
+    "childtable_offset": 0,
     "insert_rows": 20,
     "insert_interval": 0,
     "interlace_rows": 0,


@@ -61,6 +61,11 @@
 check_transactions || exit 1
 reset_cache || exit 1
 go run ./stmt/ws/main.go
+taos -s "drop database if exists power"
+check_transactions || exit 1
+reset_cache || exit 1
+go run ./stmt2/native/main.go
 taos -s "drop database if exists power"
 check_transactions || exit 1
 reset_cache || exit 1


@@ -411,6 +411,7 @@
 ,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/persisit_config.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/qmemCtrl.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/compact_vgroups.py
+,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/compact_auto.py
 ,,n,system-test,python3 ./test.py -f 0-others/dumpsdb.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/compact.py -N 3
@@ -481,6 +482,7 @@
 ,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/show.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/show_tag_index.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/information_schema.py
+,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/ins_filesets.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/grant.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/abs.py
 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/abs.py -R
@@ -1382,6 +1384,8 @@
 ,,y,script,./test.sh -f tsim/stream/basic2.sim
 ,,y,script,./test.sh -f tsim/stream/basic3.sim
 ,,y,script,./test.sh -f tsim/stream/basic4.sim
+,,y,script,./test.sh -f tsim/stream/basic5.sim
+,,y,script,./test.sh -f tsim/stream/tag.sim
 ,,y,script,./test.sh -f tsim/stream/snodeCheck.sim
 ,,y,script,./test.sh -f tsim/stream/concurrentcheckpt.sim
 ,,y,script,./test.sh -f tsim/stream/checkpointInterval0.sim
