other: merge 3.0

commit 99148d6719
@ -44,7 +44,7 @@ For more details on features, please read through the entire documentation.

## Competitive Advantages

By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other [time series databases](https://tdengine.com/tsdb/), with the following advantages.

- **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high-cardinality issue, supporting billions of data collection points while outperforming other time-series databases in data ingestion, querying, and data compression.
@ -123,11 +123,10 @@ As a high-performance, scalable and SQL supported time-series database, TDengine

## Comparison with other databases

- [TDengine vs. InfluxDB](https://tdengine.com/tsdb-comparison-influxdb-vs-tdengine/)
- [TDengine vs. TimescaleDB](https://tdengine.com/tsdb-comparison-timescaledb-vs-tdengine/)
- [TDengine vs. OpenTSDB](https://tdengine.com/performance-tdengine-vs-opentsdb/)
- [TDengine vs. Cassandra](https://tdengine.com/performance-tdengine-vs-cassandra/)

## More readings

- [Introduction to Time-Series Database](https://tdengine.com/tsdb/)
@ -6,7 +6,7 @@ description: This document describes how to install TDengine in a Docker container

This document describes how to install TDengine in a Docker container and perform queries and inserts.

- The easiest way to explore TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
- To get started with TDengine in a non-containerized environment, see [Quick Install from Package](../../get-started/package).
- If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
|
@ -10,7 +10,7 @@ import PkgListV3 from "/components/PkgListV3";

This document describes how to install TDengine on Linux/Windows/macOS and perform queries and inserts.

- The easiest way to explore TDengine is through [TDengine Cloud](https://cloud.tdengine.com).
- To get started with TDengine on Docker, see [Quick Install on Docker](../../get-started/docker).
- If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine).
@ -288,6 +288,6 @@ Prior to establishing connection, please make sure TDengine is already running

</Tabs>

:::tip
If the connection fails, in most cases it is caused by an incorrect FQDN or firewall configuration. Please refer to the section "Unable to establish connection" in the [FAQ](../../train-faq/faq).

:::
@ -23,7 +23,7 @@ By subscribing to a topic, a consumer can obtain the latest data in that topic in real time.

To implement these features, TDengine indexes its write-ahead log (WAL) file for fast random access and provides configurable methods for replacing and retaining this file. You can define a retention period and size for this file. For more information, see the CREATE DATABASE statement. In this way, the WAL file is transformed into a persistent storage engine that remembers the order in which events occur. However, note that configuring an overly long retention period for your WAL files makes database compression inefficient. TDengine then uses the WAL file instead of the time-series database as its storage engine for queries in the form of topics. TDengine reads the data from the WAL file; uses a unified query engine instance to perform filtering, transformations, and other operations; and finally pushes the data to consumers.

Tips: Data subscription consumes data from the WAL. If some WAL files are deleted according to the WAL retention policy, the deleted data can no longer be consumed. You therefore need to set reasonable values for the `WAL_RETENTION_PERIOD` or `WAL_RETENTION_SIZE` parameters when creating the database, and make sure your application consumes the data in a timely way, so that no data is lost. This behavior is similar to Kafka and other widely used message queue products. A sketch of the relevant database options follows.
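For example, a minimal sketch (the database name and values are illustrative; check the CREATE DATABASE reference for the exact unit of the retention size):

```sql
-- Keep WAL files for 4 days (345600 seconds) or until the configured
-- size limit is reached, whichever comes first.
CREATE DATABASE power WAL_RETENTION_PERIOD 345600 WAL_RETENTION_SIZE 1024;
```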
## Data Schema and API
@ -294,7 +294,6 @@ You configure the following parameters when creating a consumer:

| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none` (default) |
| `enable.auto.commit` | boolean | Commit automatically; true: the application does not need to commit explicitly; false: the application must handle commits itself | Default value is true |
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds | Default value is 5000 |
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | Default value: false |

The method of specifying these parameters depends on the language used:
@ -312,7 +311,6 @@ tmq_conf_set(conf, "group.id", "cgrpName");

tmq_conf_set(conf, "td.connect.user", "root");
tmq_conf_set(conf, "td.connect.pass", "taosdata");
tmq_conf_set(conf, "auto.offset.reset", "earliest");
tmq_conf_set(conf, "msg.with.table.name", "true");
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
@ -368,7 +366,6 @@ conf := &tmq.ConfigMap{

    "td.connect.port":     "6030",
    "client.id":           "test_tmq_c",
    "enable.auto.commit":  "false",
    "msg.with.table.name": "true",
}
consumer, err := NewConsumer(conf)
@ -416,7 +413,6 @@ Python programs use the following parameters:

| `enable.auto.commit` | string | Commit automatically | Specify `true` or `false` |
| `auto.commit.interval.ms` | string | Interval for automatic commits, in milliseconds | Default: 5000 ms |
| `auto.offset.reset` | string | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none` (default) |
| `enable.heartbeat.background` | string | Backend heartbeat; if enabled, the consumer does not go offline even if it has not polled for a long time | Specify `true` or `false` |

</TabItem>
@ -33,7 +33,7 @@ column_definition:

SHOW STABLES [LIKE tb_name_wildcard];
```

The preceding SQL statement shows all supertables in the current TDengine database.

### View the CREATE Statement for a Supertable
@ -55,7 +55,7 @@ window_clause: {

| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]

interp_clause:
RANGE(ts_val, ts_val) EVERY(every_val) FILL(fill_mod_and_val)

partition_by_clause:
PARTITION BY expr [, expr] ...
@ -886,7 +886,7 @@ INTERP(expr)

- The output time range of `INTERP` is specified by the `RANGE(timestamp1, timestamp2)` parameter, with timestamp1 <= timestamp2. timestamp1 is the starting point of the output time range and must be specified. timestamp2 is the ending point of the output time range and must be specified.
- The number of rows in the result set of `INTERP` is determined by the parameter `EVERY(time_unit)`. Starting from timestamp1, one interpolation is performed for every time interval specified by the `time_unit` parameter. The `time_unit` parameter must be an unquoted integer with a time unit of a (millisecond), s (second), m (minute), h (hour), d (day), or w (week). For example, `EVERY(500a)` will interpolate every 500 milliseconds (see the sketch after this list).
- Interpolation is performed based on the `FILL` parameter. For more information about the FILL clause, see [FILL Clause](../distinguished/#fill-clause).
- `INTERP` can be applied to a supertable by interpolating the primary-key-sorted data of all its child tables. It can also be used with `partition by tbname` when applied to a supertable to generate interpolation on each single timeline.
- The pseudocolumn `_irowts` can be used along with `INTERP` to return the timestamps associated with interpolation points (supported in version 3.0.2.0 and later).
- The pseudocolumn `_isfilled` can be used along with `INTERP` to indicate whether the results are original records or data points generated by the interpolation algorithm (supported in version 3.0.3.0 and later).
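A minimal sketch, assuming a `meters` supertable with a `current` column (the table, column, and time range are illustrative):

```sql
-- Interpolate current every 500 ms over a one-minute range,
-- producing one timeline per child table.
SELECT tbname, _irowts, _isfilled, INTERP(current)
FROM meters
PARTITION BY tbname
RANGE('2023-01-01 00:00:00', '2023-01-01 00:01:00')
EVERY(500a)
FILL(LINEAR);
```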
@ -129,6 +129,14 @@ SHOW QNODES;

Shows information about qnodes in the system.

## SHOW QUERIES

```sql
SHOW QUERIES;
```

Shows the queries in progress in the system.

## SHOW SCORES

```sql
@ -36,6 +36,93 @@ REST connection supports all platforms that can run Java.

Please refer to the [version support list](/reference/connector#version-support).

## Recent update logs

| taos-jdbcdriver version | major changes |
| :---------------------: | :---------------------------------------------------------------------------------------------------------------------------------: |
| 3.2.1 | JDBC REST connection supports schemaless/prepareStatement over WebSocket |
| 3.2.0 | This version has been deprecated |
| 3.1.0 | JDBC REST connection supports subscription over WebSocket |
| 3.0.1 - 3.0.4 | Fix occasional incorrect parsing of resultSet data. 3.0.1 is compiled on JDK 11; other versions are recommended for JDK 8 environments |
| 3.0.0 | Support for TDengine 3.0 |
| 2.0.42 | Fix wasNull interface return value in WebSocket connection |
| 2.0.41 | Fix decode method of username and password in REST connection |
| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters |
| 2.0.38 | JDBC REST connections add bulk pull function |
| 2.0.37 | Support JSON tags |
| 2.0.36 | Support schemaless writing |

**Note**: adding `batchfetch` to the REST connection and setting it to true will enable the WebSocket connection.
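For example, a sketch of such a URL (host and credentials are illustrative):

```java
// A REST connection upgraded to WebSocket via batchfetch=true.
String url = "jdbc:TAOS-RS://localhost:6041/?user=root&password=taosdata&batchfetch=true";
Connection conn = DriverManager.getConnection(url);
```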
### Handling exceptions

After an error is reported, the error message and error code can be obtained through SQLException.

```java
try (Statement statement = connection.createStatement()) {
    // execute the query
    ResultSet resultSet = statement.executeQuery(sql);
    // print the result
    printResult(resultSet);
} catch (SQLException e) {
    System.out.println("ERROR Message: " + e.getMessage());
    System.out.println("ERROR Code: " + e.getErrorCode());
    e.printStackTrace();
}
```
There are four types of error codes that the JDBC connector can report:

- Error code of the JDBC driver itself (error code between 0x2301 and 0x2350)
- Error code of the native connection method (error code between 0x2351 and 0x2360)
- Error code of the consumer method (error code between 0x2371 and 0x2380)
- Error code of other TDengine function modules

For specific error codes, please refer to:
| Error Code | Description | Suggested Actions |
| ---------- | --------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| 0x2301 | connection already closed | The connection has been closed. Check the connection status, or recreate the connection to execute the relevant statements. |
| 0x2302 | this operation is NOT supported currently! | The current interface is not supported for this connection. Use another connection mode. |
| 0x2303 | invalid variables | The parameter is invalid. Check the interface specification and adjust the parameter type and size. |
| 0x2304 | statement is closed | The statement is closed. Check whether the statement was closed and then used again, and whether the connection is healthy. |
| 0x2305 | resultSet is closed | The result set has been released. Check whether it was released and then used again. |
| 0x2306 | Batch is empty! | Add parameters to the prepared statement before calling executeBatch. |
| 0x2307 | Can not issue data manipulation statements with executeQuery() | Use executeUpdate() for update operations, not executeQuery(). |
| 0x2308 | Can not issue SELECT via executeUpdate() | Use executeQuery() for query operations, not executeUpdate(). |
| 0x230d | parameter index out of range | The parameter is out of bounds. Check the valid range of the parameter. |
| 0x230e | connection already closed | The connection has been closed. Check whether the Connection was closed and then used again, and whether the connection is healthy. |
| 0x230f | unknown sql type in tdengine | Check the data types supported by TDengine. |
| 0x2310 | can't register JDBC-JNI driver | The native driver cannot be registered. Check whether the URL is correct. |
| 0x2312 | url is not set | Check whether the REST connection URL is correct. |
| 0x2314 | numeric value out of range | Check that the correct interface is used for the numeric types in the result set. |
| 0x2315 | unknown taos type in tdengine | Check that the correct TDengine data type is specified when converting TDengine data types to JDBC data types. |
| 0x2317 | | A wrong request type was used in the REST connection. |
| 0x2318 | | A data transmission exception occurred in the REST connection. Check the network status and try again. |
| 0x2319 | user is required | The user name is missing when creating the connection. |
| 0x231a | password is required | The password is missing when creating the connection. |
| 0x231c | httpEntity is null, sql: | An execution exception occurred in the REST connection. |
| 0x2350 | unknown error | Unknown exception; please report it to the developers on GitHub. |
| 0x2352 | Unsupported encoding | An unsupported character encoding set was specified in the native connection. |
| 0x2353 | internal error of database, please see taoslog for more details | An error occurred while executing a prepared statement on the native connection. Check the taos log to locate the fault. |
| 0x2354 | JNI connection is NULL | The native Connection was already closed when the command was executed. Check the connection to TDengine. |
| 0x2355 | JNI result set is NULL | The result set is abnormal. Check the connection status and try again. |
| 0x2356 | invalid num of fields | The meta information of the result set obtained by the native connection does not match. |
| 0x2357 | empty sql string | Fill in the correct SQL to execute. |
| 0x2359 | JNI alloc memory failed, please see taoslog for more details | Memory allocation for the native connection failed. Check the taos log to locate the problem. |
| 0x2371 | consumer properties must not be null! | A parameter was empty when the subscription was created. Fill in the correct parameters. |
| 0x2372 | configs contain empty key, failed to set consumer property | A parameter key contains an empty value. Enter the correct parameters. |
| 0x2373 | failed to set consumer property, | A parameter value contains an empty value. Enter the correct parameters. |
| 0x2375 | topic reference has been destroyed | The topic reference was released while the data subscription was being created. Check the connection to TDengine. |
| 0x2376 | failed to set consumer topic, topic name is empty | The subscription topic name was empty during data subscription creation. Check that the specified topic name is correct. |
| 0x2377 | consumer reference has been destroyed | The subscription data transfer channel has been closed. Check the connection to TDengine. |
| 0x2378 | consumer create error | Failed to create the data subscription. Check the taos log according to the error message to locate the fault. |
| - | can't create connection with server within | Increase the connection timeout with the httpConnectTimeout parameter, or check the connection to taosAdapter. |
| - | failed to complete the task within the specified time | Increase the execution timeout with the messageWaitTimeout parameter, or check the connection to taosAdapter. |

- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java)
<!-- - [TDengine_ERROR_CODE](../error-code) -->
## TDengine DataType vs. Java DataType

TDengine currently supports timestamp, number, character, and Boolean types; the corresponding type conversions with Java are as follows:
@ -82,7 +169,7 @@ Add the following dependency in the `pom.xml` file of your Maven project:

<dependency>
    <groupId>com.taosdata.jdbc</groupId>
    <artifactId>taos-jdbcdriver</artifactId>
    <version>3.2.1</version>
</dependency>
```
@ -97,7 +184,7 @@ cd taos-connector-jdbc

mvn clean install -Dmaven.test.skip=true
```

After you have compiled taos-jdbcdriver, the `taos-jdbcdriver-3.2.*-dist.jar` file is created in the target directory. The compiled JAR file is automatically stored in your local Maven repository.

</TabItem>
</Tabs>
@ -333,35 +420,6 @@ while(resultSet.next()){

> The query is consistent with operating a relational database. When using subscripts to get the contents of the returned fields, you have to start from 1. However, we recommend using the field names to get the values of the fields in the result set.
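For example, a sketch (the column name is illustrative):

```java
while (resultSet.next()) {
    Timestamp ts = resultSet.getTimestamp(1);       // by 1-based index
    float current = resultSet.getFloat("current");  // by field name (recommended)
    System.out.println(ts + ": " + current);
}
```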
### Writing data via parameter binding

TDengine has significantly improved the bind APIs to support data writing (INSERT) scenarios. Writing data in this way avoids the resource consumption of SQL syntax parsing, resulting in significant write performance improvements in many cases.
@ -369,9 +427,12 @@ TDengine has significantly improved the bind APIs to support data writing (INSERT) scenarios.

**Note:**

- JDBC REST connections do not currently support the bind interface
- The following sample code is based on taos-jdbcdriver-3.2.1
- The setString method should be called for binary type data, and the setNString method should be called for nchar type data
- Both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition
- Do not use `db.?` in prepareStatement when specifying the database with the table name; use `?` directly, then specify the database in setTableName, for example: `prepareStatement.setTableName("db.t1")`.

<Tabs defaultValue="native">
<TabItem value="native" label="native connection">

```java
public class ParameterBindingDemo {
@ -599,21 +660,7 @@ public class ParameterBindingDemo {

}
```

The methods to set VALUES columns:
@ -630,17 +677,203 @@ public void setString(int columnIndex, ArrayList<String> list, int size) throws SQLException

public void setNString(int columnIndex, ArrayList<String> list, int size) throws SQLException
```

</TabItem>
<TabItem value="ws" label="WebSocket connection">
```java
public class ParameterBindingDemo {
    private static final String host = "127.0.0.1";
    private static final Random random = new Random(System.currentTimeMillis());
    private static final int BINARY_COLUMN_SIZE = 30;
    private static final String[] schemaList = {
            "create table stable1(ts timestamp, f1 tinyint, f2 smallint, f3 int, f4 bigint) tags(t1 tinyint, t2 smallint, t3 int, t4 bigint)",
            "create table stable2(ts timestamp, f1 float, f2 double) tags(t1 float, t2 double)",
            "create table stable3(ts timestamp, f1 bool) tags(t1 bool)",
            "create table stable4(ts timestamp, f1 binary(" + BINARY_COLUMN_SIZE + ")) tags(t1 binary(" + BINARY_COLUMN_SIZE + "))",
            "create table stable5(ts timestamp, f1 nchar(" + BINARY_COLUMN_SIZE + ")) tags(t1 nchar(" + BINARY_COLUMN_SIZE + "))"
    };
    private static final int numOfSubTable = 10, numOfRow = 10;

    public static void main(String[] args) throws SQLException {

        String jdbcUrl = "jdbc:TAOS-RS://" + host + ":6041/?batchfetch=true";
        Connection conn = DriverManager.getConnection(jdbcUrl, "root", "taosdata");

        init(conn);

        bindInteger(conn);
        bindFloat(conn);
        bindBoolean(conn);
        bindBytes(conn);
        bindString(conn);

        conn.close();
    }

    private static void init(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("drop database if exists test_ws_parabind");
            stmt.execute("create database if not exists test_ws_parabind");
            stmt.execute("use test_ws_parabind");
            for (int i = 0; i < schemaList.length; i++) {
                stmt.execute(schemaList[i]);
            }
        }
    }

    private static void bindInteger(Connection conn) throws SQLException {
        String sql = "insert into ? using stable1 tags(?,?,?,?) values(?,?,?,?,?)";

        try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {

            for (int i = 1; i <= numOfSubTable; i++) {
                // set table name
                pstmt.setTableName("t1_" + i);
                // set tags
                pstmt.setTagByte(1, Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE))));
                pstmt.setTagShort(2, Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE))));
                pstmt.setTagInt(3, random.nextInt(Integer.MAX_VALUE));
                pstmt.setTagLong(4, random.nextLong());
                // set columns
                long current = System.currentTimeMillis();
                for (int j = 0; j < numOfRow; j++) {
                    pstmt.setTimestamp(1, new Timestamp(current + j));
                    pstmt.setByte(2, Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE))));
                    pstmt.setShort(3, Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE))));
                    pstmt.setInt(4, random.nextInt(Integer.MAX_VALUE));
                    pstmt.setLong(5, random.nextLong());
                    pstmt.addBatch();
                }
                pstmt.executeBatch();
            }
        }
    }

    private static void bindFloat(Connection conn) throws SQLException {
        String sql = "insert into ? using stable2 tags(?,?) values(?,?,?)";

        try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {

            for (int i = 1; i <= numOfSubTable; i++) {
                // set table name
                pstmt.setTableName("t2_" + i);
                // set tags
                pstmt.setTagFloat(1, random.nextFloat());
                pstmt.setTagDouble(2, random.nextDouble());
                // set columns
                long current = System.currentTimeMillis();
                for (int j = 0; j < numOfRow; j++) {
                    pstmt.setTimestamp(1, new Timestamp(current + j));
                    pstmt.setFloat(2, random.nextFloat());
                    pstmt.setDouble(3, random.nextDouble());
                    pstmt.addBatch();
                }
                pstmt.executeBatch();
            }
        }
    }

    private static void bindBoolean(Connection conn) throws SQLException {
        String sql = "insert into ? using stable3 tags(?) values(?,?)";

        try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
            for (int i = 1; i <= numOfSubTable; i++) {
                // set table name
                pstmt.setTableName("t3_" + i);
                // set tags
                pstmt.setTagBoolean(1, random.nextBoolean());
                // set columns
                long current = System.currentTimeMillis();
                for (int j = 0; j < numOfRow; j++) {
                    pstmt.setTimestamp(1, new Timestamp(current + j));
                    pstmt.setBoolean(2, random.nextBoolean());
                    pstmt.addBatch();
                }
                pstmt.executeBatch();
            }
        }
    }

    private static void bindBytes(Connection conn) throws SQLException {
        String sql = "insert into ? using stable4 tags(?) values(?,?)";

        try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {

            for (int i = 1; i <= numOfSubTable; i++) {
                // set table name
                pstmt.setTableName("t4_" + i);
                // set tags
                pstmt.setTagString(1, "abc");

                // set columns
                long current = System.currentTimeMillis();
                for (int j = 0; j < numOfRow; j++) {
                    pstmt.setTimestamp(1, new Timestamp(current + j));
                    pstmt.setString(2, "abc");
                    pstmt.addBatch();
                }
                pstmt.executeBatch();
            }
        }
    }

    private static void bindString(Connection conn) throws SQLException {
        String sql = "insert into ? using stable5 tags(?) values(?,?)";

        try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {

            for (int i = 1; i <= numOfSubTable; i++) {
                // set table name
                pstmt.setTableName("t5_" + i);
                // set tags
                pstmt.setTagNString(1, "California.SanFrancisco");

                // set columns
                long current = System.currentTimeMillis();
                for (int j = 0; j < numOfRow; j++) {
                    // JDBC parameter indices are 1-based: timestamp is column 1, nchar is column 2
                    pstmt.setTimestamp(1, new Timestamp(current + j));
                    pstmt.setNString(2, "California.SanFrancisco");
                    pstmt.addBatch();
                }
                pstmt.executeBatch();
            }
        }
    }
}
```
</TabItem>
</Tabs>

The methods to set TAGS values:

```java
public void setTagNull(int index, int type)
public void setTagBoolean(int index, boolean value)
public void setTagInt(int index, int value)
public void setTagByte(int index, byte value)
public void setTagShort(int index, short value)
public void setTagLong(int index, long value)
public void setTagTimestamp(int index, long value)
public void setTagFloat(int index, float value)
public void setTagDouble(int index, double value)
public void setTagString(int index, String value)
public void setTagNString(int index, String value)
```

### Schemaless Writing

TDengine supports schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. For more information, see [Schemaless Writing](../../schemaless).

Note:

- JDBC REST connections do not currently support schemaless writes
- The following sample code is based on taos-jdbcdriver-3.1.0

<Tabs defaultValue="native">
<TabItem value="native" label="native connection">

```java
public class SchemalessJniTest {
    private static final String host = "127.0.0.1";
    private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000";
    private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0";
@ -668,6 +901,41 @@ public class SchemalessInsertTest {

}
```

</TabItem>
<TabItem value="ws" label="WebSocket connection">

```java
public class SchemalessWsTest {
    private static final String host = "127.0.0.1";
    private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000";
    private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0";
    private static final String jsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1626846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}";

    public static void main(String[] args) throws SQLException {
        final String url = "jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata&batchfetch=true";
        Connection connection = DriverManager.getConnection(url);
        init(connection);

        SchemalessWriter writer = new SchemalessWriter(connection, "test_ws_schemaless");
        writer.write(lineDemo, SchemalessProtocolType.LINE, SchemalessTimestampType.NANO_SECONDS);
        writer.write(telnetDemo, SchemalessProtocolType.TELNET, SchemalessTimestampType.MILLI_SECONDS);
        writer.write(jsonDemo, SchemalessProtocolType.JSON, SchemalessTimestampType.SECONDS);
        System.exit(0);
    }

    private static void init(Connection connection) throws SQLException {
        try (Statement stmt = connection.createStatement()) {
            stmt.executeUpdate("drop database if exists test_ws_schemaless");
            stmt.executeUpdate("create database if not exists test_ws_schemaless keep 36500");
            stmt.executeUpdate("use test_ws_schemaless");
        }
    }
}
```

</TabItem>
</Tabs>

### Data Subscription

The TDengine Java Connector supports subscription functionality with the following application API.
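A minimal consumer-creation sketch (the property values, topic name, and deserializer class are illustrative; see the consumer parameter table above):

```java
Properties config = new Properties();
config.setProperty("group.id", "group1");
config.setProperty("enable.auto.commit", "true");
// Hypothetical deserializer class mapping rows to ResultBean.
config.setProperty("value.deserializer", "com.taos.example.ResultDeserializer");

TaosConsumer<ResultBean> consumer = new TaosConsumer<>(config);
consumer.subscribe(Collections.singletonList("topic_speed")); // hypothetical topic
```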
@ -711,8 +979,9 @@ TaosConsumer consumer = new TaosConsumer<>(config);

```java
while(true) {
    ConsumerRecords<ResultBean> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<ResultBean> record : records) {
        ResultBean bean = record.value();
        process(bean);
    }
}
```
@ -765,8 +1034,9 @@ public abstract class ConsumerLoop {

while (!shutdown.get()) {
    ConsumerRecords<ResultBean> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<ResultBean> record : records) {
        ResultBean bean = record.value();
        process(bean);
    }
}
consumer.unsubscribe();
@ -841,8 +1111,9 @@ public abstract class ConsumerLoop {

while (!shutdown.get()) {
    ConsumerRecords<ResultBean> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<ResultBean> record : records) {
        ResultBean bean = record.value();
        process(bean);
    }
}
consumer.unsubscribe();
@ -968,20 +1239,6 @@ The source code of the sample application is under `TDengine/examples/JDBC`:

[JDBC example](https://github.com/taosdata/TDengine/tree/3.0/examples/JDBC)

## Frequently Asked Questions

1. Why is there no performance improvement when using Statement's `addBatch()` and `executeBatch()` to perform `batch data writing/update`?
@ -12,8 +12,8 @@ After TDengine starts, it automatically writes many metrics in specific intervals

To deploy TDinsight, we need:
- a single-node TDengine server or a multi-node TDengine cluster, plus a [Grafana] server. This dashboard requires TDengine 3.0.1.0 or later, with the monitoring feature enabled. For detailed configuration, please refer to [TDengine monitoring configuration](../config/#monitoring-parameters).
- taosAdapter installed and running; please refer to [taosAdapter](../taosadapter).
- taosKeeper installed and running; please refer to [taosKeeper](../taosKeeper).

Please record:
- The endpoint of the taosAdapter REST service, for example `http://tdengine.local:6041`
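A quick reachability check against that endpoint (a sketch; the host and default credentials are illustrative):

```
curl http://tdengine.local:6041/rest/login/root/taosdata
```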
@ -46,15 +46,14 @@ Execute in any directory:

```
curl -O http://packages.confluent.io/archive/7.1/confluent-7.1.1.tar.gz
tar xzf confluent-7.1.1.tar.gz -C /opt/
```

Then you need to add the `$CONFLUENT_HOME/bin` directory to the PATH.

```title=".profile"
export CONFLUENT_HOME=/opt/confluent-7.1.1
export PATH=$CONFLUENT_HOME/bin:$PATH
```

Users can append the above script to the current user's profile file (~/.profile or ~/.bash_profile).
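To apply it to the current shell and verify the binary is on the PATH (a sketch):

```
source ~/.profile
which confluent
```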
@ -329,7 +328,15 @@ DROP DATABASE IF EXISTS test;

CREATE DATABASE test;
USE test;
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);

INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000) \
            d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000) \
            d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000) \
            d1002 USING meters TAGS('California.SanFrancisco', 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000) \
            d1003 USING meters TAGS('California.LosAngeles', 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000) \
            d1003 USING meters TAGS('California.LosAngeles', 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000) \
            d1004 USING meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000) \
            d1004 USING meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000);
```

Use TDengine CLI to execute the SQL script:
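For example (the file name is illustrative; save the statements above into it first):

```
taos -f prepare-source-data.sql
```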
@ -384,7 +391,7 @@ confluent local services connect connector status

You should now have two active connectors if you followed the previous steps. Use the following commands to unload them:

```
confluent local services connect connector unload TDengineSinkConnector
confluent local services connect connector unload TDengineSourceConnector
```
@ -35,7 +35,7 @@ Please refer to the [official documentation](https://grafana.com/grafana/download) to install Grafana.

### TDengine

Download and install the [latest version of TDengine](https://docs.tdengine.com/releases/tdengine/).

## Data Connection Setup
@ -38,7 +38,7 @@ Please refer to the [official documentation](https://grafana.com/grafana/download) to install Grafana.

### Install TDengine

Download and install the [latest version of TDengine](https://docs.tdengine.com/releases/tdengine/).

## Data Connection Setup
@ -32,7 +32,7 @@ TDengine 3.0 is not compatible with the configuration and data files from previous versions.

2. Run `sudo rm -rf /var/log/taos/` to delete your log files.
3. Run `sudo rm -rf /var/lib/taos/` to delete your data files.
4. Install TDengine 3.0.
5. For assistance in migrating data to TDengine 3.0, contact [TDengine Support](https://tdengine.com/support/).

### 2. How can I resolve the "Unable to establish connection" error?
@ -22,7 +22,7 @@

<dependency>
    <groupId>com.taosdata.jdbc</groupId>
    <artifactId>taos-jdbcdriver</artifactId>
    <version>3.2.1</version>
</dependency>
<!-- ANCHOR_END: dep-->
<dependency>
@ -1,5 +1,6 @@

package com.taos.example;

import com.taosdata.jdbc.tmq.ConsumerRecord;
import com.taosdata.jdbc.tmq.ConsumerRecords;
import com.taosdata.jdbc.tmq.TMQConstants;
import com.taosdata.jdbc.tmq.TaosConsumer;
@ -64,7 +65,8 @@ public class SubscribeDemo {

consumer.subscribe(Collections.singletonList(TOPIC));
while (!shutdown.get()) {
    ConsumerRecords<Meters> meters = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<Meters> record : meters) {
        Meters meter = record.value();
        System.out.println(meter);
    }
}
@ -25,7 +25,8 @@ import CDemo from "./_sub_c.mdx";

This document does not cover the basics of message queues themselves; if you need that background, please look it up separately.

Note: Data subscription consumes data from the WAL. If some WAL files are deleted according to the WAL retention policy, the data they contained can no longer be consumed. Set `WAL_RETENTION_PERIOD` or `WAL_RETENTION_SIZE` appropriately for your business needs when creating the database, and make sure the application consumes data in time, so that no data loss occurs. Data subscription behaves similarly to Kafka and other widely used message queue products.

## Key Data Structures and APIs

The TMQ subscription APIs and data structures for each language are as follows:
@ -293,7 +294,6 @@ CREATE TOPIC topic_name AS DATABASE db_name;

| `auto.offset.reset` | enum | Initial position of the consumer group subscription | <br />`earliest`: default; subscribe from the beginning; <br/>`latest`: subscribe only from the latest data; <br/>`none`: cannot subscribe without a committed offset |
| `enable.auto.commit` | boolean | Whether to commit consumption points automatically; true: commit automatically, the client application does not need to commit; false: the client application must commit itself | Default value is true |
| `auto.commit.interval.ms` | integer | Interval at which consumption points are committed automatically, in milliseconds | Default value is 5000 |
| `msg.with.table.name` | boolean | Whether to allow parsing the table name from the message; not applicable to column subscriptions (for column subscriptions, tbname can be written into the subquery as a column) | Off by default |

The configuration for each programming language is as follows:
@ -311,7 +311,6 @@ tmq_conf_set(conf, "group.id", "cgrpName");

tmq_conf_set(conf, "td.connect.user", "root");
tmq_conf_set(conf, "td.connect.pass", "taosdata");
tmq_conf_set(conf, "auto.offset.reset", "earliest");
tmq_conf_set(conf, "msg.with.table.name", "true");
tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
@ -367,7 +366,6 @@ conf := &tmq.ConfigMap{

    "td.connect.port":     "6030",
    "client.id":           "test_tmq_c",
    "enable.auto.commit":  "false",
    "msg.with.table.name": "true",
}
consumer, err := NewConsumer(conf)
@ -417,7 +415,6 @@ consumer = Consumer({"group.id": "local", "td.connect.ip": "127.0.0.1"})

| `enable.auto.commit` | string | Enable automatic commits | Valid values: `true`, `false` |
| `auto.commit.interval.ms` | string | Automatic commit interval, in milliseconds | Default: 5000 ms |
| `auto.offset.reset` | string | Initial position of the consumer group subscription | Options: `earliest` (default), `latest`, `none` |

</TabItem>
@ -36,6 +36,93 @@ REST connection supports all platforms that can run Java.

Please refer to the [version support list](../#版本支持).

## Recent update logs

| taos-jdbcdriver version | major changes |
| :------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------: |
| 3.2.1 | New: WebSocket connection supports schemaless and prepareStatement writing. Changed: consumer poll now returns a ConsumerRecord result set, and the data can be obtained through value() |
| 3.2.0 | Has connection issues; not recommended |
| 3.1.0 | WebSocket connection supports subscription |
| 3.0.1 - 3.0.4 | Fix occasional incorrect parsing of result set data. 3.0.1 is compiled on JDK 11; other versions are recommended for JDK 8 environments |
| 3.0.0 | Support for TDengine 3.0 |
| 2.0.42 | Fix wasNull interface return value in WebSocket connection |
| 2.0.41 | Fix decode method of username and password in REST connection |
| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters |
| 2.0.38 | JDBC REST connections add bulk pull function |
| 2.0.37 | Support JSON tags |
| 2.0.36 | Support schemaless writing |

**Note**: adding the `batchfetch` parameter to the REST connection and setting it to true will enable the WebSocket connection.
## Handling exceptions

After an error is reported, the error message and error code can be obtained through SQLException:

```java
try (Statement statement = connection.createStatement()) {
    // execute the query
    ResultSet resultSet = statement.executeQuery(sql);
    // print the result
    printResult(resultSet);
} catch (SQLException e) {
    System.out.println("ERROR Message: " + e.getMessage());
    System.out.println("ERROR Code: " + e.getErrorCode());
    e.printStackTrace();
}
```
JDBC connector errors fall into four categories:

- Errors from the JDBC driver itself (error codes between 0x2301 and 0x2350)
- Errors from the native connection method (error codes between 0x2351 and 0x2360)
- Errors from data subscription (error codes between 0x2371 and 0x2380)
- Errors from other TDengine function modules

For specific error codes, please refer to:
| Error Code | Description | Suggested Actions |
| ---------- | --------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |
| 0x2301 | connection already closed | The connection has been closed. Check the connection status, or recreate the connection to execute the relevant statements. |
| 0x2302 | this operation is NOT supported currently! | The current interface is not supported for this connection. Use another connection mode. |
| 0x2303 | invalid variables | The parameter is invalid. Check the interface specification and adjust the parameter type and size. |
| 0x2304 | statement is closed | The statement is closed. Check whether the statement was closed and then used again, and whether the connection is healthy. |
| 0x2305 | resultSet is closed | The result set has been released. Check whether it was released and then used again. |
| 0x2306 | Batch is empty! | Add parameters to the prepared statement before calling executeBatch. |
| 0x2307 | Can not issue data manipulation statements with executeQuery() | Use executeUpdate() for update operations, not executeQuery(). |
| 0x2308 | Can not issue SELECT via executeUpdate() | Use executeQuery() for query operations, not executeUpdate(). |
| 0x230d | parameter index out of range | The parameter is out of bounds. Check the valid range of the parameter. |
| 0x230e | connection already closed | The connection has been closed. Check whether the Connection was closed and then used again, and whether the connection is healthy. |
| 0x230f | unknown sql type in tdengine | Check the data types supported by TDengine. |
| 0x2310 | can't register JDBC-JNI driver | The JNI driver cannot be registered. Check whether the URL is correct. |
| 0x2312 | url is not set | Check whether the REST connection URL is correct. |
| 0x2314 | numeric value out of range | Check that the correct interface is used for the numeric types in the result set. |
| 0x2315 | unknown taos type in tdengine | Check that the correct TDengine data type is specified when converting TDengine data types to JDBC data types. |
| 0x2317 | | A wrong request type was used in the REST connection. |
| 0x2318 | | A data transmission exception occurred in the REST connection. Check the network status and try again. |
| 0x2319 | user is required | The user name is missing when creating the connection. |
| 0x231a | password is required | The password is missing when creating the connection. |
| 0x231c | httpEntity is null, sql: | An execution exception occurred in the REST connection. |
| 0x2350 | unknown error | Unknown exception; please report it to the developers on GitHub. |
| 0x2352 | Unsupported encoding | An unsupported character encoding set was specified in the native connection. |
| 0x2353 | internal error of database, please see taoslog for more details | An error occurred while executing a prepared statement on the native connection. Check the taos log to locate the fault. |
| 0x2354 | JNI connection is NULL | The native Connection was already closed when the command was executed. Check the connection to TDengine. |
| 0x2355 | JNI result set is NULL | The result set obtained by the native connection is abnormal. Check the connection status and try again. |
| 0x2356 | invalid num of fields | The meta information of the result set obtained by the native connection does not match. |
| 0x2357 | empty sql string | Fill in the correct SQL to execute. |
| 0x2359 | JNI alloc memory failed, please see taoslog for more details | Memory allocation for the native connection failed. Check the taos log to locate the problem. |
| 0x2371 | consumer properties must not be null! | A parameter was empty when the subscription was created. Fill in the correct parameters. |
| 0x2372 | configs contain empty key, failed to set consumer property | A parameter key contains an empty value. Enter the correct parameters. |
| 0x2373 | failed to set consumer property, | A parameter value contains an empty value. Enter the correct parameters. |
| 0x2375 | topic reference has been destroyed | The topic reference was released while the data subscription was being created. Check the connection to TDengine. |
| 0x2376 | failed to set consumer topic, topic name is empty | The subscription topic name was empty during data subscription creation. Check that the specified topic name is correct. |
| 0x2377 | consumer reference has been destroyed | The subscription data transfer channel has been closed. Check the connection to TDengine. |
| 0x2378 | consumer create error | Failed to create the data subscription. Check the taos log according to the error message to locate the fault. |
| - | can't create connection with server within | Increase the connection timeout with the httpConnectTimeout parameter, or check the connection to taosAdapter. |
| - | failed to complete the task within the specified time | Increase the execution timeout with the messageWaitTimeout parameter, or check the connection to taosAdapter. |

- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java)
<!-- - [TDengine_ERROR_CODE](../error-code) -->

## TDengine DataType vs. Java DataType

TDengine currently supports timestamp, number, character, and Boolean types; the corresponding type conversions with Java are as follows:
@ -82,7 +169,7 @@ In a Maven project, add the following dependency to pom.xml:

<dependency>
    <groupId>com.taosdata.jdbc</groupId>
    <artifactId>taos-jdbcdriver</artifactId>
    <version>3.2.1</version>
</dependency>
```
@ -97,7 +184,7 @@ cd taos-connector-jdbc

mvn clean install -Dmaven.test.skip=true
```

After compilation, a taos-jdbcdriver-3.2.\*-dist.jar file is generated in the target directory, and the compiled jar file is automatically placed in the local Maven repository.

</TabItem>
</Tabs>
@ -336,35 +423,6 @@ while(resultSet.next()){

> Queries work the same as with a relational database. When using a subscript to get the content of a returned field, indexing starts from 1; using the field name to get the value is recommended.

### Writing data via parameter binding

The TDengine JDBC native connection greatly improves parameter-binding support for data writing (INSERT) scenarios. Writing data this way avoids the resource consumption of SQL parsing and can significantly improve write performance in many cases.
@ -372,9 +430,12 @@ The TDengine JDBC native connection greatly improves parameter-binding support for data writing

**Note:**

- JDBC REST connections do not currently support parameter binding
- The following sample code is based on taos-jdbcdriver-3.2.1
- Call setString for binary type data, and setNString for nchar type data
- Both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition
- Do not use `db.?` in prepareStatement when specifying the database with the table name; use `?` directly, then specify the database in setTableName, for example: `prepareStatement.setTableName("db.t1")`.

<Tabs defaultValue="native">
<TabItem value="native" label="native connection">

```java
public class ParameterBindingDemo {
@ -602,21 +663,7 @@ public class ParameterBindingDemo {

}
```

The methods for setting VALUES column values are:
@ -633,17 +680,203 @@ public void setString(int columnIndex, ArrayList<String> list, int size) throws SQLException

public void setNString(int columnIndex, ArrayList<String> list, int size) throws SQLException
```

</TabItem>
<TabItem value="rest" label="WebSocket connection">
```java
public class ParameterBindingDemo {
    private static final String host = "127.0.0.1";
    private static final Random random = new Random(System.currentTimeMillis());
    private static final int BINARY_COLUMN_SIZE = 30;
    private static final String[] schemaList = {
            "create table stable1(ts timestamp, f1 tinyint, f2 smallint, f3 int, f4 bigint) tags(t1 tinyint, t2 smallint, t3 int, t4 bigint)",
            "create table stable2(ts timestamp, f1 float, f2 double) tags(t1 float, t2 double)",
            "create table stable3(ts timestamp, f1 bool) tags(t1 bool)",
            "create table stable4(ts timestamp, f1 binary(" + BINARY_COLUMN_SIZE + ")) tags(t1 binary(" + BINARY_COLUMN_SIZE + "))",
            "create table stable5(ts timestamp, f1 nchar(" + BINARY_COLUMN_SIZE + ")) tags(t1 nchar(" + BINARY_COLUMN_SIZE + "))"
    };
    private static final int numOfSubTable = 10, numOfRow = 10;

    public static void main(String[] args) throws SQLException {

        String jdbcUrl = "jdbc:TAOS-RS://" + host + ":6041/?batchfetch=true";
        Connection conn = DriverManager.getConnection(jdbcUrl, "root", "taosdata");

        init(conn);

        bindInteger(conn);

        bindFloat(conn);

        bindBoolean(conn);

        bindBytes(conn);

        bindString(conn);

        conn.close();
    }

    private static void init(Connection conn) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute("drop database if exists test_ws_parabind");
            stmt.execute("create database if not exists test_ws_parabind");
            stmt.execute("use test_ws_parabind");
            for (int i = 0; i < schemaList.length; i++) {
                stmt.execute(schemaList[i]);
            }
        }
    }

    private static void bindInteger(Connection conn) throws SQLException {
        String sql = "insert into ? using stable1 tags(?,?,?,?) values(?,?,?,?,?)";

        try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {

            for (int i = 1; i <= numOfSubTable; i++) {
                // set table name
                pstmt.setTableName("t1_" + i);
                // set tags
                pstmt.setTagByte(1, Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE))));
                pstmt.setTagShort(2, Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE))));
                pstmt.setTagInt(3, random.nextInt(Integer.MAX_VALUE));
                pstmt.setTagLong(4, random.nextLong());
                // set columns
                long current = System.currentTimeMillis();
                for (int j = 0; j < numOfRow; j++) {
                    pstmt.setTimestamp(1, new Timestamp(current + j));
                    pstmt.setByte(2, Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE))));
                    pstmt.setShort(3, Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE))));
                    pstmt.setInt(4, random.nextInt(Integer.MAX_VALUE));
                    pstmt.setLong(5, random.nextLong());
                    pstmt.addBatch();
                }
                pstmt.executeBatch();
            }
        }
    }

    private static void bindFloat(Connection conn) throws SQLException {
        String sql = "insert into ? using stable2 tags(?,?) values(?,?,?)";

        try(TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {

            for (int i = 1; i <= numOfSubTable; i++) {
                // set table name
                pstmt.setTableName("t2_" + i);
                // set tags
                pstmt.setTagFloat(1, random.nextFloat());
                pstmt.setTagDouble(2, random.nextDouble());
                // set columns
                long current = System.currentTimeMillis();
                for (int j = 0; j < numOfRow; j++) {
                    pstmt.setTimestamp(1, new Timestamp(current + j));
                    pstmt.setFloat(2, random.nextFloat());
                    pstmt.setDouble(3, random.nextDouble());
                    pstmt.addBatch();
                }
                pstmt.executeBatch();
            }
        }
    }

    private static void bindBoolean(Connection conn) throws SQLException {
        String sql = "insert into ? using stable3 tags(?) values(?,?)";

        try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
            for (int i = 1; i <= numOfSubTable; i++) {
                // set table name
                pstmt.setTableName("t3_" + i);
                // set tags
                pstmt.setTagBoolean(1, random.nextBoolean());
                // set columns
                long current = System.currentTimeMillis();
                for (int j = 0; j < numOfRow; j++) {
                    pstmt.setTimestamp(1, new Timestamp(current + j));
                    pstmt.setBoolean(2, random.nextBoolean());
                    pstmt.addBatch();
                }
                pstmt.executeBatch();
            }
}
|
||||
}
|
||||
|
||||
private static void bindBytes(Connection conn) throws SQLException {
|
||||
String sql = "insert into ? using stable4 tags(?) values(?,?)";
|
||||
|
||||
try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
|
||||
|
||||
for (int i = 1; i <= numOfSubTable; i++) {
|
||||
// set table name
|
||||
pstmt.setTableName("t4_" + i);
|
||||
// set tags
|
||||
pstmt.setTagString(1, new String("abc"));
|
||||
|
||||
// set columns
|
||||
long current = System.currentTimeMillis();
|
||||
for (int j = 0; j < numOfRow; j++) {
|
||||
pstmt.setTimestamp(1, new Timestamp(current + j));
|
||||
pstmt.setString(2, "abc");
|
||||
pstmt.addBatch();
|
||||
}
|
||||
pstmt.executeBatch();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private static void bindString(Connection conn) throws SQLException {
|
||||
String sql = "insert into ? using stable5 tags(?) values(?,?)";
|
||||
|
||||
try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) {
|
||||
|
||||
for (int i = 1; i <= numOfSubTable; i++) {
|
||||
// set table name
|
||||
pstmt.setTableName("t5_" + i);
|
||||
// set tags
|
||||
pstmt.setTagNString(1, "California.SanFrancisco");
|
||||
|
||||
// set columns
|
||||
long current = System.currentTimeMillis();
|
||||
for (int j = 0; j < numOfRow; j++) {
|
||||
pstmt.setTimestamp(0, new Timestamp(current + j));
|
||||
pstmt.setNString(1, "California.SanFrancisco");
|
||||
pstmt.addBatch();
|
||||
}
|
||||
pstmt.executeBatch();
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
</TabItem>
</Tabs>

The methods for setting TAGS values are:

```java
public void setTagNull(int index, int type)
public void setTagBoolean(int index, boolean value)
public void setTagInt(int index, int value)
public void setTagByte(int index, byte value)
public void setTagShort(int index, short value)
public void setTagLong(int index, long value)
public void setTagTimestamp(int index, long value)
public void setTagFloat(int index, float value)
public void setTagDouble(int index, double value)
public void setTagString(int index, String value)
public void setTagNString(int index, String value)
```
### Schemaless Writing

TDengine supports schemaless writing. Schemaless writing is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. For details, see [Schemaless Writing](../../reference/schemaless/).

**Note**:

- The JDBC REST connection does not currently support schemaless writing
- The following sample code is based on taos-jdbcdriver-3.1.0

<Tabs defaultValue="native">
<TabItem value="native" label="Native Connection">

```java
-public class SchemalessInsertTest {
+public class SchemalessJniTest {
    private static final String host = "127.0.0.1";
    private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000";
    private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0";

@@ -671,6 +904,41 @@ public class SchemalessInsertTest {
}
```

</TabItem>
<TabItem value="ws" label="WebSocket Connection">

```java
import com.taosdata.jdbc.SchemalessWriter;
import com.taosdata.jdbc.enums.SchemalessProtocolType;
import com.taosdata.jdbc.enums.SchemalessTimestampType;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class SchemalessWsTest {
    private static final String host = "127.0.0.1";
    private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000";
    private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0";
    private static final String jsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1626846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}";

    public static void main(String[] args) throws SQLException {
        final String url = "jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata&batchfetch=true";
        Connection connection = DriverManager.getConnection(url);
        init(connection);

        SchemalessWriter writer = new SchemalessWriter(connection, "test_ws_schemaless");
        writer.write(lineDemo, SchemalessProtocolType.LINE, SchemalessTimestampType.NANO_SECONDS);
        writer.write(telnetDemo, SchemalessProtocolType.TELNET, SchemalessTimestampType.MILLI_SECONDS);
        writer.write(jsonDemo, SchemalessProtocolType.JSON, SchemalessTimestampType.SECONDS);
        System.exit(0);
    }

    private static void init(Connection connection) throws SQLException {
        try (Statement stmt = connection.createStatement()) {
            stmt.executeUpdate("drop database if exists test_ws_schemaless");
            stmt.executeUpdate("create database if not exists test_ws_schemaless keep 36500");
            stmt.executeUpdate("use test_ws_schemaless");
        }
    }
}
```

</TabItem>
</Tabs>
### Data Subscription

The TDengine Java connector supports subscription. The main APIs are as follows:

@@ -714,8 +982,9 @@ TaosConsumer consumer = new TaosConsumer<>(config);
```java
while (true) {
    ConsumerRecords<ResultBean> records = consumer.poll(Duration.ofMillis(100));
-    for (ResultBean record : records) {
-        process(record);
+    for (ConsumerRecord<ResultBean> record : records) {
+        ResultBean bean = record.value();
+        process(bean);
    }
}
```
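For context: in current connector versions the consumer used in the loop above is created from a `Properties` object and subscribed to a topic first. A minimal sketch follows; the endpoint, group id, topic name, and the `ResultDeserializer` class (an application-defined deserializer for `ResultBean`) are assumptions, not values taken from this document:

```java
Properties config = new Properties();
config.setProperty("td.connect.type", "ws");               // use "jni" for a native connection
config.setProperty("bootstrap.servers", "127.0.0.1:6041"); // assumed local endpoint
config.setProperty("group.id", "group1");
config.setProperty("enable.auto.commit", "true");
// Assumed application class extending the connector's reference deserializer for ResultBean
config.setProperty("value.deserializer", "com.example.ResultDeserializer");

TaosConsumer<ResultBean> consumer = new TaosConsumer<>(config);
consumer.subscribe(Collections.singletonList("topic_name")); // placeholder topic
```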
@@ -766,8 +1035,9 @@ public abstract class ConsumerLoop {

        while (!shutdown.get()) {
            ConsumerRecords<ResultBean> records = consumer.poll(Duration.ofMillis(100));
-            for (ResultBean record : records) {
-                process(record);
+            for (ConsumerRecord<ResultBean> record : records) {
+                ResultBean bean = record.value();
+                process(bean);
            }
        }
        consumer.unsubscribe();

@@ -844,8 +1114,9 @@ public abstract class ConsumerLoop {

        while (!shutdown.get()) {
            ConsumerRecords<ResultBean> records = consumer.poll(Duration.ofMillis(100));
-            for (ResultBean record : records) {
-                process(record);
+            for (ConsumerRecord<ResultBean> record : records) {
+                ResultBean bean = record.value();
+                process(bean);
            }
        }
        consumer.unsubscribe();
@@ -971,20 +1242,6 @@ public static void main(String[] args) throws Exception {

Please refer to: [JDBC example](https://github.com/taosdata/TDengine/tree/3.0/examples/JDBC)

## Recent Update History

| taos-jdbcdriver Version | Major Changes |
| :------------------: | :--------------------------------------------------------------------------------------------: |
| 3.1.0 | Subscription supported over WebSocket connections |
| 3.0.1 - 3.0.4 | Fixed result-set data parsing errors in some cases. 3.0.1 was compiled in a JDK 11 environment; in a JDK 8 environment, other versions are recommended |
| 3.0.0 | Support for TDengine 3.0 |
| 2.0.42 | Fixed the wasNull interface return value in WebSocket connections |
| 2.0.41 | Corrected the encoding of user name and password in REST connections |
| 2.0.39 - 2.0.40 | Added REST connection/request timeout settings |
| 2.0.38 | Added bulk fetching to the JDBC REST connection |
| 2.0.37 | Added support for json tags |
| 2.0.36 | Added support for schemaless writing |

## Frequently Asked Questions

1. Why is there no performance improvement when using Statement's `addBatch()` and `executeBatch()` for "batch writing/updating"?
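A brief note on the likely cause (summarized; treat the specifics as an assumption for your connector version): the statements in a JDBC batch are still submitted one by one, so batching alone saves neither round trips nor SQL parsing. Packing many rows into one multi-value INSERT is the usual remedy. A minimal sketch, assuming an existing `Connection conn` and a subtable `d1001` with a single value column:

```java
// Build one multi-row INSERT instead of batching many single-row statements.
StringBuilder sql = new StringBuilder("insert into d1001 values ");
long ts = System.currentTimeMillis();
for (int i = 0; i < 1000; i++) {
    sql.append("(").append(ts + i).append(", ").append(i * 0.1f).append(")");
}
try (Statement stmt = conn.createStatement()) {
    stmt.executeUpdate(sql.toString());
}
```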
@@ -33,7 +33,7 @@ column_definition:
SHOW STABLES [LIKE tb_name_wildcard];
```

-Shows all STables in the database and related information, including the STable's name, creation time, number of columns, number of tags (TAG), and the number of tables created from the STable.
+Shows all supertables in the database.

### Show the CREATE Statement of a Supertable
@@ -55,7 +55,7 @@ window_clause: {
  | INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]

interp_clause:
-  RANGE(ts_val, ts_val), EVERY(every_val), FILL(fill_mod_and_val)
+  RANGE(ts_val, ts_val) EVERY(every_val) FILL(fill_mod_and_val)

partition_by_clause:
  PARTITION BY expr [, expr] ...
@@ -888,7 +888,7 @@ INTERP(expr)
- The output time range of INTERP is specified by the RANGE(timestamp1, timestamp2) field, and it must satisfy timestamp1 <= timestamp2. timestamp1 (required) is the start of the output time range: if timestamp1 satisfies the interpolation condition, timestamp1 is the first record output. timestamp2 (required) is the end of the output time range: the timestamp of the last record output cannot be greater than timestamp2.
- INTERP determines the number of results in the output time range according to the EVERY(time_unit) field, interpolating at fixed intervals (the time_unit value) starting from timestamp1. time_unit accepts the time units 1a (millisecond), 1s (second), 1m (minute), 1h (hour), 1d (day), and 1w (week). For example, EVERY(500a) interpolates the specified data at 500-millisecond intervals.
- INTERP decides how to interpolate at each qualifying moment according to the FILL field. For how to use the FILL clause, see [FILL Clause](../distinguished/#fill-子句)
-- INTERP can only interpolate within a single time series, so when applied to a supertable it must be used together with partition by tbname.
+- When INTERP is applied to a supertable, it sorts all the subtable data under that supertable by the primary key column and interpolates over the result; it can also be combined with PARTITION BY tbname to force the result onto a single timeline.
- INTERP can be used with the pseudocolumn _irowts to return the timestamp corresponding to each interpolation point (supported since version 3.0.2.0).
- INTERP can be used with the pseudocolumn _isfilled to indicate whether a returned result is an original record or data produced by the interpolation algorithm (supported since version 3.0.3.0).
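As a concrete illustration of these rules, the query below interpolates once per second over a fixed range and marks filled rows; the supertable `meters`, its column `current`, and the timestamps are placeholder names for this sketch:

```sql
SELECT _irowts, _isfilled, INTERP(current)
FROM meters
PARTITION BY tbname
RANGE('2023-01-01 00:00:00', '2023-01-01 00:00:05')
EVERY(1s)
FILL(LINEAR);
```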
@@ -129,6 +129,14 @@ SHOW QNODES;

Shows information about the QNODEs (query nodes) in the current system.

## SHOW QUERIES

```sql
SHOW QUERIES;
```

Shows the queries currently in progress in the system.

## SHOW SCORES

```sql
@@ -735,7 +735,6 @@ The valid value of charset is UTF-8.
| 16 | maxTmrCtrl | Yes | No | Behavior unknown in 3.0 |
| 17 | monitorReplica | Yes | No | Multiple replicas are managed by the RAFT protocol |
| 18 | smlTagNullName | Yes | No | Behavior unknown in 3.0 |
| 19 | keepColumnName | Yes | No | Behavior unknown in 3.0 |
| 20 | ratioOfQueryCores | Yes | No | Determined by thread-pool configuration parameters |
| 21 | maxStreamCompDelay | Yes | No | Behavior unknown in 3.0 |
| 22 | maxFirstStreamCompDelay | Yes | No | Behavior unknown in 3.0 |
@@ -48,15 +48,14 @@ Confluent provides two installation methods, Docker and binary packages. This document only covers

```
curl -O http://packages.confluent.io/archive/7.1/confluent-7.1.1.tar.gz
-tar xzf confluent-7.1.1.tar.gz -C /opt/test
+tar xzf confluent-7.1.1.tar.gz -C /opt/
```

Then add the `$CONFLUENT_HOME/bin` directory to the PATH.

```title=".profile"
export CONFLUENT_HOME=/opt/confluent-7.1.1
-PATH=$CONFLUENT_HOME/bin
-export PATH
+export PATH=$CONFLUENT_HOME/bin:$PATH
```

The lines above can be appended to the current user's profile file (~/.profile or ~/.bash_profile)
@@ -333,7 +332,15 @@ DROP DATABASE IF EXISTS test;
CREATE DATABASE test;
USE test;
CREATE STABLE meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT);
-INSERT INTO d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000) d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000) d1001 USING meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000) d1002 USING meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000) d1003 USING meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000) d1003 USING meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000) d1004 USING meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000) d1004 USING meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000);
+INSERT INTO d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000) \
+    d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-03 14:38:15.000',12.60000,218,0.33000) \
+    d1001 USING meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-03 14:38:16.800',12.30000,221,0.31000) \
+    d1002 USING meters TAGS('California.SanFrancisco', 3) VALUES('2018-10-03 14:38:16.650',10.30000,218,0.25000) \
+    d1003 USING meters TAGS('California.LosAngeles', 2) VALUES('2018-10-03 14:38:05.500',11.80000,221,0.28000) \
+    d1003 USING meters TAGS('California.LosAngeles', 2) VALUES('2018-10-03 14:38:16.600',13.40000,223,0.29000) \
+    d1004 USING meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 14:38:05.000',10.80000,223,0.29000) \
+    d1004 USING meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 14:38:06.500',11.50000,221,0.35000);
```

Use the TDengine CLI to execute the SQL file.
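For example (assuming the statements above were saved to a file named `insert.sql`; the filename is illustrative):

```
taos -f insert.sql
```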
@@ -388,7 +395,7 @@ confluent local services connect connector status

If you followed the steps above, there should be two active connectors at this point. Unload them with the following commands:

```
confluent local services connect connector unload TDengineSourceConnector
confluent local services connect connector unload TDengineSinkConnector
confluent local services connect connector unload TDengineSourceConnector
```
@@ -26,6 +26,10 @@ extern "C" {
#include "tgrantCfg.h"
#endif

#ifndef GRANTS_COL_MAX_LEN
#define GRANTS_COL_MAX_LEN 196
#endif

typedef enum {
  TSDB_GRANT_ALL,
  TSDB_GRANT_TIME,
@@ -47,6 +51,31 @@ typedef enum {
int32_t grantCheck(EGrantType grant);

#ifndef GRANTS_CFG
#ifdef TD_ENTERPRISE
#define GRANTS_SCHEMA \
  static const SSysDbTableSchema grantsSchema[] = { \
      {.name = "version", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "expire_time", .bytes = 19 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "expired", .bytes = 5 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "storage", .bytes = 21 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "timeseries", .bytes = 21 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "databases", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "users", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "accounts", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "dnodes", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "connections", .bytes = 11 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "streams", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "cpu_cores", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "speed", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "querytime", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "opc_da", .bytes = GRANTS_COL_MAX_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "opc_ua", .bytes = GRANTS_COL_MAX_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "pi", .bytes = GRANTS_COL_MAX_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "kafka", .bytes = GRANTS_COL_MAX_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "influxdb", .bytes = GRANTS_COL_MAX_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "mqtt", .bytes = GRANTS_COL_MAX_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
  }
#else
#define GRANTS_SCHEMA \
  static const SSysDbTableSchema grantsSchema[] = { \
      {.name = "version", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \

@@ -64,6 +93,7 @@ int32_t grantCheck(EGrantType grant);
      {.name = "speed", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
      {.name = "querytime", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR}, \
  }
#endif
#define GRANT_CFG_ADD
#define GRANT_CFG_SET
#define GRANT_CFG_GET
@@ -1233,6 +1233,14 @@ typedef struct {
  SEp     ep;
} SDnodeEp;

typedef struct {
  int32_t id;
  int8_t  isMnode;
  SEp     ep;
  char    active[TSDB_ACTIVE_KEY_LEN];
  char    connActive[TSDB_CONN_ACTIVE_KEY_LEN];
} SDnodeInfo;

typedef struct {
  int64_t   dnodeVer;
  SDnodeCfg dnodeCfg;

@@ -1626,6 +1634,21 @@ typedef struct {
int32_t tSerializeSDropDnodeReq(void* buf, int32_t bufLen, SDropDnodeReq* pReq);
int32_t tDeserializeSDropDnodeReq(void* buf, int32_t bufLen, SDropDnodeReq* pReq);

enum {
  RESTORE_TYPE__ALL = 1,
  RESTORE_TYPE__MNODE,
  RESTORE_TYPE__VNODE,
  RESTORE_TYPE__QNODE,
};

typedef struct {
  int32_t dnodeId;
  int8_t  restoreType;
} SRestoreDnodeReq;

int32_t tSerializeSRestoreDnodeReq(void* buf, int32_t bufLen, SRestoreDnodeReq* pReq);
int32_t tDeserializeSRestoreDnodeReq(void* buf, int32_t bufLen, SRestoreDnodeReq* pReq);

typedef struct {
  int32_t dnodeId;
  char    config[TSDB_DNODE_CONFIG_LEN];
@@ -178,6 +178,7 @@ enum {
  // TD_DEF_MSG_TYPE(TDMT_MND_STREAM_BEGIN_CHECKPOINT, "stream-begin-checkpoint", NULL, NULL)
  TD_DEF_MSG_TYPE(TDMT_MND_MAX_MSG, "mnd-max", NULL, NULL)
  TD_DEF_MSG_TYPE(TDMT_MND_BALANCE_VGROUP_LEADER, "balance-vgroup-leader", NULL, NULL)
  TD_DEF_MSG_TYPE(TDMT_MND_RESTORE_DNODE, "restore-dnode", NULL, NULL)
  TD_DEF_MSG_TYPE(TDMT_MND_PAUSE_STREAM, "pause-stream", NULL, NULL)
  TD_DEF_MSG_TYPE(TDMT_MND_RESUME_STREAM, "resume-stream", NULL, NULL)
@@ -67,287 +67,289 @@
#define TK_DNODE 49
#define TK_PORT 50
#define TK_DNODES 51
#define TK_NK_IPTOKEN 52
#define TK_FORCE 53
#define TK_LOCAL 54
#define TK_QNODE 55
#define TK_BNODE 56
#define TK_SNODE 57
#define TK_MNODE 58
#define TK_DATABASE 59
#define TK_USE 60
#define TK_FLUSH 61
#define TK_TRIM 62
#define TK_COMPACT 63
#define TK_IF 64
#define TK_NOT 65
#define TK_EXISTS 66
#define TK_BUFFER 67
#define TK_CACHEMODEL 68
#define TK_CACHESIZE 69
#define TK_COMP 70
#define TK_DURATION 71
#define TK_NK_VARIABLE 72
#define TK_MAXROWS 73
#define TK_MINROWS 74
#define TK_KEEP 75
#define TK_PAGES 76
#define TK_PAGESIZE 77
#define TK_TSDB_PAGESIZE 78
#define TK_PRECISION 79
#define TK_REPLICA 80
#define TK_VGROUPS 81
#define TK_SINGLE_STABLE 82
#define TK_RETENTIONS 83
#define TK_SCHEMALESS 84
#define TK_WAL_LEVEL 85
#define TK_WAL_FSYNC_PERIOD 86
#define TK_WAL_RETENTION_PERIOD 87
#define TK_WAL_RETENTION_SIZE 88
#define TK_WAL_ROLL_PERIOD 89
#define TK_WAL_SEGMENT_SIZE 90
#define TK_STT_TRIGGER 91
#define TK_TABLE_PREFIX 92
#define TK_TABLE_SUFFIX 93
#define TK_NK_COLON 94
#define TK_MAX_SPEED 95
#define TK_START 96
#define TK_TIMESTAMP 97
#define TK_END 98
#define TK_TABLE 99
#define TK_NK_LP 100
#define TK_NK_RP 101
#define TK_STABLE 102
#define TK_ADD 103
#define TK_COLUMN 104
#define TK_MODIFY 105
#define TK_RENAME 106
#define TK_TAG 107
#define TK_SET 108
#define TK_NK_EQ 109
#define TK_USING 110
#define TK_TAGS 111
#define TK_BOOL 112
#define TK_TINYINT 113
#define TK_SMALLINT 114
#define TK_INT 115
#define TK_INTEGER 116
#define TK_BIGINT 117
#define TK_FLOAT 118
#define TK_DOUBLE 119
#define TK_BINARY 120
#define TK_NCHAR 121
#define TK_UNSIGNED 122
#define TK_JSON 123
#define TK_VARCHAR 124
#define TK_MEDIUMBLOB 125
#define TK_BLOB 126
#define TK_VARBINARY 127
#define TK_DECIMAL 128
#define TK_COMMENT 129
#define TK_MAX_DELAY 130
#define TK_WATERMARK 131
#define TK_ROLLUP 132
#define TK_TTL 133
#define TK_SMA 134
#define TK_DELETE_MARK 135
#define TK_FIRST 136
#define TK_LAST 137
#define TK_SHOW 138
#define TK_PRIVILEGES 139
#define TK_DATABASES 140
#define TK_TABLES 141
#define TK_STABLES 142
#define TK_MNODES 143
#define TK_QNODES 144
#define TK_FUNCTIONS 145
#define TK_INDEXES 146
#define TK_ACCOUNTS 147
#define TK_APPS 148
#define TK_CONNECTIONS 149
#define TK_LICENCES 150
#define TK_GRANTS 151
#define TK_QUERIES 152
#define TK_SCORES 153
#define TK_TOPICS 154
#define TK_VARIABLES 155
#define TK_CLUSTER 156
#define TK_BNODES 157
#define TK_SNODES 158
#define TK_TRANSACTIONS 159
#define TK_DISTRIBUTED 160
#define TK_CONSUMERS 161
#define TK_SUBSCRIPTIONS 162
#define TK_VNODES 163
#define TK_ALIVE 164
#define TK_LIKE 165
#define TK_TBNAME 166
#define TK_QTAGS 167
#define TK_AS 168
#define TK_INDEX 169
#define TK_FUNCTION 170
#define TK_INTERVAL 171
#define TK_COUNT 172
#define TK_LAST_ROW 173
#define TK_TOPIC 174
#define TK_META 175
#define TK_CONSUMER 176
#define TK_GROUP 177
#define TK_DESC 178
#define TK_DESCRIBE 179
#define TK_RESET 180
#define TK_QUERY 181
#define TK_CACHE 182
#define TK_EXPLAIN 183
#define TK_ANALYZE 184
#define TK_VERBOSE 185
#define TK_NK_BOOL 186
#define TK_RATIO 187
#define TK_NK_FLOAT 188
#define TK_OUTPUTTYPE 189
#define TK_AGGREGATE 190
#define TK_BUFSIZE 191
#define TK_LANGUAGE 192
#define TK_REPLACE 193
#define TK_STREAM 194
#define TK_INTO 195
#define TK_PAUSE 196
#define TK_RESUME 197
#define TK_TRIGGER 198
#define TK_AT_ONCE 199
#define TK_WINDOW_CLOSE 200
#define TK_IGNORE 201
#define TK_EXPIRED 202
#define TK_FILL_HISTORY 203
#define TK_UPDATE 204
#define TK_SUBTABLE 205
#define TK_UNTREATED 206
#define TK_KILL 207
#define TK_CONNECTION 208
#define TK_TRANSACTION 209
#define TK_BALANCE 210
#define TK_VGROUP 211
#define TK_LEADER 212
#define TK_MERGE 213
#define TK_REDISTRIBUTE 214
#define TK_SPLIT 215
#define TK_DELETE 216
#define TK_INSERT 217
#define TK_NULL 218
#define TK_NK_QUESTION 219
#define TK_NK_ARROW 220
#define TK_ROWTS 221
#define TK_QSTART 222
#define TK_QEND 223
#define TK_QDURATION 224
#define TK_WSTART 225
#define TK_WEND 226
#define TK_WDURATION 227
#define TK_IROWTS 228
#define TK_ISFILLED 229
#define TK_CAST 230
#define TK_NOW 231
#define TK_TODAY 232
#define TK_TIMEZONE 233
#define TK_CLIENT_VERSION 234
#define TK_SERVER_VERSION 235
#define TK_SERVER_STATUS 236
#define TK_CURRENT_USER 237
#define TK_CASE 238
#define TK_WHEN 239
#define TK_THEN 240
#define TK_ELSE 241
#define TK_BETWEEN 242
#define TK_IS 243
#define TK_NK_LT 244
#define TK_NK_GT 245
#define TK_NK_LE 246
#define TK_NK_GE 247
#define TK_NK_NE 248
#define TK_MATCH 249
#define TK_NMATCH 250
#define TK_CONTAINS 251
#define TK_IN 252
#define TK_JOIN 253
#define TK_INNER 254
#define TK_SELECT 255
#define TK_DISTINCT 256
#define TK_WHERE 257
#define TK_PARTITION 258
#define TK_BY 259
#define TK_SESSION 260
#define TK_STATE_WINDOW 261
#define TK_EVENT_WINDOW 262
#define TK_SLIDING 263
#define TK_FILL 264
#define TK_VALUE 265
#define TK_VALUE_F 266
#define TK_NONE 267
#define TK_PREV 268
#define TK_NULL_F 269
#define TK_LINEAR 270
#define TK_NEXT 271
#define TK_HAVING 272
#define TK_RANGE 273
#define TK_EVERY 274
#define TK_ORDER 275
#define TK_SLIMIT 276
#define TK_SOFFSET 277
#define TK_LIMIT 278
#define TK_OFFSET 279
#define TK_ASC 280
#define TK_NULLS 281
#define TK_ABORT 282
#define TK_AFTER 283
#define TK_ATTACH 284
#define TK_BEFORE 285
#define TK_BEGIN 286
#define TK_BITAND 287
#define TK_BITNOT 288
#define TK_BITOR 289
#define TK_BLOCKS 290
#define TK_CHANGE 291
#define TK_COMMA 292
#define TK_CONCAT 293
#define TK_CONFLICT 294
#define TK_COPY 295
#define TK_DEFERRED 296
#define TK_DELIMITERS 297
#define TK_DETACH 298
#define TK_DIVIDE 299
#define TK_DOT 300
#define TK_EACH 301
#define TK_FAIL 302
#define TK_FILE 303
#define TK_FOR 304
#define TK_GLOB 305
#define TK_ID 306
#define TK_IMMEDIATE 307
#define TK_IMPORT 308
#define TK_INITIALLY 309
#define TK_INSTEAD 310
#define TK_ISNULL 311
#define TK_KEY 312
#define TK_MODULES 313
#define TK_NK_BITNOT 314
#define TK_NK_SEMI 315
#define TK_NOTNULL 316
#define TK_OF 317
#define TK_PLUS 318
#define TK_PRIVILEGE 319
#define TK_RAISE 320
#define TK_RESTRICT 321
#define TK_ROW 322
#define TK_SEMI 323
#define TK_STAR 324
#define TK_STATEMENT 325
#define TK_STRICT 326
#define TK_STRING 327
#define TK_TIMES 328
#define TK_VALUES 329
#define TK_VARIABLE 330
#define TK_VIEW 331
#define TK_WAL 332
#define TK_RESTORE 52
#define TK_NK_IPTOKEN 53
#define TK_FORCE 54
#define TK_LOCAL 55
#define TK_QNODE 56
#define TK_BNODE 57
#define TK_SNODE 58
#define TK_MNODE 59
#define TK_VNODE 60
#define TK_DATABASE 61
#define TK_USE 62
#define TK_FLUSH 63
#define TK_TRIM 64
#define TK_COMPACT 65
#define TK_IF 66
#define TK_NOT 67
#define TK_EXISTS 68
#define TK_BUFFER 69
#define TK_CACHEMODEL 70
#define TK_CACHESIZE 71
#define TK_COMP 72
#define TK_DURATION 73
#define TK_NK_VARIABLE 74
#define TK_MAXROWS 75
#define TK_MINROWS 76
#define TK_KEEP 77
#define TK_PAGES 78
#define TK_PAGESIZE 79
#define TK_TSDB_PAGESIZE 80
#define TK_PRECISION 81
#define TK_REPLICA 82
#define TK_VGROUPS 83
#define TK_SINGLE_STABLE 84
#define TK_RETENTIONS 85
#define TK_SCHEMALESS 86
#define TK_WAL_LEVEL 87
#define TK_WAL_FSYNC_PERIOD 88
#define TK_WAL_RETENTION_PERIOD 89
#define TK_WAL_RETENTION_SIZE 90
#define TK_WAL_ROLL_PERIOD 91
#define TK_WAL_SEGMENT_SIZE 92
#define TK_STT_TRIGGER 93
#define TK_TABLE_PREFIX 94
#define TK_TABLE_SUFFIX 95
#define TK_NK_COLON 96
#define TK_MAX_SPEED 97
#define TK_START 98
#define TK_TIMESTAMP 99
#define TK_END 100
#define TK_TABLE 101
#define TK_NK_LP 102
#define TK_NK_RP 103
#define TK_STABLE 104
#define TK_ADD 105
#define TK_COLUMN 106
#define TK_MODIFY 107
#define TK_RENAME 108
#define TK_TAG 109
#define TK_SET 110
#define TK_NK_EQ 111
#define TK_USING 112
#define TK_TAGS 113
#define TK_BOOL 114
#define TK_TINYINT 115
#define TK_SMALLINT 116
#define TK_INT 117
#define TK_INTEGER 118
#define TK_BIGINT 119
#define TK_FLOAT 120
#define TK_DOUBLE 121
#define TK_BINARY 122
#define TK_NCHAR 123
#define TK_UNSIGNED 124
#define TK_JSON 125
#define TK_VARCHAR 126
#define TK_MEDIUMBLOB 127
#define TK_BLOB 128
#define TK_VARBINARY 129
#define TK_DECIMAL 130
#define TK_COMMENT 131
#define TK_MAX_DELAY 132
#define TK_WATERMARK 133
#define TK_ROLLUP 134
#define TK_TTL 135
#define TK_SMA 136
#define TK_DELETE_MARK 137
#define TK_FIRST 138
#define TK_LAST 139
#define TK_SHOW 140
#define TK_PRIVILEGES 141
#define TK_DATABASES 142
#define TK_TABLES 143
#define TK_STABLES 144
#define TK_MNODES 145
#define TK_QNODES 146
#define TK_FUNCTIONS 147
#define TK_INDEXES 148
#define TK_ACCOUNTS 149
#define TK_APPS 150
#define TK_CONNECTIONS 151
#define TK_LICENCES 152
#define TK_GRANTS 153
#define TK_QUERIES 154
#define TK_SCORES 155
#define TK_TOPICS 156
#define TK_VARIABLES 157
#define TK_CLUSTER 158
#define TK_BNODES 159
#define TK_SNODES 160
#define TK_TRANSACTIONS 161
#define TK_DISTRIBUTED 162
#define TK_CONSUMERS 163
#define TK_SUBSCRIPTIONS 164
#define TK_VNODES 165
#define TK_ALIVE 166
#define TK_LIKE 167
#define TK_TBNAME 168
#define TK_QTAGS 169
#define TK_AS 170
#define TK_INDEX 171
#define TK_FUNCTION 172
#define TK_INTERVAL 173
#define TK_COUNT 174
#define TK_LAST_ROW 175
#define TK_TOPIC 176
#define TK_META 177
#define TK_CONSUMER 178
#define TK_GROUP 179
#define TK_DESC 180
#define TK_DESCRIBE 181
#define TK_RESET 182
#define TK_QUERY 183
#define TK_CACHE 184
#define TK_EXPLAIN 185
#define TK_ANALYZE 186
#define TK_VERBOSE 187
#define TK_NK_BOOL 188
#define TK_RATIO 189
#define TK_NK_FLOAT 190
#define TK_OUTPUTTYPE 191
#define TK_AGGREGATE 192
#define TK_BUFSIZE 193
#define TK_LANGUAGE 194
#define TK_REPLACE 195
#define TK_STREAM 196
#define TK_INTO 197
#define TK_PAUSE 198
#define TK_RESUME 199
#define TK_TRIGGER 200
#define TK_AT_ONCE 201
#define TK_WINDOW_CLOSE 202
#define TK_IGNORE 203
#define TK_EXPIRED 204
#define TK_FILL_HISTORY 205
#define TK_UPDATE 206
#define TK_SUBTABLE 207
#define TK_UNTREATED 208
#define TK_KILL 209
#define TK_CONNECTION 210
#define TK_TRANSACTION 211
#define TK_BALANCE 212
#define TK_VGROUP 213
#define TK_LEADER 214
#define TK_MERGE 215
#define TK_REDISTRIBUTE 216
#define TK_SPLIT 217
#define TK_DELETE 218
#define TK_INSERT 219
#define TK_NULL 220
#define TK_NK_QUESTION 221
#define TK_NK_ARROW 222
#define TK_ROWTS 223
#define TK_QSTART 224
#define TK_QEND 225
#define TK_QDURATION 226
#define TK_WSTART 227
#define TK_WEND 228
#define TK_WDURATION 229
#define TK_IROWTS 230
#define TK_ISFILLED 231
#define TK_CAST 232
#define TK_NOW 233
#define TK_TODAY 234
#define TK_TIMEZONE 235
#define TK_CLIENT_VERSION 236
#define TK_SERVER_VERSION 237
#define TK_SERVER_STATUS 238
#define TK_CURRENT_USER 239
#define TK_CASE 240
#define TK_WHEN 241
#define TK_THEN 242
#define TK_ELSE 243
#define TK_BETWEEN 244
#define TK_IS 245
#define TK_NK_LT 246
#define TK_NK_GT 247
#define TK_NK_LE 248
#define TK_NK_GE 249
#define TK_NK_NE 250
#define TK_MATCH 251
#define TK_NMATCH 252
#define TK_CONTAINS 253
#define TK_IN 254
#define TK_JOIN 255
#define TK_INNER 256
#define TK_SELECT 257
#define TK_DISTINCT 258
#define TK_WHERE 259
#define TK_PARTITION 260
#define TK_BY 261
#define TK_SESSION 262
#define TK_STATE_WINDOW 263
#define TK_EVENT_WINDOW 264
#define TK_SLIDING 265
#define TK_FILL 266
#define TK_VALUE 267
#define TK_VALUE_F 268
#define TK_NONE 269
#define TK_PREV 270
#define TK_NULL_F 271
#define TK_LINEAR 272
#define TK_NEXT 273
#define TK_HAVING 274
#define TK_RANGE 275
#define TK_EVERY 276
#define TK_ORDER 277
#define TK_SLIMIT 278
#define TK_SOFFSET 279
#define TK_LIMIT 280
#define TK_OFFSET 281
#define TK_ASC 282
#define TK_NULLS 283
#define TK_ABORT 284
#define TK_AFTER 285
#define TK_ATTACH 286
#define TK_BEFORE 287
#define TK_BEGIN 288
#define TK_BITAND 289
#define TK_BITNOT 290
#define TK_BITOR 291
#define TK_BLOCKS 292
#define TK_CHANGE 293
#define TK_COMMA 294
#define TK_CONCAT 295
#define TK_CONFLICT 296
#define TK_COPY 297
#define TK_DEFERRED 298
#define TK_DELIMITERS 299
#define TK_DETACH 300
#define TK_DIVIDE 301
#define TK_DOT 302
#define TK_EACH 303
#define TK_FAIL 304
#define TK_FILE 305
#define TK_FOR 306
#define TK_GLOB 307
#define TK_ID 308
#define TK_IMMEDIATE 309
#define TK_IMPORT 310
#define TK_INITIALLY 311
#define TK_INSTEAD 312
#define TK_ISNULL 313
#define TK_KEY 314
#define TK_MODULES 315
#define TK_NK_BITNOT 316
#define TK_NK_SEMI 317
#define TK_NOTNULL 318
#define TK_OF 319
#define TK_PLUS 320
#define TK_PRIVILEGE 321
#define TK_RAISE 322
#define TK_RESTRICT 323
#define TK_ROW 324
#define TK_SEMI 325
#define TK_STAR 326
#define TK_STATEMENT 327
#define TK_STRICT 328
#define TK_STRING 329
#define TK_TIMES 330
#define TK_VALUES 331
#define TK_VARIABLE 332
#define TK_VIEW 333
#define TK_WAL 334

#define TK_NK_SPACE 600
#define TK_NK_COMMENT 601
@@ -350,6 +350,11 @@ typedef struct SDropComponentNodeStmt {
  int32_t dnodeId;
} SDropComponentNodeStmt;

typedef struct SRestoreComponentNodeStmt {
  ENodeType type;
  int32_t   dnodeId;
} SRestoreComponentNodeStmt;

typedef struct SCreateTopicStmt {
  ENodeType type;
  char      topicName[TSDB_TABLE_NAME_LEN];
@@ -211,6 +211,10 @@ typedef enum ENodeType {
  QUERY_NODE_SHOW_DB_ALIVE_STMT,
  QUERY_NODE_SHOW_CLUSTER_ALIVE_STMT,
  QUERY_NODE_BALANCE_VGROUP_LEADER_STMT,
  QUERY_NODE_RESTORE_DNODE_STMT,
  QUERY_NODE_RESTORE_QNODE_STMT,
  QUERY_NODE_RESTORE_MNODE_STMT,
  QUERY_NODE_RESTORE_VNODE_STMT,
  QUERY_NODE_PAUSE_STREAM_STMT,
  QUERY_NODE_RESUME_STREAM_STMT,
@@ -133,6 +133,16 @@ int32_t tfsMkdirAt(STfs *pTfs, const char *rname, SDiskID diskId);
 */
int32_t tfsMkdirRecurAt(STfs *pTfs, const char *rname, SDiskID diskId);

/**
 * @brief Check whether a directory exists in tfs.
 *
 * @param pTfs The fs object.
 * @param rname The relative name of the directory.
 * @param diskId The disk ID.
 * @return true if the directory exists, false if it does not.
 */
bool tfsDirExistAt(STfs *pTfs, const char *rname, SDiskID diskId);

/**
 * @brief Remove directory at all levels in tfs.
 *
@@ -407,6 +407,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_SNODE_NOT_DEPLOYED      TAOS_DEF_ERROR_CODE(0, 0x0411)
#define TSDB_CODE_MNODE_NOT_CATCH_UP      TAOS_DEF_ERROR_CODE(0, 0x0412) // internal
#define TSDB_CODE_MNODE_ALREADY_IS_VOTER  TAOS_DEF_ERROR_CODE(0, 0x0413) // internal
#define TSDB_CODE_MNODE_ONLY_TWO_MNODE    TAOS_DEF_ERROR_CODE(0, 0x0414) // internal

// vnode
// #define TSDB_CODE_VND_ACTION_IN_PROGRESS       TAOS_DEF_ERROR_CODE(0, 0x0500) // 2.x

@@ -443,6 +444,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_VND_QUERY_BUSY          TAOS_DEF_ERROR_CODE(0, 0x0531)
#define TSDB_CODE_VND_NOT_CATCH_UP        TAOS_DEF_ERROR_CODE(0, 0x0532) // internal
#define TSDB_CODE_VND_ALREADY_IS_VOTER    TAOS_DEF_ERROR_CODE(0, 0x0533) // internal
#define TSDB_CODE_VND_DIR_ALREADY_EXIST   TAOS_DEF_ERROR_CODE(0, 0x0534)

// tsdb
#define TSDB_CODE_TDB_INVALID_TABLE_ID    TAOS_DEF_ERROR_CODE(0, 0x0600)
@@ -737,28 +739,21 @@ int32_t* taosGetErrno();
//tsma
#define TSDB_CODE_TSMA_INIT_FAILED          TAOS_DEF_ERROR_CODE(0, 0x3100)
#define TSDB_CODE_TSMA_ALREADY_EXIST        TAOS_DEF_ERROR_CODE(0, 0x3101)
-#define TSDB_CODE_TSMA_NO_INDEX_IN_META     TAOS_DEF_ERROR_CODE(0, 0x3102)
-#define TSDB_CODE_TSMA_INVALID_ENV          TAOS_DEF_ERROR_CODE(0, 0x3103)
-#define TSDB_CODE_TSMA_INVALID_STAT         TAOS_DEF_ERROR_CODE(0, 0x3104)
-#define TSDB_CODE_TSMA_INVALID_PTR          TAOS_DEF_ERROR_CODE(0, 0x3105)
-#define TSDB_CODE_TSMA_INVALID_PARA         TAOS_DEF_ERROR_CODE(0, 0x3106)
-#define TSDB_CODE_TSMA_NO_INDEX_IN_CACHE    TAOS_DEF_ERROR_CODE(0, 0x3107)
+#define TSDB_CODE_TSMA_INVALID_ENV          TAOS_DEF_ERROR_CODE(0, 0x3102)
+#define TSDB_CODE_TSMA_INVALID_STAT         TAOS_DEF_ERROR_CODE(0, 0x3103)
+#define TSDB_CODE_TSMA_INVALID_PTR          TAOS_DEF_ERROR_CODE(0, 0x3104)
+#define TSDB_CODE_TSMA_INVALID_PARA         TAOS_DEF_ERROR_CODE(0, 0x3105)

//rsma
#define TSDB_CODE_RSMA_INVALID_ENV          TAOS_DEF_ERROR_CODE(0, 0x3150)
#define TSDB_CODE_RSMA_INVALID_STAT         TAOS_DEF_ERROR_CODE(0, 0x3151)
#define TSDB_CODE_RSMA_QTASKINFO_CREATE     TAOS_DEF_ERROR_CODE(0, 0x3152)
-#define TSDB_CODE_RSMA_FS_COMMIT            TAOS_DEF_ERROR_CODE(0, 0x3153)
-#define TSDB_CODE_RSMA_REMOVE_EXISTS        TAOS_DEF_ERROR_CODE(0, 0x3154)
-#define TSDB_CODE_RSMA_FETCH_MSG_MSSED_UP   TAOS_DEF_ERROR_CODE(0, 0x3155)
-#define TSDB_CODE_RSMA_EMPTY_INFO           TAOS_DEF_ERROR_CODE(0, 0x3156)
-#define TSDB_CODE_RSMA_INVALID_SCHEMA       TAOS_DEF_ERROR_CODE(0, 0x3157)
-#define TSDB_CODE_RSMA_REGEX_MATCH          TAOS_DEF_ERROR_CODE(0, 0x3158)
-#define TSDB_CODE_RSMA_STREAM_STATE_OPEN    TAOS_DEF_ERROR_CODE(0, 0x3159)
-#define TSDB_CODE_RSMA_STREAM_STATE_COMMIT  TAOS_DEF_ERROR_CODE(0, 0x3160)
-#define TSDB_CODE_RSMA_FS_REF               TAOS_DEF_ERROR_CODE(0, 0x3161)
-#define TSDB_CODE_RSMA_FS_SYNC              TAOS_DEF_ERROR_CODE(0, 0x3162)
-#define TSDB_CODE_RSMA_FS_UPDATE            TAOS_DEF_ERROR_CODE(0, 0x3163)
+#define TSDB_CODE_RSMA_INVALID_SCHEMA       TAOS_DEF_ERROR_CODE(0, 0x3153)
+#define TSDB_CODE_RSMA_STREAM_STATE_OPEN    TAOS_DEF_ERROR_CODE(0, 0x3154)
+#define TSDB_CODE_RSMA_STREAM_STATE_COMMIT  TAOS_DEF_ERROR_CODE(0, 0x3155)
+#define TSDB_CODE_RSMA_FS_REF               TAOS_DEF_ERROR_CODE(0, 0x3156)
+#define TSDB_CODE_RSMA_FS_SYNC              TAOS_DEF_ERROR_CODE(0, 0x3157)
+#define TSDB_CODE_RSMA_FS_UPDATE            TAOS_DEF_ERROR_CODE(0, 0x3158)

//index
#define TSDB_CODE_INDEX_REBUILDING          TAOS_DEF_ERROR_CODE(0, 0x3200)
@@ -267,6 +267,9 @@ typedef enum ELogicConditionType {
#define TSDB_DNODE_CONFIG_LEN 128
#define TSDB_DNODE_VALUE_LEN  256

#define TSDB_ACTIVE_KEY_LEN      109  // history 109:?
#define TSDB_CONN_ACTIVE_KEY_LEN 257  // history 257:?

#define TSDB_DEFAULT_PKT_SIZE 65480  // same as RPC_MAX_UDP_SIZE

#define TSDB_PAYLOAD_SIZE TSDB_DEFAULT_PKT_SIZE
@@ -2543,6 +2543,31 @@ int32_t tmq_get_topic_assignment(tmq_t* tmq, const char* pTopicName, tmq_topic_a
      *numOfAssignment = num;
    }

    for (int32_t j = 0; j < (*numOfAssignment); ++j) {
      tmq_topic_assignment* p = &(*assignment)[j];

      for (int32_t i = 0; i < taosArrayGetSize(pTopic->vgs); ++i) {
        SMqClientVg* pClientVg = taosArrayGet(pTopic->vgs, i);
        if (pClientVg->vgId != p->vgId) {
          continue;
        }

        SVgOffsetInfo* pOffsetInfo = &pClientVg->offsetInfo;

        pOffsetInfo->currentOffset.type = TMQ_OFFSET__LOG;

        char offsetBuf[80] = {0};
        tFormatOffset(offsetBuf, tListLen(offsetBuf), &pOffsetInfo->currentOffset);

        tscDebug("vgId:%d offset is update to:%s", p->vgId, offsetBuf);

        pOffsetInfo->walVerBegin = p->begin;
        pOffsetInfo->walVerEnd = p->end;
        pOffsetInfo->currentOffset.version = p->currentOffset;
        pOffsetInfo->committedOffset.version = p->currentOffset;
      }
    }

    destroyCommonInfo(pCommon);
    return code;
  } else {

@@ -2590,7 +2615,8 @@ int32_t tmq_offset_seek(tmq_t* tmq, const char* pTopicName, int32_t vgId, int64_
  }

  if (offset < pOffsetInfo->walVerBegin || offset > pOffsetInfo->walVerEnd) {
-    tscError("consumer:0x%" PRIx64 " invalid seek params, offset:%" PRId64, tmq->consumerId, offset);
+    tscError("consumer:0x%" PRIx64 " invalid seek params, offset:%" PRId64 ", valid range:[%" PRId64 ", %" PRId64 "]",
+             tmq->consumerId, offset, pOffsetInfo->walVerBegin, pOffsetInfo->walVerEnd);
    return TSDB_CODE_INVALID_PARA;
  }
@@ -1133,6 +1133,8 @@ TEST(clientCase, sub_tb_test) {
    return;
  }

  tmq_offset_seek(tmq, "topic_t1", pAssign[0].vgId, 0);

  while (1) {
    TAOS_RES* pRes = tmq_consumer_poll(tmq, timeout);
<<<<<<< HEAD
@@ -35,6 +35,10 @@ static const SSysDbTableSchema dnodesSchema[] = {
    {.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true},
    {.name = "reboot_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true},
    {.name = "note", .bytes = 256 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
#ifdef TD_ENTERPRISE
    {.name = "active_code", .bytes = TSDB_ACTIVE_KEY_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
    {.name = "c_active_code", .bytes = TSDB_CONN_ACTIVE_KEY_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
#endif
};

static const SSysDbTableSchema mnodesSchema[] = {
@@ -1723,6 +1723,33 @@ int32_t tDeserializeSDropDnodeReq(void *buf, int32_t bufLen, SDropDnodeReq *pReq
  return 0;
}

int32_t tSerializeSRestoreDnodeReq(void *buf, int32_t bufLen, SRestoreDnodeReq *pReq) {
  SEncoder encoder = {0};
  tEncoderInit(&encoder, buf, bufLen);

  if (tStartEncode(&encoder) < 0) return -1;
  if (tEncodeI32(&encoder, pReq->dnodeId) < 0) return -1;
  if (tEncodeI8(&encoder, pReq->restoreType) < 0) return -1;
  tEndEncode(&encoder);

  int32_t tlen = encoder.pos;
  tEncoderClear(&encoder);
  return tlen;
}

int32_t tDeserializeSRestoreDnodeReq(void *buf, int32_t bufLen, SRestoreDnodeReq *pReq) {
  SDecoder decoder = {0};
  tDecoderInit(&decoder, buf, bufLen);

  if (tStartDecode(&decoder) < 0) return -1;
  if (tDecodeI32(&decoder, &pReq->dnodeId) < 0) return -1;
  if (tDecodeI8(&decoder, &pReq->restoreType) < 0) return -1;
  tEndDecode(&decoder);

  tDecoderClear(&decoder);
  return 0;
}

int32_t tSerializeSMCfgDnodeReq(void *buf, int32_t bufLen, SMCfgDnodeReq *pReq) {
  SEncoder encoder = {0};
  tEncoderInit(&encoder, buf, bufLen);

@@ -7652,7 +7679,6 @@ void tDestroySSubmitRsp2(SSubmitRsp2 *pRsp, int32_t flag) {
  }
}

<<<<<<< HEAD
int32_t tSerializeSMPauseStreamReq(void *buf, int32_t bufLen, const SMPauseStreamReq *pReq) {
  SEncoder encoder = {0};
  tEncoderInit(&encoder, buf, bufLen);

@@ -7705,7 +7731,6 @@ int32_t tDeserializeSMResumeStreamReq(void *buf, int32_t bufLen, SMResumeStreamR
  return 0;
}

=======
int32_t tEncodeMqSubTopicEp(void **buf, const SMqSubTopicEp *pTopicEp) {
  int32_t tlen = 0;
  tlen += taosEncodeString(buf, pTopicEp->topic);

@@ -7743,4 +7768,3 @@ void tDeleteMqSubTopicEp(SMqSubTopicEp *pSubTopicEp) {
  pSubTopicEp->schema.nCols = 0;
  taosArrayDestroy(pSubTopicEp->vgs);
}
>>>>>>> enh/3.0
@@ -177,6 +177,7 @@ SArray *mmGetMsgHandles() {
  if (dmSetMgmtHandle(pArray, TDMT_MND_SERVER_VERSION, mmPutMsgToReadQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_CREATE_INDEX, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_DROP_INDEX, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_MND_RESTORE_DNODE, mmPutMsgToWriteQueue, 0) == NULL) goto _OVER;

  if (dmSetMgmtHandle(pArray, TDMT_SCH_QUERY, mmPutMsgToQueryQueue, 1) == NULL) goto _OVER;
  if (dmSetMgmtHandle(pArray, TDMT_SCH_MERGE_QUERY, mmPutMsgToQueryQueue, 1) == NULL) goto _OVER;
@@ -255,7 +255,7 @@ int32_t vmProcessCreateVnodeReq(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) {

  SVnodeObj *pVnode = vmAcquireVnode(pMgmt, req.vgId);
  if (pVnode != NULL) {
-    dInfo("vgId:%d, already exist", req.vgId);
+    dError("vgId:%d, already exist", req.vgId);
    tFreeSCreateVnodeReq(&req);
    vmReleaseVnode(pMgmt, pVnode);
    terrno = TSDB_CODE_VND_ALREADY_EXIST;

@@ -264,7 +264,22 @@ int32_t vmProcessCreateVnodeReq(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) {
  }

  snprintf(path, TSDB_FILENAME_LEN, "vnode%svnode%d", TD_DIRSEP, vnodeCfg.vgId);
-  if (vnodeCreate(path, &vnodeCfg, pMgmt->pTfs) < 0) {
+
+  if (pMgmt->pTfs) {
+    if (tfsDirExistAt(pMgmt->pTfs, path, (SDiskID){0})) {
+      terrno = TSDB_CODE_VND_DIR_ALREADY_EXIST;
+      dError("vgId:%d, failed to restore vnode since %s", req.vgId, terrstr());
+      return -1;
+    }
+  } else {
+    if (taosDirExist(path)) {
+      terrno = TSDB_CODE_VND_DIR_ALREADY_EXIST;
+      dError("vgId:%d, failed to restore vnode since %s", req.vgId, terrstr());
+      return -1;
+    }
+  }
+
+  if (vnodeCreate(path, &vnodeCfg, pMgmt->pTfs) < 0) {
    tFreeSCreateVnodeReq(&req);
    dError("vgId:%d, failed to create vnode since %s", req.vgId, terrstr());
    code = terrno;

@@ -344,6 +359,7 @@ int32_t vmProcessAlterVnodeTypeReq(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) {
  ESyncRole role = vnodeGetRole(pVnode->pImpl);
  dInfo("vgId:%d, checking node role:%d", req.vgId, role);
  if (role == TAOS_SYNC_ROLE_VOTER) {
    dError("vgId:%d, failed to alter vnode type since node already is role:%d", req.vgId, role);
    terrno = TSDB_CODE_VND_ALREADY_IS_VOTER;
    vmReleaseVnode(pMgmt, pVnode);
    return -1;

@@ -380,7 +396,7 @@ int32_t vmProcessAlterVnodeTypeReq(SVnodeMgmt *pMgmt, SRpcMsg *pMsg) {
  }

  SReplica *pReplica = NULL;
-  if (req.selfIndex > 0) {
+  if (req.selfIndex != -1) {
    pReplica = &req.replicas[req.selfIndex];
  }
  else {
@@ -218,6 +218,7 @@ static int32_t dmProcessAlterNodeTypeReq(EDndNodeType ntype, SRpcMsg *pMsg) {
  ESyncRole role = (*pWrapper->func.nodeRoleFp)(pWrapper->pMgmt);
  dInfo("node:%s, checking node role:%d", pWrapper->name, role);
  if (role == TAOS_SYNC_ROLE_VOTER) {
    dError("node:%s, failed to alter node type since node already is role:%d", pWrapper->name, role);
    terrno = TSDB_CODE_MNODE_ALREADY_IS_VOTER;
    return -1;
  }
@@ -173,7 +173,7 @@ _OVER:
  dmResetEps(pData, pData->dnodeEps);

  if (pData->oldDnodeEps == NULL && dmIsEpChanged(pData, pData->dnodeId, tsLocalEp)) {
-    dError("localEp %s different with %s and need reconfigured", tsLocalEp, file);
+    dError("localEp %s different with %s and need to be reconfigured", tsLocalEp, file);
    terrno = TSDB_CODE_INVALID_CFG;
    return -1;
  }
@@ -6,6 +6,7 @@ IF (TD_ENTERPRISE)
  LIST(APPEND MNODE_SRC ${TD_ENTERPRISE_DIR}/src/plugins/privilege/src/privilege.c)
  LIST(APPEND MNODE_SRC ${TD_ENTERPRISE_DIR}/src/plugins/mnode/src/mndDb.c)
  LIST(APPEND MNODE_SRC ${TD_ENTERPRISE_DIR}/src/plugins/mnode/src/mndVgroup.c)
  LIST(APPEND MNODE_SRC ${TD_ENTERPRISE_DIR}/src/plugins/mnode/src/mndDnode.c)
ENDIF ()

add_library(mnode STATIC ${MNODE_SRC})

@@ -206,6 +206,8 @@ typedef struct {
  uint16_t port;
  char     fqdn[TSDB_FQDN_LEN];
  char     ep[TSDB_EP_LEN];
  char     active[TSDB_ACTIVE_KEY_LEN];
  char     connActive[TSDB_CONN_ACTIVE_KEY_LEN];
} SDnodeObj;

typedef struct {
@@ -29,7 +29,7 @@ void mndReleaseDnode(SMnode *pMnode, SDnodeObj *pDnode);
SEpSet  mndGetDnodeEpset(SDnodeObj *pDnode);
int32_t mndGetDnodeSize(SMnode *pMnode);
bool    mndIsDnodeOnline(SDnodeObj *pDnode, int64_t curMs);
-void    mndGetDnodeData(SMnode *pMnode, SArray *pDnodeEps);
+void    mndGetDnodeData(SMnode *pMnode, SArray *pDnodeInfo);

#ifdef __cplusplus
}

@@ -29,6 +29,10 @@ void mndReleaseMnode(SMnode *pMnode, SMnodeObj *pObj);
bool    mndIsMnode(SMnode *pMnode, int32_t dnodeId);
void    mndGetMnodeEpSet(SMnode *pMnode, SEpSet *pEpSet);
int32_t mndSetDropMnodeInfoToTrans(SMnode *pMnode, STrans *pTrans, SMnodeObj *pObj, bool force);
int32_t mndSetRestoreCreateMnodeRedoActions(SMnode *pMnode, STrans *pTrans, SDnodeObj *pDnode, SMnodeObj *pObj);
int32_t mndSetCreateMnodeCommitLogs(SMnode *pMnode, STrans *pTrans, SMnodeObj *pObj);
int32_t mndSetRestoreAlterMnodeTypeRedoActions(SMnode *pMnode, STrans *pTrans, SDnodeObj *pDnode, SMnodeObj *pObj);
int32_t mndSetRestoreCreateMnodeRedoLogs(SMnode *pMnode, STrans *pTrans, SMnodeObj *pObj);

#ifdef __cplusplus
}

@@ -30,6 +30,9 @@ SQnodeObj *mndAcquireQnode(SMnode *pMnode, int32_t qnodeId);
void    mndReleaseQnode(SMnode *pMnode, SQnodeObj *pObj);
int32_t mndCreateQnodeList(SMnode *pMnode, SArray **pList, int32_t limit);
int32_t mndSetDropQnodeInfoToTrans(SMnode *pMnode, STrans *pTrans, SQnodeObj *pObj, bool force);
bool    mndQnodeInDnode(SQnodeObj *pQnode, int32_t dnodeId);
int32_t mndSetCreateQnodeCommitLogs(STrans *pTrans, SQnodeObj *pObj);
int32_t mndSetCreateQnodeRedoActions(STrans *pTrans, SDnodeObj *pDnode, SQnodeObj *pObj);

#ifdef __cplusplus
}

@@ -49,6 +49,9 @@ int32_t mndBuildCompactVgroupAction(SMnode *pMnode, STrans *pTrans, SDbObj *pDb,
void *mndBuildCreateVnodeReq(SMnode *, SDnodeObj *pDnode, SDbObj *pDb, SVgObj *pVgroup, int32_t *pContLen);
void *mndBuildDropVnodeReq(SMnode *, SDnodeObj *pDnode, SDbObj *pDb, SVgObj *pVgroup, int32_t *pContLen);
bool  mndVgroupInDb(SVgObj *pVgroup, int64_t dbUid);
bool  mndVgroupInDnode(SVgObj *pVgroup, int32_t dnodeId);
int32_t mndBuildRestoreAlterVgroupAction(SMnode *pMnode, STrans *pTrans, SDbObj *db, SVgObj *pVgroup,
                                         SDnodeObj *pDnode);

int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj *pVgroup);
@ -27,7 +27,7 @@
|
|||
#include "tmisce.h"
|
||||
#include "mndCluster.h"
|
||||
|
||||
#define TSDB_DNODE_VER_NUMBER 1
|
||||
#define TSDB_DNODE_VER_NUMBER 2
|
||||
#define TSDB_DNODE_RESERVE_SIZE 64
|
||||
|
||||
static const char *offlineReason[] = {
|
||||
|
@ -58,6 +58,7 @@ static int32_t mndProcessDropDnodeReq(SRpcMsg *pReq);
|
|||
static int32_t mndProcessConfigDnodeReq(SRpcMsg *pReq);
|
||||
static int32_t mndProcessConfigDnodeRsp(SRpcMsg *pRsp);
|
||||
static int32_t mndProcessStatusReq(SRpcMsg *pReq);
|
||||
static int32_t mndProcessRestoreDnodeReq(SRpcMsg *pReq);
|
||||
|
||||
static int32_t mndRetrieveConfigs(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlock, int32_t rows);
|
||||
static void mndCancelGetNextConfig(SMnode *pMnode, void *pIter);
|
||||
|
@ -83,6 +84,7 @@ int32_t mndInitDnode(SMnode *pMnode) {
|
|||
mndSetMsgHandle(pMnode, TDMT_MND_STATUS, mndProcessStatusReq);
|
||||
mndSetMsgHandle(pMnode, TDMT_MND_DNODE_LIST, mndProcessDnodeListReq);
|
||||
mndSetMsgHandle(pMnode, TDMT_MND_SHOW_VARIABLES, mndProcessShowVariablesReq);
|
||||
mndSetMsgHandle(pMnode, TDMT_MND_RESTORE_DNODE, mndProcessRestoreDnodeReq);
|
||||
|
||||
mndAddShowRetrieveHandle(pMnode, TSDB_MGMT_TABLE_CONFIGS, mndRetrieveConfigs);
|
||||
mndAddShowFreeIterHandle(pMnode, TSDB_MGMT_TABLE_CONFIGS, mndCancelGetNextConfig);
|
||||
|
@ -139,6 +141,10 @@ static SSdbRaw *mndDnodeActionEncode(SDnodeObj *pDnode) {
|
|||
SDB_SET_INT16(pRaw, dataPos, pDnode->port, _OVER)
|
||||
SDB_SET_BINARY(pRaw, dataPos, pDnode->fqdn, TSDB_FQDN_LEN, _OVER)
|
||||
SDB_SET_RESERVE(pRaw, dataPos, TSDB_DNODE_RESERVE_SIZE, _OVER)
|
||||
SDB_SET_INT16(pRaw, dataPos, TSDB_ACTIVE_KEY_LEN, _OVER)
|
||||
SDB_SET_BINARY(pRaw, dataPos, pDnode->active, TSDB_ACTIVE_KEY_LEN, _OVER)
|
||||
SDB_SET_INT16(pRaw, dataPos, TSDB_CONN_ACTIVE_KEY_LEN, _OVER)
|
||||
SDB_SET_BINARY(pRaw, dataPos, pDnode->connActive, TSDB_CONN_ACTIVE_KEY_LEN, _OVER)
|
||||
SDB_SET_DATALEN(pRaw, dataPos, _OVER);
|
||||
|
||||
terrno = 0;
|
||||
|
@ -161,7 +167,7 @@ static SSdbRow *mndDnodeActionDecode(SSdbRaw *pRaw) {
|
|||
|
||||
int8_t sver = 0;
|
||||
if (sdbGetRawSoftVer(pRaw, &sver) != 0) goto _OVER;
|
||||
if (sver != TSDB_DNODE_VER_NUMBER) {
|
||||
if (sver < 1 || sver > TSDB_DNODE_VER_NUMBER) {
|
||||
terrno = TSDB_CODE_SDB_INVALID_DATA_VER;
|
||||
goto _OVER;
|
||||
}
|
||||
|
@ -179,6 +185,13 @@ static SSdbRow *mndDnodeActionDecode(SSdbRaw *pRaw) {
|
|||
SDB_GET_INT16(pRaw, dataPos, &pDnode->port, _OVER)
|
||||
SDB_GET_BINARY(pRaw, dataPos, pDnode->fqdn, TSDB_FQDN_LEN, _OVER)
|
||||
SDB_GET_RESERVE(pRaw, dataPos, TSDB_DNODE_RESERVE_SIZE, _OVER)
|
||||
if (sver > 1) {
|
||||
int16_t keyLen = 0;
|
||||
SDB_GET_INT16(pRaw, dataPos, &keyLen, _OVER)
|
||||
SDB_GET_BINARY(pRaw, dataPos, pDnode->active, keyLen, _OVER)
|
||||
SDB_GET_INT16(pRaw, dataPos, &keyLen, _OVER)
|
||||
SDB_GET_BINARY(pRaw, dataPos, pDnode->connActive, keyLen, _OVER)
|
||||
}
|
||||
|
||||
terrno = 0;
|
||||
if (tmsgUpdateDnodeInfo(&pDnode->id, NULL, pDnode->fqdn, &pDnode->port)) {
|
||||
|
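The encode/decode pair above bumps the persisted dnode record to version 2 and appends two length-prefixed key fields, while the decoder keeps accepting version-1 rows by gating the new reads on sver > 1. Below is a minimal standalone sketch of the same forward-compatible layout; the Rec type, encodeV2, and KEY_LEN are hypothetical stand-ins for the real SDB_SET/SDB_GET macros and constants.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define KEY_LEN 32

/* Hypothetical v2 record: the v1 fields plus one length-prefixed string. */
typedef struct {
  int16_t port;
  char    active[KEY_LEN];
} Rec;

static size_t encodeV2(const Rec *r, uint8_t *buf) {
  size_t  pos = 0;
  int16_t len = KEY_LEN;
  memcpy(buf + pos, &r->port, sizeof(r->port)); pos += sizeof(r->port);
  memcpy(buf + pos, &len, sizeof(len));         pos += sizeof(len);  /* length prefix */
  memcpy(buf + pos, r->active, (size_t)len);    pos += (size_t)len;  /* payload       */
  return pos;
}

static int decode(const uint8_t *buf, int8_t sver, Rec *r) {
  if (sver < 1 || sver > 2) return -1;          /* reject unknown versions */
  size_t pos = 0;
  memcpy(&r->port, buf + pos, sizeof(r->port)); pos += sizeof(r->port);
  memset(r->active, 0, sizeof(r->active));
  if (sver > 1) {                               /* fields that exist only in v2 */
    int16_t len = 0;
    memcpy(&len, buf + pos, sizeof(len));       pos += sizeof(len);
    memcpy(r->active, buf + pos, (size_t)len);
  }
  return 0;
}

int main(void) {
  Rec in = {.port = 6030}, out;
  strcpy(in.active, "key");
  uint8_t buf[64];
  encodeV2(&in, buf);
  decode(buf, 2, &out);
  printf("%d %s\n", out.port, out.active);  /* prints: 6030 key */
  return 0;
}

Writing the length prefix rather than a bare fixed-size blob is what lets a future version grow the field without another format break.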
@@ -294,6 +307,11 @@ int32_t mndGetDnodeSize(SMnode *pMnode) {
  return sdbGetSize(pSdb, SDB_DNODE);
}

int32_t mndGetDbSize(SMnode *pMnode) {
  SSdb *pSdb = pMnode->pSdb;
  return sdbGetSize(pSdb, SDB_DB);
}

bool mndIsDnodeOnline(SDnodeObj *pDnode, int64_t curMs) {
  int64_t interval = TABS(pDnode->lastAccessTime - curMs);
  if (interval > 5000 * (int64_t)tsStatusInterval) {
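For context, the online check that closes this hunk marks a dnode offline once the gap since its last status report exceeds 5000 * tsStatusInterval, which reads as five heartbeat periods expressed in milliseconds (my reading; the factor is not explained in the diff). A self-contained sketch with hypothetical names:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static int statusIntervalSec = 1;  /* stands in for tsStatusInterval */

/* A node counts as online while its last report is within
   five heartbeat periods (1000 ms per second x 5). */
static bool isOnline(int64_t lastAccessMs, int64_t nowMs) {
  int64_t gap = llabs(lastAccessMs - nowMs);
  return gap <= 5000 * (int64_t)statusIntervalSec;
}

int main(void) {
  printf("%d %d\n", isOnline(10000, 12000), isOnline(10000, 20000));  /* 1 0 */
  return 0;
}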
@@ -305,7 +323,7 @@ bool mndIsDnodeOnline(SDnodeObj *pDnode, int64_t curMs) {
  return true;
}

void mndGetDnodeData(SMnode *pMnode, SArray *pDnodeEps) {
static void mndGetDnodeEps(SMnode *pMnode, SArray *pDnodeEps) {
  SSdb *pSdb = pMnode->pSdb;

  int32_t numOfEps = 0;

@@ -330,6 +348,34 @@ void mndGetDnodeData(SMnode *pMnode, SArray *pDnodeEps) {
  }
}

void mndGetDnodeData(SMnode *pMnode, SArray *pDnodeInfo) {
  SSdb *pSdb = pMnode->pSdb;

  int32_t numOfEps = 0;
  void   *pIter = NULL;
  while (1) {
    SDnodeObj *pDnode = NULL;
    ESdbStatus objStatus = 0;
    pIter = sdbFetchAll(pSdb, SDB_DNODE, pIter, (void **)&pDnode, &objStatus, true);
    if (pIter == NULL) break;

    SDnodeInfo dInfo;
    dInfo.id = pDnode->id;
    dInfo.ep.port = pDnode->port;
    tstrncpy(dInfo.ep.fqdn, pDnode->fqdn, TSDB_FQDN_LEN);
    tstrncpy(dInfo.active, pDnode->active, TSDB_ACTIVE_KEY_LEN);
    tstrncpy(dInfo.connActive, pDnode->connActive, TSDB_CONN_ACTIVE_KEY_LEN);
    sdbRelease(pSdb, pDnode);
    if (mndIsMnode(pMnode, pDnode->id)) {
      dInfo.isMnode = 1;
    } else {
      dInfo.isMnode = 0;
    }

    taosArrayPush(pDnodeInfo, &dInfo);
  }
}

static int32_t mndCheckClusterCfgPara(SMnode *pMnode, SDnodeObj *pDnode, const SClusterCfg *pCfg) {
  if (pCfg->statusInterval != tsStatusInterval) {
    mError("dnode:%d, statusInterval:%d inconsistent with cluster:%d", pDnode->id, pCfg->statusInterval,

@@ -536,7 +582,7 @@ static int32_t mndProcessStatusReq(SRpcMsg *pReq) {
    goto _OVER;
  }

  mndGetDnodeData(pMnode, statusRsp.pDnodeEps);
  mndGetDnodeEps(pMnode, statusRsp.pDnodeEps);

  int32_t contLen = tSerializeSStatusRsp(NULL, 0, &statusRsp);
  void   *pHead = rpcMallocCont(contLen);

@@ -745,6 +791,18 @@ _OVER:
  return code;
}

extern int32_t mndProcessRestoreDnodeReqImpl(SRpcMsg *pReq);

int32_t mndProcessRestoreDnodeReq(SRpcMsg *pReq){
  return mndProcessRestoreDnodeReqImpl(pReq);
}

#ifndef TD_ENTERPRISE
int32_t mndProcessRestoreDnodeReqImpl(SRpcMsg *pReq){
  return 0;
}
#endif

static int32_t mndDropDnode(SMnode *pMnode, SRpcMsg *pReq, SDnodeObj *pDnode, SMnodeObj *pMObj, SQnodeObj *pQObj,
                            SSnodeObj *pSObj, int32_t numOfVnodes, bool force) {
  int32_t code = -1;
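The hunk above routes RESTORE DNODE handling through an extern Impl symbol and compiles a no-op fallback unless TD_ENTERPRISE is defined, so an enterprise build can supply the real implementation at link time. A minimal sketch of that community/enterprise split; Msg and the function names are hypothetical:

#include <stdio.h>

/* Hypothetical message type standing in for SRpcMsg. */
typedef struct { int id; } Msg;

/* The public entry point always delegates to the Impl symbol... */
extern int processRestoreReqImpl(Msg *m);

int processRestoreReq(Msg *m) { return processRestoreReqImpl(m); }

/* ...and a default no-op Impl is compiled in unless the enterprise
   build provides its own definition of the same symbol. */
#ifndef TD_ENTERPRISE
int processRestoreReqImpl(Msg *m) {
  (void)m;
  return 0;
}
#endif

int main(void) {
  Msg m = {1};
  printf("%d\n", processRestoreReq(&m));  /* 0 in the community build */
  return 0;
}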
@@ -1041,6 +1099,7 @@ static int32_t mndRetrieveDnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
  ESdbStatus objStatus = 0;
  SDnodeObj *pDnode = NULL;
  int64_t    curMs = taosGetTimestampMs();
  char       buf[TSDB_CONN_ACTIVE_KEY_LEN + VARSTR_HEADER_SIZE];  // make sure TSDB_CONN_ACTIVE_KEY_LEN >= TSDB_EP_LEN

  while (numOfRows < rows) {
    pShow->pIter = sdbFetchAll(pSdb, SDB_DNODE, pShow->pIter, (void **)&pDnode, &objStatus, true);

@@ -1052,7 +1111,6 @@ static int32_t mndRetrieveDnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
    SColumnInfoData *pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
    colDataSetVal(pColInfo, numOfRows, (const char *)&pDnode->id, false);

    char buf[tListLen(pDnode->ep) + VARSTR_HEADER_SIZE] = {0};
    STR_WITH_MAXSIZE_TO_VARSTR(buf, pDnode->ep, pShow->pMeta->pSchemas[cols].bytes);

    pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);

@@ -1077,10 +1135,9 @@ static int32_t mndRetrieveDnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
      status = "offline";
    }

    char b1[16] = {0};
    STR_TO_VARSTR(b1, status);
    STR_TO_VARSTR(buf, status);
    pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
    colDataSetVal(pColInfo, numOfRows, b1, false);
    colDataSetVal(pColInfo, numOfRows, buf, false);

    pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
    colDataSetVal(pColInfo, numOfRows, (const char *)&pDnode->createdTime, false);

@@ -1095,6 +1152,16 @@ static int32_t mndRetrieveDnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
    colDataSetVal(pColInfo, numOfRows, b, false);
    taosMemoryFreeClear(b);

#ifdef TD_ENTERPRISE
    STR_TO_VARSTR(buf, pDnode->active);
    pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
    colDataSetVal(pColInfo, numOfRows, buf, false);

    STR_TO_VARSTR(buf, pDnode->connActive);
    pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
    colDataSetVal(pColInfo, numOfRows, buf, false);
#endif

    numOfRows++;
    sdbRelease(pSdb, pDnode);
  }
@@ -275,6 +275,14 @@ static int32_t mndSetCreateMnodeRedoLogs(SMnode *pMnode, STrans *pTrans, SMnodeO
  return 0;
}

int32_t mndSetRestoreCreateMnodeRedoLogs(SMnode *pMnode, STrans *pTrans, SMnodeObj *pObj) {
  SSdbRaw *pRedoRaw = mndMnodeActionEncode(pObj);
  if (pRedoRaw == NULL) return -1;
  if (mndTransAppendRedolog(pTrans, pRedoRaw) != 0) return -1;
  if (sdbSetRawStatus(pRedoRaw, SDB_STATUS_READY) != 0) return -1;
  return 0;
}

static int32_t mndSetCreateMnodeUndoLogs(SMnode *pMnode, STrans *pTrans, SMnodeObj *pObj) {
  SSdbRaw *pUndoRaw = mndMnodeActionEncode(pObj);
  if (pUndoRaw == NULL) return -1;

@@ -283,7 +291,7 @@ static int32_t mndSetCreateMnodeUndoLogs(SMnode *pMnode, STrans *pTrans, SMnodeO
  return 0;
}

static int32_t mndSetCreateMnodeCommitLogs(SMnode *pMnode, STrans *pTrans, SMnodeObj *pObj) {
int32_t mndSetCreateMnodeCommitLogs(SMnode *pMnode, STrans *pTrans, SMnodeObj *pObj) {
  SSdbRaw *pCommitRaw = mndMnodeActionEncode(pObj);
  if (pCommitRaw == NULL) return -1;
  if (mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) return -1;

@@ -421,6 +429,55 @@ static int32_t mndSetCreateMnodeRedoActions(SMnode *pMnode, STrans *pTrans, SDno
  return 0;
}

int32_t mndSetRestoreCreateMnodeRedoActions(SMnode *pMnode, STrans *pTrans, SDnodeObj *pDnode, SMnodeObj *pObj) {
  SSdb            *pSdb = pMnode->pSdb;
  void            *pIter = NULL;
  SDCreateMnodeReq createReq = {0};
  SEpSet           createEpset = {0};

  while (1) {
    SMnodeObj *pMObj = NULL;
    pIter = sdbFetch(pSdb, SDB_MNODE, pIter, (void **)&pMObj);
    if (pIter == NULL) break;

    if(pMObj->id == pDnode->id) {
      sdbRelease(pSdb, pMObj);
      continue;
    }

    if(pMObj->role == TAOS_SYNC_ROLE_VOTER){
      createReq.replicas[createReq.replica].id = pMObj->id;
      createReq.replicas[createReq.replica].port = pMObj->pDnode->port;
      memcpy(createReq.replicas[createReq.replica].fqdn, pMObj->pDnode->fqdn, TSDB_FQDN_LEN);
      createReq.replica++;
    }
    else{
      createReq.learnerReplicas[createReq.learnerReplica].id = pMObj->id;
      createReq.learnerReplicas[createReq.learnerReplica].port = pMObj->pDnode->port;
      memcpy(createReq.learnerReplicas[createReq.learnerReplica].fqdn, pMObj->pDnode->fqdn, TSDB_FQDN_LEN);
      createReq.learnerReplica++;
    }

    sdbRelease(pSdb, pMObj);
  }

  createReq.learnerReplicas[createReq.learnerReplica].id = pDnode->id;
  createReq.learnerReplicas[createReq.learnerReplica].port = pDnode->port;
  memcpy(createReq.learnerReplicas[createReq.learnerReplica].fqdn, pDnode->fqdn, TSDB_FQDN_LEN);
  createReq.learnerReplica++;

  createReq.lastIndex = pObj->lastIndex;

  createEpset.inUse = 0;
  createEpset.numOfEps = 1;
  createEpset.eps[0].port = pDnode->port;
  memcpy(createEpset.eps[0].fqdn, pDnode->fqdn, TSDB_FQDN_LEN);

  if (mndBuildCreateMnodeRedoAction(pTrans, &createReq, &createEpset) != 0) return -1;

  return 0;
}

static int32_t mndSetAlterMnodeTypeRedoActions(SMnode *pMnode, STrans *pTrans, SDnodeObj *pDnode, SMnodeObj *pObj) {
  SSdb *pSdb = pMnode->pSdb;
  void *pIter = NULL;
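The restore-redo builder above walks the mnode table, splits the surviving peers into voter and learner replica arrays, and appends the dnode being restored as a learner so it replays the log before being promoted. A compilable sketch of that partitioning, with simplified stand-in types (Node, Req, and buildReq are hypothetical):

#include <stdio.h>

enum { ROLE_VOTER, ROLE_LEARNER };

/* Hypothetical stand-ins for the replica arrays in SDCreateMnodeReq. */
typedef struct { int id; int role; } Node;
typedef struct {
  int replicas[8];        int replica;
  int learnerReplicas[8]; int learnerReplica;
} Req;

/* Partition existing nodes by sync role, then append the restored node
   as a learner so it catches up before being promoted to voter. */
static void buildReq(const Node *nodes, int n, int restoredId, Req *req) {
  for (int i = 0; i < n; ++i) {
    if (nodes[i].id == restoredId) continue;  /* skip the node being restored */
    if (nodes[i].role == ROLE_VOTER)
      req->replicas[req->replica++] = nodes[i].id;
    else
      req->learnerReplicas[req->learnerReplica++] = nodes[i].id;
  }
  req->learnerReplicas[req->learnerReplica++] = restoredId;
}

int main(void) {
  Node nodes[] = {{1, ROLE_VOTER}, {2, ROLE_VOTER}, {3, ROLE_LEARNER}};
  Req  req = {0};
  buildReq(nodes, 3, 3, &req);
  printf("voters=%d learners=%d\n", req.replica, req.learnerReplica);  /* voters=2 learners=1 */
  return 0;
}

The alter-type variant that follows uses the same loop but finally re-adds the restored node as a voter, which is the promotion step.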
@@ -465,6 +522,55 @@ static int32_t mndSetAlterMnodeTypeRedoActions(SMnode *pMnode, STrans *pTrans, S
  return 0;
}

int32_t mndSetRestoreAlterMnodeTypeRedoActions(SMnode *pMnode, STrans *pTrans, SDnodeObj *pDnode, SMnodeObj *pObj) {
  SSdb               *pSdb = pMnode->pSdb;
  void               *pIter = NULL;
  SDAlterMnodeTypeReq alterReq = {0};
  SEpSet              createEpset = {0};

  while (1) {
    SMnodeObj *pMObj = NULL;
    pIter = sdbFetch(pSdb, SDB_MNODE, pIter, (void **)&pMObj);
    if (pIter == NULL) break;

    if(pMObj->id == pDnode->id) {
      sdbRelease(pSdb, pMObj);
      continue;
    }

    if(pMObj->role == TAOS_SYNC_ROLE_VOTER){
      alterReq.replicas[alterReq.replica].id = pMObj->id;
      alterReq.replicas[alterReq.replica].port = pMObj->pDnode->port;
      memcpy(alterReq.replicas[alterReq.replica].fqdn, pMObj->pDnode->fqdn, TSDB_FQDN_LEN);
      alterReq.replica++;
    }
    else{
      alterReq.learnerReplicas[alterReq.learnerReplica].id = pMObj->id;
      alterReq.learnerReplicas[alterReq.learnerReplica].port = pMObj->pDnode->port;
      memcpy(alterReq.learnerReplicas[alterReq.learnerReplica].fqdn, pMObj->pDnode->fqdn, TSDB_FQDN_LEN);
      alterReq.learnerReplica++;
    }

    sdbRelease(pSdb, pMObj);
  }

  alterReq.replicas[alterReq.replica].id = pDnode->id;
  alterReq.replicas[alterReq.replica].port = pDnode->port;
  memcpy(alterReq.replicas[alterReq.replica].fqdn, pDnode->fqdn, TSDB_FQDN_LEN);
  alterReq.replica++;

  alterReq.lastIndex = pObj->lastIndex;

  createEpset.inUse = 0;
  createEpset.numOfEps = 1;
  createEpset.eps[0].port = pDnode->port;
  memcpy(createEpset.eps[0].fqdn, pDnode->fqdn, TSDB_FQDN_LEN);

  if (mndBuildAlterMnodeTypeRedoAction(pTrans, &alterReq, &createEpset) != 0) return -1;

  return 0;
}

static int32_t mndCreateMnode(SMnode *pMnode, SRpcMsg *pReq, SDnodeObj *pDnode, SMCreateMnodeReq *pCreate) {
  int32_t code = -1;

@@ -180,7 +180,7 @@ static int32_t mndSetCreateQnodeUndoLogs(STrans *pTrans, SQnodeObj *pObj) {
  return 0;
}

static int32_t mndSetCreateQnodeCommitLogs(STrans *pTrans, SQnodeObj *pObj) {
int32_t mndSetCreateQnodeCommitLogs(STrans *pTrans, SQnodeObj *pObj) {
  SSdbRaw *pCommitRaw = mndQnodeActionEncode(pObj);
  if (pCommitRaw == NULL) return -1;
  if (mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) return -1;

@@ -188,7 +188,11 @@ static int32_t mndSetCreateQnodeCommitLogs(STrans *pTrans, SQnodeObj *pObj) {
  return 0;
}

static int32_t mndSetCreateQnodeRedoActions(STrans *pTrans, SDnodeObj *pDnode, SQnodeObj *pObj) {
bool mndQnodeInDnode(SQnodeObj *pQnode, int32_t dnodeId) {
  return pQnode->pDnode->id == dnodeId;
}

int32_t mndSetCreateQnodeRedoActions(STrans *pTrans, SDnodeObj *pDnode, SQnodeObj *pObj) {
  SDCreateQnodeReq createReq = {0};
  createReq.dnodeId = pDnode->id;
@@ -1155,6 +1155,28 @@ int32_t mndAddCreateVnodeAction(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVg
  return 0;
}

int32_t mndRestoreAddCreateVnodeAction(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroup, SDnodeObj *pDnode) {
  STransAction action = {0};

  action.epSet = mndGetDnodeEpset(pDnode);

  int32_t contLen = 0;
  void   *pReq = mndBuildCreateVnodeReq(pMnode, pDnode, pDb, pVgroup, &contLen);
  if (pReq == NULL) return -1;

  action.pCont = pReq;
  action.contLen = contLen;
  action.msgType = TDMT_DND_CREATE_VNODE;
  action.acceptableCode = TSDB_CODE_VND_ALREADY_EXIST;

  if (mndTransAppendRedoAction(pTrans, &action) != 0) {
    taosMemoryFree(pReq);
    return -1;
  }

  return 0;
}

int32_t mndAddAlterVnodeConfirmAction(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroup) {
  STransAction action = {0};
  action.epSet = mndGetVgroupEpset(pMnode, pVgroup);

@@ -1274,6 +1296,29 @@ int32_t mndAddAlterVnodeTypeAction(SMnode *pMnode, STrans *pTrans, SDbObj *pDb,
  return 0;
}

int32_t mndRestoreAddAlterVnodeTypeAction(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroup,
                                          SDnodeObj *pDnode) {
  STransAction action = {0};
  action.epSet = mndGetDnodeEpset(pDnode);

  int32_t contLen = 0;
  void   *pReq = mndBuildAlterVnodeReplicaReq(pMnode, pDb, pVgroup, pDnode->id, &contLen);
  if (pReq == NULL) return -1;

  action.pCont = pReq;
  action.contLen = contLen;
  action.msgType = TDMT_DND_ALTER_VNODE_TYPE;
  action.acceptableCode = TSDB_CODE_VND_ALREADY_IS_VOTER;
  action.retryCode = TSDB_CODE_VND_NOT_CATCH_UP;

  if (mndTransAppendRedoAction(pTrans, &action) != 0) {
    taosMemoryFree(pReq);
    return -1;
  }

  return 0;
}

static int32_t mndAddDisableVnodeWriteAction(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroup,
                                             int32_t dnodeId) {
  SDnodeObj *pDnode = mndAcquireDnode(pMnode, dnodeId);

@@ -2113,6 +2158,55 @@ int32_t mndBuildAlterVgroupAction(SMnode *pMnode, STrans *pTrans, SDbObj *pOldDb
  return 0;
}

int32_t mndBuildRestoreAlterVgroupAction(SMnode *pMnode, STrans *pTrans, SDbObj *db, SVgObj *pVgroup,
                                         SDnodeObj *pDnode) {
  SVgObj newVgroup = {0};
  memcpy(&newVgroup, pVgroup, sizeof(SVgObj));

  mInfo("db:%s, vgId:%d, restore vnodes, vn:0 dnode:%d", pVgroup->dbName, pVgroup->vgId,
        pVgroup->vnodeGid[0].dnodeId);

  if(newVgroup.replica == 1){
    int selected = 0;
    for(int i = 0; i < newVgroup.replica; i++){
      newVgroup.vnodeGid[i].nodeRole = TAOS_SYNC_ROLE_VOTER;
      if(newVgroup.vnodeGid[i].dnodeId == pDnode->id){
        selected = i;
      }
    }
    if (mndAddCreateVnodeAction(pMnode, pTrans, db, &newVgroup, &newVgroup.vnodeGid[selected]) != 0) return -1;
  }
  else if(newVgroup.replica == 3){
    for(int i = 0; i < newVgroup.replica; i++){
      if(newVgroup.vnodeGid[i].dnodeId == pDnode->id){
        newVgroup.vnodeGid[i].nodeRole = TAOS_SYNC_ROLE_LEARNER;
      }
      else{
        newVgroup.vnodeGid[i].nodeRole = TAOS_SYNC_ROLE_VOTER;
      }
    }
    if (mndRestoreAddCreateVnodeAction(pMnode, pTrans, db, &newVgroup, pDnode) != 0) return -1;

    for(int i = 0; i < newVgroup.replica; i++){
      newVgroup.vnodeGid[i].nodeRole = TAOS_SYNC_ROLE_VOTER;
      if(newVgroup.vnodeGid[i].dnodeId == pDnode->id){
      }
    }
    if (mndRestoreAddAlterVnodeTypeAction(pMnode, pTrans, db, &newVgroup, pDnode) != 0)
      return -1;
  }

  SSdbRaw *pVgRaw = mndVgroupActionEncode(&newVgroup);
  if (pVgRaw == NULL) return -1;
  if (mndTransAppendCommitlog(pTrans, pVgRaw) != 0) {
    sdbFreeRaw(pVgRaw);
    return -1;
  }
  (void)sdbSetRawStatus(pVgRaw, SDB_STATUS_READY);

  return 0;
}

static int32_t mndAddAdjustVnodeHashRangeAction(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroup) {
  return 0;
}

@@ -2440,6 +2534,13 @@ _OVER:

bool mndVgroupInDb(SVgObj *pVgroup, int64_t dbUid) { return !pVgroup->isTsma && pVgroup->dbUid == dbUid; }

bool mndVgroupInDnode(SVgObj *pVgroup, int32_t dnodeId) {
  for(int i = 0; i < pVgroup->replica; i++){
    if(pVgroup->vnodeGid[i].dnodeId == dnodeId) return true;
  }
  return false;
}

static void *mndBuildCompactVnodeReq(SMnode *pMnode, SDbObj *pDb, SVgObj *pVgroup, int32_t *pContLen, int64_t compactTs,
                                     STimeWindow tw) {
  SCompactVnodeReq compactReq = {0};
@@ -396,8 +396,12 @@ int32_t tqProcessSeekReq(STQ* pTq, int64_t sversion, char* msg, int32_t msgLen)
  }

  // save the new offset value
  tqDebug("vgId:%d sub:%s seek to %" PRId64 " prev offset:%" PRId64, vgId, pOffset->subKey, pOffset->val.version,
  if (pSavedOffset != NULL) {
    tqDebug("vgId:%d sub:%s seek to:%" PRId64 " prev offset:%" PRId64, vgId, pOffset->subKey, pOffset->val.version,
            pSavedOffset->val.version);
  } else {
    tqDebug("vgId:%d sub:%s seek to:%"PRId64" not saved yet", vgId, pOffset->subKey, pOffset->val.version);
  }

  if (tqOffsetWrite(pTq->pOffsetStore, pOffset) < 0) {
    tqError("failed to save offset, vgId:%d sub:%s seek to %" PRId64, vgId, pOffset->subKey, pOffset->val.version);
@@ -180,11 +180,12 @@ void tsdbCacheSerialize(SLastCol *pLastCol, char **value, size_t *size) {
  *(SLastCol *)(*value) = *pLastCol;
  if (IS_VAR_DATA_TYPE(pColVal->type)) {
    uint8_t *pVal = pColVal->value.pData;
    pColVal->value.pData = *value + sizeof(*pLastCol);
    SColVal *pDColVal = &((SLastCol *)(*value))->colVal;
    pDColVal->value.pData = *value + sizeof(*pLastCol);
    if (pColVal->value.nData > 0) {
      memcpy(pColVal->value.pData, pVal, pColVal->value.nData);
      memcpy(pDColVal->value.pData, pVal, pColVal->value.nData);
    } else {
      pColVal->value.pData = NULL;
      pDColVal->value.pData = NULL;
    }
  }
  *size = length;
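The serialize fix above is subtle: when a struct holding a var-data pointer is flattened into a single buffer, the pointer has to be patched in the destination copy (pDColVal), not in the caller's pLastCol, which the old code mutated. A standalone sketch of the pattern, with hypothetical Val and flatten names:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical flattened value: a header struct followed by its payload. */
typedef struct {
  int   n;
  char *p;  /* interior pointer */
} Val;

/* Serialize v into one malloc'd block; the copy's pointer must target
   the copy's payload, while the source v stays untouched. */
static char *flatten(const Val *v, size_t *size) {
  *size = sizeof(Val) + (size_t)v->n;
  char *buf = malloc(*size);
  *(Val *)buf = *v;                    /* copy the header          */
  Val *dst = (Val *)buf;               /* patch the COPY...        */
  dst->p = buf + sizeof(Val);          /* ...never the source      */
  memcpy(dst->p, v->p, (size_t)v->n);
  return buf;
}

int main(void) {
  char   payload[] = "abc";
  Val    v = {.n = 4, .p = payload};
  size_t sz;
  char  *buf = flatten(&v, &sz);
  printf("src=%s copy=%s\n", v.p, ((Val *)buf)->p);  /* source left intact */
  free(buf);
  return 0;
}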
@@ -340,6 +341,16 @@ _exit:
  return code;
}

static void reallocVarData(SColVal *pColVal) {
  if (IS_VAR_DATA_TYPE(pColVal->type)) {
    uint8_t *pVal = pColVal->value.pData;
    pColVal->value.pData = taosMemoryMalloc(pColVal->value.nData);
    if (pColVal->value.nData) {
      memcpy(pColVal->value.pData, pVal, pColVal->value.nData);
    }
  }
}

static int32_t mergeLastCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SCacheRowsReader *pr, int16_t *aCols,
                            int nCols, int16_t *slotIds);

@@ -349,6 +360,7 @@ static int32_t mergeLastRowCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray,
int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, int32_t ltype) {
  static char const    *alstring[2] = {"last_row", "last"};
  char const           *lstring = alstring[ltype];
  rocksdb_writebatch_t *wb = NULL;
  int32_t               code = 0;

  SArray *pCidList = pr->pCidList;

@@ -387,14 +399,7 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR
    int16_t  cid = *(int16_t *)taosArrayGet(pCidList, i);
    SLastCol noneCol = {.ts = TSKEY_MIN, .colVal = COL_VAL_NONE(cid, pr->pSchema->columns[pr->pSlotIds[i]].type)};
    if (pLastCol) {
      SColVal *pColVal = &pLastCol->colVal;
      if (IS_VAR_DATA_TYPE(pColVal->type)) {
        uint8_t *pVal = pColVal->value.pData;
        pColVal->value.pData = taosMemoryMalloc(pColVal->value.nData);
        if (pColVal->value.nData) {
          memcpy(pColVal->value.pData, pVal, pColVal->value.nData);
        }
      }
      reallocVarData(&pLastCol->colVal);
    } else {
      taosThreadMutexLock(&pTsdb->rCache.rMutex);

@@ -423,7 +428,7 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR
      }

      // store result back to rocks cache
      rocksdb_writebatch_t *wb = pTsdb->rCache.writebatch;
      wb = pTsdb->rCache.writebatch;
      char  *value = NULL;
      size_t vlen = 0;
      tsdbCacheSerialize(pLastCol, &value, &vlen);

@@ -432,13 +437,19 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR
      rocksdb_writebatch_put(wb, key, klen, value, vlen);

      taosMemoryFree(value);
    } else {
      reallocVarData(&pLastCol->colVal);
    }

  if (wb) {
    char *err = NULL;
    rocksdb_write(pTsdb->rCache.db, pTsdb->rCache.writeoptions, wb, &err);
    if (NULL != err) {
      tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, err);
      rocksdb_free(err);
    }

    rocksdb_writebatch_clear(wb);
  }

  taosThreadMutexUnlock(&pTsdb->rCache.rMutex);
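Taken together, the tsdbCacheGet hunks hoist the write batch to a NULL-initialized local, populate it lazily on cache misses, and flush and clear it once at the end only if something was actually batched. A toy version of that lazy-batch control flow (Batch and batchLazyGet are hypothetical; the real code uses the RocksDB C API):

#include <stdio.h>
#include <stdlib.h>

/* Minimal stand-in for a write batch: collect puts, flush once at the end,
   and skip the flush entirely when nothing was batched (wb stays NULL). */
typedef struct { int n; } Batch;

static Batch *batchLazyGet(Batch **wb) {
  if (*wb == NULL) *wb = calloc(1, sizeof(Batch));
  return *wb;
}

int main(void) {
  Batch *wb = NULL;  /* mirrors: rocksdb_writebatch_t *wb = NULL; */
  int    misses = 2;
  for (int i = 0; i < misses; ++i) batchLazyGet(&wb)->n++;
  if (wb) {          /* flush only if something was batched */
    printf("flush %d puts\n", wb->n);
    wb->n = 0;       /* mirrors rocksdb_writebatch_clear() */
    free(wb);
  }
  return 0;
}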
@@ -1382,7 +1393,12 @@ static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow, bool *pIgnoreEarlie
      *pIgnoreEarlierTs = false;
      tBlockDataReset(state->pBlockData);
      TABLEID tid = {.suid = state->suid, .uid = state->uid};
      code = tBlockDataInit(state->pBlockData, &tid, state->pTSchema, aCols, nCols);
      int nTmpCols = nCols;
      if (aCols[0] == PRIMARYKEY_TIMESTAMP_COL_ID && nCols == 1) {
        nTmpCols = 0;
        skipBlock = false;
      }
      code = tBlockDataInit(state->pBlockData, &tid, state->pTSchema, aCols, nTmpCols);
      if (code) goto _err;

      code = tsdbReadDataBlock(*state->pDataFReader, &block, state->pBlockData);
@@ -38,6 +38,11 @@ typedef struct STimeSliceOperatorInfo {
  SColumn              tsCol;         // primary timestamp column
  SExprSupp            scalarSup;     // scalar calculation
  struct SFillColInfo* pFillColInfo;  // fill column info
  int64_t              prevTs;
  bool                 prevTsSet;
  uint64_t             groupId;
  SGroupKeys*          pPrevGroupKey;
  SSDataBlock*         pNextGroupRes;
} STimeSliceOperatorInfo;

static void destroyTimeSliceOperatorInfo(void* param);

@@ -168,12 +173,49 @@ static bool isIsfilledPseudoColumn(SExprInfo* pExprInfo) {
  return (IS_BOOLEAN_TYPE(pExprInfo->base.resSchema.type) && strcasecmp(name, "_isfilled") == 0);
}

static bool checkDuplicateTimestamps(STimeSliceOperatorInfo* pSliceInfo, SColumnInfoData* pTsCol,
                                     int32_t curIndex, int32_t rows) {

  int64_t currentTs = *(int64_t*)colDataGetData(pTsCol, curIndex);
  if (currentTs > pSliceInfo->win.ekey) {
    return false;
  }

  if ((pSliceInfo->prevTsSet == true) && (currentTs == pSliceInfo->prevTs)) {
    return true;
  }

  pSliceInfo->prevTsSet = true;
  pSliceInfo->prevTs = currentTs;

  if (currentTs == pSliceInfo->win.ekey && curIndex < rows - 1) {
    int64_t nextTs = *(int64_t*)colDataGetData(pTsCol, curIndex + 1);
    if (currentTs == nextTs) {
      return true;
    }
  }

  return false;
}

static bool isInterpFunc(SExprInfo* pExprInfo) {
  int32_t functionType = pExprInfo->pExpr->_function.functionType;
  return (functionType == FUNCTION_TYPE_INTERP);
}

static bool isGroupKeyFunc(SExprInfo* pExprInfo) {
  int32_t functionType = pExprInfo->pExpr->_function.functionType;
  return (functionType == FUNCTION_TYPE_GROUP_KEY);
}

static bool genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp* pExprSup, SSDataBlock* pResBlock,
                                   bool beforeTs) {
                                   SSDataBlock* pSrcBlock, int32_t index, bool beforeTs) {
  int32_t rows = pResBlock->info.rows;
  timeSliceEnsureBlockCapacity(pSliceInfo, pResBlock);
  // todo set the correct primary timestamp column

  // output the result
  bool hasInterp = true;
  for (int32_t j = 0; j < pExprSup->numOfExprs; ++j) {
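checkDuplicateTimestamps, added above, makes interp fail fast on repeated primary keys instead of producing ambiguous interpolation results. Stripped of the window-end and lookahead details, the core check reduces to the following self-contained sketch (hasDuplicateTs is a hypothetical name):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Scan a sorted timestamp column and report the first duplicate pair;
   the operator aborts such input with TSDB_CODE_FUNC_DUP_TIMESTAMP. */
static bool hasDuplicateTs(const int64_t *ts, int rows) {
  int64_t prev = 0;
  bool    prevSet = false;  /* mirrors prevTsSet in the operator */
  for (int i = 0; i < rows; ++i) {
    if (prevSet && ts[i] == prev) return true;
    prev = ts[i];
    prevSet = true;
  }
  return false;
}

int main(void) {
  int64_t a[] = {1, 2, 2, 3};
  int64_t b[] = {1, 2, 3};
  printf("%d %d\n", hasDuplicateTs(a, 4), hasDuplicateTs(b, 3));  /* 1 0 */
  return 0;
}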
@@ -189,6 +231,30 @@ static bool genInterpolationResult(STimeSliceOperatorInfo* pSliceInfo, SExprSupp
      bool isFilled = true;
      colDataAppend(pDst, pResBlock->info.rows, (char*)&isFilled, false);
      continue;
    } else if (!isInterpFunc(pExprInfo)) {
      if (isGroupKeyFunc(pExprInfo)) {
        if (pSrcBlock != NULL) {
          int32_t          srcSlot = pExprInfo->base.pParam[0].pCol->slotId;
          SColumnInfoData* pSrc = taosArrayGet(pSrcBlock->pDataBlock, srcSlot);

          if (colDataIsNull_s(pSrc, index)) {
            colDataSetNULL(pDst, pResBlock->info.rows);
            continue;
          }

          char* v = colDataGetData(pSrc, index);
          colDataSetVal(pDst, pResBlock->info.rows, v, false);
        } else {
          // use stored group key
          SGroupKeys* pkey = pSliceInfo->pPrevGroupKey;
          if (pkey->isNull == false) {
            colDataSetVal(pDst, rows, pkey->pData, false);
          } else {
            colDataSetNULL(pDst, rows);
          }
        }
      }
      continue;
    }

    int32_t srcSlot = pExprInfo->base.pParam[0].pCol->slotId;
@@ -414,7 +480,31 @@ static int32_t initFillLinearInfo(STimeSliceOperatorInfo* pInfo, SSDataBlock* pB
  return TSDB_CODE_SUCCESS;
}

static int32_t initKeeperInfo(STimeSliceOperatorInfo* pInfo, SSDataBlock* pBlock) {
static int32_t initGroupKeyKeeper(STimeSliceOperatorInfo* pInfo, SExprSupp* pExprSup) {
  if (pInfo->pPrevGroupKey != NULL) {
    return TSDB_CODE_SUCCESS;
  }

  pInfo->pPrevGroupKey = taosMemoryCalloc(1, sizeof(SGroupKeys));
  if (pInfo->pPrevGroupKey == NULL) {
    return TSDB_CODE_OUT_OF_MEMORY;
  }

  for (int32_t i = 0; i < pExprSup->numOfExprs; ++i) {
    SExprInfo* pExprInfo = &pExprSup->pExprInfo[i];

    if (isGroupKeyFunc(pExprInfo)) {
      pInfo->pPrevGroupKey->bytes = pExprInfo->base.resSchema.bytes;
      pInfo->pPrevGroupKey->type = pExprInfo->base.resSchema.type;
      pInfo->pPrevGroupKey->isNull = false;
      pInfo->pPrevGroupKey->pData = taosMemoryCalloc(1, pInfo->pPrevGroupKey->bytes);
    }
  }

  return TSDB_CODE_SUCCESS;
}

static int32_t initKeeperInfo(STimeSliceOperatorInfo* pInfo, SSDataBlock* pBlock, SExprSupp* pExprSup) {
  int32_t code;
  code = initPrevRowsKeeper(pInfo, pBlock);
  if (code != TSDB_CODE_SUCCESS) {
@@ -431,9 +521,202 @@ static int32_t initKeeperInfo(STimeSliceOperatorInfo* pInfo, SSDataBlock* pBlock
    return TSDB_CODE_FAILED;
  }

  code = initGroupKeyKeeper(pInfo, pExprSup);
  if (code != TSDB_CODE_SUCCESS) {
    return TSDB_CODE_FAILED;
  }

  return TSDB_CODE_SUCCESS;
}

static int32_t resetPrevRowsKeeper(STimeSliceOperatorInfo* pInfo) {
  if (pInfo->pPrevRow == NULL) {
    return TSDB_CODE_SUCCESS;
  }

  for (int32_t i = 0; i < taosArrayGetSize(pInfo->pLinearInfo); ++i) {
    SGroupKeys *pKey = taosArrayGet(pInfo->pPrevRow, i);
    pKey->isNull = false;
  }

  pInfo->isPrevRowSet = false;

  return TSDB_CODE_SUCCESS;
}

static int32_t resetNextRowsKeeper(STimeSliceOperatorInfo* pInfo) {
  if (pInfo->pNextRow == NULL) {
    return TSDB_CODE_SUCCESS;
  }

  for (int32_t i = 0; i < taosArrayGetSize(pInfo->pLinearInfo); ++i) {
    SGroupKeys *pKey = taosArrayGet(pInfo->pPrevRow, i);
    pKey->isNull = false;
  }

  pInfo->isNextRowSet = false;

  return TSDB_CODE_SUCCESS;
}

static int32_t resetFillLinearInfo(STimeSliceOperatorInfo* pInfo) {
  if (pInfo->pLinearInfo == NULL) {
    return TSDB_CODE_SUCCESS;
  }

  for (int32_t i = 0; i < taosArrayGetSize(pInfo->pLinearInfo); ++i) {
    SFillLinearInfo *pLinearInfo = taosArrayGet(pInfo->pLinearInfo, i);
    pLinearInfo->start.key = INT64_MIN;
    pLinearInfo->end.key = INT64_MIN;
    pLinearInfo->isStartSet = false;
    pLinearInfo->isEndSet = false;
  }

  return TSDB_CODE_SUCCESS;
}

static int32_t resetKeeperInfo(STimeSliceOperatorInfo* pInfo) {
  resetPrevRowsKeeper(pInfo);
  resetNextRowsKeeper(pInfo);
  resetFillLinearInfo(pInfo);

  return TSDB_CODE_SUCCESS;
}

static void doTimesliceImpl(SOperatorInfo* pOperator, STimeSliceOperatorInfo* pSliceInfo, SSDataBlock* pBlock,
                            SExecTaskInfo* pTaskInfo) {
  SSDataBlock* pResBlock = pSliceInfo->pRes;
  SInterval*   pInterval = &pSliceInfo->interval;

  SColumnInfoData* pTsCol = taosArrayGet(pBlock->pDataBlock, pSliceInfo->tsCol.slotId);
  for (int32_t i = 0; i < pBlock->info.rows; ++i) {
    int64_t ts = *(int64_t*)colDataGetData(pTsCol, i);

    // check for duplicate timestamps
    if (checkDuplicateTimestamps(pSliceInfo, pTsCol, i, pBlock->info.rows)) {
      T_LONG_JMP(pTaskInfo->env, TSDB_CODE_FUNC_DUP_TIMESTAMP);
    }

    if (pSliceInfo->current > pSliceInfo->win.ekey) {
      break;
    }

    if (ts == pSliceInfo->current) {
      addCurrentRowToResult(pSliceInfo, &pOperator->exprSupp, pResBlock, pBlock, i);

      doKeepPrevRows(pSliceInfo, pBlock, i);
      doKeepLinearInfo(pSliceInfo, pBlock, i);

      pSliceInfo->current =
          taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
      if (pSliceInfo->current > pSliceInfo->win.ekey) {
        break;
      }
    } else if (ts < pSliceInfo->current) {
      // in case of interpolation window starts and ends between two datapoints, fill(prev) need to interpolate
      doKeepPrevRows(pSliceInfo, pBlock, i);
      doKeepLinearInfo(pSliceInfo, pBlock, i);

      if (i < pBlock->info.rows - 1) {
        // in case of interpolation window starts and ends between two datapoints, fill(next) need to interpolate
        doKeepNextRows(pSliceInfo, pBlock, i + 1);
        int64_t nextTs = *(int64_t*)colDataGetData(pTsCol, i + 1);
        if (nextTs > pSliceInfo->current) {
          while (pSliceInfo->current < nextTs && pSliceInfo->current <= pSliceInfo->win.ekey) {
            if (!genInterpolationResult(pSliceInfo, &pOperator->exprSupp, pResBlock, pBlock, i, false) &&
                pSliceInfo->fillType == TSDB_FILL_LINEAR) {
              break;
            } else {
              pSliceInfo->current = taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit,
                                                pInterval->precision);
            }
          }

          if (pSliceInfo->current > pSliceInfo->win.ekey) {
            break;
          }
        } else {
          // ignore current row, and do nothing
        }
      } else {  // it is the last row of current block
        doKeepPrevRows(pSliceInfo, pBlock, i);
      }
    } else {  // ts > pSliceInfo->current
      // in case of interpolation window starts and ends between two datapoints, fill(next) need to interpolate
      doKeepNextRows(pSliceInfo, pBlock, i);
      doKeepLinearInfo(pSliceInfo, pBlock, i);

      while (pSliceInfo->current < ts && pSliceInfo->current <= pSliceInfo->win.ekey) {
        if (!genInterpolationResult(pSliceInfo, &pOperator->exprSupp, pResBlock, pBlock, i, true) &&
            pSliceInfo->fillType == TSDB_FILL_LINEAR) {
          break;
        } else {
          pSliceInfo->current =
              taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
        }
      }

      // add current row if timestamp match
      if (ts == pSliceInfo->current && pSliceInfo->current <= pSliceInfo->win.ekey) {
        addCurrentRowToResult(pSliceInfo, &pOperator->exprSupp, pResBlock, pBlock, i);

        pSliceInfo->current =
            taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
      }
      doKeepPrevRows(pSliceInfo, pBlock, i);

      if (pSliceInfo->current > pSliceInfo->win.ekey) {
        break;
      }
    }
  }
}

static void genInterpAfterDataBlock(STimeSliceOperatorInfo* pSliceInfo, SOperatorInfo* pOperator, int32_t index) {
  SSDataBlock* pResBlock = pSliceInfo->pRes;
  SInterval*   pInterval = &pSliceInfo->interval;

  while (pSliceInfo->current <= pSliceInfo->win.ekey && pSliceInfo->fillType != TSDB_FILL_NEXT &&
         pSliceInfo->fillType != TSDB_FILL_LINEAR) {
    genInterpolationResult(pSliceInfo, &pOperator->exprSupp, pResBlock, NULL, index, false);
    pSliceInfo->current =
        taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
  }
}

static void copyPrevGroupKey(SExprSupp* pExprSup, SGroupKeys* pGroupKey, SSDataBlock* pSrcBlock) {
  for (int32_t j = 0; j < pExprSup->numOfExprs; ++j) {
    SExprInfo* pExprInfo = &pExprSup->pExprInfo[j];

    if (isGroupKeyFunc(pExprInfo)) {
      int32_t          srcSlot = pExprInfo->base.pParam[0].pCol->slotId;
      SColumnInfoData* pSrc = taosArrayGet(pSrcBlock->pDataBlock, srcSlot);

      if (colDataIsNull_s(pSrc, 0)) {
        pGroupKey->isNull = true;
        break;
      }

      char* v = colDataGetData(pSrc, 0);
      if (IS_VAR_DATA_TYPE(pGroupKey->type)) {
        memcpy(pGroupKey->pData, v, varDataTLen(v));
      } else {
        memcpy(pGroupKey->pData, v, pGroupKey->bytes);
      }

      pGroupKey->isNull = false;
      break;
    }
  }
}

static void resetTimesliceInfo(STimeSliceOperatorInfo* pSliceInfo) {
  pSliceInfo->current = pSliceInfo->win.skey;
  pSliceInfo->prevTsSet = false;
  resetKeeperInfo(pSliceInfo);
}

static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) {
  if (pOperator->status == OP_EXEC_DONE) {
    return NULL;
@@ -451,118 +734,62 @@ static SSDataBlock* doTimeslice(SOperatorInfo* pOperator) {

  blockDataCleanup(pResBlock);

  while (1) {
    if (pSliceInfo->pNextGroupRes != NULL) {
      setInputDataBlock(pSup, pSliceInfo->pNextGroupRes, order, MAIN_SCAN, true);
      doTimesliceImpl(pOperator, pSliceInfo, pSliceInfo->pNextGroupRes, pTaskInfo);
      copyPrevGroupKey(&pOperator->exprSupp, pSliceInfo->pPrevGroupKey, pSliceInfo->pNextGroupRes);
      pSliceInfo->pNextGroupRes = NULL;
    }

    while (1) {
      SSDataBlock* pBlock = downstream->fpSet.getNextFn(downstream);
      if (pBlock == NULL) {
        setOperatorCompleted(pOperator);
        break;
      }

      if (pSliceInfo->groupId == 0 && pBlock->info.id.groupId != 0) {
        pSliceInfo->groupId = pBlock->info.id.groupId;
      } else {
        if (pSliceInfo->groupId != pBlock->info.id.groupId) {
          pSliceInfo->groupId = pBlock->info.id.groupId;
          pSliceInfo->pNextGroupRes = pBlock;
          break;
        }
      }

      if (pSliceInfo->scalarSup.pExprInfo != NULL) {
        SExprSupp* pExprSup = &pSliceInfo->scalarSup;
        projectApplyFunctions(pExprSup->pExprInfo, pBlock, pBlock, pExprSup->pCtx, pExprSup->numOfExprs, NULL);
      }

      int32_t code = initKeeperInfo(pSliceInfo, pBlock);
      int32_t code = initKeeperInfo(pSliceInfo, pBlock, &pOperator->exprSupp);
      if (code != TSDB_CODE_SUCCESS) {
        T_LONG_JMP(pTaskInfo->env, code);
      }

      // the pDataBlock are always the same one, no need to call this again
      setInputDataBlock(pSup, pBlock, order, MAIN_SCAN, true);

      SColumnInfoData* pTsCol = taosArrayGet(pBlock->pDataBlock, pSliceInfo->tsCol.slotId);
      for (int32_t i = 0; i < pBlock->info.rows; ++i) {
        int64_t ts = *(int64_t*)colDataGetData(pTsCol, i);

        if (pSliceInfo->current > pSliceInfo->win.ekey) {
          setOperatorCompleted(pOperator);
          break;
        }

        if (ts == pSliceInfo->current) {
          addCurrentRowToResult(pSliceInfo, &pOperator->exprSupp, pResBlock, pBlock, i);

          doKeepPrevRows(pSliceInfo, pBlock, i);
          doKeepLinearInfo(pSliceInfo, pBlock, i);

          pSliceInfo->current =
              taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
          if (pSliceInfo->current > pSliceInfo->win.ekey) {
            setOperatorCompleted(pOperator);
            break;
          }
        } else if (ts < pSliceInfo->current) {
          // in case of interpolation window starts and ends between two datapoints, fill(prev) need to interpolate
          doKeepPrevRows(pSliceInfo, pBlock, i);
          doKeepLinearInfo(pSliceInfo, pBlock, i);

          if (i < pBlock->info.rows - 1) {
            // in case of interpolation window starts and ends between two datapoints, fill(next) need to interpolate
            doKeepNextRows(pSliceInfo, pBlock, i + 1);
            int64_t nextTs = *(int64_t*)colDataGetData(pTsCol, i + 1);
            if (nextTs > pSliceInfo->current) {
              while (pSliceInfo->current < nextTs && pSliceInfo->current <= pSliceInfo->win.ekey) {
                if (!genInterpolationResult(pSliceInfo, &pOperator->exprSupp, pResBlock, false) &&
                    pSliceInfo->fillType == TSDB_FILL_LINEAR) {
                  break;
                } else {
                  pSliceInfo->current = taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit,
                                                    pInterval->precision);
                }
              }

              if (pSliceInfo->current > pSliceInfo->win.ekey) {
                setOperatorCompleted(pOperator);
                break;
              }
            } else {
              // ignore current row, and do nothing
            }
          } else {  // it is the last row of current block
            doKeepPrevRows(pSliceInfo, pBlock, i);
          }
        } else {  // ts > pSliceInfo->current
          // in case of interpolation window starts and ends between two datapoints, fill(next) need to interpolate
          doKeepNextRows(pSliceInfo, pBlock, i);
          doKeepLinearInfo(pSliceInfo, pBlock, i);

          while (pSliceInfo->current < ts && pSliceInfo->current <= pSliceInfo->win.ekey) {
            if (!genInterpolationResult(pSliceInfo, &pOperator->exprSupp, pResBlock, true) &&
                pSliceInfo->fillType == TSDB_FILL_LINEAR) {
              break;
            } else {
              pSliceInfo->current =
                  taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
            }
          }

          // add current row if timestamp match
          if (ts == pSliceInfo->current && pSliceInfo->current <= pSliceInfo->win.ekey) {
            addCurrentRowToResult(pSliceInfo, &pOperator->exprSupp, pResBlock, pBlock, i);

            pSliceInfo->current =
                taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
          }
          doKeepPrevRows(pSliceInfo, pBlock, i);

          if (pSliceInfo->current > pSliceInfo->win.ekey) {
            setOperatorCompleted(pOperator);
            break;
          }
        }
      }
      doTimesliceImpl(pOperator, pSliceInfo, pBlock, pTaskInfo);
      copyPrevGroupKey(&pOperator->exprSupp, pSliceInfo->pPrevGroupKey, pBlock);
    }

    // check if need to interpolate after last datablock
    // except for fill(next), fill(linear)
    while (pSliceInfo->current <= pSliceInfo->win.ekey && pSliceInfo->fillType != TSDB_FILL_NEXT &&
           pSliceInfo->fillType != TSDB_FILL_LINEAR) {
      genInterpolationResult(pSliceInfo, &pOperator->exprSupp, pResBlock, false);
      pSliceInfo->current =
          taosTimeAdd(pSliceInfo->current, pInterval->interval, pInterval->intervalUnit, pInterval->precision);
    }
    genInterpAfterDataBlock(pSliceInfo, pOperator, 0);

    doFilter(pResBlock, pOperator->exprSupp.pFilterInfo, NULL);
    if (pOperator->status == OP_EXEC_DONE) {
      break;
    }

    // restore initial value for next group
    resetTimesliceInfo(pSliceInfo);
    if (pResBlock->info.rows >= 4096) {
      break;
    }
  }

  // restore the value
  setTaskStatus(pOperator->pTaskInfo, TASK_COMPLETED);
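The doTimeslice rewrite above moves the inlined row loop into doTimesliceImpl and adds per-group handling: when a fetched block belongs to a new groupId, it is parked in pNextGroupRes, the current group is finalized and reset, and the parked block seeds the next outer iteration. A toy model of that buffering, with a hypothetical block stream:

#include <stdio.h>

typedef struct { int groupId; int rows; } Block;

static Block  stream[] = {{1, 10}, {1, 5}, {2, 7}, {2, 3}};
static int    cursor = 0;
static Block *nextBlock(void) {
  int n = (int)(sizeof(stream) / sizeof(stream[0]));
  return cursor < n ? &stream[cursor++] : NULL;
}

int main(void) {
  Block *pending = NULL;  /* mirrors pSliceInfo->pNextGroupRes */
  for (;;) {
    int curGroup = 0, rows = 0;
    if (pending) {        /* resume with the buffered block of the new group */
      curGroup = pending->groupId;
      rows += pending->rows;
      pending = NULL;
    }
    for (;;) {
      Block *b = nextBlock();
      if (b == NULL) { printf("group %d: %d rows\n", curGroup, rows); return 0; }
      if (curGroup == 0) curGroup = b->groupId;
      if (b->groupId != curGroup) { pending = b; break; }  /* park next group's block */
      rows += b->rows;
    }
    printf("group %d: %d rows\n", curGroup, rows);  /* finalize group, then reset */
  }
}

Running this prints "group 1: 15 rows" and "group 2: 10 rows", matching the one-group-per-pass shape of the rewritten operator.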
@@ -614,6 +841,11 @@ SOperatorInfo* createTimeSliceOperatorInfo(SOperatorInfo* downstream, SPhysiNode
  pInfo->win = pInterpPhyNode->timeRange;
  pInfo->interval.interval = pInterpPhyNode->interval;
  pInfo->current = pInfo->win.skey;
  pInfo->prevTsSet = false;
  pInfo->prevTs = 0;
  pInfo->groupId = 0;
  pInfo->pPrevGroupKey = NULL;
  pInfo->pNextGroupRes = NULL;

  if (downstream->operatorType == QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN) {
    STableScanInfo* pScanInfo = (STableScanInfo*)downstream->info;

@@ -661,6 +893,10 @@ void destroyTimeSliceOperatorInfo(void* param) {
    taosMemoryFree(pKey->end.val);
  }
  taosArrayDestroy(pInfo->pLinearInfo);

  taosMemoryFree(pInfo->pPrevGroupKey->pData);
  taosMemoryFree(pInfo->pPrevGroupKey);

  cleanupExprSupp(&pInfo->scalarSup);

  for (int32_t i = 0; i < pInfo->pFillColInfo->numOfFillExpr; ++i) {
@@ -261,6 +261,14 @@ const char* nodesNodeName(ENodeType type) {
      return "DeleteStmt";
    case QUERY_NODE_INSERT_STMT:
      return "InsertStmt";
    case QUERY_NODE_RESTORE_DNODE_STMT:
      return "RestoreDnodeStmt";
    case QUERY_NODE_RESTORE_QNODE_STMT:
      return "RestoreQnodeStmt";
    case QUERY_NODE_RESTORE_MNODE_STMT:
      return "RestoreMnodeStmt";
    case QUERY_NODE_RESTORE_VNODE_STMT:
      return "RestoreVnodeStmt";
    case QUERY_NODE_LOGIC_PLAN_SCAN:
      return "LogicScan";
    case QUERY_NODE_LOGIC_PLAN_JOIN:

@@ -5537,6 +5545,35 @@ static int32_t jsonToDropDnodeStmt(const SJson* pJson, void* pObj) {
  return code;
}

static const char* jkRestoreComponentNodeStmtDnodeId = "DnodeId";

static int32_t restoreComponentNodeStmtToJson(const void* pObj, SJson* pJson) {
  const SRestoreComponentNodeStmt* pNode = (const SRestoreComponentNodeStmt*)pObj;
  return tjsonAddIntegerToObject(pJson, jkRestoreComponentNodeStmtDnodeId, pNode->dnodeId);
}

static int32_t jsonToRestoreComponentNodeStmt(const SJson* pJson, void* pObj) {
  SRestoreComponentNodeStmt* pNode = (SRestoreComponentNodeStmt*)pObj;
  return tjsonGetIntValue(pJson, jkRestoreComponentNodeStmtDnodeId, &pNode->dnodeId);
}

static int32_t jsonToRestoreDnodeStmt(const SJson* pJson, void* pObj) {
  return jsonToRestoreComponentNodeStmt(pJson, pObj);
}
static int32_t jsonToRestoreQnodeStmt(const SJson* pJson, void* pObj) {
  return jsonToRestoreComponentNodeStmt(pJson, pObj);
}
static int32_t jsonToRestoreMnodeStmt(const SJson* pJson, void* pObj) {
  return jsonToRestoreComponentNodeStmt(pJson, pObj);
}
static int32_t jsonToRestoreVnodeStmt(const SJson* pJson, void* pObj) {
  return jsonToRestoreComponentNodeStmt(pJson, pObj);
}

static const char* jkCreateTopicStmtTopicName = "TopicName";
static const char* jkCreateTopicStmtSubscribeDbName = "SubscribeDbName";
static const char* jkCreateTopicStmtIgnoreExists = "IgnoreExists";

@@ -6824,6 +6861,14 @@ static int32_t jsonToSpecificNode(const SJson* pJson, void* pObj) {
      return jsonToDeleteStmt(pJson, pObj);
    case QUERY_NODE_INSERT_STMT:
      return jsonToInsertStmt(pJson, pObj);
    case QUERY_NODE_RESTORE_DNODE_STMT:
      return jsonToRestoreDnodeStmt(pJson, pObj);
    case QUERY_NODE_RESTORE_QNODE_STMT:
      return jsonToRestoreQnodeStmt(pJson, pObj);
    case QUERY_NODE_RESTORE_MNODE_STMT:
      return jsonToRestoreMnodeStmt(pJson, pObj);
    case QUERY_NODE_RESTORE_VNODE_STMT:
      return jsonToRestoreVnodeStmt(pJson, pObj);
    case QUERY_NODE_LOGIC_PLAN_SCAN:
      return jsonToLogicScanNode(pJson, pObj);
    case QUERY_NODE_LOGIC_PLAN_JOIN:
@@ -459,6 +459,11 @@ SNode* nodesMakeNode(ENodeType type) {
      return makeNode(type, sizeof(SInsertStmt));
    case QUERY_NODE_QUERY:
      return makeNode(type, sizeof(SQuery));
    case QUERY_NODE_RESTORE_DNODE_STMT:
    case QUERY_NODE_RESTORE_QNODE_STMT:
    case QUERY_NODE_RESTORE_MNODE_STMT:
    case QUERY_NODE_RESTORE_VNODE_STMT:
      return makeNode(type, sizeof(SRestoreComponentNodeStmt));
    case QUERY_NODE_LOGIC_PLAN_SCAN:
      return makeNode(type, sizeof(SScanLogicNode));
    case QUERY_NODE_LOGIC_PLAN_JOIN:

@@ -1058,6 +1063,11 @@ void nodesDestroyNode(SNode* pNode) {
      nodesDestroyNode(pQuery->pPrepareRoot);
      break;
    }
    case QUERY_NODE_RESTORE_DNODE_STMT:  // no pointer field
    case QUERY_NODE_RESTORE_QNODE_STMT:  // no pointer field
    case QUERY_NODE_RESTORE_MNODE_STMT:  // no pointer field
    case QUERY_NODE_RESTORE_VNODE_STMT:  // no pointer field
      break;
    case QUERY_NODE_LOGIC_PLAN_SCAN: {
      SScanLogicNode* pLogicNode = (SScanLogicNode*)pNode;
      destroyLogicNode((SLogicNode*)pLogicNode);
@@ -202,6 +202,7 @@ SNode* createIndexOption(SAstCreateContext* pCxt, SNodeList* pFuncs, SNode* pInt
SNode* createDropIndexStmt(SAstCreateContext* pCxt, bool ignoreNotExists, SNode* pIndexName);
SNode* createCreateComponentNodeStmt(SAstCreateContext* pCxt, ENodeType type, const SToken* pDnodeId);
SNode* createDropComponentNodeStmt(SAstCreateContext* pCxt, ENodeType type, const SToken* pDnodeId);
SNode* createRestoreComponentNodeStmt(SAstCreateContext* pCxt, ENodeType type, const SToken* pDnodeId);
SNode* createCreateTopicStmtUseQuery(SAstCreateContext* pCxt, bool ignoreExists, SToken* pTopicName, SNode* pQuery);
SNode* createCreateTopicStmtUseDb(SAstCreateContext* pCxt, bool ignoreExists, SToken* pTopicName, SToken* pSubDbName,
                                  bool withMeta);

@@ -123,7 +123,7 @@ priv_level(A) ::= topic_name(B).
with_opt(A) ::= .                                           { A = NULL; }
with_opt(A) ::= WITH search_condition(B).                   { A = B; }

/************************************************ create/drop/alter dnode *********************************************/
/************************************************ create/drop/alter/restore dnode *********************************************/
cmd ::= CREATE DNODE dnode_endpoint(A).                     { pCxt->pRootNode = createCreateDnodeStmt(pCxt, &A, NULL); }
cmd ::= CREATE DNODE dnode_endpoint(A) PORT NK_INTEGER(B).  { pCxt->pRootNode = createCreateDnodeStmt(pCxt, &A, &B); }
cmd ::= DROP DNODE NK_INTEGER(A) force_opt(B).              { pCxt->pRootNode = createDropDnodeStmt(pCxt, &A, B); }

@@ -132,6 +132,7 @@ cmd ::= ALTER DNODE NK_INTEGER(A) NK_STRING(B).
cmd ::= ALTER DNODE NK_INTEGER(A) NK_STRING(B) NK_STRING(C). { pCxt->pRootNode = createAlterDnodeStmt(pCxt, &A, &B, &C); }
cmd ::= ALTER ALL DNODES NK_STRING(A).                      { pCxt->pRootNode = createAlterDnodeStmt(pCxt, NULL, &A, NULL); }
cmd ::= ALTER ALL DNODES NK_STRING(A) NK_STRING(B).         { pCxt->pRootNode = createAlterDnodeStmt(pCxt, NULL, &A, &B); }
cmd ::= RESTORE DNODE NK_INTEGER(A).                        { pCxt->pRootNode = createRestoreComponentNodeStmt(pCxt, QUERY_NODE_RESTORE_DNODE_STMT, &A); }

%type dnode_endpoint { SToken }
%destructor dnode_endpoint { }

@@ -148,9 +149,10 @@ force_opt(A) ::= FORCE.
cmd ::= ALTER LOCAL NK_STRING(A).                           { pCxt->pRootNode = createAlterLocalStmt(pCxt, &A, NULL); }
cmd ::= ALTER LOCAL NK_STRING(A) NK_STRING(B).              { pCxt->pRootNode = createAlterLocalStmt(pCxt, &A, &B); }

/************************************************ create/drop qnode ***************************************************/
/************************************************ create/drop/restore qnode ***************************************************/
cmd ::= CREATE QNODE ON DNODE NK_INTEGER(A).                { pCxt->pRootNode = createCreateComponentNodeStmt(pCxt, QUERY_NODE_CREATE_QNODE_STMT, &A); }
cmd ::= DROP QNODE ON DNODE NK_INTEGER(A).                  { pCxt->pRootNode = createDropComponentNodeStmt(pCxt, QUERY_NODE_DROP_QNODE_STMT, &A); }
cmd ::= RESTORE QNODE ON DNODE NK_INTEGER(A).               { pCxt->pRootNode = createRestoreComponentNodeStmt(pCxt, QUERY_NODE_RESTORE_QNODE_STMT, &A); }

/************************************************ create/drop bnode ***************************************************/
cmd ::= CREATE BNODE ON DNODE NK_INTEGER(A).                { pCxt->pRootNode = createCreateComponentNodeStmt(pCxt, QUERY_NODE_CREATE_BNODE_STMT, &A); }

@@ -160,9 +162,13 @@ cmd ::= DROP BNODE ON DNODE NK_INTEGER(A).
cmd ::= CREATE SNODE ON DNODE NK_INTEGER(A).                { pCxt->pRootNode = createCreateComponentNodeStmt(pCxt, QUERY_NODE_CREATE_SNODE_STMT, &A); }
cmd ::= DROP SNODE ON DNODE NK_INTEGER(A).                  { pCxt->pRootNode = createDropComponentNodeStmt(pCxt, QUERY_NODE_DROP_SNODE_STMT, &A); }

/************************************************ create/drop mnode ***************************************************/
/************************************************ create/drop/restore mnode ***************************************************/
cmd ::= CREATE MNODE ON DNODE NK_INTEGER(A).                { pCxt->pRootNode = createCreateComponentNodeStmt(pCxt, QUERY_NODE_CREATE_MNODE_STMT, &A); }
cmd ::= DROP MNODE ON DNODE NK_INTEGER(A).                  { pCxt->pRootNode = createDropComponentNodeStmt(pCxt, QUERY_NODE_DROP_MNODE_STMT, &A); }
cmd ::= RESTORE MNODE ON DNODE NK_INTEGER(A).               { pCxt->pRootNode = createRestoreComponentNodeStmt(pCxt, QUERY_NODE_RESTORE_MNODE_STMT, &A); }

/************************************************ restore vnode ***************************************************/
cmd ::= RESTORE VNODE ON DNODE NK_INTEGER(A).               { pCxt->pRootNode = createRestoreComponentNodeStmt(pCxt, QUERY_NODE_RESTORE_VNODE_STMT, &A); }

/************************************************ create/drop/use database ********************************************/
cmd ::= CREATE DATABASE not_exists_opt(A) db_name(B) db_options(C). { pCxt->pRootNode = createCreateDatabaseStmt(pCxt, A, &B, C); }
@@ -1674,6 +1674,14 @@ SNode* createDropComponentNodeStmt(SAstCreateContext* pCxt, ENodeType type, cons
  return (SNode*)pStmt;
}

SNode* createRestoreComponentNodeStmt(SAstCreateContext* pCxt, ENodeType type, const SToken* pDnodeId) {
  CHECK_PARSER_STATUS(pCxt);
  SRestoreComponentNodeStmt* pStmt = (SRestoreComponentNodeStmt*)nodesMakeNode(type);
  CHECK_OUT_OF_MEM(pStmt);
  pStmt->dnodeId = taosStr2Int32(pDnodeId->z, NULL, 10);
  return (SNode*)pStmt;
}

SNode* createCreateTopicStmtUseQuery(SAstCreateContext* pCxt, bool ignoreExists, SToken* pTopicName, SNode* pQuery) {
  CHECK_PARSER_STATUS(pCxt);
  if (!checkTopicName(pCxt, pTopicName)) {

@@ -183,6 +183,7 @@ static SKeyword keywordTable[] = {
    {"REPLICA", TK_REPLICA},
    {"RESET", TK_RESET},
    {"RESUME", TK_RESUME},
    {"RESTORE", TK_RESTORE},
    {"RETENTIONS", TK_RETENTIONS},
    {"REVOKE", TK_REVOKE},
    {"ROLLUP", TK_ROLLUP},

@@ -255,6 +256,7 @@ static SKeyword keywordTable[] = {
    {"VERBOSE", TK_VERBOSE},
    {"VGROUP", TK_VGROUP},
    {"VGROUPS", TK_VGROUPS},
    {"VNODE", TK_VNODE},
    {"VNODES", TK_VNODES},
    {"WAL_FSYNC_PERIOD", TK_WAL_FSYNC_PERIOD},
    {"WAL_LEVEL", TK_WAL_LEVEL},
@@ -1523,9 +1523,7 @@ static int32_t translateInterpFunc(STranslateContext* pCxt, SFunctionNode* pFunc
  SSelectStmt* pSelect = (SSelectStmt*)pCxt->pCurrStmt;
  SNode*       pTable = pSelect->pFromTable;

  if ((NULL != pTable && (QUERY_NODE_REAL_TABLE != nodeType(pTable) ||
                          (TSDB_CHILD_TABLE != ((SRealTableNode*)pTable)->pMeta->tableType &&
                           TSDB_NORMAL_TABLE != ((SRealTableNode*)pTable)->pMeta->tableType)))) {
  if ((NULL != pTable && QUERY_NODE_REAL_TABLE != nodeType(pTable))) {
    return generateSyntaxErrMsgExt(&pCxt->msgBuf, TSDB_CODE_PAR_ONLY_SUPPORT_SINGLE_TABLE,
                                   "%s is only supported in single table query", pFunc->functionName);
  }

@@ -5532,6 +5530,29 @@ static int32_t translateAlterDnode(STranslateContext* pCxt, SAlterDnodeStmt* pSt
  return buildCmdMsg(pCxt, TDMT_MND_CONFIG_DNODE, (FSerializeFunc)tSerializeSMCfgDnodeReq, &cfgReq);
}

static int32_t translateRestoreDnode(STranslateContext* pCxt, SRestoreComponentNodeStmt* pStmt) {
  SRestoreDnodeReq restoreReq = {0};
  restoreReq.dnodeId = pStmt->dnodeId;
  switch (nodeType((SNode*)pStmt)) {
    case QUERY_NODE_RESTORE_DNODE_STMT:
      restoreReq.restoreType = RESTORE_TYPE__ALL;
      break;
    case QUERY_NODE_RESTORE_QNODE_STMT:
      restoreReq.restoreType = RESTORE_TYPE__QNODE;
      break;
    case QUERY_NODE_RESTORE_MNODE_STMT:
      restoreReq.restoreType = RESTORE_TYPE__MNODE;
      break;
    case QUERY_NODE_RESTORE_VNODE_STMT:
      restoreReq.restoreType = RESTORE_TYPE__VNODE;
      break;
    default:
      return -1;
  }
  return buildCmdMsg(pCxt, TDMT_MND_RESTORE_DNODE, (FSerializeFunc)tSerializeSRestoreDnodeReq, &restoreReq);
}

static int32_t getSmaIndexDstVgId(STranslateContext* pCxt, const char* pDbName, const char* pTableName,
                                  int32_t* pVgId) {
  SVgroupInfo vg = {0};

@@ -7094,6 +7115,12 @@ static int32_t translateQuery(STranslateContext* pCxt, SNode* pNode) {
    case QUERY_NODE_SHOW_CREATE_STABLE_STMT:
      code = translateShowCreateTable(pCxt, (SShowCreateTableStmt*)pNode);
      break;
    case QUERY_NODE_RESTORE_DNODE_STMT:
    case QUERY_NODE_RESTORE_QNODE_STMT:
    case QUERY_NODE_RESTORE_MNODE_STMT:
    case QUERY_NODE_RESTORE_VNODE_STMT:
      code = translateRestoreDnode(pCxt, (SRestoreComponentNodeStmt*)pNode);
      break;
    default:
      break;
  }
File diff suppressed because it is too large
@@ -187,6 +187,63 @@ TEST_F(ParserExplainToSyncdbTest, redistributeVgroup) {
  run("REDISTRIBUTE VGROUP 5 DNODE 10 DNODE 20 DNODE 30");
}

TEST_F(ParserExplainToSyncdbTest, restoreDnode) {
  useDb("root", "test");

  SRestoreDnodeReq expect = {0};

  auto clearRestoreDnodeReq = [&]() { memset(&expect, 0, sizeof(SRestoreDnodeReq)); };

  auto setRestoreDnodeReq = [&](int32_t dnodeId, int8_t type) {
    expect.dnodeId = dnodeId;
    expect.restoreType = type;
  };

  setCheckDdlFunc([&](const SQuery* pQuery, ParserStage stage) {
    int32_t expectNodeType = 0;
    switch (expect.restoreType) {
      case RESTORE_TYPE__ALL:
        expectNodeType = QUERY_NODE_RESTORE_DNODE_STMT;
        break;
      case RESTORE_TYPE__MNODE:
        expectNodeType = QUERY_NODE_RESTORE_MNODE_STMT;
        break;
      case RESTORE_TYPE__VNODE:
        expectNodeType = QUERY_NODE_RESTORE_VNODE_STMT;
        break;
      case RESTORE_TYPE__QNODE:
        expectNodeType = QUERY_NODE_RESTORE_QNODE_STMT;
        break;
      default:
        break;
    }
    ASSERT_EQ(nodeType(pQuery->pRoot), expectNodeType);
    ASSERT_EQ(pQuery->pCmdMsg->msgType, TDMT_MND_RESTORE_DNODE);
    SRestoreDnodeReq req = {0};
    ASSERT_EQ(tDeserializeSRestoreDnodeReq(pQuery->pCmdMsg->pMsg, pQuery->pCmdMsg->msgLen, &req), TSDB_CODE_SUCCESS);
    ASSERT_EQ(req.dnodeId, expect.dnodeId);
    ASSERT_EQ(req.restoreType, expect.restoreType);
  });

  setRestoreDnodeReq(1, RESTORE_TYPE__ALL);
  run("RESTORE DNODE 1");
  clearRestoreDnodeReq();

  setRestoreDnodeReq(2, RESTORE_TYPE__MNODE);
  run("RESTORE MNODE ON DNODE 2");
  clearRestoreDnodeReq();

  setRestoreDnodeReq(1, RESTORE_TYPE__VNODE);
  run("RESTORE VNODE ON DNODE 1");
  clearRestoreDnodeReq();

  setRestoreDnodeReq(2, RESTORE_TYPE__QNODE);
  run("RESTORE QNODE ON DNODE 2");
  clearRestoreDnodeReq();
}

// todo reset query cache

TEST_F(ParserExplainToSyncdbTest, revoke) {
@ -283,6 +283,14 @@ int32_t tfsMkdir(STfs *pTfs, const char *rname) {
  return 0;
}

bool tfsDirExistAt(STfs *pTfs, const char *rname, SDiskID diskId) {
  STfsDisk *pDisk = TFS_DISK_AT(pTfs, diskId);
  char      aname[TMPNAME_LEN];

  snprintf(aname, TMPNAME_LEN, "%s%s%s", pDisk->path, TD_DIRSEP, rname);
  return taosDirExist(aname);
}

int32_t tfsRmdir(STfs *pTfs, const char *rname) {
  if (rname[0] == 0) {
    return 0;
@ -321,6 +321,9 @@ TAOS_DEFINE_ERROR(TSDB_CODE_QNODE_NOT_DEPLOYED, "Qnode not deployed")
TAOS_DEFINE_ERROR(TSDB_CODE_SNODE_NOT_FOUND, "Snode not found")
TAOS_DEFINE_ERROR(TSDB_CODE_SNODE_ALREADY_DEPLOYED, "Snode already deployed")
TAOS_DEFINE_ERROR(TSDB_CODE_SNODE_NOT_DEPLOYED, "Snode not deployed")
TAOS_DEFINE_ERROR(TSDB_CODE_MNODE_NOT_CATCH_UP, "Mnode didn't catch the leader")
TAOS_DEFINE_ERROR(TSDB_CODE_MNODE_ALREADY_IS_VOTER, "Mnode already is a leader")
TAOS_DEFINE_ERROR(TSDB_CODE_MNODE_ONLY_TWO_MNODE, "Only two mnodes exist")

// vnode
TAOS_DEFINE_ERROR(TSDB_CODE_VND_INVALID_VGROUP_ID, "Vnode is closed or removed")

@ -336,6 +339,9 @@ TAOS_DEFINE_ERROR(TSDB_CODE_VND_NO_AVAIL_BUFPOOL, "No availabe buffer po
TAOS_DEFINE_ERROR(TSDB_CODE_VND_STOPPED, "Vnode stopped")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_DUP_REQUEST, "Duplicate write request")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_QUERY_BUSY, "Query busy")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_NOT_CATCH_UP, "Vnode didn't catch up its leader")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_ALREADY_IS_VOTER, "Vnode already is a voter")
TAOS_DEFINE_ERROR(TSDB_CODE_VND_DIR_ALREADY_EXIST, "Vnode directory already exist")

// tsdb
TAOS_DEFINE_ERROR(TSDB_CODE_TDB_INVALID_TABLE_ID, "Invalid table ID")

@ -399,9 +405,9 @@ TAOS_DEFINE_ERROR(TSDB_CODE_QRY_QWORKER_QUIT, "Vnode/Qnode is quitti
// grant
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_EXPIRED, "License expired")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_DNODE_LIMITED, "DNode creation limited by licence")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_DNODE_LIMITED, "DNode creation limited by license")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_ACCT_LIMITED, "Account creation limited by license")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_TIMESERIES_LIMITED, "Table creation limited by license")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_TIMESERIES_LIMITED, "Time series limited by license")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_DB_LIMITED, "DB creation limited by license")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_USER_LIMITED, "User creation limited by license")
TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_CONN_LIMITED, "Conn creation limited by license")

@ -595,23 +601,16 @@ TAOS_DEFINE_ERROR(TSDB_CODE_SML_INTERNAL_ERROR, "Internal error")
//tsma
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_INIT_FAILED, "Tsma init failed")
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_ALREADY_EXIST, "Tsma already exists")
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_NO_INDEX_IN_META, "No tsma index in meta")
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_INVALID_ENV, "Invalid tsma env")
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_INVALID_STAT, "Invalid tsma state")
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_INVALID_PTR, "Invalid tsma pointer")
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_INVALID_PARA, "Invalid tsma parameters")
TAOS_DEFINE_ERROR(TSDB_CODE_TSMA_NO_INDEX_IN_CACHE, "No tsma index in cache")

//rsma
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_INVALID_ENV, "Invalid rsma env")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_INVALID_STAT, "Invalid rsma state")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_QTASKINFO_CREATE, "Rsma qtaskinfo creation error")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_FS_COMMIT, "Rsma fs commit error")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_REMOVE_EXISTS, "Rsma remove exists")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_FETCH_MSG_MSSED_UP, "Rsma fetch msg is messed up")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_EMPTY_INFO, "Rsma info is empty")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_INVALID_SCHEMA, "Rsma invalid schema")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_REGEX_MATCH, "Rsma regex match")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_STREAM_STATE_OPEN, "Rsma stream state open")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_STREAM_STATE_COMMIT, "Rsma stream state commit")
TAOS_DEFINE_ERROR(TSDB_CODE_RSMA_FS_REF, "Rsma fs ref error")
@ -2,7 +2,7 @@ FROM python:3.8
RUN pip3 config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
RUN pip3 install pandas psutil fabric2 requests faker simplejson toml pexpect tzlocal distro
RUN apt-get update
RUN apt-get install -y psmisc sudo tree libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config build-essential valgrind \
RUN apt-get install -y psmisc sudo tree libgeos-dev libjansson-dev libsnappy-dev liblzma-dev libz-dev zlib1g pkg-config build-essential valgrind \
    vim libjemalloc-dev openssh-server screen sshpass net-tools dirmngr gnupg apt-transport-https ca-certificates software-properties-common r-base iputils-ping
RUN apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E298A3A825C0D65DFD57CBB651716619E084DAB9
RUN add-apt-repository 'deb https://cloud.r-project.org/bin/linux/ubuntu focal-cran40/'
@ -811,7 +811,8 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 6-cluster/5dnode3mnodeRestartDnodeInsertData.py -N 6 -M 3
,,y,system-test,./pytest.sh python3 ./test.py -f 6-cluster/5dnode3mnodeRestartDnodeInsertData.py -N 6 -M 3 -n 3
,,y,system-test,./pytest.sh python3 ./test.py -f 6-cluster/5dnode3mnodeRestartDnodeInsertDataAsync.py -N 6 -M 3
#,y,system-test,./pytest.sh python3 ./test.py -f 6-cluster/5dnode3mnodeRestartDnodeInsertDataAsync.py -N 6 -M 3 -n 3
#,,y,system-test,./pytest.sh python3 ./test.py -f 6-cluster/5dnode3mnodeRestartDnodeInsertDataAsync.py -N 6 -M 3 -n 3
,,n,system-test,python3 ./test.py -f 6-cluster/manually-test/6dnode3mnodeInsertLessDataAlterRep3to1to3.py -N 6 -M 3

,,y,system-test,./pytest.sh python3 ./test.py -f 6-cluster/5dnode3mnodeAdd1Ddnoe.py -N 7 -M 3 -C 6
,,y,system-test,./pytest.sh python3 ./test.py -f 6-cluster/5dnode3mnodeAdd1Ddnoe.py -N 7 -M 3 -C 6 -n 3
@ -55,7 +55,9 @@ python_error=`cat ${LOG_DIR}/*.info | grep -w "stack" | wc -l`
# /home/chr/TDengine/source/libs/scalar/src/filter.c:3149:14: runtime error: applying non-zero offset 18446744073709551615 to null pointer
# /home/TDinternal/community/source/libs/scalar/src/sclvector.c:1109:66: runtime error: signed integer overflow: 9223372034707292160 + 1676867897049 cannot be represented in type 'long int'

runtime_error=`cat ${LOG_DIR}/*.asan | grep "runtime error" | grep -v "trees.c:873" | grep -v "sclfunc.c.*outside the range of representable values of type"| grep -v "signed integer overflow" |grep -v "strerror.c"| grep -v "asan_malloc_linux.cc" |wc -l`
#0 0x7f2d64f5a808 in __interceptor_malloc ../../../../src/libsanitizer/asan/asan_malloc_linux.cc:144
#1 0x7f2d63fcf459 in strerror /build/glibc-SzIz7B/glibc-2.31/string/strerror.c:38
runtime_error=`cat ${LOG_DIR}/*.asan | grep "runtime error" | grep -v "trees.c:873" | grep -v "sclfunc.c.*outside the range of representable values of type"| grep -v "signed integer overflow" |grep -v "strerror.c"| grep -v "asan_malloc_linux.cc" |grep -v "strerror.c"|wc -l`

echo -e "\033[44;32;1m"asan error_num: $error_num"\033[0m"
echo -e "\033[44;32;1m"asan memory_leak: $memory_leak"\033[0m"
@ -23,6 +23,8 @@ class TDTestCase:
        stbname = "stb"
        ctbname1 = "ctb1"
        ctbname2 = "ctb2"
        ctbname3 = "ctb3"
        num_of_ctables = 3

        tdSql.prepare()
@ -816,17 +818,26 @@ class TDTestCase:
        )

        tdSql.execute(
            f'''create table if not exists {dbname}.{ctbname2} using {dbname}.{stbname} tags(1)
            f'''create table if not exists {dbname}.{ctbname2} using {dbname}.{stbname} tags(2)
            '''
        )

        tdSql.execute(f"insert into {dbname}.{ctbname1} values ('2020-02-01 00:00:05', 5, 5, 5, 5, 5.0, 5.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname1} values ('2020-02-01 00:00:10', 10, 10, 10, 10, 10.0, 10.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname1} values ('2020-02-01 00:00:15', 15, 15, 15, 15, 15.0, 15.0, true, 'varchar', 'nchar')")
        tdSql.execute(
            f'''create table if not exists {dbname}.{ctbname3} using {dbname}.{stbname} tags(3)
            '''
        )

        tdSql.execute(f"insert into {dbname}.{ctbname2} values ('2020-02-02 00:00:05', 5, 5, 5, 5, 5.0, 5.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname2} values ('2020-02-02 00:00:10', 10, 10, 10, 10, 10.0, 10.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname2} values ('2020-02-02 00:00:15', 15, 15, 15, 15, 15.0, 15.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname1} values ('2020-02-01 00:00:01', 1, 1, 1, 1, 1.0, 1.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname1} values ('2020-02-01 00:00:07', 7, 7, 7, 7, 7.0, 7.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname1} values ('2020-02-01 00:00:13', 13, 13, 13, 13, 13.0, 13.0, true, 'varchar', 'nchar')")

        tdSql.execute(f"insert into {dbname}.{ctbname2} values ('2020-02-01 00:00:03', 3, 3, 3, 3, 3.0, 3.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname2} values ('2020-02-01 00:00:09', 9, 9, 9, 9, 9.0, 9.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname2} values ('2020-02-01 00:00:15', 15, 15, 15, 15, 15.0, 15.0, true, 'varchar', 'nchar')")

        tdSql.execute(f"insert into {dbname}.{ctbname3} values ('2020-02-01 00:00:05', 5, 5, 5, 5, 5.0, 5.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname3} values ('2020-02-01 00:00:11', 11, 11, 11, 11, 11.0, 11.0, true, 'varchar', 'nchar')")
        tdSql.execute(f"insert into {dbname}.{ctbname3} values ('2020-02-01 00:00:17', 17, 17, 17, 17, 17.0, 17.0, true, 'varchar', 'nchar')")

        tdSql.execute(f"flush database {dbname}");
@ -834,7 +845,7 @@ class TDTestCase:
        # test fill null

        ## | {. | | .} |
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-01 00:00:05', '2020-02-11 00:00:05') every(1d) fill(null)")
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-01 00:00:05', '2020-02-11 00:00:06') every(1d) fill(null)")
        tdSql.checkRows(11)
        tdSql.checkData(0, 0, 5)
        tdSql.checkData(1, 0, None)
@ -881,7 +892,7 @@ class TDTestCase:
        # test fill value

        ## | {. | | .} |
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-01 00:00:05', '2020-02-11 00:00:05') every(1d) fill(value, 1)")
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-01 00:00:05', '2020-02-11 00:00:06') every(1d) fill(value, 1)")
        tdSql.checkRows(11)
        tdSql.checkData(0, 0, 5)
        tdSql.checkData(1, 0, 1)
@ -895,7 +906,7 @@ class TDTestCase:
        tdSql.checkData(9, 0, 1)
        tdSql.checkData(10, 0, 15)

        ## | . | {} | . |
        # | . | {} | . |
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-03 00:00:05', '2020-02-07 00:00:05') every(1d) fill(value, 1)")
        tdSql.checkRows(5)
        tdSql.checkData(0, 0, 1)
@ -928,7 +939,7 @@ class TDTestCase:
        # test fill prev

        ## | {. | | .} |
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-01 00:00:05', '2020-02-11 00:00:05') every(1d) fill(prev)")
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-01 00:00:05', '2020-02-11 00:00:06') every(1d) fill(prev)")
        tdSql.checkRows(11)
        tdSql.checkData(0, 0, 5)
        tdSql.checkData(1, 0, 5)
@ -973,7 +984,7 @@ class TDTestCase:
        # test fill next

        ## | {. | | .} |
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-01 00:00:05', '2020-02-11 00:00:05') every(1d) fill(next)")
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-01 00:00:05', '2020-02-11 00:00:06') every(1d) fill(next)")
        tdSql.checkRows(11)
        tdSql.checkData(0, 0, 5)
        tdSql.checkData(1, 0, 15)
@ -1015,7 +1026,7 @@ class TDTestCase:
        # test fill linear

        ## | {. | | .} |
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-01 00:00:05', '2020-02-11 00:00:05') every(1d) fill(linear)")
        tdSql.query(f"select interp(c0) from {dbname}.{tbname} range('2020-02-01 00:00:05', '2020-02-11 00:00:06') every(1d) fill(linear)")
        tdSql.checkRows(11)
        tdSql.checkData(0, 0, 5)
        tdSql.checkData(1, 0, 6)
@ -2391,19 +2402,516 @@ class TDTestCase:

        tdLog.printNoPrefix("==========step13:stable cases")
        tdLog.printNoPrefix("==========step13:test stable cases")

        tdSql.error(f"select interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:04', '2020-02-01 00:00:16') every(1s) fill(null)")
        #tdSql.checkRows(13)
        # select interp from supertable
        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(null)")
        tdSql.checkRows(19)

        #tdSql.query(f"select interp(c0) from {dbname}.{ctbname1} range('2020-02-01 00:00:04', '2020-02-01 00:00:16') every(1s) fill(null)")
        #tdSql.checkRows(13)
        tdSql.checkData(0, 2, None)
        tdSql.checkData(1, 2, 1)
        tdSql.checkData(2, 2, None)
        tdSql.checkData(3, 2, 3)
        tdSql.checkData(4, 2, None)
        tdSql.checkData(5, 2, 5)
        tdSql.checkData(6, 2, None)
        tdSql.checkData(7, 2, 7)
        tdSql.checkData(8, 2, None)
        tdSql.checkData(9, 2, 9)
        tdSql.checkData(10, 2, None)
        tdSql.checkData(11, 2, 11)
        tdSql.checkData(12, 2, None)
        tdSql.checkData(13, 2, 13)
        tdSql.checkData(14, 2, None)
        tdSql.checkData(15, 2, 15)
        tdSql.checkData(16, 2, None)
        tdSql.checkData(17, 2, 17)
        tdSql.checkData(18, 2, None)

        tdSql.error(f"select interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:04', '2020-02-02 00:00:16') every(1s) fill(null)")
        #tdSql.checkRows(13)
        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(value, 0)")
        tdSql.checkRows(19)

        tdSql.checkData(0, 2, 0)
        tdSql.checkData(1, 2, 1)
        tdSql.checkData(2, 2, 0)
        tdSql.checkData(3, 2, 3)
        tdSql.checkData(4, 2, 0)
        tdSql.checkData(5, 2, 5)
        tdSql.checkData(6, 2, 0)
        tdSql.checkData(7, 2, 7)
        tdSql.checkData(8, 2, 0)
        tdSql.checkData(9, 2, 9)
        tdSql.checkData(10, 2, 0)
        tdSql.checkData(11, 2, 11)
        tdSql.checkData(12, 2, 0)
        tdSql.checkData(13, 2, 13)
        tdSql.checkData(14, 2, 0)
        tdSql.checkData(15, 2, 15)
        tdSql.checkData(16, 2, 0)
        tdSql.checkData(17, 2, 17)
        tdSql.checkData(18, 2, 0)
tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(prev)")
|
||||
tdSql.checkRows(18)
|
||||
|
||||
tdSql.checkData(0, 0, '2020-02-01 00:00:01.000')
|
||||
tdSql.checkData(0, 1, False)
|
||||
|
||||
tdSql.checkData(0, 2, 1)
|
||||
tdSql.checkData(1, 2, 1)
|
||||
tdSql.checkData(2, 2, 3)
|
||||
tdSql.checkData(3, 2, 3)
|
||||
tdSql.checkData(4, 2, 5)
|
||||
tdSql.checkData(5, 2, 5)
|
||||
tdSql.checkData(6, 2, 7)
|
||||
tdSql.checkData(7, 2, 7)
|
||||
tdSql.checkData(8, 2, 9)
|
||||
tdSql.checkData(9, 2, 9)
|
||||
tdSql.checkData(10, 2, 11)
|
||||
tdSql.checkData(11, 2, 11)
|
||||
tdSql.checkData(12, 2, 13)
|
||||
tdSql.checkData(13, 2, 13)
|
||||
tdSql.checkData(14, 2, 15)
|
||||
tdSql.checkData(15, 2, 15)
|
||||
tdSql.checkData(16, 2, 17)
|
||||
tdSql.checkData(17, 2, 17)
|
||||
|
||||
tdSql.checkData(17, 0, '2020-02-01 00:00:18.000')
|
||||
tdSql.checkData(17, 1, True)
|
||||
|
||||
tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(next)")
|
||||
tdSql.checkRows(18)
|
||||
|
||||
tdSql.checkData(0, 0, '2020-02-01 00:00:00.000')
|
||||
tdSql.checkData(0, 1, True)
|
||||
|
||||
tdSql.checkData(0, 2, 1)
|
||||
tdSql.checkData(1, 2, 1)
|
||||
tdSql.checkData(2, 2, 3)
|
||||
tdSql.checkData(3, 2, 3)
|
||||
tdSql.checkData(4, 2, 5)
|
||||
tdSql.checkData(5, 2, 5)
|
||||
tdSql.checkData(6, 2, 7)
|
||||
tdSql.checkData(7, 2, 7)
|
||||
tdSql.checkData(8, 2, 9)
|
||||
tdSql.checkData(9, 2, 9)
|
||||
tdSql.checkData(10, 2, 11)
|
||||
tdSql.checkData(11, 2, 11)
|
||||
tdSql.checkData(12, 2, 13)
|
||||
tdSql.checkData(13, 2, 13)
|
||||
tdSql.checkData(14, 2, 15)
|
||||
tdSql.checkData(15, 2, 15)
|
||||
tdSql.checkData(16, 2, 17)
|
||||
tdSql.checkData(17, 2, 17)
|
||||
|
||||
tdSql.checkData(17, 0, '2020-02-01 00:00:17.000')
|
||||
tdSql.checkData(17, 1, False)
|
||||
|
||||
tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
|
||||
tdSql.checkRows(17)
|
||||
|
||||
tdSql.checkData(0, 2, 1)
|
||||
tdSql.checkData(1, 2, 2)
|
||||
tdSql.checkData(2, 2, 3)
|
||||
tdSql.checkData(3, 2, 4)
|
||||
tdSql.checkData(4, 2, 5)
|
||||
tdSql.checkData(5, 2, 6)
|
||||
tdSql.checkData(6, 2, 7)
|
||||
tdSql.checkData(7, 2, 8)
|
||||
tdSql.checkData(8, 2, 9)
|
||||
tdSql.checkData(9, 2, 10)
|
||||
tdSql.checkData(10, 2, 11)
|
||||
tdSql.checkData(11, 2, 12)
|
||||
tdSql.checkData(12, 2, 13)
|
||||
tdSql.checkData(13, 2, 14)
|
||||
tdSql.checkData(14, 2, 15)
|
||||
tdSql.checkData(15, 2, 16)
|
||||
tdSql.checkData(16, 2, 17)
|
||||
|
||||
        # select interp from supertable partition by tbname

        tdSql.query(f"select tbname, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(null)")

        point_idx = {1, 7, 13, 22, 28, 34, 43, 49, 55}
        point_dict = {1:1, 7:7, 13:13, 22:3, 28:9, 34:15, 43:5, 49:11, 55:17}
        rows_per_partition = 19
        tdSql.checkRows(rows_per_partition * num_of_ctables)
        for i in range(num_of_ctables):
            for j in range(rows_per_partition):
                row = j + i * rows_per_partition
                tdSql.checkData(row, 0, f'ctb{i + 1}')
                tdSql.checkData(j, 1, f'2020-02-01 00:00:{j}.000')
                if row in point_idx:
                    tdSql.checkData(row, 2, False)
                else:
                    tdSql.checkData(row, 2, True)

                if row in point_idx:
                    tdSql.checkData(row, 3, point_dict[row])
                else:
                    tdSql.checkData(row, 3, None)

        tdSql.query(f"select tbname, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(value, 0)")

        point_idx = {1, 7, 13, 22, 28, 34, 43, 49, 55}
        point_dict = {1:1, 7:7, 13:13, 22:3, 28:9, 34:15, 43:5, 49:11, 55:17}
        rows_per_partition = 19
        tdSql.checkRows(rows_per_partition * num_of_ctables)
        for i in range(num_of_ctables):
            for j in range(rows_per_partition):
                row = j + i * rows_per_partition
                tdSql.checkData(row, 0, f'ctb{i + 1}')
                tdSql.checkData(j, 1, f'2020-02-01 00:00:{j}.000')
                if row in point_idx:
                    tdSql.checkData(row, 2, False)
                else:
                    tdSql.checkData(row, 2, True)

                if row in point_idx:
                    tdSql.checkData(row, 3, point_dict[row])
                else:
                    tdSql.checkData(row, 3, 0)
tdSql.query(f"select tbname, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(prev)")
|
||||
|
||||
tdSql.checkRows(48)
|
||||
for i in range(0, 18):
|
||||
tdSql.checkData(i, 0, 'ctb1')
|
||||
|
||||
for i in range(18, 34):
|
||||
tdSql.checkData(i, 0, 'ctb2')
|
||||
|
||||
for i in range(34, 48):
|
||||
tdSql.checkData(i, 0, 'ctb3')
|
||||
|
||||
tdSql.checkData(0, 1, '2020-02-01 00:00:01.000')
|
||||
tdSql.checkData(17, 1, '2020-02-01 00:00:18.000')
|
||||
|
||||
tdSql.checkData(18, 1, '2020-02-01 00:00:03.000')
|
||||
tdSql.checkData(33, 1, '2020-02-01 00:00:18.000')
|
||||
|
||||
tdSql.checkData(34, 1, '2020-02-01 00:00:05.000')
|
||||
tdSql.checkData(47, 1, '2020-02-01 00:00:18.000')
|
||||
|
||||
for i in range(0, 6):
|
||||
tdSql.checkData(i, 3, 1)
|
||||
|
||||
for i in range(6, 12):
|
||||
tdSql.checkData(i, 3, 7)
|
||||
|
||||
for i in range(12, 18):
|
||||
tdSql.checkData(i, 3, 13)
|
||||
|
||||
for i in range(18, 24):
|
||||
tdSql.checkData(i, 3, 3)
|
||||
|
||||
for i in range(24, 30):
|
||||
tdSql.checkData(i, 3, 9)
|
||||
|
||||
for i in range(30, 34):
|
||||
tdSql.checkData(i, 3, 15)
|
||||
|
||||
for i in range(34, 40):
|
||||
tdSql.checkData(i, 3, 5)
|
||||
|
||||
for i in range(40, 46):
|
||||
tdSql.checkData(i, 3, 11)
|
||||
|
||||
for i in range(46, 48):
|
||||
tdSql.checkData(i, 3, 17)
|
||||
|
||||
tdSql.query(f"select tbname, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(next)")
|
||||
|
||||
tdSql.checkRows(48)
|
||||
for i in range(0, 14):
|
||||
tdSql.checkData(i, 0, 'ctb1')
|
||||
|
||||
for i in range(14, 30):
|
||||
tdSql.checkData(i, 0, 'ctb2')
|
||||
|
||||
for i in range(30, 48):
|
||||
tdSql.checkData(i, 0, 'ctb3')
|
||||
|
||||
tdSql.checkData(0, 1, '2020-02-01 00:00:00.000')
|
||||
tdSql.checkData(13, 1, '2020-02-01 00:00:13.000')
|
||||
|
||||
tdSql.checkData(14, 1, '2020-02-01 00:00:00.000')
|
||||
tdSql.checkData(29, 1, '2020-02-01 00:00:15.000')
|
||||
|
||||
tdSql.checkData(30, 1, '2020-02-01 00:00:00.000')
|
||||
tdSql.checkData(47, 1, '2020-02-01 00:00:17.000')
|
||||
|
||||
for i in range(0, 2):
|
||||
tdSql.checkData(i, 3, 1)
|
||||
|
||||
for i in range(2, 8):
|
||||
tdSql.checkData(i, 3, 7)
|
||||
|
||||
for i in range(8, 14):
|
||||
tdSql.checkData(i, 3, 13)
|
||||
|
||||
for i in range(14, 18):
|
||||
tdSql.checkData(i, 3, 3)
|
||||
|
||||
for i in range(18, 24):
|
||||
tdSql.checkData(i, 3, 9)
|
||||
|
||||
for i in range(24, 30):
|
||||
tdSql.checkData(i, 3, 15)
|
||||
|
||||
for i in range(30, 36):
|
||||
tdSql.checkData(i, 3, 5)
|
||||
|
||||
for i in range(36, 42):
|
||||
tdSql.checkData(i, 3, 11)
|
||||
|
||||
for i in range(42, 48):
|
||||
tdSql.checkData(i, 3, 17)
|
||||
|
||||
tdSql.query(f"select tbname, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
|
||||
|
||||
tdSql.checkRows(39)
|
||||
for i in range(0, 13):
|
||||
tdSql.checkData(i, 0, 'ctb1')
|
||||
|
||||
for i in range(13, 26):
|
||||
tdSql.checkData(i, 0, 'ctb2')
|
||||
|
||||
for i in range(26, 39):
|
||||
tdSql.checkData(i, 0, 'ctb3')
|
||||
|
||||
tdSql.checkData(0, 1, '2020-02-01 00:00:01.000')
|
||||
tdSql.checkData(12, 1, '2020-02-01 00:00:13.000')
|
||||
|
||||
tdSql.checkData(13, 1, '2020-02-01 00:00:03.000')
|
||||
tdSql.checkData(25, 1, '2020-02-01 00:00:15.000')
|
||||
|
||||
tdSql.checkData(26, 1, '2020-02-01 00:00:05.000')
|
||||
tdSql.checkData(38, 1, '2020-02-01 00:00:17.000')
|
||||
|
||||
for i in range(0, 13):
|
||||
tdSql.checkData(i, 3, i + 1)
|
||||
|
||||
for i in range(13, 26):
|
||||
tdSql.checkData(i, 3, i - 10)
|
||||
|
||||
for i in range(26, 39):
|
||||
tdSql.checkData(i, 3, i - 21)
|
||||
|
||||
        # select interp from supertable partition by column

        tdSql.query(f"select c0, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by c0 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(null)")
        tdSql.checkRows(171)

        tdSql.query(f"select c0, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by c0 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(value, 0)")
        tdSql.checkRows(171)

        tdSql.query(f"select c0, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by c0 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(prev)")
        tdSql.checkRows(90)

        tdSql.query(f"select c0, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by c0 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(next)")
        tdSql.checkRows(90)

        tdSql.query(f"select c0, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by c0 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(9)

        # select interp from supertable partition by tag

        tdSql.query(f"select t1, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by t1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(null)")
        tdSql.checkRows(57)

        tdSql.query(f"select t1, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by t1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(value, 0)")
        tdSql.checkRows(57)

        tdSql.query(f"select t1, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by t1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(prev)")
        tdSql.checkRows(48)

        tdSql.query(f"select t1, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by t1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(next)")
        tdSql.checkRows(48)

        tdSql.query(f"select t1, _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by t1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(39)
        # select interp from supertable filter

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where ts between '2020-02-01 00:00:01.000' and '2020-02-01 00:00:13.000' range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(13)

        for i in range(13):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where c0 <= 13 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(13)

        for i in range(13):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where t1 = 1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(13)

        for i in range(13):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where tbname = 'ctb1' range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(13)

        for i in range(13):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where ts between '2020-02-01 00:00:01.000' and '2020-02-01 00:00:13.000' partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(27)

        for i in range(13):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where c0 <= 13 partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(27)

        for i in range(13):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where t1 = 1 partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(13)

        for i in range(13):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where tbname = 'ctb1' partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(13)

        for i in range(13):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)
        # select interp from supertable filter limit

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 13")
        tdSql.checkRows(13)

        for i in range(13):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 20")
        tdSql.checkRows(17)

        for i in range(17):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where ts between '2020-02-01 00:00:01.000' and '2020-02-01 00:00:13.000' range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 10")
        tdSql.checkRows(10)

        for i in range(10):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where c0 <= 13 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 10")
        tdSql.checkRows(10)

        for i in range(10):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where t1 = 1 range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 10")
        tdSql.checkRows(10)

        for i in range(10):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where tbname = 'ctb1' range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 10")
        tdSql.checkRows(10)

        for i in range(10):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 13")
        tdSql.checkRows(13)

        for i in range(13):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 40")
        tdSql.checkRows(39)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where ts between '2020-02-01 00:00:01.000' and '2020-02-01 00:00:13.000' partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 10")
        tdSql.checkRows(10)

        for i in range(10):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where c0 <= 13 partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 10")
        tdSql.checkRows(10)

        for i in range(10):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where t1 = 1 partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 10")
        tdSql.checkRows(10)

        for i in range(10):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} where tbname = 'ctb1' partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear) limit 10")
        tdSql.checkRows(10)

        for i in range(10):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 1)
        # select interp from supertable with scalar expression

        tdSql.query(f"select _irowts, _isfilled, interp(1 + 1) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(17)

        for i in range(17):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, 2.0)

        tdSql.query(f"select _irowts, _isfilled, interp(c0 + 1) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(17)

        for i in range(17):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, i + 2)

        tdSql.query(f"select _irowts, _isfilled, interp(c0 * 2) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(17)

        for i in range(17):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, (i + 1) * 2)

        tdSql.query(f"select _irowts, _isfilled, interp(c0 + c1) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(linear)")
        tdSql.checkRows(17)

        for i in range(17):
            tdSql.checkData(i, 0, f'2020-02-01 00:00:{i + 1}.000')
            tdSql.checkData(i, 2, (i + 1) * 2)
        # check duplicate timestamp

        # add duplicate timestamp for different child tables
        tdSql.execute(f"insert into {dbname}.{ctbname1} values ('2020-02-01 00:00:15', 15, 15, 15, 15, 15.0, 15.0, true, 'varchar', 'nchar')")

        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:14') every(1s) fill(null)")
        tdSql.error(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:15') every(1s) fill(null)")
        tdSql.error(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(null)")
        tdSql.query(f"select _irowts, _isfilled, interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(null)")

        #tdSql.query(f"select _irowts,interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:04', '2020-02-02 00:00:16') every(1h) fill(prev)")
        #tdSql.query(f"select tbname,_irowts,interp(c0) from {dbname}.{stbname} partition by tbname range('2020-02-01 00:00:04', '2020-02-02 00:00:16') every(1h) fill(prev)")

        tdLog.printNoPrefix("======step 14: test interp pseudo columns")
        tdSql.error(f"select _irowts, c6 from {dbname}.{tbname}")
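Taken together, the cases above pin down the new super-table behavior of interp: with no PARTITION BY, rows from all child tables are merged onto one timeline before interpolation, and duplicate timestamps across child tables inside the range are rejected; with PARTITION BY tbname, each child table is interpolated on its own timeline. A minimal sketch of the two query shapes (db, stb, and c0 stand in for the database, super table, and column created by this test):

    -- merged timeline across all child tables of the super table
    select _irowts, _isfilled, interp(c0) from db.stb
      range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(prev);

    -- one interpolated timeline per child table
    select tbname, _irowts, _isfilled, interp(c0) from db.stb
      partition by tbname
      range('2020-02-01 00:00:00', '2020-02-01 00:00:18') every(1s) fill(prev);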
@ -22,7 +22,8 @@ class TDTestCase:
        tdSql.execute("insert into db.ctb using db.stb tags(1) (ts, c1) values (now, 1)")

        tdSql.query("select count(*) from information_schema.ins_columns")
        tdSql.checkData(0, 0, 277)
        # enterprise version: 285, community version: 277
        tdSql.checkData(0, 0, 285)

        tdSql.query("select * from information_schema.ins_columns where table_name = 'ntb'")
        tdSql.checkRows(14)
@ -256,12 +256,12 @@ class ClusterComCheck:
                if vgroup_status_first.count('leader') == 1 and vgroup_status_first.count('follower') == 2:
                    if vgroup_status_last.count('leader') == 1 and vgroup_status_last.count('follower') == 2:
                        ready_time = (count + 1)
                        tdLog.success(f"all vgroups of {db_name} are ready in {ready_time} s")
                        tdLog.success(f"elections of {db_name} all vgroups are ready in {ready_time} s")
                        return True
            count += 1
        else:
            tdLog.debug(tdSql.queryResult)
            tdLog.notice(f"all vgroups leader of {db_name} is selected {count}s ")
            tdLog.notice(f"elections of {db_name} all vgroups are failed in {count}s ")
            caller = inspect.getframeinfo(inspect.stack()[1][0])
            args = (caller.filename, caller.lineno)
            tdLog.exit("%s(%d) failed " % args)
@ -0,0 +1,222 @@
import taos
import sys
import time
import os

from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import TDDnodes
from util.dnodes import TDDnode
from util.cluster import *
sys.path.append("./6-cluster")
from clusterCommonCreate import *
from clusterCommonCheck import clusterComCheck

import time
import socket
import subprocess
from multiprocessing import Process
import threading
import time
import inspect
import ctypes

class TDTestCase:

    def init(self, conn, logSql, replicaVar=1):
        tdLog.debug(f"start to execute {__file__}")
        self.TDDnodes = None
        tdSql.init(conn.cursor())
        self.host = socket.gethostname()

    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath

    def _async_raise(self, tid, exctype):
        """raises the exception, performs cleanup if needed"""
        if not inspect.isclass(exctype):
            exctype = type(exctype)
        res = ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, ctypes.py_object(exctype))
        if res == 0:
            raise ValueError("invalid thread id")
        elif res != 1:
            # """if it returns a number greater than one, you're in trouble,
            # and you should call it again with exc=NULL to revert the effect"""
            ctypes.pythonapi.PyThreadState_SetAsyncExc(tid, None)
            raise SystemError("PyThreadState_SetAsyncExc failed")

    def stopThread(self, thread):
        self._async_raise(thread.ident, SystemExit)

    def insertData(self, countstart, countstop):
        # first add data: db\stable\childtable\general table
        for couti in range(countstart, countstop):
            tdLog.debug("drop database if exists db%d" % couti)
            tdSql.execute("drop database if exists db%d" % couti)
            print("create database if not exists db%d replica 1 duration 300" % couti)
            tdSql.execute("create database if not exists db%d replica 1 duration 300" % couti)
            tdSql.execute("use db%d" % couti)
            tdSql.execute(
                '''create table stb1
                (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
                tags (t1 int)
                '''
            )
            tdSql.execute(
                '''
                create table t1
                (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
                '''
            )
            for i in range(4):
                tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')

    def fiveDnodeThreeMnode(self, dnodeNumbers, mnodeNums, restartNumbers, stopRole):
        tdLog.printNoPrefix("======== test case 1: ")
        paraDict = {'dbName': 'db0_0',
                    'dropFlag': 1,
                    'event': '',
                    'vgroups': 4,
                    'replica': 1,
                    'stbName': 'stb',
                    'stbNumbers': 2,
                    'colPrefix': 'c',
                    'tagPrefix': 't',
                    'colSchema': [{'type': 'INT', 'count':1}, {'type': 'binary', 'len':20, 'count':1}],
                    'tagSchema': [{'type': 'INT', 'count':1}, {'type': 'binary', 'len':20, 'count':1}],
                    'ctbPrefix': 'ctb',
                    'ctbNum': 1000,
                    'startTs': 1640966400000,  # 2022-01-01 00:00:00.000
                    "rowsPerTbl": 100,
                    "batchNum": 5000
                    }

        dnodeNumbers = int(dnodeNumbers)
        mnodeNums = int(mnodeNums)
        vnodeNumbers = int(dnodeNumbers - mnodeNums)
        allctbNumbers = (paraDict['stbNumbers'] * paraDict["ctbNum"])
        rowsPerStb = paraDict["ctbNum"] * paraDict["rowsPerTbl"]
        rowsall = rowsPerStb * paraDict['stbNumbers']
        dbNumbers = 1
        replica3 = 3
        tdLog.info("first check dnode and mnode")
        tdSql.query("select * from information_schema.ins_dnodes;")
        tdSql.checkData(0, 1, '%s:6030' % self.host)
        tdSql.checkData(4, 1, '%s:6430' % self.host)
        clusterComCheck.checkDnodes(dnodeNumbers)

        # check mnode status
        tdLog.info("check mnode status")
        clusterComCheck.checkMnodeStatus(mnodeNums)

        # add some error operations and
        tdLog.info("Confirm the status of the dnode again")
        tdSql.error("create mnode on dnode 2")
        tdSql.query("select * from information_schema.ins_dnodes;")
        print(tdSql.queryResult)
        clusterComCheck.checkDnodes(dnodeNumbers)

        # create database and stable
        clusterComCreate.create_database(tdSql, paraDict["dbName"], paraDict["dropFlag"], paraDict["vgroups"], paraDict['replica'])
        tdLog.info("Take turns stopping Mnodes ")

        tdDnodes = cluster.dnodes
        stopcount = 0
        threads = []

        # create stable: stb_0
        stableName = paraDict['stbName']
        newTdSql = tdCom.newTdSql()
        clusterComCreate.create_stables(newTdSql, paraDict["dbName"], stableName, paraDict['stbNumbers'])
        # create child table: ctb_0
        for i in range(paraDict['stbNumbers']):
            stableName = '%s_%d' % (paraDict['stbName'], i)
            newTdSql = tdCom.newTdSql()
            clusterComCreate.create_ctable(newTdSql, paraDict["dbName"], stableName, stableName, paraDict['ctbNum'])
        # insert data
        for i in range(paraDict['stbNumbers']):
            stableName = '%s_%d' % (paraDict['stbName'], i)
            newTdSql = tdCom.newTdSql()
            threads.append(threading.Thread(target=clusterComCreate.insert_data, args=(newTdSql, paraDict["dbName"], stableName, paraDict["ctbNum"], paraDict["rowsPerTbl"], paraDict["batchNum"], paraDict["startTs"])))
        for tr in threads:
            tr.start()
        TdSqlEx = tdCom.newTdSql()
        tdLog.info("alter database db0_0 replica 3")
        TdSqlEx.execute('alter database db0_0 replica 3')
        while stopcount < restartNumbers:
            tdLog.info(" restart loop: %d" % stopcount)
            if stopRole == "mnode":
                for i in range(mnodeNums):
                    tdDnodes[i].stoptaosd()
                    # sleep(10)
                    tdDnodes[i].starttaosd()
                    # sleep(10)
            elif stopRole == "vnode":
                for i in range(vnodeNumbers):
                    tdDnodes[i + mnodeNums].stoptaosd()
                    # sleep(10)
                    tdDnodes[i + mnodeNums].starttaosd()
                    # sleep(10)
            elif stopRole == "dnode":
                for i in range(dnodeNumbers):
                    tdDnodes[i].stoptaosd()
                    # tdLog.info('select cast(c2 as nchar(10)) from db0_0.stb_1;')
                    # TdSqlEx.execute('select cast(c2 as nchar(10)) from db0_0.stb_1;')
                    # tdLog.info('select avg(c1) from db0_0.stb_0 interval(10s);')
                    # TdSqlEx.execute('select avg(c1) from db0_0.stb_0 interval(10s);')
                    # sleep(10)
                    tdDnodes[i].starttaosd()
                    # sleep(10)
            # dnodeNumbers don't include database of schema
            if clusterComCheck.checkDnodes(dnodeNumbers):
                tdLog.info("123")
            else:
                print("456")

                self.stopThread(threads)
                tdLog.exit("one or more of dnodes failed to start ")
                # self.check3mnode()
            stopcount += 1

        for tr in threads:
            tr.join()
        clusterComCheck.checkDnodes(dnodeNumbers)
        clusterComCheck.checkDbRows(dbNumbers)
        # clusterComCheck.checkDb(dbNumbers, 1, paraDict["dbName"])

        # tdSql.execute("use %s" % (paraDict["dbName"]))
        tdSql.query("show %s.stables" % (paraDict["dbName"]))
        tdSql.checkRows(paraDict["stbNumbers"])
        # for i in range(paraDict['stbNumbers']):
        #     stableName = '%s.%s_%d' % (paraDict["dbName"], paraDict['stbName'], i)
        #     tdSql.query("select count(*) from %s" % stableName)
        #     tdSql.checkData(0, 0, rowsPerStb)
        clusterComCheck.check_vgroups_status(vgroup_numbers=paraDict["vgroups"], db_replica=replica3, db_name=paraDict["dbName"], count_number=240)

    def run(self):
        # print(self.master_dnode.cfgDict)
        self.fiveDnodeThreeMnode(dnodeNumbers=6, mnodeNums=3, restartNumbers=4, stopRole='dnode')

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")

tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())
@ -177,7 +177,7 @@ class TDTestCase:
            tdSql.query("select count(*) from %s" % stableName)
            tdSql.checkData(0, 0, rowsPerStb)

        clusterComCheck.check_vgroups_status(vgroup_numbers=paraDict["vgroups"], db_replica=replica1, db_name=paraDict["dbName"], count_number=20)
        clusterComCheck.check_vgroups_status(vgroup_numbers=paraDict["vgroups"], db_replica=replica1, db_name=paraDict["dbName"], count_number=40)
        sleep(5)
        tdLog.info(f"show transactions;alter database db0_0 replica {replica3};")
        TdSqlEx.execute(f'show transactions;')
@ -64,11 +64,11 @@ class TDTestCase:
        self._async_raise(thread.ident, SystemExit)

    def insertData(self, dbname, tableCount, rowsPerCount):
    def insertData(self, dbname, tableCount, rowsPerCount, vgroups):
        # tableCount : create table number
        # rowsPerCount : rows per table
        # first add data: db\stable\childtable\general table
        os.system(f"taosBenchmark -d {dbname} -n {tableCount} -t {rowsPerCount} -z 1 -k 10000 -y ")
        os.system(f"taosBenchmark -d {dbname} -n {tableCount} -t {rowsPerCount} -v {vgroups} -z 1 -k 10000 -y ")

    def fiveDnodeThreeMnode(self, dnodeNumbers, mnodeNums, restartNumbers, stopRole):
@ -76,10 +76,10 @@ class TDTestCase:
        paraDict = {'dbName': 'db0_0',
                    'dropFlag': 1,
                    'event': '',
                    'vgroups': 4,
                    'vgroups': 6,
                    'replica': 1,
                    'stbName': 'stb',
                    'stbNumbers': 2,
                    'stbName': 'meters',
                    'stbNumbers': 1,
                    'colPrefix': 'c',
                    'tagPrefix': 't',
                    'colSchema': [{'type': 'INT', 'count':1}, {'type': 'binary', 'len':20, 'count':1}],
@ -124,7 +124,7 @@ class TDTestCase:
        threads = []

        # create stable: stb_0
        threads.append(threading.Thread(target=self.insertData, args=(paraDict["dbName"], paraDict["ctbNum"], paraDict["rowsPerTbl"])))
        threads.append(threading.Thread(target=self.insertData, args=(paraDict["dbName"], paraDict["ctbNum"], paraDict["rowsPerTbl"], paraDict["vgroups"])))
        for tr in threads:
            tr.start()
        TdSqlEx = tdCom.newTdSql()
@ -174,10 +174,13 @@ class TDTestCase:
        # tdSql.execute("use %s" % (paraDict["dbName"]))
        tdSql.query("show %s.stables" % (paraDict["dbName"]))
        tdSql.checkRows(paraDict["stbNumbers"])
        for i in range(paraDict['stbNumbers']):
            stableName = '%s.%s_%d' % (paraDict["dbName"], paraDict['stbName'], i)
        # for i in range(paraDict['stbNumbers']):
        #     stableName = '%s.%s_%d' % (paraDict["dbName"], paraDict['stbName'], i)
        #     tdSql.query("select count(*) from %s" % stableName)
        #     tdSql.checkData(0, 0, rowsPerStb)
        stableName = '%s.%s' % (paraDict["dbName"], paraDict['stbName'])
        tdSql.query("select count(*) from %s" % stableName)
        tdSql.checkData(0, 0, rowsPerStb)
        tdSql.checkData(0, 0, rowsall)
        clusterComCheck.check_vgroups_status(vgroup_numbers=paraDict["vgroups"], db_replica=3, db_name=paraDict["dbName"], count_number=240)
    def run(self):
        # print(self.master_dnode.cfgDict)