diff --git a/docs/en/02-intro/index.md b/docs/en/02-intro/index.md
index b4636e54a6..5d21fbaf90 100644
--- a/docs/en/02-intro/index.md
+++ b/docs/en/02-intro/index.md
@@ -3,7 +3,7 @@ title: Introduction
toc_max_heading_level: 2
---
-TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. Its code, including its cluster feature is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](../develop/cache), [stream processing](../develop/stream), [data subscription](../develop/tmq) and other functionalities to reduce the system complexity and cost of development and operation.
+TDengine is an open source, high-performance, cloud native [time-series database](https://tdengine.com/tsdb/) optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/stream), [data subscription](/develop/tmq) and other functionalities to reduce system complexity and the cost of development and operation.
This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
@@ -11,21 +11,33 @@ This section introduces the major features, competitive advantages, typical use-
The major features are listed below:
-1. While TDengine supports [using SQL to insert](/develop/insert-data/sql-writing), it also supports [Schemaless writing](/reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line),[OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](/develop/insert-data/opentsdb-json) among others.
-2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf),[Prometheus](/third-party/prometheus),[StatsD](/third-party/statsd),[collectd](/third-party/collectd),[icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
-3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
-4. Support for [user defined functions](/develop/udf).
-5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
-6. Support for [continuous query](../develop/stream).
-7. Support for [data subscription](../develop/tmq) with the capability to specify filter conditions.
-8. Support for [cluster](../deployment/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
-9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
-10. Provides many ways to [import](/operation/import) and [export](/operation/export) data.
-11. Provides [monitoring](/operation/monitor) on running instances of TDengine.
-12. Provides [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
-13. Provides a [REST API](/reference/rest-api/).
-14. Supports seamless integration with [Grafana](/third-party/grafana) for visualization.
-15. Supports seamless integration with Google Data Studio.
+1. Insert data
+ * supports [using SQL to insert](/develop/insert-data/sql-writing).
+   * supports [schemaless writing](/reference/schemaless/) just like NoSQL databases. It also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json) among others.
+   * supports seamless integration with third-party tools like [Telegraf](/third-party/telegraf/), [Prometheus](/third-party/prometheus/), [collectd](/third-party/collectd/), [StatsD](/third-party/statsd/), [TCollector](/third-party/tcollector/) and [icinga2](/third-party/icinga2/). These tools can write data into TDengine with simple configuration and without a single line of code.
+2. Query data
+ * supports standard [SQL](/taos-sql/), including nested query.
+ * supports [time series specific functions](/taos-sql/function/#time-series-extensions) and [time series specific queries](/taos-sql/distinguished), like downsampling, interpolation, cumulated sum, time weighted average, state window, session window and many others.
+ * supports [user defined functions](/taos-sql/udf).
+3. [Caching](/develop/cache/): TDengine always saves the last data point in cache, so Redis is not needed for time-series data processing.
+4. [Stream Processing](/develop/stream/): not only is continuous query supported, but TDengine also supports event-driven stream processing, so Flink or Spark is not needed for time-series data processing.
+5. [Data Subscription](/develop/tmq/): applications can subscribe to a table or a set of tables. The API is the same as Kafka's, but you can specify filter conditions.
+6. Visualization
+ * supports seamless integration with [Grafana](/third-party/grafana/) for visualization.
+ * supports seamless integration with Google Data Studio.
+7. Cluster
+ * supports [cluster](/deployment/) with the capability of increasing processing power by adding more nodes.
+   * supports [deployment on Kubernetes](/deployment/k8s/).
+ * supports high availability via data replication.
+8. Administration
+ * provides [monitoring](/operation/monitor) on running instances of TDengine.
+ * provides many ways to [import](/operation/import) and [export](/operation/export) data.
+9. Tools
+ * provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
+ * provides a tool [taosBenchmark](/reference/taosbenchmark/) for testing the performance of TDengine.
+10. Programming
+ * provides [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
+ * provides a [REST API](/reference/rest-api/).
For more details on features, please read through the entire documentation.
diff --git a/docs/en/07-develop/03-insert-data/05-high-volume.md b/docs/en/07-develop/03-insert-data/05-high-volume.md
new file mode 100644
index 0000000000..8163ae03b2
--- /dev/null
+++ b/docs/en/07-develop/03-insert-data/05-high-volume.md
@@ -0,0 +1,444 @@
+---
+sidebar_label: High Performance Writing
+title: High Performance Writing
+---
+
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+This chapter introduces how to write data into TDengine with high throughput.
+
+## How to achieve high performance data writing
+
+To achieve high performance writing, there are a few aspects to consider. The following sections describe these important factors.
+
+### Application Program
+
+From the perspective of the application program, you need to consider:
+
+1. The data size of each single write, also known as batch size. Generally speaking, a larger batch size yields better writing performance. However, beyond a certain batch size you will not get any additional benefit. When using SQL to write into TDengine, it's better to put as much data as possible in a single SQL statement. The maximum SQL length supported by TDengine is 1,048,576 bytes, i.e. 1 MB. It can be configured by the parameter `maxSQLLength` on the client side, and the default value is 65,480.
+
+2. The number of concurrent connections. Normally more connections give better results. However, once the number of connections exceeds the processing ability of the server side, performance may degrade.
+
+3. The distribution of data to be written across tables or sub-tables. Writing to a single table in one batch is more efficient than writing to multiple tables in one batch.
+
+4. Data Writing Protocol.
+   - Parameter binding mode is more efficient than SQL because it doesn't have the cost of parsing SQL.
+   - Writing to known existing tables is more efficient than writing to uncertain tables in automatic table creation mode, because the latter needs to check whether the table exists before actually writing data into it.
+   - Writing in SQL is more efficient than writing in schemaless mode, because schemaless writing creates tables automatically and may alter table schemas.
+
+Application programs need to take care of the above factors and try to take advantage of them. The application program should write to a single table in each write batch. The batch size needs to be tuned to a proper value on a specific system. The number of concurrent connections also needs to be tuned to achieve the best writing throughput.
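The batching advice above can be sketched in a driver-free way. The snippet below composes one multi-row INSERT for a single table; the table name and row layout are hypothetical, and only the 1 MB length cap comes from this section:

```python
MAX_SQL_LENGTH = 1_048_576  # TDengine's maximum SQL length: 1 MB

def compose_batch_insert(table, rows, max_len=MAX_SQL_LENGTH):
    """Build one multi-row INSERT for a single table.

    Packing many rows of one table into one statement amortizes the
    SQL parsing cost, which is why batch size matters so much.
    """
    sql = f"INSERT INTO {table} VALUES"
    for ts, value in rows:
        fragment = f" ('{ts}', {value})"
        if len(sql) + len(fragment) > max_len:
            break  # a real writer would execute `sql` here and start a new statement
        sql += fragment
    return sql

sql = compose_batch_insert("d1001", [("2022-07-14 10:40:00.000", 0.25),
                                     ("2022-07-14 10:40:01.000", 0.30)])
```

A real writer would also fall back to flushing whenever the composed statement approaches the limit, as the SQLWriter classes later in this chapter do.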
+
+### Data Source
+
+Application programs need to read data from a data source and then write it into TDengine. If you meet one or more of the situations below, you need to set up message queues between the threads reading from the data source and the threads writing into TDengine.
+
+1. There are multiple data sources, and the data generation speed of each data source is much slower than the writing speed of a single thread. In this case, the purpose of message queues is to consolidate the data from multiple data sources together to increase the batch size of a single write.
+2. The speed of data generation from a single data source is much higher than the speed of a single writing thread. The purpose of the message queue in this case is to provide a buffer so that data is not lost and multiple writing threads can get data from the buffer.
+3. The data for a single table comes from multiple data sources. In this case the purpose of message queues is to combine the data for a single table together to improve the write efficiency.
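The third case can be sketched as follows. The line format `table,ts,value` is an assumption of this sketch; the point is that hashing the table name pins every row of a table to one queue, and therefore to one writing thread:

```python
import queue

NUM_WRITE_TASKS = 4
queues = [queue.Queue(maxsize=100_000) for _ in range(NUM_WRITE_TASKS)]

def route(line: str) -> int:
    # The table name leads each line, so it can be extracted cheaply
    # without parsing the rest of the row.
    table = line.split(",", 1)[0]
    return hash(table) % NUM_WRITE_TASKS

for line in ("d1001,2022-07-14 10:40:00.000,0.25",
             "d1001,2022-07-14 10:40:01.000,0.30",
             "d1002,2022-07-14 10:40:00.000,1.10"):
    queues[route(line)].put(line)
```

The sample programs later in this chapter use the same idea to map tables to writing threads.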
+
+If the data source is Kafka, then the application program is a consumer of Kafka, and you can benefit from some Kafka features to achieve high performance writing:
+
+1. Put the data for a table in a single partition of a single topic so that it's easier to put the data for each table together and write in batches.
+2. Subscribe to multiple topics to accumulate data together.
+3. Add more consumers to gain more concurrency and throughput.
+4. Increase the size of a single fetch to increase the size of the write batch.
+
+### Tune TDengine
+
+TDengine is a distributed and high performance time series database; there are also some ways to tune TDengine to get better writing performance.
+
+1. Set proper number of `vgroups` according to available CPU cores. Normally, we recommend 2 \* number_of_cores as a starting point. If the verification result shows this is not enough to utilize CPU resources, you can use a higher value.
+2. Set proper `minTablesPerVnode`, `tableIncStepPerVnode`, and `maxVgroupsPerDb` according to the number of tables so that tables are distributed evenly across vgroups. The purpose is to balance the workload among all vnodes so that system resources can be utilized better to get higher performance.
+
+For more performance tuning parameters, please refer to [Configuration Parameters](../../../reference/config).
+
+## Sample Programs
+
+This section will introduce the sample programs to demonstrate how to write into TDengine with high performance.
+
+### Scenario
+
+Below is the scenario for the sample programs of high performance writing.
+
+- Application programs read data from a data source; the sample program simulates a data source by generating data.
+- The speed of a single writing thread is much slower than the speed of generating data, so the program starts multiple writing threads. Each thread establishes a connection to TDengine and each thread has a message queue of fixed size.
+- Application programs map the received data to different writing threads based on table name, to make sure all the data for each table is always processed by a specific writing thread.
+- Each writing thread writes the received data into TDengine once the message queue becomes empty or the amount of buffered data reaches a threshold.
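The flush rule in the last bullet can be sketched as a writer loop. This is a simplified stand-in, not the sample code itself: `flush` represents the actual write to TDengine, and the `None` sentinel is an assumption used to make the sketch finite:

```python
import queue

def write_loop(q, flush, max_batch_size=3000, poll_timeout=0.1):
    """Drain a message queue, flushing when the batch is full or the queue runs dry."""
    buf = []
    while True:
        try:
            item = q.get(timeout=poll_timeout)
        except queue.Empty:
            if buf:                     # queue empty: write what we have
                flush(buf)
                buf = []
            continue
        if item is None:                # sentinel: stop after a final flush
            if buf:
                flush(buf)
            return
        buf.append(item)
        if len(buf) >= max_batch_size:  # threshold reached: write a full batch
            flush(buf)
            buf = []
```

In the real sample programs the loop runs until the process is terminated; the sentinel here only exists so the sketch can finish.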
+
+
+
+### Sample Programs
+
+The sample programs listed in this section are based on the scenario described previously. If your scenario is different, please try to adjust the code based on the principles described in this chapter.
+
+The sample programs assume the source data is for different sub-tables of the same super table (meters). The super table has been created before the sample program starts writing data. Sub-tables are created automatically according to received data. If there are multiple super tables in your case, please adjust the part of the code that creates tables automatically.
+
+
+
+
+**Program Inventory**
+
+| Class | Description |
+| ---------------- | ----------------------------------------------------------------------------------------------------- |
+| FastWriteExample | Main Program |
+| ReadTask | Read data from simulated data source and put into a queue according to the hash value of table name |
+| WriteTask        | Read data from queue, compose a write batch and write into TDengine                                   |
+| MockDataSource | Generate data for some sub tables of super table meters |
+| SQLWriter | WriteTask uses this class to compose SQL, create table automatically, check SQL length and write data |
+| StmtWriter | Write in Parameter binding mode (Not finished yet) |
+| DataBaseMonitor | Calculate the writing speed and output on console every 10 seconds |
+
+Below is the complete code of the classes in the above table with more detailed descriptions.
+
+
+FastWriteExample
+The main program is responsible for:
+
+1. Create message queues
+2. Start writing threads
+3. Start reading threads
+4. Output writing speed every 10 seconds
+
+The main program provides 4 parameters for tuning:
+
+1. The number of reading threads, default value is 1
+2. The number of writing threads, default value is 2
+3. The total number of tables in the generated data, default value is 1000. These tables are distributed evenly across all writing threads. If the number of tables is very big, it will take a long time to create them at the beginning.
+4. The batch size of single write, default value is 3,000
+
+The capacity of the message queue also impacts performance and can be tuned by modifying the program. Normally it's better to have a larger message queue: it means a lower possibility of being blocked when enqueueing and higher throughput, but it also consumes more memory. The default value used in the sample programs is already large enough.
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/FastWriteExample.java}}
+```
+
+
+
+
+ReadTask
+
+ReadTask reads data from the data source. Each ReadTask is associated with a simulated data source; each data source generates data for a group of specific tables, and the data of any table is only generated from a single specific data source.
+
+ReadTask puts data into the message queue in blocking mode. That means the put operation is blocked if the message queue is full.
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/ReadTask.java}}
+```
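The blocking-put behavior described above can be illustrated with Python's standard `queue` module (the Java sample relies on the same back-pressure from its own blocking queue):

```python
import queue

q = queue.Queue(maxsize=2)
q.put("row-1")
q.put("row-2")             # the queue is now full

overflowed = False
try:
    q.put_nowait("row-3")  # a non-blocking put fails immediately...
except queue.Full:
    overflowed = True
# ...whereas a plain q.put("row-3") would block until a writing thread
# consumes an item: exactly the back-pressure ReadTask depends on.
```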
+
+
+
+
+WriteTask
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/WriteTask.java}}
+```
+
+
+
+
+
+MockDataSource
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/MockDataSource.java}}
+```
+
+
+
+
+
+SQLWriter
+
+The SQLWriter class encapsulates the logic of composing SQL and writing data. Note that the tables have not been created before writing; they are created automatically when a "table does not exist" exception is caught. For other exceptions, the SQL statement that caused the exception is logged for you to debug.
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/SQLWriter.java}}
+```
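The create-on-failure logic can also be sketched independently of any driver. `TableDoesNotExist` below stands in for the real driver error, and all names are hypothetical:

```python
class TableDoesNotExist(Exception):
    """Stand-in for the driver error raised when a target table is missing."""

def execute_with_auto_create(execute, create_tables, sql):
    """Try the insert first; create the missing tables only on failure.

    Creating tables lazily keeps the fast path (tables already exist)
    free of an existence check on every batch.
    """
    try:
        execute(sql)
    except TableDoesNotExist:
        create_tables()
        execute(sql)  # retry once, now that the tables exist
```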
+
+
+
+
+
+DataBaseMonitor
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/DataBaseMonitor.java}}
+```
+
+
+
+**Steps to Launch**
+
+
+Launch Java Sample Program
+
+You need to set the environment variable `TDENGINE_JDBC_URL` before launching the program. If the TDengine server is set up on localhost, then the default values for user name, password and port can be used, like below:
+
+```
+TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
+```
+
+**Launch in IDE**
+
+1. Clone the TDengine repository
+ ```
+ git clone git@github.com:taosdata/TDengine.git --depth 1
+ ```
+2. Use IDE to open `docs/examples/java` directory
+3. Configure the environment variable `TDENGINE_JDBC_URL`. If you have already configured it before launching the IDE, you can skip this step.
+4. Run class `com.taos.example.highvolume.FastWriteExample`
+
+**Launch on server**
+
+If you want to launch the sample program on a remote server, please follow the steps below:
+
+1. Package the sample programs. Execute the command below under the directory `TDengine/docs/examples/java`:
+ ```
+ mvn package
+ ```
+2. Create `examples/java` directory on the server
+ ```
+ mkdir -p examples/java
+ ```
+3. Copy dependencies (the commands below assume you are working on a local Windows host and launching on a remote Linux host)
+ - Copy dependent packages
+ ```
+ scp -r .\target\lib @:~/examples/java
+ ```
+ - Copy the jar of sample programs
+ ```
+ scp -r .\target\javaexample-1.0.jar @:~/examples/java
+ ```
+4. Configure environment variable
+   Edit `~/.bash_profile` or `~/.bashrc` and add the following:
+
+ ```
+ export TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
+ ```
+
+   If your TDengine server is not deployed on localhost or doesn't use the default port, you need to change the above URL to the correct value for your environment.
+
+5. Launch the sample program
+
+ ```
+ java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample
+ ```
+
+6. The sample program doesn't exit unless you press CTRL + C to terminate it.
+   Below is the output of running on a server with 16 cores, 64 GB memory and an SSD.
+
+ ```
+ root@vm85$ java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample 2 12
+ 18:56:35.896 [main] INFO c.t.e.highvolume.FastWriteExample - readTaskCount=2, writeTaskCount=12 tableCount=1000 maxBatchSize=3000
+ 18:56:36.011 [WriteThread-0] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.015 [WriteThread-0] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.021 [WriteThread-1] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.022 [WriteThread-1] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.031 [WriteThread-2] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.032 [WriteThread-2] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.041 [WriteThread-3] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.042 [WriteThread-3] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.093 [WriteThread-4] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.094 [WriteThread-4] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.099 [WriteThread-5] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.100 [WriteThread-5] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.100 [WriteThread-6] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.101 [WriteThread-6] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.103 [WriteThread-7] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.104 [WriteThread-7] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.105 [WriteThread-8] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.107 [WriteThread-8] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.108 [WriteThread-9] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.109 [WriteThread-9] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.156 [WriteThread-10] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.157 [WriteThread-11] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.158 [WriteThread-10] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.158 [ReadThread-0] INFO com.taos.example.highvolume.ReadTask - started
+ 18:56:36.158 [ReadThread-1] INFO com.taos.example.highvolume.ReadTask - started
+ 18:56:36.158 [WriteThread-11] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:46.369 [main] INFO c.t.e.highvolume.FastWriteExample - count=18554448 speed=1855444
+ 18:56:56.946 [main] INFO c.t.e.highvolume.FastWriteExample - count=39059660 speed=2050521
+ 18:57:07.322 [main] INFO c.t.e.highvolume.FastWriteExample - count=59403604 speed=2034394
+ 18:57:18.032 [main] INFO c.t.e.highvolume.FastWriteExample - count=80262938 speed=2085933
+ 18:57:28.432 [main] INFO c.t.e.highvolume.FastWriteExample - count=101139906 speed=2087696
+ 18:57:38.921 [main] INFO c.t.e.highvolume.FastWriteExample - count=121807202 speed=2066729
+ 18:57:49.375 [main] INFO c.t.e.highvolume.FastWriteExample - count=142952417 speed=2114521
+ 18:58:00.689 [main] INFO c.t.e.highvolume.FastWriteExample - count=163650306 speed=2069788
+ 18:58:11.646 [main] INFO c.t.e.highvolume.FastWriteExample - count=185019808 speed=2136950
+ ```
+
+
+
+
+
+
+**Program Inventory**
+
+The sample programs in Python use multiple processes and cross-process message queues.
+
+| Function/Class               | Description                                                                 |
+| ---------------------------- | --------------------------------------------------------------------------- |
+| main Function | Program entry point, create child processes and message queues |
+| run_monitor_process Function | Create database, super table, calculate writing speed and output to console |
+| run_read_task Function | Read data and distribute to message queues |
+| MockDataSource Class | Simulate data source, return next 1,000 rows of each table |
+| run_write_task Function      | Read as much data as possible from the message queue and write in a batch   |
+| SQLWriter Class              | Write in SQL and create tables automatically                                |
+| StmtWriter Class | Write in parameter binding mode (not finished yet) |
+
+
+main function
+
+The `main` function is responsible for creating message queues and forking child processes. There are 3 kinds of child processes:
+
+1. Monitoring process, which initializes the database and calculates writing speed
+2. Reading processes (n), which read data from the data source
+3. Writing processes (m), which write data into TDengine
+
+`main` function provides 5 parameters:
+
+1. The number of reading tasks, default value is 1
+2. The number of writing tasks, default value is 1
+3. The number of tables, default value is 1,000
+4. The capacity of message queue, default value is 1,000,000 bytes
+5. The batch size in single write, default value is 3000
+
+```python
+{{#include docs/examples/python/fast_write_example.py:main}}
+```
+
+
+
+
+run_monitor_process
+
+The monitoring process initializes the database and monitors writing speed.
+
+```python
+{{#include docs/examples/python/fast_write_example.py:monitor}}
+```
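The speed figure printed every 10 seconds is just the delta of the total row count over the polling interval. As a sketch:

```python
def writing_speed(prev_count: int, count: int, interval_sec: float) -> float:
    """Rows written per second over the last monitoring interval.

    `count` would come from something like SELECT COUNT(*) FROM meters;
    the query itself is omitted in this sketch.
    """
    return (count - prev_count) / interval_sec
```

For instance, the first line of the sample output later in this section (count=6676310 after roughly 10 seconds) corresponds to `writing_speed(0, 6676310, 10)`.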
+
+
+
+
+
+run_read_task function
+
+The reading process reads data from another data system and distributes it to the message queue allocated for it.
+
+```python
+{{#include docs/examples/python/fast_write_example.py:read}}
+```
+
+
+
+
+
+MockDataSource
+
+Below is the simulated data source. We assume each generated row of data contains the target table name.
+
+```python
+{{#include docs/examples/python/mockdatasource.py}}
+```
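As a driver-free sketch of what such a source produces (the column layout is a guess modeled on the meters example; what matters is that the table name leads each line so the reader can route it cheaply):

```python
def mock_rows(table_count=1000, rows_per_table=3, start_ms=1648432611249):
    """Yield CSV lines "table,ts,current,voltage,phase", table name first,
    so the reader can route each line without parsing the rest of it."""
    for i in range(rows_per_table):
        ts = start_ms + i
        for t in range(table_count):
            yield f"d{t},{ts},10.3,219,0.31"

rows = list(mock_rows(table_count=2, rows_per_table=2))
```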
+
+
+
+
+run_write_task function
+
+The writing process tries to read as much data as possible from the message queue and writes it in batches.
+
+```python
+{{#include docs/examples/python/fast_write_example.py:write}}
+```
+
+
+
+
+
+The SQLWriter class encapsulates the logic of composing SQL and writing data. Note that the tables have not been created before writing; they are created automatically when a "table does not exist" exception is caught. For other exceptions, the SQL statement that caused the exception is logged for you to debug. This class also checks the SQL length: if the SQL length is close to `maxSQLLength`, the SQL is executed immediately. To improve writing efficiency, it's better to increase `maxSQLLength` appropriately.
+
+SQLWriter
+
+```python
+{{#include docs/examples/python/sql_writer.py}}
+```
+
+
+
+**Steps to Launch**
+
+
+
+Launch Sample Program in Python
+
+1. Prerequisites
+
+ - TDengine client driver has been installed
+   - Python3 has been installed, version 3.8 or above
+ - TDengine Python connector `taospy` has been installed
+
+2. Install faster-fifo to replace the Python built-in multiprocessing.Queue
+
+ ```
+ pip3 install faster-fifo
+ ```
+
+3. Click the "Copy" button in the sample programs above to copy `fast_write_example.py`, `sql_writer.py` and `mockdatasource.py`.
+
+4. Execute the program
+
+ ```
+ python3 fast_write_example.py
+ ```
+
+   Below is the output of running on a server with 16 cores, 64 GB memory and an SSD.
+
+ ```
+ root@vm85$ python3 fast_write_example.py 8 8
+ 2022-07-14 19:13:45,869 [root] - READ_TASK_COUNT=8, WRITE_TASK_COUNT=8, TABLE_COUNT=1000, QUEUE_SIZE=1000000, MAX_BATCH_SIZE=3000
+ 2022-07-14 19:13:48,882 [root] - WriteTask-0 started with pid 718347
+ 2022-07-14 19:13:48,883 [root] - WriteTask-1 started with pid 718348
+ 2022-07-14 19:13:48,884 [root] - WriteTask-2 started with pid 718349
+ 2022-07-14 19:13:48,884 [root] - WriteTask-3 started with pid 718350
+ 2022-07-14 19:13:48,885 [root] - WriteTask-4 started with pid 718351
+ 2022-07-14 19:13:48,885 [root] - WriteTask-5 started with pid 718352
+ 2022-07-14 19:13:48,886 [root] - WriteTask-6 started with pid 718353
+ 2022-07-14 19:13:48,886 [root] - WriteTask-7 started with pid 718354
+ 2022-07-14 19:13:48,887 [root] - ReadTask-0 started with pid 718355
+ 2022-07-14 19:13:48,888 [root] - ReadTask-1 started with pid 718356
+ 2022-07-14 19:13:48,889 [root] - ReadTask-2 started with pid 718357
+ 2022-07-14 19:13:48,889 [root] - ReadTask-3 started with pid 718358
+ 2022-07-14 19:13:48,890 [root] - ReadTask-4 started with pid 718359
+ 2022-07-14 19:13:48,891 [root] - ReadTask-5 started with pid 718361
+ 2022-07-14 19:13:48,892 [root] - ReadTask-6 started with pid 718364
+ 2022-07-14 19:13:48,893 [root] - ReadTask-7 started with pid 718365
+ 2022-07-14 19:13:56,042 [DataBaseMonitor] - count=6676310 speed=667631.0
+ 2022-07-14 19:14:06,196 [DataBaseMonitor] - count=20004310 speed=1332800.0
+ 2022-07-14 19:14:16,366 [DataBaseMonitor] - count=32290310 speed=1228600.0
+ 2022-07-14 19:14:26,527 [DataBaseMonitor] - count=44438310 speed=1214800.0
+ 2022-07-14 19:14:36,673 [DataBaseMonitor] - count=56608310 speed=1217000.0
+ 2022-07-14 19:14:46,834 [DataBaseMonitor] - count=68757310 speed=1214900.0
+ 2022-07-14 19:14:57,280 [DataBaseMonitor] - count=80992310 speed=1223500.0
+ 2022-07-14 19:15:07,689 [DataBaseMonitor] - count=93805310 speed=1281300.0
+ 2022-07-14 19:15:18,020 [DataBaseMonitor] - count=106111310 speed=1230600.0
+ 2022-07-14 19:15:28,356 [DataBaseMonitor] - count=118394310 speed=1228300.0
+ 2022-07-14 19:15:38,690 [DataBaseMonitor] - count=130742310 speed=1234800.0
+ 2022-07-14 19:15:49,000 [DataBaseMonitor] - count=143051310 speed=1230900.0
+ 2022-07-14 19:15:59,323 [DataBaseMonitor] - count=155276310 speed=1222500.0
+ 2022-07-14 19:16:09,649 [DataBaseMonitor] - count=167603310 speed=1232700.0
+ 2022-07-14 19:16:19,995 [DataBaseMonitor] - count=179976310 speed=1237300.0
+ ```
+
+
+
+:::note
+Don't establish a connection to TDengine in the parent process if using the Python connector in multi-process mode; otherwise all the connections in the child processes will be blocked forever. This is a known issue.
+
+:::
+
+
+
diff --git a/docs/en/07-develop/03-insert-data/highvolume.webp b/docs/en/07-develop/03-insert-data/highvolume.webp
new file mode 100644
index 0000000000..46dfc74ae3
Binary files /dev/null and b/docs/en/07-develop/03-insert-data/highvolume.webp differ
diff --git a/docs/en/10-deployment/01-deploy.md b/docs/en/10-deployment/01-deploy.md
index bfbb547bd4..a445b684dc 100644
--- a/docs/en/10-deployment/01-deploy.md
+++ b/docs/en/10-deployment/01-deploy.md
@@ -114,7 +114,9 @@ The above process can be repeated to add more dnodes in the cluster.
Any node that is in the cluster and online can be the firstEp of new nodes.
Nodes use the firstEp parameter only when joining a cluster for the first time. After a node has joined the cluster, it stores the latest mnode in its end point list and no longer makes use of firstEp.
-However, firstEp is used by clients that connect to the cluster. For example, if you run `taos shell` without arguments, it connects to the firstEp by default.
+
+However, firstEp is used by clients that connect to the cluster. For example, if you run the TDengine CLI `taos` without arguments, it connects to the firstEp by default.
+
Two dnodes that are launched without a firstEp value operate independently of each other. It is not possible to add one dnode to the other dnode and form a cluster. It is also not possible to form two independent clusters into a new cluster.
:::
diff --git a/docs/en/10-deployment/03-k8s.md b/docs/en/10-deployment/03-k8s.md
index b3f71ed5bd..b0aa677713 100644
--- a/docs/en/10-deployment/03-k8s.md
+++ b/docs/en/10-deployment/03-k8s.md
@@ -9,6 +9,7 @@ TDengine is a cloud-native time-series database that can be deployed on Kubernet
Before deploying TDengine on Kubernetes, perform the following:
+* These steps are compatible with Kubernetes v1.5 and later versions.
* Install and configure minikube, kubectl, and helm.
* Install and deploy Kubernetes and ensure that it can be accessed and used normally. Update any container registries or other services as necessary.
@@ -100,7 +101,7 @@ spec:
# Must set if you want a cluster.
- name: TAOS_FIRST_EP
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
- # TAOS_FQND should always be setted in k8s env.
+ # TAOS_FQDN should always be set in k8s env.
- name: TAOS_FQDN
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
volumeMounts:
diff --git a/docs/en/12-taos-sql/03-table.md b/docs/en/12-taos-sql/03-table.md
index bf32cf171b..5a2c8ed6ee 100644
--- a/docs/en/12-taos-sql/03-table.md
+++ b/docs/en/12-taos-sql/03-table.md
@@ -57,7 +57,7 @@ table_option: {
3. MAX_DELAY: specifies the maximum latency for pushing computation results. The default value is 15 minutes or the value of the INTERVAL parameter, whichever is smaller. Enter a value between 0 and 15 minutes in milliseconds, seconds, or minutes. You can enter multiple values separated by commas (,). Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database.
4. ROLLUP: specifies aggregate functions to roll up. Rolling up a function provides downsampled results based on multiple axes. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database. You can specify only one function to roll up. The rollup takes effect on all columns except TS. Enter one of the following values: avg, sum, min, max, last, or first.
5. SMA: specifies functions on which to enable small materialized aggregates (SMA). SMA is user-defined precomputation of aggregates based on data blocks. Enter one of the following values: max, min, or sum This parameter can be used with supertables and standard tables.
-6. TTL: specifies the time to live (TTL) for the table. If the period specified by the TTL parameter elapses without any data being written to the table, TDengine will automatically delete the table. Note: The system may not delete the table at the exact moment that the TTL expires. Enter a value in days. The default value is 0. Note: The TTL parameter has a higher priority than the KEEP parameter. If a table is marked for deletion because the TTL has expired, it will be deleted even if the time specified by the KEEP parameter has not elapsed. This parameter can be used with standard tables and subtables.
+6. TTL: specifies the time to live (TTL) for the table. If TTL is specified when creating a table, then after the table has existed for the TTL period, TDengine automatically deletes it. Note that the system may not delete the table at the exact moment the TTL expires, but it guarantees that the table is eventually deleted. The unit of TTL is days. The default value is 0, i.e. the table never expires.
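As a sketch, TTL is set via the table_option clause when creating a table (the table and column names below are illustrative):

```sql
-- Create a table that TDengine may drop automatically about 90 days after creation
CREATE TABLE power_readings (ts TIMESTAMP, val INT) TTL 90;
```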
## Create Subtables
diff --git a/docs/en/12-taos-sql/10-function.md b/docs/en/12-taos-sql/10-function.md
index d35fd31099..d6905c84a1 100644
--- a/docs/en/12-taos-sql/10-function.md
+++ b/docs/en/12-taos-sql/10-function.md
@@ -1139,7 +1139,7 @@ SELECT STATECOUNT(field_name, oper, val) FROM { tb_name | stb_name } [WHERE clau
**Applicable parameter values**:
-- oper : Can be one of `LT` (lower than), `GT` (greater than), `LE` (lower than or equal to), `GE` (greater than or equal to), `NE` (not equal to), `EQ` (equal to), the value is case insensitive
+- oper : Can be one of `'LT'` (lower than), `'GT'` (greater than), `'LE'` (lower than or equal to), `'GE'` (greater than or equal to), `'NE'` (not equal to), `'EQ'` (equal to). The value is case insensitive and must be enclosed in quotes.
- val : Numeric types
**Return value type**: Integer
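For example, assuming the meters schema used elsewhere in these docs (subtable `d1001` with a `voltage` column), a sketch of counting consecutive rows where voltage exceeds 220 might look like:

```sql
SELECT STATECOUNT(voltage, 'GT', 220) FROM d1001;
```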
@@ -1166,7 +1166,7 @@ SELECT stateDuration(field_name, oper, val, unit) FROM { tb_name | stb_name } [W
**Applicable parameter values**:
-- oper : Can be one of `LT` (lower than), `GT` (greater than), `LE` (lower than or equal to), `GE` (greater than or equal to), `NE` (not equal to), `EQ` (equal to), the value is case insensitive
+- oper : Can be one of `'LT'` (lower than), `'GT'` (greater than), `'LE'` (lower than or equal to), `'GE'` (greater than or equal to), `'NE'` (not equal to), `'EQ'` (equal to). The value is case insensitive and must be enclosed in quotes.
- val : Numeric types
- unit: The unit of time interval. Enter one of the following options: 1b (nanoseconds), 1u (microseconds), 1a (milliseconds), 1s (seconds), 1m (minutes), 1h (hours), 1d (days), or 1w (weeks) If you do not enter a unit of time, the precision of the current database is used by default.
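Again assuming the illustrative meters schema, a sketch of measuring (in seconds) how long voltage has stayed below 220 might look like:

```sql
SELECT STATEDURATION(voltage, 'LT', 220, 1s) FROM d1001;
```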
diff --git a/docs/en/12-taos-sql/19-limit.md b/docs/en/12-taos-sql/19-limit.md
index 0486ea3094..678c38a22e 100644
--- a/docs/en/12-taos-sql/19-limit.md
+++ b/docs/en/12-taos-sql/19-limit.md
@@ -30,7 +30,7 @@ The following characters cannot occur in a password: single quotation marks ('),
- Maximum number of columns is 4096. There must be at least 2 columns, and the first column must be timestamp.
- The maximum length of a tag name is 64 bytes
- Maximum number of tags is 128. There must be at least 1 tag. The total length of tag values cannot exceed 16 KB.
-- Maximum length of single SQL statement is 1 MB (1048576 bytes). It can be configured in the parameter `maxSQLLength` in the client side, the applicable range is [65480, 1048576].
+- Maximum length of single SQL statement is 1 MB (1048576 bytes).
- At most 4096 columns can be returned by `SELECT`. Functions in the query statement constitute columns. An error is returned if the limit is exceeded.
- Maximum numbers of databases, STables, tables are dependent only on the system resources.
- The number of replicas can only be 1 or 3.
diff --git a/docs/en/12-taos-sql/24-show.md b/docs/en/12-taos-sql/24-show.md
index 6b56161322..c9adb0cf78 100644
--- a/docs/en/12-taos-sql/24-show.md
+++ b/docs/en/12-taos-sql/24-show.md
@@ -194,7 +194,7 @@ Shows information about streams in the system.
SHOW SUBSCRIPTIONS;
```
-Shows all subscriptions in the current database.
+Shows all subscriptions in the system.
## SHOW TABLES
diff --git a/docs/en/13-operation/01-pkg-install.md b/docs/en/13-operation/01-pkg-install.md
index b6cc0582bc..d7713b943f 100644
--- a/docs/en/13-operation/01-pkg-install.md
+++ b/docs/en/13-operation/01-pkg-install.md
@@ -1,12 +1,12 @@
---
-title: Install & Uninstall
+title: Install and Uninstall
description: Install, Uninstall, Start, Stop and Upgrade
---
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
-TDengine community version provides deb and rpm packages for users to choose from, based on their system environment. The deb package supports Debian, Ubuntu and derivative systems. The rpm package supports CentOS, RHEL, SUSE and derivative systems. Furthermore, a tar.gz package is provided for TDengine Enterprise customers.
+This document gives more information about installing, uninstalling, and upgrading TDengine.
## Install
@@ -35,12 +35,28 @@ TDengine is removed successfully!
```
+Apt-get package of taosTools can be uninstalled as below:
+
+```
+$ sudo apt remove taostools
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+The following packages will be REMOVED:
+ taostools
+0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
+After this operation, 68.3 MB disk space will be freed.
+Do you want to continue? [Y/n]
+(Reading database ... 147973 files and directories currently installed.)
+Removing taostools (2.1.2) ...
+```
+
Deb package of TDengine can be uninstalled as below:
-```bash
+```
$ sudo dpkg -r tdengine
(Reading database ... 137504 files and directories currently installed.)
Removing tdengine (3.0.0.0) ...
@@ -48,6 +64,14 @@ TDengine is removed successfully!
```
+Deb package of taosTools can be uninstalled as below:
+
+```
+$ sudo dpkg -r taostools
+(Reading database ... 147973 files and directories currently installed.)
+Removing taostools (2.1.2) ...
+```
+
@@ -59,6 +83,13 @@ $ sudo rpm -e tdengine
TDengine is removed successfully!
```
+RPM package of taosTools can be uninstalled as below:
+
+```
+sudo rpm -e taostools
+taosTools is removed successfully!
+```
+
@@ -67,115 +98,69 @@ tar.gz package of TDengine can be uninstalled as below:
```
$ rmtaos
-Nginx for TDengine is running, stopping it...
TDengine is removed successfully!
+```
-taosKeeper is removed successfully!
+tar.gz package of taosTools can be uninstalled as below:
+
+```
+$ rmtaostools
+Start to uninstall taos tools ...
+
+taos tools is uninstalled successfully!
```
+
+
+Run C:\TDengine\unins000.exe to uninstall TDengine on a Windows system.
-:::note
+:::info
-- We strongly recommend not to use multiple kinds of installation packages on a single host TDengine.
-- After deb package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information. You can then reinstall if needed.
+- We strongly recommend not using multiple kinds of installation packages on a single host. The packages may affect each other and cause errors.
-```bash
- $ sudo rm -f /var/lib/dpkg/info/tdengine*
-```
+- After the deb package is installed, if the installation directory is removed manually, uninstallation or reinstallation will not work. This issue can be resolved by using the command below, which cleans up TDengine package information.
-- After rpm package is installed, if the installation directory is removed manually, uninstall or reinstall will not work. This issue can be resolved by using the command below which cleans up TDengine package information. You can then reinstall if needed.
+ ```
+ $ sudo rm -f /var/lib/dpkg/info/tdengine*
+ ```
-```bash
- $ sudo rpm -e --noscripts tdengine
-```
+You can then reinstall if needed.
+
+- After the rpm package is installed, if the installation directory is removed manually, uninstallation or reinstallation will not work. This issue can be resolved by using the command below, which cleans up TDengine package information.
+
+ ```
+ $ sudo rpm -e --noscripts tdengine
+ ```
+
+You can then reinstall if needed.
:::
-## Installation Directory
-
-TDengine is installed at /usr/local/taos if successful.
-
-```bash
-$ cd /usr/local/taos
-$ ll
-$ ll
-total 28
-drwxr-xr-x 7 root root 4096 Feb 22 09:34 ./
-drwxr-xr-x 12 root root 4096 Feb 22 09:34 ../
-drwxr-xr-x 2 root root 4096 Feb 22 09:34 bin/
-drwxr-xr-x 2 root root 4096 Feb 22 09:34 cfg/
-lrwxrwxrwx 1 root root 13 Feb 22 09:34 data -> /var/lib/taos/
-drwxr-xr-x 2 root root 4096 Feb 22 09:34 driver/
-drwxr-xr-x 10 root root 4096 Feb 22 09:34 examples/
-drwxr-xr-x 2 root root 4096 Feb 22 09:34 include/
-lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
-```
-
-During the installation process:
-
-- Configuration directory, data directory, and log directory are created automatically if they don't exist
-- The default configuration file is located at /etc/taos/taos.cfg, which is a copy of /usr/local/taos/cfg/taos.cfg
-- The default data directory is /var/lib/taos, which is a soft link to /usr/local/taos/data
-- The default log directory is /var/log/taos, which is a soft link to /usr/local/taos/log
-- The executables at /usr/local/taos/bin are linked to /usr/bin
-- The DLL files at /usr/local/taos/driver are linked to /usr/lib
-- The header files at /usr/local/taos/include are linked to /usr/include
-
-:::note
+## Uninstalling and Modifying Files
- When TDengine is uninstalled, the configuration /etc/taos/taos.cfg, data directory /var/lib/taos, log directory /var/log/taos are kept. They can be deleted manually with caution, because data can't be recovered. Please follow data integrity, security, backup or relevant SOPs before deleting any data.
+
- When reinstalling TDengine, if the default configuration file /etc/taos/taos.cfg exists, it will be kept and the configuration file in the installation package will be renamed to taos.cfg.orig and stored at /usr/local/taos/cfg to be used as configuration sample. Otherwise the configuration file in the installation package will be installed to /etc/taos/taos.cfg and used.
-## Start and Stop
-
-Linux system services `systemd`, `systemctl` or `service` are used to start, stop and restart TDengine. The server process of TDengine is `taosd`, which is started automatically after the Linux system is started. System operators can use `systemd`, `systemctl` or `service` to start, stop or restart TDengine server.
-
-For example, if using `systemctl` , the commands to start, stop, restart and check TDengine server are below:
-
-- Start server:`systemctl start taosd`
-
-- Stop server:`systemctl stop taosd`
-
-- Restart server:`systemctl restart taosd`
-
-- Check server status:`systemctl status taosd`
-
-Another component named as `taosAdapter` is to provide HTTP service for TDengine, it should be started and stopped using `systemctl`.
-
-If the server process is OK, the output of `systemctl status` is like below:
-
-```
-Active: active (running)
-```
-
-Otherwise, the output is as below:
-
-```
-Active: inactive (dead)
-```
## Upgrade
-
There are two aspects in upgrade operation: upgrade installation package and upgrade a running server.
To upgrade a package, follow the steps mentioned previously to first uninstall the old version then install the new version.
Upgrading a running server is much more complex. First please check the version number of the old version and the new version. The version number of TDengine consists of 4 sections, only if the first 3 sections match can the old version be upgraded to the new version. The steps of upgrading a running server are as below:
-
- Stop inserting data
- Make sure all data is persisted to disk
-- Make some simple queries (Such as total rows in stables, tables and so on. Note down the values. Follow best practices and relevant SOPs.)
- Stop the cluster of TDengine
- Uninstall old version and install new version
- Start the cluster of TDengine
-- Execute simple queries, such as the ones executed prior to installing the new package, to make sure there is no data loss
+- Execute simple queries, such as the ones executed prior to installing the new package, to make sure there is no data loss
- Run some simple data insertion statements to make sure the cluster works well
- Restore business services
:::warning
-
TDengine doesn't guarantee any lower version is compatible with the data generated by a higher version, so it's never recommended to downgrade the version.
:::
diff --git a/docs/en/13-operation/03-tolerance.md b/docs/en/13-operation/03-tolerance.md
index ba9d5d75e3..21a5a90282 100644
--- a/docs/en/13-operation/03-tolerance.md
+++ b/docs/en/13-operation/03-tolerance.md
@@ -27,4 +27,4 @@ The number of dnodes in a TDengine cluster must NOT be lower than the number of
As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is higher than 1, high availability can be achieved without any other assistance. For disaster recovery, dnodes of a TDengine cluster should be deployed in geographically different data centers.
-Alternatively, you can use taosX to synchronize the data from one TDengine cluster to another cluster in a remote location. For more information, see [taosX](../../reference/taosX).
+Alternatively, you can use taosX to synchronize the data from one TDengine cluster to another cluster in a remote location. Note that taosX is available only in TDengine Enterprise; for more information, visit tdengine.com.
diff --git a/docs/en/14-reference/11-docker/index.md b/docs/en/14-reference/11-docker/index.md
index b3c3cddd9a..be1d72ff9c 100644
--- a/docs/en/14-reference/11-docker/index.md
+++ b/docs/en/14-reference/11-docker/index.md
@@ -72,7 +72,7 @@ Next, ensure the hostname "tdengine" is resolvable in `/etc/hosts`.
echo 127.0.0.1 tdengine |sudo tee -a /etc/hosts
```
-Finally, the TDengine service can be accessed from the taos shell or any connector with "tdengine" as the server address.
+Finally, the TDengine service can be accessed from the TDengine CLI or any connector with "tdengine" as the server address.
```shell
taos -h tdengine -P 6030
diff --git a/docs/en/14-reference/12-config/index.md b/docs/en/14-reference/12-config/index.md
index 532e16de5c..02921c3f6a 100644
--- a/docs/en/14-reference/12-config/index.md
+++ b/docs/en/14-reference/12-config/index.md
@@ -344,7 +344,7 @@ The charset that takes effect is UTF-8.
| Attribute | Description |
| -------- | --------------------------------- |
| Applicable | Server and Client |
-| Meaning | The interval for taos shell to send heartbeat to mnode |
+| Meaning | The interval for TDengine CLI to send heartbeat to mnode |
| Unit | second |
| Value Range | 1-120 |
| Default Value | 3 |
@@ -380,6 +380,35 @@ The charset that takes effect is UTF-8.
| Unit | bytes |
| Value Range | 0: always compress; >0: only compress when the size of any column data exceeds the threshold; -1: always uncompress |
| Default Value | -1 |
+| Note | Available from version 2.3.0.0 |
+
+## Continuous Query Parameters
+
+### minSlidingTime
+
+| Attribute | Description |
+| ------------- | -------------------------------------------------------- |
+| Applicable | Server Only |
+| Meaning | Minimum sliding time of time window |
+| Unit | millisecond or microsecond, depending on time precision |
+| Value Range | 10-1000000 |
+| Default Value | 10 |
+
+### minIntervalTime
+
+| Attribute | Description |
+| ------------- | --------------------------- |
+| Applicable | Server Only |
+| Meaning | Minimum size of time window |
+| Unit | millisecond |
+| Value Range | 1-1000000 |
+| Default Value | 10 |
+
+:::info
+To prevent system resources from being exhausted by multiple concurrent streams, a random delay is applied to each stream automatically. `maxFirstStreamCompDelay` is the maximum delay before a continuous query is started for the first time. `streamCompDelayRatio` is the ratio used to calculate the delay, with the size of the time window as the base. `maxStreamCompDelay` is the maximum delay. The actual delay is a random time no larger than `maxStreamCompDelay`. If a continuous query fails, `retryStreamCompDelay` is the delay before retrying it, also no larger than `maxStreamCompDelay`.
+
+:::
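These parameters are set in `taos.cfg` on the server side; a minimal sketch using the default values from the tables above:

```
minSlidingTime   10
minIntervalTime  10
```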
## Log Parameters
diff --git a/docs/en/20-third-party/import_dashboard.webp b/docs/en/20-third-party/import_dashboard.webp
new file mode 100644
index 0000000000..164e3f4690
Binary files /dev/null and b/docs/en/20-third-party/import_dashboard.webp differ
diff --git a/docs/en/20-third-party/import_dashboard1.webp b/docs/en/20-third-party/import_dashboard1.webp
new file mode 100644
index 0000000000..d4fb374ce8
Binary files /dev/null and b/docs/en/20-third-party/import_dashboard1.webp differ
diff --git a/docs/en/20-third-party/import_dashboard2.webp b/docs/en/20-third-party/import_dashboard2.webp
new file mode 100644
index 0000000000..9f74dc96be
Binary files /dev/null and b/docs/en/20-third-party/import_dashboard2.webp differ
diff --git a/docs/examples/java/src/main/java/com/taos/example/RestInsertExample.java b/docs/examples/java/src/main/java/com/taos/example/RestInsertExample.java
index af97fe4373..9d85bf2a94 100644
--- a/docs/examples/java/src/main/java/com/taos/example/RestInsertExample.java
+++ b/docs/examples/java/src/main/java/com/taos/example/RestInsertExample.java
@@ -16,14 +16,14 @@ public class RestInsertExample {
private static List<String> getRawData() {
return Arrays.asList(
- "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,California.SanFrancisco,2",
- "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,California.SanFrancisco,2",
- "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,California.SanFrancisco,2",
- "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,California.SanFrancisco,3",
- "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,California.LosAngeles,2",
- "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,California.LosAngeles,2",
- "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,California.LosAngeles,3",
- "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,California.LosAngeles,3"
+ "d1001,2018-10-03 14:38:05.000,10.30000,219,0.31000,'California.SanFrancisco',2",
+ "d1001,2018-10-03 14:38:15.000,12.60000,218,0.33000,'California.SanFrancisco',2",
+ "d1001,2018-10-03 14:38:16.800,12.30000,221,0.31000,'California.SanFrancisco',2",
+ "d1002,2018-10-03 14:38:16.650,10.30000,218,0.25000,'California.SanFrancisco',3",
+ "d1003,2018-10-03 14:38:05.500,11.80000,221,0.28000,'California.LosAngeles',2",
+ "d1003,2018-10-03 14:38:16.600,13.40000,223,0.29000,'California.LosAngeles',2",
+ "d1004,2018-10-03 14:38:05.000,10.80000,223,0.29000,'California.LosAngeles',3",
+ "d1004,2018-10-03 14:38:06.500,11.50000,221,0.35000,'California.LosAngeles',3"
);
}
diff --git a/docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java b/docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java
index 50e8b35771..179e6e6911 100644
--- a/docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java
+++ b/docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java
@@ -57,7 +57,7 @@ public class SubscribeDemo {
properties.setProperty(TMQConstants.ENABLE_AUTO_COMMIT, "true");
properties.setProperty(TMQConstants.GROUP_ID, "test");
properties.setProperty(TMQConstants.VALUE_DESERIALIZER,
- "com.taosdata.jdbc.MetersDeserializer");
+ "com.taos.example.MetersDeserializer");
// poll data
try (TaosConsumer<Meters> consumer = new TaosConsumer<>(properties)) {
diff --git a/docs/examples/java/src/main/java/com/taos/example/highvolume/DataBaseMonitor.java b/docs/examples/java/src/main/java/com/taos/example/highvolume/DataBaseMonitor.java
new file mode 100644
index 0000000000..04b149a4b9
--- /dev/null
+++ b/docs/examples/java/src/main/java/com/taos/example/highvolume/DataBaseMonitor.java
@@ -0,0 +1,63 @@
+package com.taos.example.highvolume;
+
+import java.sql.*;
+
+/**
+ * Prepare target database.
+ * Count total records in database periodically so that we can estimate the writing speed.
+ */
+public class DataBaseMonitor {
+ private Connection conn;
+ private Statement stmt;
+
+ public DataBaseMonitor init() throws SQLException {
+ if (conn == null) {
+ String jdbcURL = System.getenv("TDENGINE_JDBC_URL");
+ conn = DriverManager.getConnection(jdbcURL);
+ stmt = conn.createStatement();
+ }
+ return this;
+ }
+
+ public void close() {
+ try {
+ stmt.close();
+ } catch (SQLException e) { // ignore errors while closing
+ }
+ try {
+ conn.close();
+ } catch (SQLException e) { // ignore errors while closing
+ }
+ }
+
+ public void prepareDatabase() throws SQLException {
+ stmt.execute("DROP DATABASE IF EXISTS test");
+ stmt.execute("CREATE DATABASE test");
+ stmt.execute("CREATE STABLE test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)");
+ }
+
+ public Long count() throws SQLException {
+ if (!stmt.isClosed()) {
+ ResultSet result = stmt.executeQuery("SELECT count(*) from test.meters");
+ result.next();
+ return result.getLong(1);
+ }
+ return null;
+ }
+
+ /**
+ * show test.stables;
+ *
+ * name | created_time | columns | tags | tables |
+ * ============================================================================================
+ * meters | 2022-07-20 08:39:30.902 | 4 | 2 | 620000 |
+ */
+ public Long getTableCount() throws SQLException {
+ if (!stmt.isClosed()) {
+ ResultSet result = stmt.executeQuery("show test.stables");
+ result.next();
+ return result.getLong(5);
+ }
+ return null;
+ }
+}
\ No newline at end of file
diff --git a/docs/examples/java/src/main/java/com/taos/example/highvolume/FastWriteExample.java b/docs/examples/java/src/main/java/com/taos/example/highvolume/FastWriteExample.java
new file mode 100644
index 0000000000..41b59551ca
--- /dev/null
+++ b/docs/examples/java/src/main/java/com/taos/example/highvolume/FastWriteExample.java
@@ -0,0 +1,70 @@
+package com.taos.example.highvolume;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.*;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.BlockingQueue;
+
+
+public class FastWriteExample {
+ final static Logger logger = LoggerFactory.getLogger(FastWriteExample.class);
+
+ final static int taskQueueCapacity = 1000000;
+ final static List<BlockingQueue<String>> taskQueues = new ArrayList<>();
+ final static List<ReadTask> readTasks = new ArrayList<>();
+ final static List<WriteTask> writeTasks = new ArrayList<>();
+ final static DataBaseMonitor databaseMonitor = new DataBaseMonitor();
+
+ public static void stopAll() {
+ logger.info("shutting down");
+ readTasks.forEach(task -> task.stop());
+ writeTasks.forEach(task -> task.stop());
+ databaseMonitor.close();
+ }
+
+ public static void main(String[] args) throws InterruptedException, SQLException {
+ int readTaskCount = args.length > 0 ? Integer.parseInt(args[0]) : 1;
+ int writeTaskCount = args.length > 1 ? Integer.parseInt(args[1]) : 3;
+ int tableCount = args.length > 2 ? Integer.parseInt(args[2]) : 1000;
+ int maxBatchSize = args.length > 3 ? Integer.parseInt(args[3]) : 3000;
+
+ logger.info("readTaskCount={}, writeTaskCount={} tableCount={} maxBatchSize={}",
+ readTaskCount, writeTaskCount, tableCount, maxBatchSize);
+
+ databaseMonitor.init().prepareDatabase();
+
+ // Create task queues, writing tasks and start writing threads.
+ for (int i = 0; i < writeTaskCount; ++i) {
+ BlockingQueue<String> queue = new ArrayBlockingQueue<>(taskQueueCapacity);
+ taskQueues.add(queue);
+ WriteTask task = new WriteTask(queue, maxBatchSize);
+ Thread t = new Thread(task);
+ t.setName("WriteThread-" + i);
+ t.start();
+ }
+
+ // create reading tasks and start reading threads
+ int tableCountPerTask = tableCount / readTaskCount;
+ for (int i = 0; i < readTaskCount; ++i) {
+ ReadTask task = new ReadTask(i, taskQueues, tableCountPerTask);
+ Thread t = new Thread(task);
+ t.setName("ReadThread-" + i);
+ t.start();
+ }
+
+ Runtime.getRuntime().addShutdownHook(new Thread(FastWriteExample::stopAll));
+
+ long lastCount = 0;
+ while (true) {
+ Thread.sleep(10000);
+ long numberOfTable = databaseMonitor.getTableCount();
+ long count = databaseMonitor.count();
+ logger.info("numberOfTable={} count={} speed={}", numberOfTable, count, (count - lastCount) / 10);
+ lastCount = count;
+ }
+ }
+}
\ No newline at end of file
diff --git a/docs/examples/java/src/main/java/com/taos/example/highvolume/MockDataSource.java b/docs/examples/java/src/main/java/com/taos/example/highvolume/MockDataSource.java
new file mode 100644
index 0000000000..6fe83f002e
--- /dev/null
+++ b/docs/examples/java/src/main/java/com/taos/example/highvolume/MockDataSource.java
@@ -0,0 +1,53 @@
+package com.taos.example.highvolume;
+
+import java.util.Iterator;
+
+/**
+ * Generate test data
+ */
+class MockDataSource implements Iterator<String> {
+ private String tbNamePrefix;
+ private int tableCount;
+ private long maxRowsPerTable = 1000000000L;
+
+ // 100 milliseconds between two neighbouring rows.
+ long startMs = System.currentTimeMillis() - maxRowsPerTable * 100;
+ private int currentRow = 0;
+ private int currentTbId = -1;
+
+ // mock values
+ String[] location = {"LosAngeles", "SanDiego", "Hollywood", "Compton", "San Francisco"};
+ float[] current = {8.8f, 10.7f, 9.9f, 8.9f, 9.4f};
+ int[] voltage = {119, 116, 111, 113, 118};
+ float[] phase = {0.32f, 0.34f, 0.33f, 0.329f, 0.141f};
+
+ public MockDataSource(String tbNamePrefix, int tableCount) {
+ this.tbNamePrefix = tbNamePrefix;
+ this.tableCount = tableCount;
+ }
+
+ @Override
+ public boolean hasNext() {
+ currentTbId += 1;
+ if (currentTbId == tableCount) {
+ currentTbId = 0;
+ currentRow += 1;
+ }
+ return currentRow < maxRowsPerTable;
+ }
+
+ @Override
+ public String next() {
+ long ts = startMs + 100 * currentRow;
+ int groupId = currentTbId % 5 == 0 ? currentTbId / 5 : currentTbId / 5 + 1;
+ StringBuilder sb = new StringBuilder(tbNamePrefix + "_" + currentTbId + ","); // tbName
+ sb.append(ts).append(','); // ts
+ sb.append(current[currentRow % 5]).append(','); // current
+ sb.append(voltage[currentRow % 5]).append(','); // voltage
+ sb.append(phase[currentRow % 5]).append(','); // phase
+ sb.append(location[currentRow % 5]).append(','); // location
+ sb.append(groupId); // groupID
+
+ return sb.toString();
+ }
+}
\ No newline at end of file
diff --git a/docs/examples/java/src/main/java/com/taos/example/highvolume/ReadTask.java b/docs/examples/java/src/main/java/com/taos/example/highvolume/ReadTask.java
new file mode 100644
index 0000000000..a6fcfed1d2
--- /dev/null
+++ b/docs/examples/java/src/main/java/com/taos/example/highvolume/ReadTask.java
@@ -0,0 +1,58 @@
+package com.taos.example.highvolume;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.Iterator;
+import java.util.List;
+import java.util.concurrent.BlockingQueue;
+
+class ReadTask implements Runnable {
+ private final static Logger logger = LoggerFactory.getLogger(ReadTask.class);
+ private final int taskId;
+ private final List<BlockingQueue<String>> taskQueues;
+ private final int queueCount;
+ private final int tableCount;
+ private boolean active = true;
+
+ public ReadTask(int readTaskId, List<BlockingQueue<String>> queues, int tableCount) {
+ this.taskId = readTaskId;
+ this.taskQueues = queues;
+ this.queueCount = queues.size();
+ this.tableCount = tableCount;
+ }
+
+ /**
+ * Assign data received to different queues.
+ * Here we use the suffix number in table name.
+ * You are expected to define your own rule in practice.
+ *
+ * @param line record received
+ * @return which queue to use
+ */
+ public int getQueueId(String line) {
+ String tbName = line.substring(0, line.indexOf(',')); // For example: tb1_101
+ String suffixNumber = tbName.split("_")[1];
+ return Integer.parseInt(suffixNumber) % this.queueCount;
+ }
+
+ @Override
+ public void run() {
+ logger.info("started");
+ Iterator<String> it = new MockDataSource("tb" + this.taskId, tableCount);
+ try {
+ while (it.hasNext() && active) {
+ String line = it.next();
+ int queueId = getQueueId(line);
+ taskQueues.get(queueId).put(line);
+ }
+ } catch (Exception e) {
+ logger.error("Read Task Error", e);
+ }
+ }
+
+ public void stop() {
+ logger.info("stop");
+ this.active = false;
+ }
+}
\ No newline at end of file
diff --git a/docs/examples/java/src/main/java/com/taos/example/highvolume/SQLWriter.java b/docs/examples/java/src/main/java/com/taos/example/highvolume/SQLWriter.java
new file mode 100644
index 0000000000..c2989acdbe
--- /dev/null
+++ b/docs/examples/java/src/main/java/com/taos/example/highvolume/SQLWriter.java
@@ -0,0 +1,205 @@
+package com.taos.example.highvolume;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.*;
+import java.util.HashMap;
+import java.util.Map;
+
+/**
+ * A helper class encapsulate the logic of writing using SQL.
+ *
+ * The main interfaces are two methods:
+ *
+ * - {@link SQLWriter#processLine}, which receives raw lines from WriteTask and groups them by table name.
+ * - {@link SQLWriter#flush}, which assembles the INSERT statement and executes it.
+ *
+ *
+ * There is a technical point worth mentioning: we create a table as needed when a "table does not exist" error occurs, instead of creating tables automatically using the syntax "INSERT INTO tb USING stb".
+ * This ensures that checking table existence is a one-time-only operation.
+ *
+ *
+ *
+ */
+public class SQLWriter {
+ final static Logger logger = LoggerFactory.getLogger(SQLWriter.class);
+
+ private Connection conn;
+ private Statement stmt;
+
+ /**
+ * current number of buffered records
+ */
+ private int bufferedCount = 0;
+ /**
+ * Maximum number of buffered records.
+ * A flush is triggered when bufferedCount reaches this value.
+ */
+ private int maxBatchSize;
+
+
+ /**
+ * Maximum SQL length.
+ */
+ private int maxSQLLength;
+
+ /**
+ * Map from table name to column values. For example:
+ * "tb001" -> "(1648432611249,2.1,114,0.09) (1648432611250,2.2,135,0.2)"
+ */
+ private Map<String, String> tbValues = new HashMap<>();
+
+ /**
+ * Map from table name to tag values in the same order as creating stable.
+ * Used for creating table.
+ */
+ private Map<String, String> tbTags = new HashMap<>();
+
+ public SQLWriter(int maxBatchSize) {
+ this.maxBatchSize = maxBatchSize;
+ }
+
+
+ /**
+ * Get Database Connection
+ *
+ * @return Connection
+ * @throws SQLException
+ */
+ private static Connection getConnection() throws SQLException {
+ String jdbcURL = System.getenv("TDENGINE_JDBC_URL");
+ return DriverManager.getConnection(jdbcURL);
+ }
+
+ /**
+ * Create Connection and Statement
+ *
+ * @throws SQLException
+ */
+ public void init() throws SQLException {
+ conn = getConnection();
+ stmt = conn.createStatement();
+ stmt.execute("use test");
+ ResultSet rs = stmt.executeQuery("show variables");
+ while (rs.next()) {
+ String configName = rs.getString(1);
+ if ("maxSQLLength".equals(configName)) {
+ maxSQLLength = Integer.parseInt(rs.getString(2));
+ logger.info("maxSQLLength={}", maxSQLLength);
+ }
+ }
+ }
+
+ /**
+ * Convert raw data to SQL fragments, group them by table name and cache them in a HashMap.
+ * Trigger writing when the number of buffered records reaches maxBatchSize.
+ *
+ * @param line raw data get from task queue in format: tbName,ts,current,voltage,phase,location,groupId
+ */
+ public void processLine(String line) throws SQLException {
+ bufferedCount += 1;
+ int firstComma = line.indexOf(',');
+ String tbName = line.substring(0, firstComma);
+ int lastComma = line.lastIndexOf(',');
+ int secondLastComma = line.lastIndexOf(',', lastComma - 1);
+ String value = "(" + line.substring(firstComma + 1, secondLastComma) + ") ";
+ if (tbValues.containsKey(tbName)) {
+ tbValues.put(tbName, tbValues.get(tbName) + value);
+ } else {
+ tbValues.put(tbName, value);
+ }
+ if (!tbTags.containsKey(tbName)) {
+ String location = line.substring(secondLastComma + 1, lastComma);
+ String groupId = line.substring(lastComma + 1);
+ String tagValues = "('" + location + "'," + groupId + ')';
+ tbTags.put(tbName, tagValues);
+ }
+ if (bufferedCount == maxBatchSize) {
+ flush();
+ }
+ }
+
+
+ /**
+ * Assemble an INSERT statement from the buffered SQL fragments in {@link SQLWriter#tbValues} and execute it.
+ * On a "Table does not exist" error, create all tables referenced in the SQL and retry it.
+ */
+ public void flush() throws SQLException {
+ StringBuilder sb = new StringBuilder("INSERT INTO ");
+ for (Map.Entry<String, String> entry : tbValues.entrySet()) {
+ String tableName = entry.getKey();
+ String values = entry.getValue();
+ String q = tableName + " values " + values + " ";
+ if (sb.length() + q.length() > maxSQLLength) {
+ executeSQL(sb.toString());
+ logger.warn("increase maxSQLLength or decrease maxBatchSize to gain better performance");
+ sb = new StringBuilder("INSERT INTO ");
+ }
+ sb.append(q);
+ }
+ executeSQL(sb.toString());
+ tbValues.clear();
+ bufferedCount = 0;
+ }
+
+ private void executeSQL(String sql) throws SQLException {
+ try {
+ stmt.executeUpdate(sql);
+ } catch (SQLException e) {
+ // convert to error code defined in taoserror.h
+ int errorCode = e.getErrorCode() & 0xffff;
+ if (errorCode == 0x362 || errorCode == 0x218) {
+ // Table does not exist
+ createTables();
+ executeSQL(sql);
+ } else {
+ logger.error("Execute SQL: {}", sql);
+ throw e;
+ }
+ } catch (Throwable throwable) {
+ logger.error("Execute SQL: {}", sql);
+ throw throwable;
+ }
+ }
+
+ /**
+ * Create tables in batch using syntax:
+ *
+ * CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF NOT EXISTS] tb_name2 USING stb_name TAGS (tag_value2, ...) ...;
+ *
+ */
+ private void createTables() throws SQLException {
+ StringBuilder sb = new StringBuilder("CREATE TABLE ");
+ for (String tbName : tbValues.keySet()) {
+ String tagValues = tbTags.get(tbName);
+ sb.append("IF NOT EXISTS ").append(tbName).append(" USING meters TAGS ").append(tagValues).append(" ");
+ }
+ String sql = sb.toString();
+ try {
+ stmt.executeUpdate(sql);
+ } catch (Throwable throwable) {
+ logger.error("Execute SQL: {}", sql);
+ throw throwable;
+ }
+ }
+
+ public boolean hasBufferedValues() {
+ return bufferedCount > 0;
+ }
+
+ public int getBufferedCount() {
+ return bufferedCount;
+ }
+
+ public void close() {
+ try {
+ stmt.close();
+ } catch (SQLException e) {
+ // ignore errors on close
+ }
+ try {
+ conn.close();
+ } catch (SQLException e) {
+ // ignore errors on close
+ }
+ }
+}
\ No newline at end of file
diff --git a/docs/examples/java/src/main/java/com/taos/example/highvolume/StmtWriter.java b/docs/examples/java/src/main/java/com/taos/example/highvolume/StmtWriter.java
new file mode 100644
index 0000000000..8ade06625d
--- /dev/null
+++ b/docs/examples/java/src/main/java/com/taos/example/highvolume/StmtWriter.java
@@ -0,0 +1,4 @@
+package com.taos.example.highvolume;
+
+public class StmtWriter {
+}
diff --git a/docs/examples/java/src/main/java/com/taos/example/highvolume/WriteTask.java b/docs/examples/java/src/main/java/com/taos/example/highvolume/WriteTask.java
new file mode 100644
index 0000000000..de9e5463d7
--- /dev/null
+++ b/docs/examples/java/src/main/java/com/taos/example/highvolume/WriteTask.java
@@ -0,0 +1,58 @@
+package com.taos.example.highvolume;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.util.concurrent.BlockingQueue;
+
+class WriteTask implements Runnable {
+ private final static Logger logger = LoggerFactory.getLogger(WriteTask.class);
+ private final int maxBatchSize;
+
+ // The queue from which this write task gets raw data.
+ private final BlockingQueue<String> queue;
+
+ // A flag indicating whether to continue running. Declared volatile so that stop() called from another thread is visible.
+ private volatile boolean active = true;
+
+ public WriteTask(BlockingQueue<String> taskQueue, int maxBatchSize) {
+ this.queue = taskQueue;
+ this.maxBatchSize = maxBatchSize;
+ }
+
+ @Override
+ public void run() {
+ logger.info("started");
+ String line = null; // the line most recently taken from the queue.
+ SQLWriter writer = new SQLWriter(maxBatchSize);
+ try {
+ writer.init();
+ while (active) {
+ line = queue.poll();
+ if (line != null) {
+ // parse raw data and buffer the data.
+ writer.processLine(line);
+ } else if (writer.hasBufferedValues()) {
+ // write data immediately if no more data in the queue
+ writer.flush();
+ } else {
+ // sleep a while to avoid high CPU usage when the queue is empty and there are no buffered records.
+ Thread.sleep(100);
+ }
+ }
+ if (writer.hasBufferedValues()) {
+ writer.flush();
+ }
+ } catch (Exception e) {
+ String msg = String.format("line=%s, bufferedCount=%s", line, writer.getBufferedCount());
+ logger.error(msg, e);
+ } finally {
+ writer.close();
+ }
+ }
+
+ public void stop() {
+ logger.info("stop");
+ this.active = false;
+ }
+}
\ No newline at end of file
diff --git a/docs/examples/java/src/test/java/com/taos/test/TestAll.java b/docs/examples/java/src/test/java/com/taos/test/TestAll.java
index 42db24485a..8d201da074 100644
--- a/docs/examples/java/src/test/java/com/taos/test/TestAll.java
+++ b/docs/examples/java/src/test/java/com/taos/test/TestAll.java
@@ -23,16 +23,16 @@ public class TestAll {
String jdbcUrl = "jdbc:TAOS://localhost:6030?user=root&password=taosdata";
try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
try (Statement stmt = conn.createStatement()) {
- String sql = "INSERT INTO power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)\n" +
- " power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 15:38:15.000',12.60000,218,0.33000)\n" +
- " power.d1001 USING power.meters TAGS(California.SanFrancisco, 2) VALUES('2018-10-03 15:38:16.800',12.30000,221,0.31000)\n" +
- " power.d1002 USING power.meters TAGS(California.SanFrancisco, 3) VALUES('2018-10-03 15:38:16.650',10.30000,218,0.25000)\n" +
- " power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 15:38:05.500',11.80000,221,0.28000)\n" +
- " power.d1003 USING power.meters TAGS(California.LosAngeles, 2) VALUES('2018-10-03 15:38:16.600',13.40000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:05.000',10.80000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:06.000',10.80000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:07.000',10.80000,223,0.29000)\n" +
- " power.d1004 USING power.meters TAGS(California.LosAngeles, 3) VALUES('2018-10-03 15:38:08.500',11.50000,221,0.35000)";
+ String sql = "INSERT INTO power.d1001 USING power.meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-03 14:38:05.000',10.30000,219,0.31000)\n" +
+ " power.d1001 USING power.meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-03 15:38:15.000',12.60000,218,0.33000)\n" +
+ " power.d1001 USING power.meters TAGS('California.SanFrancisco', 2) VALUES('2018-10-03 15:38:16.800',12.30000,221,0.31000)\n" +
+ " power.d1002 USING power.meters TAGS('California.SanFrancisco', 3) VALUES('2018-10-03 15:38:16.650',10.30000,218,0.25000)\n" +
+ " power.d1003 USING power.meters TAGS('California.LosAngeles', 2) VALUES('2018-10-03 15:38:05.500',11.80000,221,0.28000)\n" +
+ " power.d1003 USING power.meters TAGS('California.LosAngeles', 2) VALUES('2018-10-03 15:38:16.600',13.40000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 15:38:05.000',10.80000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 15:38:06.000',10.80000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 15:38:07.000',10.80000,223,0.29000)\n" +
+ " power.d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES('2018-10-03 15:38:08.500',11.50000,221,0.35000)";
stmt.execute(sql);
}
diff --git a/docs/examples/python/fast_write_example.py b/docs/examples/python/fast_write_example.py
new file mode 100644
index 0000000000..c9d606388f
--- /dev/null
+++ b/docs/examples/python/fast_write_example.py
@@ -0,0 +1,180 @@
+# Install dependencies (Python >= 3.8 recommended):
+# pip3 install faster-fifo
+#
+
+import logging
+import math
+import sys
+import time
+import os
+from multiprocessing import Process
+from faster_fifo import Queue
+from mockdatasource import MockDataSource
+from queue import Empty
+from typing import List
+
+logging.basicConfig(stream=sys.stdout, level=logging.DEBUG, format="%(asctime)s [%(name)s] - %(message)s")
+
+READ_TASK_COUNT = 1
+WRITE_TASK_COUNT = 1
+TABLE_COUNT = 1000
+QUEUE_SIZE = 1000000
+MAX_BATCH_SIZE = 3000
+
+read_processes = []
+write_processes = []
+
+
+def get_connection():
+ """
+    If the environment variable TDENGINE_FIRST_EP is set, it is used. Otherwise, firstEP in /etc/taos/taos.cfg is used.
+    You can also override the default username and password by setting the environment variables TDENGINE_USER and TDENGINE_PASSWORD.
+ """
+ import taos
+ firstEP = os.environ.get("TDENGINE_FIRST_EP")
+ if firstEP:
+ host, port = firstEP.split(":")
+ else:
+ host, port = None, 0
+ user = os.environ.get("TDENGINE_USER", "root")
+ password = os.environ.get("TDENGINE_PASSWORD", "taosdata")
+ return taos.connect(host=host, port=int(port), user=user, password=password)
+
+
+# ANCHOR: read
+
+def run_read_task(task_id: int, task_queues: List[Queue]):
+ table_count_per_task = TABLE_COUNT // READ_TASK_COUNT
+ data_source = MockDataSource(f"tb{task_id}", table_count_per_task)
+ try:
+ for batch in data_source:
+ for table_id, rows in batch:
+                # hash rows to one of the queues by table id
+                i = table_id % len(task_queues)
+                # block forever when the queue is full
+ task_queues[i].put_many(rows, block=True, timeout=-1)
+ except KeyboardInterrupt:
+ pass
+
+
+# ANCHOR_END: read
+
+# ANCHOR: write
+def run_write_task(task_id: int, queue: Queue):
+ from sql_writer import SQLWriter
+ log = logging.getLogger(f"WriteTask-{task_id}")
+ writer = SQLWriter(get_connection)
+ lines = None
+ try:
+ while True:
+ try:
+ # get as many as possible
+ lines = queue.get_many(block=False, max_messages_to_get=MAX_BATCH_SIZE)
+ writer.process_lines(lines)
+ except Empty:
+ time.sleep(0.01)
+ except KeyboardInterrupt:
+ pass
+ except BaseException as e:
+ log.debug(f"lines={lines}")
+ raise e
+
+
+# ANCHOR_END: write
+
+def set_global_config():
+ argc = len(sys.argv)
+ if argc > 1:
+ global READ_TASK_COUNT
+ READ_TASK_COUNT = int(sys.argv[1])
+ if argc > 2:
+ global WRITE_TASK_COUNT
+ WRITE_TASK_COUNT = int(sys.argv[2])
+ if argc > 3:
+ global TABLE_COUNT
+ TABLE_COUNT = int(sys.argv[3])
+ if argc > 4:
+ global QUEUE_SIZE
+ QUEUE_SIZE = int(sys.argv[4])
+ if argc > 5:
+ global MAX_BATCH_SIZE
+ MAX_BATCH_SIZE = int(sys.argv[5])
+
+
+# ANCHOR: monitor
+def run_monitor_process():
+ log = logging.getLogger("DataBaseMonitor")
+ conn = get_connection()
+ conn.execute("DROP DATABASE IF EXISTS test")
+ conn.execute("CREATE DATABASE test")
+ conn.execute("CREATE STABLE test.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) "
+ "TAGS (location BINARY(64), groupId INT)")
+
+ def get_count():
+ res = conn.query("SELECT count(*) FROM test.meters")
+ rows = res.fetch_all()
+ return rows[0][0] if rows else 0
+
+ last_count = 0
+ while True:
+ time.sleep(10)
+ count = get_count()
+ log.info(f"count={count} speed={(count - last_count) / 10}")
+ last_count = count
+
+
+# ANCHOR_END: monitor
+# ANCHOR: main
+def main():
+ set_global_config()
+ logging.info(f"READ_TASK_COUNT={READ_TASK_COUNT}, WRITE_TASK_COUNT={WRITE_TASK_COUNT}, "
+ f"TABLE_COUNT={TABLE_COUNT}, QUEUE_SIZE={QUEUE_SIZE}, MAX_BATCH_SIZE={MAX_BATCH_SIZE}")
+
+ monitor_process = Process(target=run_monitor_process)
+ monitor_process.start()
+ time.sleep(3) # waiting for database ready.
+
+ task_queues: List[Queue] = []
+ # create task queues
+ for i in range(WRITE_TASK_COUNT):
+ queue = Queue(max_size_bytes=QUEUE_SIZE)
+ task_queues.append(queue)
+
+ # create write processes
+ for i in range(WRITE_TASK_COUNT):
+ p = Process(target=run_write_task, args=(i, task_queues[i]))
+ p.start()
+ logging.debug(f"WriteTask-{i} started with pid {p.pid}")
+ write_processes.append(p)
+
+ # create read processes
+ for i in range(READ_TASK_COUNT):
+ queues = assign_queues(i, task_queues)
+ p = Process(target=run_read_task, args=(i, queues))
+ p.start()
+ logging.debug(f"ReadTask-{i} started with pid {p.pid}")
+ read_processes.append(p)
+
+ try:
+ monitor_process.join()
+ except KeyboardInterrupt:
+ monitor_process.terminate()
+ [p.terminate() for p in read_processes]
+ [p.terminate() for p in write_processes]
+ [q.close() for q in task_queues]
+
+
+def assign_queues(read_task_id, task_queues):
+ """
+ Compute target queues for a specific read task.
+ """
+ ratio = WRITE_TASK_COUNT / READ_TASK_COUNT
+ from_index = math.floor(read_task_id * ratio)
+ end_index = math.ceil((read_task_id + 1) * ratio)
+ return task_queues[from_index:end_index]
+
+
+if __name__ == '__main__':
+ main()
+# ANCHOR_END: main
diff --git a/docs/examples/python/mockdatasource.py b/docs/examples/python/mockdatasource.py
new file mode 100644
index 0000000000..852860aec0
--- /dev/null
+++ b/docs/examples/python/mockdatasource.py
@@ -0,0 +1,49 @@
+import time
+
+
+class MockDataSource:
+ samples = [
+ "8.8,119,0.32,LosAngeles,0",
+ "10.7,116,0.34,SanDiego,1",
+ "9.9,111,0.33,Hollywood,2",
+ "8.9,113,0.329,Compton,3",
+ "9.4,118,0.141,San Francisco,4"
+ ]
+
+ def __init__(self, tb_name_prefix, table_count):
+ self.table_name_prefix = tb_name_prefix + "_"
+ self.table_count = table_count
+ self.max_rows = 10000000
+ self.current_ts = round(time.time() * 1000) - self.max_rows * 100
+ # [(tableId, tableName, values),]
+ self.data = self._init_data()
+
+ def _init_data(self):
+ lines = self.samples * (self.table_count // 5 + 1)
+ data = []
+ for i in range(self.table_count):
+ table_name = self.table_name_prefix + str(i)
+            data.append((i, table_name, lines[i]))  # (tableId, tableName, values)
+ return data
+
+ def __iter__(self):
+ self.row = 0
+ return self
+
+    def __next__(self):
+        """
+        Return the next 1000 rows of each table.
+        return: [(tableId, [row, ...])]
+        """
+        if self.row >= self.max_rows:
+            raise StopIteration
+        self.row += 1000
+        # generate 1000 timestamps
+        ts = []
+        for _ in range(1000):
+            self.current_ts += 100
+            ts.append(str(self.current_ts))
+ # add timestamp to each row
+ # [(tableId, ["tableName,ts,current,voltage,phase,location,groupId"])]
+ result = []
+ for table_id, table_name, values in self.data:
+ rows = [table_name + ',' + t + ',' + values for t in ts]
+ result.append((table_id, rows))
+ return result
diff --git a/docs/examples/python/sql_writer.py b/docs/examples/python/sql_writer.py
new file mode 100644
index 0000000000..758167376b
--- /dev/null
+++ b/docs/examples/python/sql_writer.py
@@ -0,0 +1,90 @@
+import logging
+import taos
+
+
+class SQLWriter:
+ log = logging.getLogger("SQLWriter")
+
+ def __init__(self, get_connection_func):
+ self._tb_values = {}
+ self._tb_tags = {}
+ self._conn = get_connection_func()
+ self._max_sql_length = self.get_max_sql_length()
+ self._conn.execute("USE test")
+
+ def get_max_sql_length(self):
+ rows = self._conn.query("SHOW variables").fetch_all()
+ for r in rows:
+ name = r[0]
+ if name == "maxSQLLength":
+ return int(r[1])
+ return 1024 * 1024
+
+    def process_lines(self, lines):
+        """
+        :param lines: a list of "tbName,ts,current,voltage,phase,location,groupId" strings
+        """
+ for line in lines:
+ ps = line.split(",")
+ table_name = ps[0]
+ value = '(' + ",".join(ps[1:-2]) + ') '
+ if table_name in self._tb_values:
+ self._tb_values[table_name] += value
+ else:
+ self._tb_values[table_name] = value
+
+ if table_name not in self._tb_tags:
+ location = ps[-2]
+ group_id = ps[-1]
+ tag_value = f"('{location}',{group_id})"
+ self._tb_tags[table_name] = tag_value
+ self.flush()
+
+ def flush(self):
+        """
+        Assemble the INSERT statement and execute it.
+        When the SQL length grows close to maxSQLLength, the statement is executed immediately and a new INSERT statement is started.
+        On a "Table does not exist" error, the tables referenced in the SQL are created and the SQL is re-executed.
+        """
+ sql = "INSERT INTO "
+ sql_len = len(sql)
+ buf = []
+ for tb_name, values in self._tb_values.items():
+ q = tb_name + " VALUES " + values
+ if sql_len + len(q) >= self._max_sql_length:
+ sql += " ".join(buf)
+ self.execute_sql(sql)
+ sql = "INSERT INTO "
+ sql_len = len(sql)
+ buf = []
+ buf.append(q)
+ sql_len += len(q)
+ sql += " ".join(buf)
+ self.execute_sql(sql)
+ self._tb_values.clear()
+
+ def execute_sql(self, sql):
+ try:
+ self._conn.execute(sql)
+ except taos.Error as e:
+ error_code = e.errno & 0xffff
+            # Table does not exist: create the tables, then retry the SQL
+            if error_code == 9731:
+                self.create_tables()
+                self._conn.execute(sql)
+ else:
+ self.log.error("Execute SQL: %s", sql)
+ raise e
+ except BaseException as baseException:
+ self.log.error("Execute SQL: %s", sql)
+ raise baseException
+
+ def create_tables(self):
+ sql = "CREATE TABLE "
+ for tb in self._tb_values.keys():
+ tag_values = self._tb_tags[tb]
+ sql += "IF NOT EXISTS " + tb + " USING meters TAGS " + tag_values + " "
+ try:
+ self._conn.execute(sql)
+ except BaseException as e:
+ self.log.error("Execute SQL: %s", sql)
+ raise e
diff --git a/docs/zh/07-develop/03-insert-data/01-sql-writing.mdx b/docs/zh/07-develop/03-insert-data/01-sql-writing.mdx
index 214cbdaa96..2920fa35a4 100644
--- a/docs/zh/07-develop/03-insert-data/01-sql-writing.mdx
+++ b/docs/zh/07-develop/03-insert-data/01-sql-writing.mdx
@@ -23,7 +23,7 @@ import PhpStmt from "./_php_stmt.mdx";
## Introduction to SQL Writing
-Applications insert data by executing INSERT statements through a connector. Users can also manually enter INSERT statements in the TAOS Shell to insert data.
+Applications insert data by executing INSERT statements through a connector. Users can also manually enter INSERT statements in the TDengine CLI to insert data.
### Writing One Record at a Time
The following INSERT statement writes one record to table d1001:
diff --git a/docs/zh/07-develop/03-insert-data/05-high-volume.md b/docs/zh/07-develop/03-insert-data/05-high-volume.md
new file mode 100644
index 0000000000..32be8cb890
--- /dev/null
+++ b/docs/zh/07-develop/03-insert-data/05-high-volume.md
@@ -0,0 +1,440 @@
+import Tabs from "@theme/Tabs";
+import TabItem from "@theme/TabItem";
+
+# High Performance Writing
+
+This section describes how to write data into TDengine efficiently.
+
+## Principles of High-Performance Writing {#principle}
+
+### From the Perspective of the Client Program {#application-view}
+
+From the client program's perspective, writing data efficiently involves the following factors:
+
+1. The amount of data written per batch. Generally, the larger each batch, the more efficient the write (though the advantage disappears beyond a certain threshold). When writing to TDengine with SQL, put as much data as possible into a single SQL statement. Currently, the maximum length of a single SQL statement supported by TDengine is 1,048,576 (1 MB) characters. This can be changed via the client parameter maxSQLLength (default 65480).
+2. The number of concurrent connections. Generally, the more connections writing data concurrently, the more efficient the write (though efficiency drops beyond a certain threshold, depending on the server's processing capacity).
+3. The distribution of data across different tables (or subtables), i.e. the adjacency of the data being written. Generally, writing each batch to a single table (or subtable) is more efficient than writing to multiple tables (or subtables).
+4. The write method. Generally:
+   - Parameter binding is more efficient than SQL writing, because it avoids SQL parsing (though it increases the number of C interface calls, which carries its own performance cost for connectors).
+   - SQL writing without automatic table creation is more efficient than with it, because the latter frequently checks whether tables exist.
+   - SQL writing is more efficient than schemaless writing, because schemaless writing creates tables automatically and supports dynamically changing the table schema.
+
+Client programs should make full and proper use of these factors. Write only to one table (or subtable) in each batch, set the batch size, through testing and tuning, to the value best suited to the current system's processing capacity, and likewise tune the number of concurrent write connections to its best value, so as to achieve the best write speed on the current system.
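To make the batching factor concrete, here is a minimal sketch (the table name `d1001` and the meters columns are just the example schema used later in this section) of assembling many rows for one subtable into a single INSERT statement:

```python
def build_insert(table_name, rows):
    """Assemble one INSERT statement that writes many rows to one subtable.

    rows: list of (ts, current, voltage, phase) tuples.
    """
    # one VALUES fragment per row, all targeting the same subtable
    values = " ".join("(%d,%s,%d,%s)" % r for r in rows)
    return "INSERT INTO %s VALUES %s" % (table_name, values)

sql = build_insert("d1001", [(1648432611249, 2.1, 114, 0.09),
                             (1648432611349, 2.2, 135, 0.2)])
```

The larger the `rows` list, the fewer round trips per record, up to the SQL length limit discussed above.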
+
+### From the Perspective of the Data Source {#datasource-view}
+
+Client programs usually need to read data from a data source before writing it to TDengine. From the data source's perspective, the following cases call for a queue between the read threads and the write threads:
+
+1. There are multiple data sources, and each single source generates data much more slowly than a single thread can write it, but the overall data volume is fairly large. The queue aggregates the data of multiple sources, increasing the amount of data per batch.
+2. A single data source generates data much faster than a single thread can write it. The queue increases write concurrency.
+3. The data of a single table is scattered across multiple data sources. The queue aggregates the data of the same table in advance, improving the adjacency of the data at write time.
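The routing idea behind these cases can be sketched as follows. This is a minimal single-process illustration using the standard library `queue.Queue` (the real sample programs below use bounded cross-thread or cross-process queues); rows are assumed to be strings that start with the table name:

```python
import queue

def dispatch(rows, task_queues):
    """Hash each row to a queue by its table name, so the data of one
    table is always handled by the same write worker."""
    for row in rows:
        table_name = row.split(",")[0]  # rows look like "tbName,ts,..."
        i = hash(table_name) % len(task_queues)
        task_queues[i].put(row)

queues = [queue.Queue() for _ in range(4)]
dispatch(["d1001,1648432611249,2.1", "d1001,1648432611349,2.2"], queues)
```

Because the hash of a given table name is stable within a run, both rows above land in the same queue, preserving data adjacency for the writer that owns it.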
+
+If the writing application's data source is Kafka and the application itself is a Kafka consumer, Kafka's features can be exploited for efficient writing. For example:
+
+1. Write the data of one table to the same partition of the same topic, which increases data adjacency
+2. Aggregate data by subscribing to multiple topics
+3. Increase write concurrency by adding consumer threads
+4. Increase the maximum amount of data per write batch by increasing the maximum amount of data per fetch
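As an illustration, points 1 and 4 map to ordinary Kafka settings rather than anything TDengine-specific. The fragment below is a sketch using standard Kafka consumer property names; the values are illustrative and should be tuned to your workload:

```
# fetch more records per poll so that each write batch is larger
max.poll.records=3000
# wait for at least this many bytes before a fetch response is returned
fetch.min.bytes=1048576
```

On the producer side, using the table name as the message key makes Kafka route all records of one table to the same partition, which gives point 1 for free.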
+
+### From the Perspective of Server Configuration {#setting-view}
+
+From the perspective of server configuration, there are also many ways to optimize write performance.
+
+If the total number of tables is small (much less than the number of cores times 1000) and the CPU usage of the taosd process stays low no matter how the client program is tuned, the tables are probably unevenly distributed across vgroups. For example, if a database has 1000 tables in total and minTablesPerVnode is also set to 1000, all tables end up in a single vgroup. In this case, setting both minTablesPerVnode and tableIncStepPerVnode to 100 spreads the tables over 10 vgroups (assuming maxVgroupsPerDb is at least 10).
+
+If the total number of tables is large (say, more than 5 million), appropriately increasing maxVgroupsPerDb can also significantly speed up table creation. maxVgroupsPerDb defaults to 0, meaning it is configured automatically to the number of CPU cores. With a huge number of tables, it is also advisable to adjust maxTablesPerVnode so as not to exceed the table-creation limit of a single vnode.
+
+For more tuning parameters, see the [Configuration Reference](../../../reference/config).
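For example, the vgroup tuning described above could be sketched in taos.cfg as follows (the values are illustrative only; choose them based on your actual table count and core count):

```
# spread 1000 tables over roughly 10 vgroups
minTablesPerVnode     100
tableIncStepPerVnode  100
maxVgroupsPerDb       10
```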
+
+## Sample Programs {#sample-code}
+
+### Scenario Design {#scenario}
+
+The following sample programs demonstrate how to write data efficiently, with the scenario designed as follows:
+
+- The TDengine client program continuously reads data from other data sources. In the sample programs, reading is simulated by generating mock data.
+- A single connection cannot write to TDengine as fast as the data is read, so the client program starts multiple threads, each of which opens its own connection to TDengine and owns a dedicated, fixed-size message queue.
+- The client program hashes received data by table name (or subtable name) to the different threads, i.e. writes it to the message queue of the corresponding thread, so that data belonging to a given table (or subtable) is always processed by one fixed thread.
+- Each worker thread writes its batch of data to TDengine once it has drained its message queue or once the amount of data read reaches a predefined threshold, and then goes on processing the data received afterwards.
+
+
+
+### Sample Code {#code}
+
+This part provides sample code for the scenario above. The principle of efficient writing is the same for other scenarios, but the code needs to be adapted accordingly.
+
+The sample code assumes that the source data belongs to different subtables of the same supertable (meters). The program creates this supertable in the test database before starting to write data. Subtables are created on the fly by the application according to the received data. If your actual scenario involves multiple supertables, only the table-creation code in the write task needs to be modified.
+
+
+
+
+**Program Inventory**
+
+| Class            | Description                                                                 |
+| ---------------- | --------------------------------------------------------------------------- |
+| FastWriteExample | Main program                                                                |
+| ReadTask         | Reads data from the mock data source, hashes the table name to a queue index, and writes to the corresponding queue |
+| WriteTask        | Fetches data from a queue, assembles a batch, and writes it to TDengine     |
+| MockDataSource   | Generates mock data for a number of meters subtables                        |
+| SQLWriter        | Used by WriteTask for SQL assembly, automatic table creation, SQL writing, and SQL length checking |
+| StmtWriter       | Batch writing via parameter binding (not yet implemented)                   |
+| DataBaseMonitor  | Tracks the write speed and prints the current speed to the console every 10 seconds |
+
+
+Below is the complete code of each class, with more detailed descriptions.
+
+
+FastWriteExample
+The main program is responsible for:
+
+1. Creating the message queues
+2. Starting the write threads
+3. Starting the read threads
+4. Printing the write speed every 10 seconds
+
+The main program exposes 4 parameters by default, adjustable at every launch, for testing and tuning:
+
+1. The number of read threads. Default: 1.
+2. The number of write threads. Default: 3.
+3. The total number of mock tables. Default: 1000. They are evenly distributed among the read threads. With a large total number of tables, table creation takes longer and the measured write speed at the beginning may be lower.
+4. The maximum number of records per write batch. Default: 3000.
+
+The queue capacity (taskQueueCapacity) is also a performance-related parameter that can be adjusted by modifying the program. Generally, the larger the queue capacity, the lower the probability of blocking on enqueue and the higher the queue throughput, but also the larger the memory footprint. The default value in the sample program is already large enough.
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/FastWriteExample.java}}
+```
+
+
+
+
+ReadTask
+
+A read task reads data from its data source. Each read task is associated with one mock data source, each of which generates data for a fixed number of tables; different mock data sources generate data for different tables.
+
+A read task writes to the message queue in blocking mode, i.e. it blocks once the queue is full.
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/ReadTask.java}}
+```
+
+
+
+
+WriteTask
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/WriteTask.java}}
+```
+
+
+
+
+
+MockDataSource
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/MockDataSource.java}}
+```
+
+
+
+
+
+SQLWriter
+
+The SQLWriter class encapsulates the logic of assembling SQL and writing data. Note that none of the tables are created in advance; instead, when a table-does-not-exist exception is caught, the tables are created in batch using the supertable as a template and the INSERT statement is then re-executed. For other exceptions, the SQL statement being executed is simply logged; you can also log more clues for troubleshooting and fault recovery.
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/SQLWriter.java}}
+```
+
+
+
+
+
+DataBaseMonitor
+
+```java
+{{#include docs/examples/java/src/main/java/com/taos/example/highvolume/DataBaseMonitor.java}}
+```
+
+
+
+**Execution Steps**
+
+
+Running the Java Sample Program
+
+Before running the program, configure the environment variable `TDENGINE_JDBC_URL`. If TDengine Server is deployed locally with the default username, password, and port, set:
+
+```
+TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
+```
+
+**Running the sample program in a local IDE**
+
+1. Clone the TDengine repository
+ ```
+ git clone git@github.com:taosdata/TDengine.git --depth 1
+ ```
+2. Open the `docs/examples/java` directory in your IDE.
+3. Configure the environment variable `TDENGINE_JDBC_URL` in the development environment. If the global environment variable `TDENGINE_JDBC_URL` is already configured, skip this step.
+4. Run the class `com.taos.example.highvolume.FastWriteExample`.
+
+**Running the sample program on a remote server**
+
+To run the sample program on a server, follow these steps:
+
+1. Package the sample code. In the directory TDengine/docs/examples/java, run:
+ ```
+ mvn package
+ ```
+2. Create an examples directory on the remote server:
+ ```
+ mkdir -p examples/java
+ ```
+3. Copy the dependencies to the specified directory on the server:
+   - Copy the dependency packages (this only needs to be done once)
+ ```
+ scp -r .\target\lib @:~/examples/java
+ ```
+   - Copy the jar file of this program (this must be done every time the code is updated)
+ ```
+ scp -r .\target\javaexample-1.0.jar @:~/examples/java
+ ```
+4. Configure the environment variable.
+   Edit `~/.bash_profile` or `~/.bashrc` and add content such as the following:
+
+ ```
+ export TDENGINE_JDBC_URL="jdbc:TAOS://localhost:6030?user=root&password=taosdata"
+ ```
+
+   The above is the default JDBC URL for a locally deployed TDengine Server. Change it according to your actual environment.
+
+5. Launch the sample program with the java command. Command template:
+
+ ```
+ java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample
+ ```
+
+6. Stop the test program. The test program does not stop automatically; once a stable write speed under the current configuration has been obtained, press CTRL + C to terminate it.
+   Below is the log output of an actual run on a machine with a 16-core CPU, 64 GB RAM, and an SSD.
+
+ ```
+ root@vm85$ java -classpath lib/*:javaexample-1.0.jar com.taos.example.highvolume.FastWriteExample 2 12
+ 18:56:35.896 [main] INFO c.t.e.highvolume.FastWriteExample - readTaskCount=2, writeTaskCount=12 tableCount=1000 maxBatchSize=3000
+ 18:56:36.011 [WriteThread-0] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.015 [WriteThread-0] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.021 [WriteThread-1] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.022 [WriteThread-1] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.031 [WriteThread-2] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.032 [WriteThread-2] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.041 [WriteThread-3] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.042 [WriteThread-3] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.093 [WriteThread-4] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.094 [WriteThread-4] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.099 [WriteThread-5] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.100 [WriteThread-5] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.100 [WriteThread-6] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.101 [WriteThread-6] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.103 [WriteThread-7] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.104 [WriteThread-7] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.105 [WriteThread-8] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.107 [WriteThread-8] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.108 [WriteThread-9] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.109 [WriteThread-9] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.156 [WriteThread-10] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.157 [WriteThread-11] INFO c.taos.example.highvolume.WriteTask - started
+ 18:56:36.158 [WriteThread-10] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:36.158 [ReadThread-0] INFO com.taos.example.highvolume.ReadTask - started
+ 18:56:36.158 [ReadThread-1] INFO com.taos.example.highvolume.ReadTask - started
+ 18:56:36.158 [WriteThread-11] INFO c.taos.example.highvolume.SQLWriter - maxSQLLength=1048576
+ 18:56:46.369 [main] INFO c.t.e.highvolume.FastWriteExample - count=18554448 speed=1855444
+ 18:56:56.946 [main] INFO c.t.e.highvolume.FastWriteExample - count=39059660 speed=2050521
+ 18:57:07.322 [main] INFO c.t.e.highvolume.FastWriteExample - count=59403604 speed=2034394
+ 18:57:18.032 [main] INFO c.t.e.highvolume.FastWriteExample - count=80262938 speed=2085933
+ 18:57:28.432 [main] INFO c.t.e.highvolume.FastWriteExample - count=101139906 speed=2087696
+ 18:57:38.921 [main] INFO c.t.e.highvolume.FastWriteExample - count=121807202 speed=2066729
+ 18:57:49.375 [main] INFO c.t.e.highvolume.FastWriteExample - count=142952417 speed=2114521
+ 18:58:00.689 [main] INFO c.t.e.highvolume.FastWriteExample - count=163650306 speed=2069788
+ 18:58:11.646 [main] INFO c.t.e.highvolume.FastWriteExample - count=185019808 speed=2136950
+ ```
+
+
+
+
+
+
+**Program Inventory**
+
+The Python sample program uses a multi-process architecture and cross-process message queues.
+
+| Function or Class            | Description                                                          |
+| ---------------------------- | -------------------------------------------------------------------- |
+| main function                | Program entry point; creates the subprocesses and the message queues |
+| run_monitor_process function | Creates the database and the supertable, tracks the write speed, and prints it to the console periodically |
+| run_read_task function       | Main logic of a read process; reads data from other data systems and distributes it to its assigned queues |
+| MockDataSource class         | Mock data source; implements the iterator interface and returns the next 1000 rows of each table per batch |
+| run_write_task function      | Main logic of a write process; fetches as much data as possible from its queue at a time and writes it in batches |
+| SQLWriter class              | SQL writing and automatic table creation                             |
+| StmtWriter class             | Batch writing via parameter binding (not yet implemented)            |
+
+
+
+main function
+
+The main function creates the message queues and starts the subprocesses, of which there are 3 kinds:
+
+1. 1 monitor process, responsible for database initialization and tracking the write speed
+2. n read processes, responsible for reading data from other data systems
+3. m write processes, responsible for writing to the database
+
+The main function accepts 5 launch parameters, in order:
+
+1. The number of read tasks (processes). Default: 1
+2. The number of write tasks (processes). Default: 1
+3. The total number of mock tables. Default: 1000
+4. The queue size (in bytes). Default: 1000000
+5. The maximum number of records per write batch. Default: 3000
+
+```python
+{{#include docs/examples/python/fast_write_example.py:main}}
+```
+
+
+
+
+run_monitor_process
+
+The monitor process initializes the database and monitors the current write speed.
+
+```python
+{{#include docs/examples/python/fast_write_example.py:monitor}}
+```
+
+
+
+
+
+run_read_task function
+
+A read process reads data from other data systems and distributes it to its assigned queues.
+
+```python
+{{#include docs/examples/python/fast_write_example.py:read}}
+```
+
+
+
+
+
+MockDataSource
+
+Below is the implementation of the mock data source. We assume that every row it generates carries the target table name. In practice, you may need certain rules to determine the target table name.
+
+```python
+{{#include docs/examples/python/mockdatasource.py}}
+```
+
+
+
+
+run_write_task function
+
+A write process fetches as much data as possible from its queue at a time and writes it in batches.
+
+```python
+{{#include docs/examples/python/fast_write_example.py:write}}
+```
+
+
+
+
+
+The SQLWriter class encapsulates the logic of assembling SQL and writing data. None of the tables are created in advance; instead, when a table-does-not-exist error occurs, the tables are created in batch using the supertable as a template and the INSERT statement is then re-executed. For other errors, the SQL statement being executed is logged for troubleshooting and fault recovery. The class also checks whether a statement is about to exceed the maximum SQL length limit; if it gets close to maxSQLLength, it is executed immediately. To reduce the number of SQL executions, consider increasing maxSQLLength appropriately.
+
+SQLWriter
+
+```python
+{{#include docs/examples/python/sql_writer.py}}
+```
+
+
+
+**Execution Steps**
+
+
+
+Running the Python Sample Program
+
+1. Prerequisites
+
+   - TDengine client driver installed
+   - Python3 installed, version >= 3.8 recommended
+   - taospy installed
+
+2. Install faster-fifo as a replacement for Python's built-in multiprocessing.Queue
+
+ ```
+ pip3 install faster-fifo
+ ```
+
+3. Use the "view source" links above to copy the three files `fast_write_example.py`, `sql_writer.py`, and `mockdatasource.py`.
+
+4. Run the sample program
+
+ ```
+ python3 fast_write_example.py
+ ```
+
+   Below is the output of an actual run on a machine with a 16-core CPU, 64 GB RAM, and an SSD.
+
+ ```
+ root@vm85$ python3 fast_write_example.py 8 8
+ 2022-07-14 19:13:45,869 [root] - READ_TASK_COUNT=8, WRITE_TASK_COUNT=8, TABLE_COUNT=1000, QUEUE_SIZE=1000000, MAX_BATCH_SIZE=3000
+ 2022-07-14 19:13:48,882 [root] - WriteTask-0 started with pid 718347
+ 2022-07-14 19:13:48,883 [root] - WriteTask-1 started with pid 718348
+ 2022-07-14 19:13:48,884 [root] - WriteTask-2 started with pid 718349
+ 2022-07-14 19:13:48,884 [root] - WriteTask-3 started with pid 718350
+ 2022-07-14 19:13:48,885 [root] - WriteTask-4 started with pid 718351
+ 2022-07-14 19:13:48,885 [root] - WriteTask-5 started with pid 718352
+ 2022-07-14 19:13:48,886 [root] - WriteTask-6 started with pid 718353
+ 2022-07-14 19:13:48,886 [root] - WriteTask-7 started with pid 718354
+ 2022-07-14 19:13:48,887 [root] - ReadTask-0 started with pid 718355
+ 2022-07-14 19:13:48,888 [root] - ReadTask-1 started with pid 718356
+ 2022-07-14 19:13:48,889 [root] - ReadTask-2 started with pid 718357
+ 2022-07-14 19:13:48,889 [root] - ReadTask-3 started with pid 718358
+ 2022-07-14 19:13:48,890 [root] - ReadTask-4 started with pid 718359
+ 2022-07-14 19:13:48,891 [root] - ReadTask-5 started with pid 718361
+ 2022-07-14 19:13:48,892 [root] - ReadTask-6 started with pid 718364
+ 2022-07-14 19:13:48,893 [root] - ReadTask-7 started with pid 718365
+ 2022-07-14 19:13:56,042 [DataBaseMonitor] - count=6676310 speed=667631.0
+ 2022-07-14 19:14:06,196 [DataBaseMonitor] - count=20004310 speed=1332800.0
+ 2022-07-14 19:14:16,366 [DataBaseMonitor] - count=32290310 speed=1228600.0
+ 2022-07-14 19:14:26,527 [DataBaseMonitor] - count=44438310 speed=1214800.0
+ 2022-07-14 19:14:36,673 [DataBaseMonitor] - count=56608310 speed=1217000.0
+ 2022-07-14 19:14:46,834 [DataBaseMonitor] - count=68757310 speed=1214900.0
+ 2022-07-14 19:14:57,280 [DataBaseMonitor] - count=80992310 speed=1223500.0
+ 2022-07-14 19:15:07,689 [DataBaseMonitor] - count=93805310 speed=1281300.0
+ 2022-07-14 19:15:18,020 [DataBaseMonitor] - count=106111310 speed=1230600.0
+ 2022-07-14 19:15:28,356 [DataBaseMonitor] - count=118394310 speed=1228300.0
+ 2022-07-14 19:15:38,690 [DataBaseMonitor] - count=130742310 speed=1234800.0
+ 2022-07-14 19:15:49,000 [DataBaseMonitor] - count=143051310 speed=1230900.0
+ 2022-07-14 19:15:59,323 [DataBaseMonitor] - count=155276310 speed=1222500.0
+ 2022-07-14 19:16:09,649 [DataBaseMonitor] - count=167603310 speed=1232700.0
+ 2022-07-14 19:16:19,995 [DataBaseMonitor] - count=179976310 speed=1237300.0
+ ```
+
+
+
+:::note
+When connecting to TDengine from multiple processes with the Python connector, there is a limitation: connections cannot be created in the parent process; all connections must be created in child processes.
+If a connection is created in the parent process, any connection created afterwards in a child process will block forever. This is a known issue.
+
+:::
+
+
+
+
+
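The note above can be sketched as follows. This is a minimal illustration of the safe pattern, assuming only Python's standard multiprocessing module; `connect()` here is a stand-in for `taos.connect(...)` so the sketch stays self-contained, and in real use the connection call would only ever happen inside the child process.

```python
# Safe pattern for the multiprocessing limitation described above:
# never open a connection in the parent; open it in each child process.
from multiprocessing import Process, Queue

def connect():
    # Stand-in for taos.connect(host="localhost", ...) -- an assumption for
    # illustration; replace with a real connection in production code.
    return "connected"

def worker(task_id, results):
    conn = connect()  # the connection is created inside the child process
    results.put(task_id if conn == "connected" else None)

if __name__ == "__main__":
    results = Queue()
    procs = [Process(target=worker, args=(i, results)) for i in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(sorted(results.get() for _ in procs))  # [0, 1, 2, 3]
```

The point of the pattern is purely structural: nothing connection-related runs at module level or in the parent, so each child owns its connection for its whole lifetime.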
diff --git a/docs/zh/07-develop/03-insert-data/highvolume.webp b/docs/zh/07-develop/03-insert-data/highvolume.webp
new file mode 100644
index 0000000000..46dfc74ae3
Binary files /dev/null and b/docs/zh/07-develop/03-insert-data/highvolume.webp differ
diff --git a/docs/zh/07-develop/04-query-data/index.mdx b/docs/zh/07-develop/04-query-data/index.mdx
index c083c30c2c..92cb1906d9 100644
--- a/docs/zh/07-develop/04-query-data/index.mdx
+++ b/docs/zh/07-develop/04-query-data/index.mdx
@@ -52,7 +52,7 @@ Query OK, 2 row(s) in set (0.001100s)
### Example 1
-In the TAOS Shell, find the average voltage collected by all smart meters in California, grouped by location.
+In the TDengine CLI, find the average voltage collected by all smart meters in California, grouped by location.
```
taos> SELECT AVG(voltage), location FROM meters GROUP BY location;
@@ -65,7 +65,7 @@ Query OK, 2 rows in database (0.005995s)
### Example 2
-In the TAOS shell, find the number of records and the maximum current of all smart meters whose groupId is 2.
+In the TDengine CLI, find the number of records and the maximum current of all smart meters whose groupId is 2.
```
taos> SELECT count(*), max(current) FROM meters where groupId = 2;
diff --git a/docs/zh/10-deployment/01-deploy.md b/docs/zh/10-deployment/01-deploy.md
index 8d8a2eb6d8..03b4ce30f9 100644
--- a/docs/zh/10-deployment/01-deploy.md
+++ b/docs/zh/10-deployment/01-deploy.md
@@ -71,7 +71,7 @@ serverPort 6030
## Start the cluster
-Following the steps in "Get Started", start the first dnode, for example h1.taosdata.com, then run taos to start the taos shell and execute the command "SHOW DNODES" in it, as shown below:
+Following the steps in "Get Started", start the first dnode, for example h1.taosdata.com, then run taos to start the TDengine CLI and execute the command "SHOW DNODES" in it, as shown below:
```
taos> show dnodes;
@@ -115,7 +115,7 @@ SHOW DNODES;
Any dnode that is already online in the cluster can serve as the firstEp for nodes joining later.
The firstEp parameter takes effect only when a dnode joins the cluster for the first time; after joining, the dnode stores the latest mnode End Point list and no longer relies on this parameter.
-After that, the firstEp parameter in the configuration file is mainly used for client connections; for example, if the taos shell is started without arguments, it connects to the node specified by firstEp by default.
+After that, the firstEp parameter in the configuration file is mainly used for client connections; for example, if the TDengine CLI is started without arguments, it connects to the node specified by firstEp by default.
Two dnodes started without the firstEp parameter configured will run independently. In that case, neither dnode can be joined to the other to form a cluster, nor can two independent clusters be merged into a new one.
:::
diff --git a/docs/zh/10-deployment/03-k8s.md b/docs/zh/10-deployment/03-k8s.md
index 5d512700b6..0cae59657c 100644
--- a/docs/zh/10-deployment/03-k8s.md
+++ b/docs/zh/10-deployment/03-k8s.md
@@ -10,6 +10,7 @@ description: 利用 Kubernetes 部署 TDengine 集群的详细指南
To deploy and manage a TDengine cluster with Kubernetes, complete the following preparations.
+* This document applies to Kubernetes v1.5 and above
* This chapter and the next use tools such as minikube, kubectl, and helm for installation and deployment; install them in advance
* Kubernetes has been installed and deployed, is accessible, and can access or update the necessary container registries and other services
@@ -366,7 +367,7 @@ kubectl scale statefulsets tdengine --replicas=1
```
-All database operations in the taos shell will fail.
+All database operations in the TDengine CLI will fail.
```
taos> show dnodes;
diff --git a/docs/zh/12-taos-sql/03-table.md b/docs/zh/12-taos-sql/03-table.md
index a93b010c4c..9c33c45efc 100644
--- a/docs/zh/12-taos-sql/03-table.md
+++ b/docs/zh/12-taos-sql/03-table.md
@@ -10,27 +10,27 @@ description: 对表的各种管理操作
```sql
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definitionn] ...) [table_options]
-
+
CREATE TABLE create_subtable_clause
-
+
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definitionn] ...)
[TAGS (create_definition [, create_definitionn] ...)]
[table_options]
-
+
create_subtable_clause: {
create_subtable_clause [create_subtable_clause] ...
| [IF NOT EXISTS] [db_name.]tb_name USING [db_name.]stb_name [(tag_name [, tag_name] ...)] TAGS (tag_value [, tag_value] ...)
}
-
+
create_definition:
col_name column_definition
-
+
column_definition:
type_name [comment 'string_value']
-
+
table_options:
table_option ...
-
+
table_option: {
COMMENT 'string_value'
| WATERMARK duration[,duration]
@@ -54,12 +54,13 @@ table_option: {
Note that the content inside escape characters must be printable characters.
**Parameter description**
+
1. COMMENT: table comment. Available for supertables, subtables, and normal tables.
-2. WATERMARK: specifies the closing time of a window. The default value is 5 seconds, the minimum unit is milliseconds, and the range is 0 to 15 minutes; multiple values are separated by commas. Available only for supertables, and only when the database uses the RETENTIONS parameter.
-3. MAX_DELAY: controls the maximum delay for pushing computed results. The default value is the interval value (but not exceeding the maximum), the minimum unit is milliseconds, and the range is 1 millisecond to 15 minutes; multiple values are separated by commas. Note: setting MAX_DELAY too small is not recommended, as pushing results too frequently affects storage and query performance; use the default unless there is a special need. Available only for supertables, and only when the database uses the RETENTIONS parameter.
-4. ROLLUP: the aggregate function specified for rollup, providing down-sampled aggregate results at multiple levels. Available only for supertables, and only when the database uses the RETENTIONS parameter. It applies to all columns of the supertable except the TS column, and only one aggregate function can be defined. Supported aggregate functions: avg, sum, min, max, last, first.
-5. SMA: Small Materialized Aggregates, providing custom pre-computation based on data blocks. Pre-computation types include MAX, MIN, and SUM. Available for supertables and normal tables.
-6. TTL: Time to Live, a parameter specifying the lifetime of a table. If no data is written to the table for the duration of the TTL, TDengine automatically deletes the table. The TTL is approximate: the system does not guarantee deletion exactly on time, only that such a mechanism exists. The TTL unit is days, and the default is 0, meaning no limit. Note that TTL has higher priority than KEEP: when the TTL deletion condition is met, the table is deleted even if its data has existed for less than KEEP. Available only for subtables and normal tables.
+2. WATERMARK: specifies the closing time of a window. The default value is 5 seconds, the minimum unit is milliseconds, and the range is 0 to 15 minutes; multiple values are separated by commas. Available only for supertables, and only when the database uses the RETENTIONS parameter.
+3. MAX_DELAY: controls the maximum delay for pushing computed results. The default value is the interval value (but not exceeding the maximum), the minimum unit is milliseconds, and the range is 1 millisecond to 15 minutes; multiple values are separated by commas. Note: setting MAX_DELAY too small is not recommended, as pushing results too frequently affects storage and query performance; use the default unless there is a special need. Available only for supertables, and only when the database uses the RETENTIONS parameter.
+4. ROLLUP: the aggregate function specified for rollup, providing down-sampled aggregate results at multiple levels. Available only for supertables, and only when the database uses the RETENTIONS parameter. It applies to all columns of the supertable except the TS column, and only one aggregate function can be defined. Supported aggregate functions: avg, sum, min, max, last, first.
+5. SMA: Small Materialized Aggregates, providing custom pre-computation based on data blocks. Pre-computation types include MAX, MIN, and SUM. Available for supertables and normal tables.
+6. TTL: Time to Live, a parameter specifying the lifetime of a table. If this parameter is specified when the table is created, TDengine automatically deletes the table once it has existed longer than the TTL. The TTL is approximate: the system does not guarantee deletion exactly on time, only that such a mechanism exists and that the table will eventually be deleted. The TTL unit is days, and the default is 0, meaning no limit; the expiry time is the table creation time plus the TTL.
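As a hedged illustration of the TTL table option described above (the table and column names are invented for the sketch):

```sql
-- A normal table that TDengine may drop roughly 90 days after creation
CREATE TABLE IF NOT EXISTS d1001 (ts TIMESTAMP, current FLOAT) TTL 90;
```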
## Create subtables
@@ -89,7 +90,7 @@ CREATE TABLE [IF NOT EXISTS] tb_name1 USING stb_name TAGS (tag_value1, ...) [IF
```sql
ALTER TABLE [db_name.]tb_name alter_table_clause
-
+
alter_table_clause: {
alter_table_options
| ADD COLUMN col_name column_type
@@ -97,10 +98,10 @@ alter_table_clause: {
| MODIFY COLUMN col_name column_type
| RENAME COLUMN old_col_name new_col_name
}
-
+
alter_table_options:
alter_table_option ...
-
+
alter_table_option: {
TTL value
| COMMENT 'string_value'
@@ -110,6 +111,7 @@ alter_table_option: {
**Usage notes**
The following modifications can be made to normal tables
+
1. ADD COLUMN: add a column.
2. DROP COLUMN: drop a column.
3. MODIFY COLUMN: modify the column definition; if the data column has a variable-length type, this command can be used to increase its width, but never to decrease it.
@@ -143,15 +145,15 @@ ALTER TABLE tb_name RENAME COLUMN old_col_name new_col_name
```sql
ALTER TABLE [db_name.]tb_name alter_table_clause
-
+
alter_table_clause: {
alter_table_options
| SET TAG tag_name = new_tag_value
}
-
+
alter_table_options:
alter_table_option ...
-
+
alter_table_option: {
TTL value
| COMMENT 'string_value'
@@ -159,6 +161,7 @@ alter_table_option: {
```
**Usage notes**
+
1. Apart from changing tag values, modifications to a subtable's columns and tags must be performed through its supertable.
### Modify subtable tag values
@@ -169,7 +172,7 @@ ALTER TABLE tb_name SET TAG tag_name=new_tag_value;
## Drop tables
-One or more normal tables or subtables can be dropped in a single SQL statement.
+One or more normal tables or subtables can be dropped in a single SQL statement.
```sql
DROP TABLE [IF EXISTS] [db_name.]tb_name [, [IF EXISTS] [db_name.]tb_name] ...
@@ -179,7 +182,7 @@ DROP TABLE [IF EXISTS] [db_name.]tb_name [, [IF EXISTS] [db_name.]tb_name] ...
### Show all tables
-The following SQL statement lists the names of all tables in the current database.
+The following SQL statement lists the names of all tables in the current database.
```sql
SHOW TABLES [LIKE tb_name_wildchar];
diff --git a/docs/zh/12-taos-sql/10-function.md b/docs/zh/12-taos-sql/10-function.md
index af31a1d4bd..9f999181c4 100644
--- a/docs/zh/12-taos-sql/10-function.md
+++ b/docs/zh/12-taos-sql/10-function.md
@@ -1167,7 +1167,7 @@ SELECT stateDuration(field_name, oper, val, unit) FROM { tb_name | stb_name } [W
**Parameter range**:
-- oper : "LT" (less than), "GT" (greater than), "LE" (less than or equal to), "GE" (greater than or equal to), "NE" (not equal to), "EQ" (equal to); case-insensitive.
+- oper : `'LT'` (less than), `'GT'` (greater than), `'LE'` (less than or equal to), `'GE'` (greater than or equal to), `'NE'` (not equal to), `'EQ'` (equal to); case-insensitive, but the value must be enclosed in `''`.
- val : numeric
- unit : the unit of the time duration; allowed units: 1b (nanoseconds), 1u (microseconds), 1a (milliseconds), 1s (seconds), 1m (minutes), 1h (hours), 1d (days), 1w (weeks). If omitted, it defaults to the precision of the current database.
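A hedged usage sketch of the quoted-operator form above; the `meters` table and `voltage` column follow the smart-meter examples used elsewhere in these docs:

```sql
-- total duration (unit 1s, i.e. seconds) for which voltage stayed above 220
SELECT stateDuration(voltage, 'GT', 220, 1s) FROM meters;
```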
diff --git a/docs/zh/12-taos-sql/19-limit.md b/docs/zh/12-taos-sql/19-limit.md
index 473bb29c1c..0dbe00f800 100644
--- a/docs/zh/12-taos-sql/19-limit.md
+++ b/docs/zh/12-taos-sql/19-limit.md
@@ -31,7 +31,7 @@ description: 合法字符集和命名中的限制规则
- A maximum of 4096 columns is allowed; at least 2 columns are required, and the first column must be a timestamp.
- The maximum length of a tag name is 64
- A maximum of 128 tags is allowed; at least 1 tag is required, and the total length of the tag values in a table must not exceed 16KB
-- The maximum length of a SQL statement is 1048576 characters; it can also be modified via the client configuration parameter maxSQLLength, range 65480 ~ 1048576
+- The maximum length of a SQL statement is 1048576 characters
- A SELECT query result may return at most 4096 columns (function calls in the statement may also occupy some column space); when the limit is exceeded, explicitly specify fewer result columns to avoid execution errors
- The number of databases, supertables, and tables is not limited by the system, only by system resources
- The number of database replicas can only be set to 1 or 3
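The statement-length limit above can also be guarded client-side before a statement is sent; a small sketch (the helper name is invented for illustration, the limit value comes from the list):

```python
# Client-side guard for the 1048576-character SQL statement limit listed above.
MAX_SQL_LEN = 1048576

def check_sql_len(sql: str) -> int:
    # Hypothetical helper: fail fast instead of letting the server reject it.
    if len(sql) > MAX_SQL_LEN:
        raise ValueError(f"SQL statement too long: {len(sql)} > {MAX_SQL_LEN}")
    return len(sql)

print(check_sql_len("SELECT count(*) FROM meters"))  # 27
```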
diff --git a/docs/zh/12-taos-sql/24-show.md b/docs/zh/12-taos-sql/24-show.md
index 14b51fb4c1..b4aafdaa0a 100644
--- a/docs/zh/12-taos-sql/24-show.md
+++ b/docs/zh/12-taos-sql/24-show.md
@@ -195,7 +195,7 @@ SHOW STREAMS;
SHOW SUBSCRIPTIONS;
```
-Show all subscriptions in the current database
+Show all subscriptions in the system
## SHOW TABLES
diff --git a/docs/zh/14-reference/11-docker/index.md b/docs/zh/14-reference/11-docker/index.md
index 743fc2d32f..d712e9aba8 100644
--- a/docs/zh/14-reference/11-docker/index.md
+++ b/docs/zh/14-reference/11-docker/index.md
@@ -32,7 +32,7 @@ taos> show databases;
Query OK, 2 rows in database (0.033802s)
```
-Because the TDengine server running in a container uses the container's hostname to establish connections, accessing TDengine inside the container from outside with the taos shell or various connectors (such as JDBC-JNI) is relatively complicated. So the approach above is the simplest way to access the TDengine service in a container and suits simple scenarios. For more complex scenarios where you want to use the taos shell or various connectors to access the TDengine service in the container, see the next section.
+Because the TDengine server running in a container uses the container's hostname to establish connections, accessing TDengine inside the container from outside with the TDengine CLI or various connectors (such as JDBC-JNI) is relatively complicated. So the approach above is the simplest way to access the TDengine service in a container and suits simple scenarios. For more complex scenarios where you want to use the TDengine CLI or various connectors to access the TDengine service in the container, see the next section.
## Start TDengine on the host network
@@ -75,7 +75,7 @@ docker run -d \
echo 127.0.0.1 tdengine |sudo tee -a /etc/hosts
```
-Finally, the TDengine service can be accessed from the taos shell or any connector with "tdengine" as the server address.
+Finally, the TDengine service can be accessed from the TDengine CLI or any connector with "tdengine" as the server address.
```shell
taos -h tdengine -P 6030
@@ -354,7 +354,7 @@ test-docker_td-2_1 /tini -- /usr/bin/entrypoi ... Up
test-docker_td-3_1 /tini -- /usr/bin/entrypoi ... Up
```
-4. Use the taos shell to view dnodes
+4. Use the TDengine CLI to view dnodes
```shell
diff --git a/docs/zh/17-operation/01-pkg-install.md b/docs/zh/17-operation/01-pkg-install.md
index 671dc00cee..6d93c1697b 100644
--- a/docs/zh/17-operation/01-pkg-install.md
+++ b/docs/zh/17-operation/01-pkg-install.md
@@ -47,7 +47,7 @@ lrwxrwxrwx 1 root root 13 Feb 22 09:34 log -> /var/log/taos/
-The uninstall command is as follows:
+The TDengine uninstall command is as follows:
```
$ sudo apt-get remove tdengine
@@ -65,10 +65,26 @@ TDengine is removed successfully!
```
+The taosTools uninstall command is as follows:
+
+```
+$ sudo apt remove taostools
+Reading package lists... Done
+Building dependency tree
+Reading state information... Done
+The following packages will be REMOVED:
+ taostools
+0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
+After this operation, 68.3 MB disk space will be freed.
+Do you want to continue? [Y/n]
+(Reading database ... 147973 files and directories currently installed.)
+Removing taostools (2.1.2) ...
+```
+
-The uninstall command is as follows:
+The TDengine uninstall command is as follows:
```
$ sudo dpkg -r tdengine
@@ -78,28 +94,52 @@ TDengine is removed successfully!
```
+The taosTools uninstall command is as follows:
+
+```
+$ sudo dpkg -r taostools
+(Reading database ... 147973 files and directories currently installed.)
+Removing taostools (2.1.2) ...
+```
+
-The uninstall command is as follows:
+The command to uninstall TDengine is as follows:
```
$ sudo rpm -e tdengine
TDengine is removed successfully!
```
+The command to uninstall taosTools is as follows:
+
+```
+sudo rpm -e taostools
+taosToole is removed successfully!
+```
+
-The uninstall command is as follows:
+The command to uninstall TDengine is as follows:
```
$ rmtaos
TDengine is removed successfully!
```
+The command to uninstall taosTools is as follows:
+
+```
+$ rmtaostools
+Start to uninstall taos tools ...
+
+taos tools is uninstalled successfully!
+```
+
In the C:\TDengine directory, uninstall TDengine by running the unins000.exe uninstaller.
diff --git a/docs/zh/17-operation/08-export.md b/docs/zh/17-operation/08-export.md
index ecc3b2f110..44247e28bd 100644
--- a/docs/zh/17-operation/08-export.md
+++ b/docs/zh/17-operation/08-export.md
@@ -7,7 +7,7 @@ description: 如何导出 TDengine 中的数据
## Export CSV files by table
-To export the data of a table or an STable, run the following in the taos shell:
+To export the data of a table or an STable, run the following in the TDengine CLI:
```sql
select * from <tb_name> >> data.csv;
diff --git a/docs/zh/27-train-faq/01-faq.md b/docs/zh/27-train-faq/01-faq.md
index 2fd9dff80b..0a46db4a28 100644
--- a/docs/zh/27-train-faq/01-faq.md
+++ b/docs/zh/27-train-faq/01-faq.md
@@ -116,7 +116,7 @@ charset UTF-8
### 9. Table names are not displayed in full
-Because the taos shell has limited display width in the terminal, long table names may be truncated; operating on a truncated table name causes a "Table does not exist" error. The solution is to modify the maxBinaryDisplayWidth setting in taos.cfg, enter the command set max_binary_display_width 100 directly, or append \G to the statement to change how the result is displayed.
+Because the TDengine CLI has limited display width in the terminal, long table names may be truncated; operating on a truncated table name causes a "Table does not exist" error. The solution is to modify the maxBinaryDisplayWidth setting in taos.cfg, enter the command set max_binary_display_width 100 directly, or append \G to the statement to change how the result is displayed.

### 10. How to migrate data?
diff --git a/examples/JDBC/JDBCDemo/README-jdbc-windows.md b/examples/JDBC/JDBCDemo/README-jdbc-windows.md
index 17c5c8df00..5a781f40f7 100644
--- a/examples/JDBC/JDBCDemo/README-jdbc-windows.md
+++ b/examples/JDBC/JDBCDemo/README-jdbc-windows.md
@@ -129,7 +129,7 @@ https://www.taosdata.com/cn/all-downloads/
192.168.236.136 td01
```
-After configuration, connect to the server from the command line using the taos shell
+After configuration, connect to the server from the command line using the TDengine CLI
```shell
C:\TDengine>taos -h td01
diff --git a/examples/c/tmq_taosx.c b/examples/c/tmq_taosx.c
index d0def44269..491eda1ddb 100644
--- a/examples/c/tmq_taosx.c
+++ b/examples/c/tmq_taosx.c
@@ -163,6 +163,13 @@ int32_t init_env() {
}
taos_free_result(pRes);
+ pRes = taos_query(pConn, "create table if not exists ct4 using st1(t3) tags('ct4')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create child table ct4, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
pRes = taos_query(pConn, "insert into ct3 values(1626006833600, 5, 6, 'c') ct1 values(1626006833601, 2, 3, 'sds') (1626006833602, 4, 5, 'ddd') ct0 values(1626006833602, 4, 3, 'hwj') ct1 values(now+5s, 23, 32, 's21ds')");
if (taos_errno(pRes) != 0) {
printf("failed to insert into ct3, reason:%s\n", taos_errstr(pRes));
@@ -379,6 +386,8 @@ tmq_t* build_consumer() {
tmq_conf_set(conf, "td.connect.pass", "taosdata");
tmq_conf_set(conf, "msg.with.table.name", "true");
tmq_conf_set(conf, "enable.auto.commit", "true");
+ tmq_conf_set(conf, "experimental.snapshot.enable", "true");
+
/*tmq_conf_set(conf, "experimental.snapshot.enable", "true");*/
@@ -406,7 +415,7 @@ void basic_consume_loop(tmq_t* tmq, tmq_list_t* topics) {
}
int32_t cnt = 0;
while (running) {
- TAOS_RES* tmqmessage = tmq_consumer_poll(tmq, -1);
+ TAOS_RES* tmqmessage = tmq_consumer_poll(tmq, 1000);
if (tmqmessage) {
cnt++;
msg_process(tmqmessage);
diff --git a/examples/nodejs/README-win.md b/examples/nodejs/README-win.md
index 75fec69413..e496be2f87 100644
--- a/examples/nodejs/README-win.md
+++ b/examples/nodejs/README-win.md
@@ -35,7 +35,7 @@ Python 2.7.18
Download: https://www.taosdata.com/cn/all-downloads/; choose a suitable windows-client to download (the client version should match the server version as closely as possible)
-Use the client's taos shell to connect to the server
+Use the client's TDengine CLI to connect to the server
```shell
>taos -h node5
diff --git a/include/common/tcommon.h b/include/common/tcommon.h
index a071516fbf..03672f96f3 100644
--- a/include/common/tcommon.h
+++ b/include/common/tcommon.h
@@ -184,6 +184,7 @@ typedef struct SQueryTableDataCond {
STimeWindow twindows;
int64_t startVersion;
int64_t endVersion;
+ int64_t schemaVersion;
} SQueryTableDataCond;
int32_t tEncodeDataBlock(void** buf, const SSDataBlock* pBlock);
diff --git a/include/common/tdataformat.h b/include/common/tdataformat.h
index af7c88acde..df16f4f0ab 100644
--- a/include/common/tdataformat.h
+++ b/include/common/tdataformat.h
@@ -96,6 +96,7 @@ char *tTagValToData(const STagVal *pTagVal, bool isJson);
int32_t tEncodeTag(SEncoder *pEncoder, const STag *pTag);
int32_t tDecodeTag(SDecoder *pDecoder, STag **ppTag);
int32_t tTagToValArray(const STag *pTag, SArray **ppArray);
+void tTagSetCid(const STag *pTag, int16_t iTag, int16_t cid);
void debugPrintSTag(STag *pTag, const char *tag, int32_t ln); // TODO: remove
int32_t parseJsontoTagData(const char *json, SArray *pTagVals, STag **ppTag, void *pMsgBuf);
diff --git a/include/common/tmsg.h b/include/common/tmsg.h
index c0ea5e79c7..681094471a 100644
--- a/include/common/tmsg.h
+++ b/include/common/tmsg.h
@@ -2617,7 +2617,7 @@ enum {
typedef struct {
int8_t type;
union {
- // snapshot data
+ // snapshot
struct {
int64_t uid;
int64_t ts;
@@ -2936,33 +2936,14 @@ static FORCE_INLINE void tDeleteSMqSubTopicEp(SMqSubTopicEp* pSubTopicEp) {
typedef struct {
SMqRspHead head;
- int64_t reqOffset;
- int64_t rspOffset;
- STqOffsetVal reqOffsetNew;
- STqOffsetVal rspOffsetNew;
+ STqOffsetVal rspOffset;
int16_t resMsgType;
int32_t metaRspLen;
void* metaRsp;
} SMqMetaRsp;
-static FORCE_INLINE int32_t tEncodeSMqMetaRsp(void** buf, const SMqMetaRsp* pRsp) {
- int32_t tlen = 0;
- tlen += taosEncodeFixedI64(buf, pRsp->reqOffset);
- tlen += taosEncodeFixedI64(buf, pRsp->rspOffset);
- tlen += taosEncodeFixedI16(buf, pRsp->resMsgType);
- tlen += taosEncodeFixedI32(buf, pRsp->metaRspLen);
- tlen += taosEncodeBinary(buf, pRsp->metaRsp, pRsp->metaRspLen);
- return tlen;
-}
-
-static FORCE_INLINE void* tDecodeSMqMetaRsp(const void* buf, SMqMetaRsp* pRsp) {
- buf = taosDecodeFixedI64(buf, &pRsp->reqOffset);
- buf = taosDecodeFixedI64(buf, &pRsp->rspOffset);
- buf = taosDecodeFixedI16(buf, &pRsp->resMsgType);
- buf = taosDecodeFixedI32(buf, &pRsp->metaRspLen);
- buf = taosDecodeBinary(buf, &pRsp->metaRsp, pRsp->metaRspLen);
- return (void*)buf;
-}
+int32_t tEncodeSMqMetaRsp(SEncoder* pEncoder, const SMqMetaRsp* pRsp);
+int32_t tDecodeSMqMetaRsp(SDecoder* pDecoder, SMqMetaRsp* pRsp);
typedef struct {
SMqRspHead head;
diff --git a/include/libs/executor/executor.h b/include/libs/executor/executor.h
index 1ce88905c2..25a6221fcb 100644
--- a/include/libs/executor/executor.h
+++ b/include/libs/executor/executor.h
@@ -41,6 +41,9 @@ typedef struct {
bool initTableReader;
bool initTqReader;
int32_t numOfVgroups;
+
+ void* sContext; // SSnapContext*
+
void* pStateBackend;
} SReadHandle;
@@ -181,11 +184,17 @@ int32_t qGetStreamScanStatus(qTaskInfo_t tinfo, uint64_t* uid, int64_t* ts);
int32_t qStreamPrepareTsdbScan(qTaskInfo_t tinfo, uint64_t uid, int64_t ts);
-int32_t qStreamPrepareScan(qTaskInfo_t tinfo, const STqOffsetVal* pOffset);
+int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subType);
int32_t qStreamExtractOffset(qTaskInfo_t tinfo, STqOffsetVal* pOffset);
-void* qStreamExtractMetaMsg(qTaskInfo_t tinfo);
+SMqMetaRsp* qStreamExtractMetaMsg(qTaskInfo_t tinfo);
+
+int64_t qStreamExtractPrepareUid(qTaskInfo_t tinfo);
+
+const SSchemaWrapper* qExtractSchemaFromTask(qTaskInfo_t tinfo);
+
+const char* qExtractTbnameFromTask(qTaskInfo_t tinfo);
void* qExtractReaderFromStreamScanner(void* scanner);
diff --git a/include/libs/function/function.h b/include/libs/function/function.h
index d5da306fd2..c8db01625e 100644
--- a/include/libs/function/function.h
+++ b/include/libs/function/function.h
@@ -139,9 +139,8 @@ typedef struct SqlFunctionCtx {
struct SExprInfo *pExpr;
struct SDiskbasedBuf *pBuf;
struct SSDataBlock *pSrcBlock;
- struct SSDataBlock *pDstBlock; // used by indifinite rows function to set selectivity
+ struct SSDataBlock *pDstBlock; // used by indefinite rows function to set selectivity
int32_t curBufPage;
- bool increase;
bool isStream;
char udfName[TSDB_FUNC_NAME_LEN];
diff --git a/include/libs/function/taosudf.h b/include/libs/function/taosudf.h
index 5e84b87a81..2b2063e3f6 100644
--- a/include/libs/function/taosudf.h
+++ b/include/libs/function/taosudf.h
@@ -256,8 +256,9 @@ static FORCE_INLINE int32_t udfColDataSet(SUdfColumn* pColumn, uint32_t currentR
typedef int32_t (*TUdfScalarProcFunc)(SUdfDataBlock* block, SUdfColumn *resultCol);
typedef int32_t (*TUdfAggStartFunc)(SUdfInterBuf *buf);
-typedef int32_t (*TUdfAggProcessFunc)(SUdfDataBlock* block, SUdfInterBuf *interBuf, SUdfInterBuf *newInterBuf);
-typedef int32_t (*TUdfAggFinishFunc)(SUdfInterBuf* buf, SUdfInterBuf *resultData);
+typedef int32_t (*TUdfAggProcessFunc)(SUdfDataBlock *block, SUdfInterBuf *interBuf, SUdfInterBuf *newInterBuf);
+typedef int32_t (*TUdfAggMergeFunc)(SUdfInterBuf *inputBuf1, SUdfInterBuf *inputBuf2, SUdfInterBuf *outputBuf);
+typedef int32_t (*TUdfAggFinishFunc)(SUdfInterBuf *buf, SUdfInterBuf *resultData);
#ifdef __cplusplus
}
diff --git a/include/os/os.h b/include/os/os.h
index b036002f8a..71966061a1 100644
--- a/include/os/os.h
+++ b/include/os/os.h
@@ -79,6 +79,7 @@ extern "C" {
#include
#include
+#include "taoserror.h"
#include "osAtomic.h"
#include "osDef.h"
#include "osDir.h"
diff --git a/include/os/osSemaphore.h b/include/os/osSemaphore.h
index 2a3a2e64b6..e52da96f01 100644
--- a/include/os/osSemaphore.h
+++ b/include/os/osSemaphore.h
@@ -1,74 +1,74 @@
-/*
- * Copyright (c) 2019 TAOS Data, Inc.
- *
- * This program is free software: you can use, redistribute, and/or modify
- * it under the terms of the GNU Affero General Public License, version 3
- * or later ("AGPL"), as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.
- *
- * You should have received a copy of the GNU Affero General Public License
-* along with this program. If not, see <http://www.gnu.org/licenses/>.
- */
-
-#ifndef _TD_OS_SEMPHONE_H_
-#define _TD_OS_SEMPHONE_H_
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#include <semaphore.h>
-
-#if defined(_TD_DARWIN_64)
-#include <dispatch/dispatch.h>
-// typedef struct tsem_s *tsem_t;
-typedef dispatch_semaphore_t tsem_t;
-
-int tsem_init(tsem_t *sem, int pshared, unsigned int value);
-int tsem_wait(tsem_t *sem);
-int tsem_timewait(tsem_t *sim, int64_t nanosecs);
-int tsem_post(tsem_t *sem);
-int tsem_destroy(tsem_t *sem);
-
-#else
-
-#define tsem_t sem_t
-#define tsem_init sem_init
-int tsem_wait(tsem_t *sem);
-int tsem_timewait(tsem_t *sim, int64_t nanosecs);
-#define tsem_post sem_post
-#define tsem_destroy sem_destroy
-
-#endif
-
-#if defined(_TD_DARWIN_64)
-// #define TdThreadRwlock TdThreadMutex
-// #define taosThreadRwlockInit(lock, NULL) taosThreadMutexInit(lock, NULL)
-// #define taosThreadRwlockDestroy(lock) taosThreadMutexDestroy(lock)
-// #define taosThreadRwlockWrlock(lock) taosThreadMutexLock(lock)
-// #define taosThreadRwlockRdlock(lock) taosThreadMutexLock(lock)
-// #define taosThreadRwlockUnlock(lock) taosThreadMutexUnlock(lock)
-
-// #define TdThreadSpinlock TdThreadMutex
-// #define taosThreadSpinInit(lock, NULL) taosThreadMutexInit(lock, NULL)
-// #define taosThreadSpinDestroy(lock) taosThreadMutexDestroy(lock)
-// #define taosThreadSpinLock(lock) taosThreadMutexLock(lock)
-// #define taosThreadSpinUnlock(lock) taosThreadMutexUnlock(lock)
-#endif
-
-bool taosCheckPthreadValid(TdThread thread);
-int64_t taosGetSelfPthreadId();
-int64_t taosGetPthreadId(TdThread thread);
-void taosResetPthread(TdThread *thread);
-bool taosComparePthread(TdThread first, TdThread second);
-int32_t taosGetPId();
-int32_t taosGetAppName(char *name, int32_t *len);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /*_TD_OS_SEMPHONE_H_*/
+/*
+ * Copyright (c) 2019 TAOS Data, Inc.
+ *
+ * This program is free software: you can use, redistribute, and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3
+ * or later ("AGPL"), as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+* along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _TD_OS_SEMPHONE_H_
+#define _TD_OS_SEMPHONE_H_
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <semaphore.h>
+
+#if defined(_TD_DARWIN_64)
+#include <dispatch/dispatch.h>
+// typedef struct tsem_s *tsem_t;
+typedef dispatch_semaphore_t tsem_t;
+
+int tsem_init(tsem_t *sem, int pshared, unsigned int value);
+int tsem_wait(tsem_t *sem);
+int tsem_timewait(tsem_t *sim, int64_t nanosecs);
+int tsem_post(tsem_t *sem);
+int tsem_destroy(tsem_t *sem);
+
+#else
+
+#define tsem_t sem_t
+#define tsem_init sem_init
+int tsem_wait(tsem_t *sem);
+int tsem_timewait(tsem_t *sim, int64_t nanosecs);
+#define tsem_post sem_post
+#define tsem_destroy sem_destroy
+
+#endif
+
+#if defined(_TD_DARWIN_64)
+// #define TdThreadRwlock TdThreadMutex
+// #define taosThreadRwlockInit(lock, NULL) taosThreadMutexInit(lock, NULL)
+// #define taosThreadRwlockDestroy(lock) taosThreadMutexDestroy(lock)
+// #define taosThreadRwlockWrlock(lock) taosThreadMutexLock(lock)
+// #define taosThreadRwlockRdlock(lock) taosThreadMutexLock(lock)
+// #define taosThreadRwlockUnlock(lock) taosThreadMutexUnlock(lock)
+
+// #define TdThreadSpinlock TdThreadMutex
+// #define taosThreadSpinInit(lock, NULL) taosThreadMutexInit(lock, NULL)
+// #define taosThreadSpinDestroy(lock) taosThreadMutexDestroy(lock)
+// #define taosThreadSpinLock(lock) taosThreadMutexLock(lock)
+// #define taosThreadSpinUnlock(lock) taosThreadMutexUnlock(lock)
+#endif
+
+bool taosCheckPthreadValid(TdThread thread);
+int64_t taosGetSelfPthreadId();
+int64_t taosGetPthreadId(TdThread thread);
+void taosResetPthread(TdThread *thread);
+bool taosComparePthread(TdThread first, TdThread second);
+int32_t taosGetPId();
+int32_t taosGetAppName(char *name, int32_t *len);
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /*_TD_OS_SEMPHONE_H_*/
diff --git a/include/util/tpagedbuf.h b/include/util/tpagedbuf.h
index ef266068cb..57a489c0dd 100644
--- a/include/util/tpagedbuf.h
+++ b/include/util/tpagedbuf.h
@@ -67,10 +67,9 @@ void* getNewBufPage(SDiskbasedBuf* pBuf, int32_t groupId, int32_t* pageId);
/**
*
* @param pBuf
- * @param groupId
* @return
*/
-SIDList getDataBufPagesIdList(SDiskbasedBuf* pBuf, int32_t groupId);
+SIDList getDataBufPagesIdList(SDiskbasedBuf* pBuf);
/**
* get the specified buffer page by id
@@ -101,13 +100,6 @@ void releaseBufPageInfo(SDiskbasedBuf* pBuf, struct SPageInfo* pi);
*/
size_t getTotalBufSize(const SDiskbasedBuf* pBuf);
-/**
- * get the number of groups in the result buffer
- * @param pBuf
- * @return
- */
-size_t getNumOfBufGroupId(const SDiskbasedBuf* pBuf);
-
/**
* destroy result buffer
* @param pBuf
diff --git a/packaging/MPtestJenkinsfile b/packaging/MPtestJenkinsfile
new file mode 100644
index 0000000000..1e2e69a977
--- /dev/null
+++ b/packaging/MPtestJenkinsfile
@@ -0,0 +1,200 @@
+def sync_source(branch_name) {
+ sh '''
+ hostname
+ env
+ echo ''' + branch_name + '''
+ '''
+ sh '''
+ cd ${TDINTERNAL_ROOT_DIR}
+ git reset --hard
+ git fetch || git fetch
+ git checkout ''' + branch_name + ''' -f
+ git branch
+ git pull || git pull
+ git log | head -n 20
+ cd ${TDENGINE_ROOT_DIR}
+ git reset --hard
+ git fetch || git fetch
+ git checkout ''' + branch_name + ''' -f
+ git branch
+ git pull || git pull
+ git log | head -n 20
+ git submodule update --init --recursive
+ '''
+ return 1
+}
+def run_test() {
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+
+ '''
+ sh '''
+ export LD_LIBRARY_PATH=${TDINTERNAL_ROOT_DIR}/debug/build/lib
+ ./fulltest.sh
+ '''
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/tests
+ ./test-all.sh b1fq
+ '''
+}
+def build_run() {
+ sync_source("${BRANCH_NAME}")
+}
+pipeline {
+ agent none
+ parameters {
+ string (
+ name:'version',
+ defaultValue:'3.0.0.1',
+            description: 'release version number, e.g. 3.0.0.1 or 3.0.0.'
+ )
+ string (
+ name:'baseVersion',
+ defaultValue:'3.0.0.1',
+            description: 'The baseVersion number is generally not modified. Currently it is 3.0.0.1'
+ )
+ }
+ environment{
+ WORK_DIR = '/var/lib/jenkins/workspace'
+ TDINTERNAL_ROOT_DIR = '/var/lib/jenkins/workspace/TDinternal'
+ TDENGINE_ROOT_DIR = '/var/lib/jenkins/workspace/TDinternal/community'
+ BRANCH_NAME = '3.0'
+
+ TD_SERVER_TAR = "TDengine-server-${version}-Linux-x64.tar.gz"
+ BASE_TD_SERVER_TAR = "TDengine-server-${baseVersion}-arm64-x64.tar.gz"
+
+ TD_SERVER_ARM_TAR = "TDengine-server-${version}-Linux-arm64.tar.gz"
+ BASE_TD_SERVER_ARM_TAR = "TDengine-server-${baseVersion}-Linux-arm64.tar.gz"
+
+ TD_SERVER_LITE_TAR = "TDengine-server-${version}-Linux-x64-Lite.tar.gz"
+ BASE_TD_SERVER_LITE_TAR = "TDengine-server-${baseVersion}-Linux-x64-Lite.tar.gz"
+
+ TD_CLIENT_TAR = "TDengine-client-${version}-Linux-x64.tar.gz"
+ BASE_TD_CLIENT_TAR = "TDengine-client-${baseVersion}-arm64-x64.tar.gz"
+
+ TD_CLIENT_ARM_TAR = "TDengine-client-${version}-Linux-arm64.tar.gz"
+ BASE_TD_CLIENT_ARM_TAR = "TDengine-client-${baseVersion}-Linux-arm64.tar.gz"
+
+ TD_CLIENT_LITE_TAR = "TDengine-client-${version}-Linux-x64-Lite.tar.gz"
+ BASE_TD_CLIENT_LITE_TAR = "TDengine-client-${baseVersion}-Linux-x64-Lite.tar.gz"
+
+ TD_SERVER_RPM = "TDengine-server-${version}-Linux-x64.rpm"
+
+ TD_SERVER_DEB = "TDengine-server-${version}-Linux-x64.deb"
+
+ TD_SERVER_EXE = "TDengine-server-${version}-Windows-x64.exe"
+
+ TD_CLIENT_EXE = "TDengine-client-${version}-Windows-x64.exe"
+
+
+ }
+ stages {
+ stage ('RUN') {
+            stage('get check package scripts'){
+ agent{label 'ubuntu18'}
+ steps {
+ catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
+ script{
+ sync_source("${BRANCH_NAME}")
+ }
+
+ }
+ }
+ }
+ parallel {
+ stage('ubuntu16') {
+ agent{label " ubuntu16 "}
+ steps {
+ timeout(time: 3, unit: 'MINUTES'){
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ rmtaos
+ '''
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ '''
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_DEB} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ '''
+ }
+ }
+ }
+ stage('ubuntu18') {
+ agent{label " ubuntu18 "}
+ steps {
+ timeout(time: 3, unit: 'MINUTES'){
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ rmtaos
+ '''
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ '''
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_DEB} ${version} ${BASE_TD_SERVER_DEB} ${baseVersion} server
+ python3 checkPackageRuning.py
+ '''
+ }
+ }
+ }
+ stage('centos7') {
+ agent{label " centos7_9 "}
+ steps {
+ timeout(time: 240, unit: 'MINUTES'){
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ rmtaos
+ '''
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ '''
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_RPM} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ '''
+ }
+ }
+ }
+ stage('centos8') {
+ agent{label " centos8_3 "}
+ steps {
+ timeout(time: 240, unit: 'MINUTES'){
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ rmtaos
+ '''
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ '''
+ sh '''
+ cd ${TDENGINE_ROOT_DIR}/packaging
+ bash testpackage.sh ${TD_SERVER_RPM} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
+ python3 checkPackageRuning.py
+ '''
+ }
+ }
+ }
+ }
+ }
+ }
+}
\ No newline at end of file
diff --git a/packaging/cfg/taos.cfg b/packaging/cfg/taos.cfg
index aae2e7c856..87f465fdb9 100644
--- a/packaging/cfg/taos.cfg
+++ b/packaging/cfg/taos.cfg
@@ -38,7 +38,7 @@
# The interval of dnode reporting status to mnode
# statusInterval 1
-# The interval for taos shell to send heartbeat to mnode
+# The interval for TDengine CLI to send heartbeat to mnode
# shellActivityTimer 3
# The minimum sliding window time, milli-second
diff --git a/packaging/checkPackageRuning.py b/packaging/checkPackageRuning.py
new file mode 100755
index 0000000000..e53cc3bdbc
--- /dev/null
+++ b/packaging/checkPackageRuning.py
@@ -0,0 +1,101 @@
+#!/usr/bin/python
+###################################################################
+# Copyright (c) 2016 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+# install pip
+# pip install src/connector/python/
+
+# -*- coding: utf-8 -*-
+import sys, os
+import getopt
+import subprocess
+import time
+
+# install taospy
+
+out = subprocess.getoutput("pip3 show taospy | grep Version | awk -F ':' '{print $2}'")
+print(out)
+if out == "":
+    os.system("pip3 install git+https://github.com/taosdata/taos-connector-python.git")
+    print("installed taos python connector")
+
+
+
+# start taosd prepare
+os.system("rm -rf /var/lib/taos/*")
+os.system("systemctl restart taosd ")
+
+# wait a moment, at least 5 seconds
+time.sleep(5)
+
+# prepare data by taosBenchmark
+
+os.system("taosBenchmark -y -n 100 -t 100")
+
+import taos
+
+conn = taos.connect(host="localhost",
+ user="root",
+ password="taosdata",
+ database="test",
+ port=6030,
+ config="/etc/taos", # for windows the default value is C:\TDengine\cfg
+ timezone="Asia/Shanghai") # default your host's timezone
+
+server_version = conn.server_info
+print("server_version", server_version)
+client_version = conn.client_info
+print("client_version", client_version) # 3.0.0.0
+
+# Execute a sql and get its result set. It's useful for SELECT statement
+result: taos.TaosResult = conn.query("SELECT count(*) from test.meters")
+
+data = result.fetch_all()
+
+if data[0][0] != 10000:
+    print("taosBenchmark did not work as expected")
+    sys.exit(1)
+else:
+    print("taosBenchmark worked as expected")
+
+# test taosdump dump out data and dump in data
+
+# dump out data
+os.system("taosdump --version")
+os.system("mkdir -p /tmp/dumpdata")
+os.system("rm -rf /tmp/dumpdata/*")
+
+
+
+# dump data out
+print("taosdump dump out data")
+
+os.system("taosdump -o /tmp/dumpdata -D test -y ")
+
+# drop database of test
+print("drop database test")
+os.system(" taos -s ' drop database test ;' ")
+
+# dump data in
+print("taosdump dump data in")
+os.system("taosdump -i /tmp/dumpdata -y ")
+
+result = conn.query("SELECT count(*) from test.meters")
+
+data = result.fetch_all()
+
+if data[0][0] != 10000:
+    print("taosdump did not work as expected")
+    sys.exit(1)
+else:
+    print("taosdump worked as expected")
+
+conn.close()
\ No newline at end of file
diff --git a/packaging/deb/DEBIAN/prerm b/packaging/deb/DEBIAN/prerm
index 4953102842..65f261db2c 100644
--- a/packaging/deb/DEBIAN/prerm
+++ b/packaging/deb/DEBIAN/prerm
@@ -1,6 +1,6 @@
#!/bin/bash
-if [ $1 -eq "abort-upgrade" ]; then
+if [ "$1"x = "abort-upgrade"x ]; then
exit 0
fi
diff --git a/packaging/docker/README.md b/packaging/docker/README.md
index e41182f471..cb27d3bca6 100644
--- a/packaging/docker/README.md
+++ b/packaging/docker/README.md
@@ -47,7 +47,7 @@ taos> show databases;
Query OK, 1 row(s) in set (0.002843s)
```
-Since TDengine use container hostname to establish connections, it's a bit more complex to use taos shell and native connectors(such as JDBC-JNI) with TDengine container instance. This is the recommended way to expose ports and use TDengine with docker in simple cases. If you want to use taos shell or taosc/connectors smoothly outside the `tdengine` container, see next use cases that match you need.
+Since TDengine uses the container hostname to establish connections, it's a bit more complex to use the TDengine CLI and native connectors (such as JDBC-JNI) with a TDengine container instance. This is the recommended way to expose ports and use TDengine with Docker in simple cases. If you want to use the TDengine CLI or taosc/connectors smoothly outside the `tdengine` container, see the following use cases that match your needs.
### Start with host network
@@ -87,7 +87,7 @@ docker run -d \
This command starts a docker container with TDengine server running and maps the container's TCP ports from 6030 to 6049 to the host's ports from 6030 to 6049 with TCP protocol and UDP ports range 6030-6039 to the host's UDP ports 6030-6039. If the host is already running TDengine server and occupying the same port(s), you need to map the container's port to a different unused port segment. (Please see TDengine 2.0 Port Description for details). In order to support TDengine clients accessing TDengine server services, both TCP and UDP ports need to be exposed by default(unless `rpcForceTcp` is set to `1`).
-If you want to use taos shell or native connectors([JDBC-JNI](https://www.taosdata.com/cn/documentation/connector/java), or [driver-go](https://github.com/taosdata/driver-go)), you need to make sure the `TAOS_FQDN` is resolvable at `/etc/hosts` or with custom DNS service.
+If you want to use the TDengine CLI or native connectors ([JDBC-JNI](https://www.taosdata.com/cn/documentation/connector/java) or [driver-go](https://github.com/taosdata/driver-go)), you need to make sure the `TAOS_FQDN` is resolvable via `/etc/hosts` or a custom DNS service.
If you set the `TAOS_FQDN` to host's hostname, it will works as using `hosts` network like previous use case. Otherwise, like in `-e TAOS_FQDN=tdengine`, you can add the hostname record `tdengine` into `/etc/hosts` (use `127.0.0.1` here in host path, if use TDengine client/application in other hosts, you should set the right ip to the host eg. `192.168.10.1`(check the real ip in host with `hostname -i` or `ip route list default`) to make the TDengine endpoint resolvable):
@@ -391,7 +391,7 @@ test_td-1_1 /usr/bin/entrypoint.sh taosd Up 6030/tcp, 6031/tcp,
test_td-2_1 /usr/bin/entrypoint.sh taosd Up 6030/tcp, 6031/tcp, 6032/tcp, 6033/tcp, 6034/tcp, 6035/tcp, 6036/tcp, 6037/tcp, 6038/tcp, 6039/tcp, 6040/tcp, 6041/tcp, 6042/tcp
```
-Check dnodes with taos shell:
+Check dnodes with TDengine CLI:
```bash
$ docker-compose exec td-1 taos -s "show dnodes"
diff --git a/packaging/testpackage.sh b/packaging/testpackage.sh
new file mode 100755
index 0000000000..173fa3a3c3
--- /dev/null
+++ b/packaging/testpackage.sh
@@ -0,0 +1,112 @@
+#!/bin/bash
+
+# function installPkgAndCheckFile{
+
+echo "Download package"
+
+packgeName=$1
+version=$2
+originPackageName=$3
+originversion=$4
+testFile=$5
+subFile="taos.tar.gz"
+
+if [ ${testFile} = "server" ];then
+ tdPath="TDengine-server-${version}"
+ originTdpPath="TDengine-server-${originversion}"
+ installCmd="install.sh"
+elif [ ${testFile} = "client" ];then
+ tdPath="TDengine-client-${version}"
+ originTdpPath="TDengine-client-${originversion}"
+ installCmd="install_client.sh"
+elif [ ${testFile} = "tools" ];then
+ tdPath="taosTools-${version}"
+ originTdpPath="taosTools-${originversion}"
+ installCmd="install-taostools.sh"
+fi
+
+echo "Uninstall all components of TDengine"
+
+if command -v rmtaos ;then
+  echo "uninstall all components of TDengine: rmtaos"
+  echo " "
+else
+  echo "os doesn't include TDengine"
+fi
+
+if command -v rmtaostools ;then
+  echo "uninstall all components of TDengine: rmtaostools"
+  echo " "
+else
+  echo "os doesn't include rmtaostools"
+fi
+
+echo "prepare working directories"
+installPath="/usr/local/src/packageTest"
+oriInstallPath="/usr/local/src/packageTest/3.1"
+
+if [ ! -d ${installPath} ] ;then
+ mkdir -p ${installPath}
+else
+ echo "${installPath} already exists"
+fi
+
+
+if [ ! -d ${oriInstallPath} ] ;then
+ mkdir -p ${oriInstallPath}
+else
+ echo "${oriInstallPath} already exists"
+fi
+
+echo "decompress installPackage"
+
+cd ${installPath}
+wget https://www.taosdata.com/assets-download/3.0/${packgeName}
+cd ${oriInstallPath}
+wget https://www.taosdata.com/assets-download/3.0/${originPackageName}
+
+
+if [[ ${packgeName} =~ "deb" ]];then
+ echo "dpkg ${packgeName}" && dpkg -i ${packgeName}
+elif [[ ${packgeName} =~ "rpm" ]];then
+ echo "rpm ${packgeName}" && rpm -ivh ${packgeName}
+elif [[ ${packgeName} =~ "tar" ]];then
+ echo "tar ${packgeName}" && tar -xvf ${packgeName}
+ cd ${oriInstallPath}
+ echo "tar -xvf ${originPackageName}" && tar -xvf ${originPackageName}
+ cd ${installPath}
+ echo "tar -xvf ${packgeName}" && tar -xvf ${packgeName}
+
+
+ if [ ${testFile} != "tools" ] ;then
+ cd ${installPath}/${tdPath} && tar vxf ${subFile}
+ cd ${oriInstallPath}/${originTdpPath} && tar vxf ${subFile}
+ fi
+
+ echo "check installPackage File"
+
+ cd ${installPath}
+
+ tree ${oriInstallPath}/${originTdpPath} > ${originPackageName}_checkfile
+ tree ${installPath}/${tdPath} > ${packgeName}_checkfile
+
+  diff ${packgeName}_checkfile ${originPackageName}_checkfile > ${installPath}/diffFile.log
+  diffNumbers=$(wc -l < ${installPath}/diffFile.log)
+  if [ ${diffNumbers} != 0 ];then
+    echo "The number and names of files differ from the previous installation package"
+    cat ${installPath}/diffFile.log
+    exit 1
+  fi
+
+ cd ${installPath}/${tdPath}
+ if [ ${testFile} = "server" ];then
+ bash ${installCmd} -e no
+ else
+ bash ${installCmd}
+ fi
+
+fi
+# }
+
+# installPkgAndCheckFile
+
diff --git a/packaging/tools/make_install.sh b/packaging/tools/make_install.sh
index 6a95ace99e..f554942ce3 100755
--- a/packaging/tools/make_install.sh
+++ b/packaging/tools/make_install.sh
@@ -381,8 +381,7 @@ function install_header() {
${install_main_dir}/include ||
${csudo}cp -f ${source_dir}/include/client/taos.h ${source_dir}/include/common/taosdef.h ${source_dir}/include/util/taoserror.h ${source_dir}/include/libs/function/taosudf.h \
${install_main_2_dir}/include &&
- ${csudo}chmod 644 ${install_main_dir}/include/* ||:
- ${csudo}chmod 644 ${install_main_2_dir}/include/*
+ ${csudo}chmod 644 ${install_main_dir}/include/* || ${csudo}chmod 644 ${install_main_2_dir}/include/*
fi
}
diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c
index 9f905a8352..4968c5c68d 100644
--- a/source/client/src/clientSml.c
+++ b/source/client/src/clientSml.c
@@ -85,8 +85,11 @@ typedef TSDB_SML_PROTOCOL_TYPE SMLProtocolType;
typedef enum {
SCHEMA_ACTION_NULL,
- SCHEMA_ACTION_COLUMN,
- SCHEMA_ACTION_TAG
+ SCHEMA_ACTION_CREATE_STABLE,
+ SCHEMA_ACTION_ADD_COLUMN,
+ SCHEMA_ACTION_ADD_TAG,
+ SCHEMA_ACTION_CHANGE_COLUMN_SIZE,
+ SCHEMA_ACTION_CHANGE_TAG_SIZE,
} ESchemaAction;
typedef struct {
@@ -219,7 +222,7 @@ static int32_t smlBuildInvalidDataMsg(SSmlMsgBuf *pBuf, const char *msg1, const
static int32_t smlGenerateSchemaAction(SSchema *colField, SHashObj *colHash, SSmlKv *kv, bool isTag,
ESchemaAction *action, SSmlHandle *info) {
- uint16_t *index = (uint16_t *)taosHashGet(colHash, kv->key, kv->keyLen);
+ uint16_t *index = colHash ? (uint16_t *)taosHashGet(colHash, kv->key, kv->keyLen) : NULL;
if (index) {
if (colField[*index].type != kv->type) {
uError("SML:0x%" PRIx64 " point type and db type mismatch. key: %s. point type: %d, db type: %d", info->id,
@@ -232,16 +235,16 @@ static int32_t smlGenerateSchemaAction(SSchema *colField, SHashObj *colHash, SSm
(colField[*index].type == TSDB_DATA_TYPE_NCHAR &&
((colField[*index].bytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE < kv->length))) {
if (isTag) {
- *action = SCHEMA_ACTION_TAG;
+ *action = SCHEMA_ACTION_CHANGE_TAG_SIZE;
} else {
- *action = SCHEMA_ACTION_COLUMN;
+ *action = SCHEMA_ACTION_CHANGE_COLUMN_SIZE;
}
}
} else {
if (isTag) {
- *action = SCHEMA_ACTION_TAG;
+ *action = SCHEMA_ACTION_ADD_TAG;
} else {
- *action = SCHEMA_ACTION_COLUMN;
+ *action = SCHEMA_ACTION_ADD_COLUMN;
}
}
return 0;
@@ -310,9 +313,31 @@ static int32_t getBytes(uint8_t type, int32_t length){
}
}
+static int32_t smlBuildFieldsList(SSmlHandle *info, SSchema *schemaField, SHashObj *schemaHash, SArray *cols, SArray* results, int32_t numOfCols, bool isTag) {
+ for (int j = 0; j < taosArrayGetSize(cols); ++j) {
+ SSmlKv *kv = (SSmlKv *)taosArrayGetP(cols, j);
+ ESchemaAction action = SCHEMA_ACTION_NULL;
+ smlGenerateSchemaAction(schemaField, schemaHash, kv, isTag, &action, info);
+ if(action == SCHEMA_ACTION_ADD_COLUMN || action == SCHEMA_ACTION_ADD_TAG){
+ SField field = {0};
+ field.type = kv->type;
+ field.bytes = getBytes(kv->type, kv->length);
+ memcpy(field.name, kv->key, kv->keyLen);
+ taosArrayPush(results, &field);
+ }else if(action == SCHEMA_ACTION_CHANGE_COLUMN_SIZE || action == SCHEMA_ACTION_CHANGE_TAG_SIZE){
+ uint16_t *index = (uint16_t *)taosHashGet(schemaHash, kv->key, kv->keyLen);
+ uint16_t newIndex = *index;
+ if(isTag) newIndex -= numOfCols;
+ SField *field = (SField *)taosArrayGet(results, newIndex);
+ field->bytes = getBytes(kv->type, kv->length);
+ }
+ }
+ return TSDB_CODE_SUCCESS;
+}
+
//static int32_t smlSendMetaMsg(SSmlHandle *info, SName *pName, SSmlSTableMeta *sTableData,
// int32_t colVer, int32_t tagVer, int8_t source, uint64_t suid){
-static int32_t smlSendMetaMsg(SSmlHandle *info, SName *pName, SSmlSTableMeta *sTableData,
+static int32_t smlSendMetaMsg(SSmlHandle *info, SName *pName, SArray* pColumns, SArray* pTags,
STableMeta *pTableMeta, ESchemaAction action){
SRequestObj* pRequest = NULL;
@@ -320,6 +345,12 @@ static int32_t smlSendMetaMsg(SSmlHandle *info, SName *pName, SSmlSTableMeta *s
int32_t code = TSDB_CODE_SUCCESS;
SCmdMsgInfo pCmdMsg = {0};
+ // put front for free
+ pReq.numOfColumns = taosArrayGetSize(pColumns);
+ pReq.pColumns = pColumns;
+ pReq.numOfTags = taosArrayGetSize(pTags);
+ pReq.pTags = pTags;
+
code = buildRequest(info->taos->id, "", 0, NULL, false, &pRequest);
if (code != TSDB_CODE_SUCCESS) {
goto end;
@@ -330,91 +361,41 @@ static int32_t smlSendMetaMsg(SSmlHandle *info, SName *pName, SSmlSTableMeta *s
goto end;
}
- if (action == SCHEMA_ACTION_NULL){
+ if (action == SCHEMA_ACTION_CREATE_STABLE){
pReq.colVer = 1;
pReq.tagVer = 1;
pReq.suid = 0;
pReq.source = TD_REQ_FROM_APP;
- } else if (action == SCHEMA_ACTION_TAG){
+ } else if (action == SCHEMA_ACTION_ADD_TAG || action == SCHEMA_ACTION_CHANGE_TAG_SIZE){
pReq.colVer = pTableMeta->sversion;
pReq.tagVer = pTableMeta->tversion + 1;
pReq.suid = pTableMeta->uid;
pReq.source = TD_REQ_FROM_TAOX;
- } else if (action == SCHEMA_ACTION_COLUMN){
+ } else if (action == SCHEMA_ACTION_ADD_COLUMN || action == SCHEMA_ACTION_CHANGE_COLUMN_SIZE){
pReq.colVer = pTableMeta->sversion + 1;
pReq.tagVer = pTableMeta->tversion;
pReq.suid = pTableMeta->uid;
pReq.source = TD_REQ_FROM_TAOX;
}
+ if (pReq.numOfTags == 0){
+ pReq.numOfTags = 1;
+ SField field = {0};
+ field.type = TSDB_DATA_TYPE_NCHAR;
+ field.bytes = 1;
+ strcpy(field.name, tsSmlTagName);
+ taosArrayPush(pReq.pTags, &field);
+ }
+
pReq.commentLen = -1;
pReq.igExists = true;
tNameExtractFullName(pName, pReq.name);
- if(action == SCHEMA_ACTION_NULL || action == SCHEMA_ACTION_COLUMN){
- pReq.numOfColumns = taosArrayGetSize(sTableData->cols);
- pReq.pColumns = taosArrayInit(pReq.numOfColumns, sizeof(SField));
- for (int i = 0; i < pReq.numOfColumns; i++) {
- SSmlKv *kv = (SSmlKv *)taosArrayGetP(sTableData->cols, i);
- SField field = {0};
- field.type = kv->type;
- field.bytes = getBytes(kv->type, kv->length);
- memcpy(field.name, kv->key, kv->keyLen);
- taosArrayPush(pReq.pColumns, &field);
- }
- }else if (action == SCHEMA_ACTION_TAG){
- pReq.numOfColumns = pTableMeta->tableInfo.numOfColumns;
- pReq.pColumns = taosArrayInit(pReq.numOfColumns, sizeof(SField));
- for (int i = 0; i < pReq.numOfColumns; i++) {
- SSchema *s = &pTableMeta->schema[i];
- SField field = {0};
- field.type = s->type;
- field.bytes = s->bytes;
- strcpy(field.name, s->name);
- taosArrayPush(pReq.pColumns, &field);
- }
- }
-
- if(action == SCHEMA_ACTION_NULL || action == SCHEMA_ACTION_TAG){
- pReq.numOfTags = taosArrayGetSize(sTableData->tags);
- if (pReq.numOfTags == 0){
- pReq.numOfTags = 1;
- pReq.pTags = taosArrayInit(pReq.numOfTags, sizeof(SField));
- SField field = {0};
- field.type = TSDB_DATA_TYPE_NCHAR;
- field.bytes = 1;
- strcpy(field.name, tsSmlTagName);
- taosArrayPush(pReq.pTags, &field);
- }else{
- pReq.pTags = taosArrayInit(pReq.numOfTags, sizeof(SField));
- for (int i = 0; i < pReq.numOfTags; i++) {
- SSmlKv *kv = (SSmlKv *)taosArrayGetP(sTableData->tags, i);
- SField field = {0};
- field.type = kv->type;
- field.bytes = getBytes(kv->type, kv->length);
- memcpy(field.name, kv->key, kv->keyLen);
- taosArrayPush(pReq.pTags, &field);
- }
- }
- }else if (action == SCHEMA_ACTION_COLUMN){
- pReq.numOfTags = pTableMeta->tableInfo.numOfTags;
- pReq.pTags = taosArrayInit(pReq.numOfTags, sizeof(SField));
- for (int i = 0; i < pReq.numOfTags; i++) {
- SSchema *s = &pTableMeta->schema[i + pTableMeta->tableInfo.numOfColumns];
- SField field = {0};
- field.type = s->type;
- field.bytes = s->bytes;
- strcpy(field.name, s->name);
- taosArrayPush(pReq.pTags, &field);
- }
- }
-
pCmdMsg.epSet = getEpSet_s(&info->taos->pAppInfo->mgmtEp);
pCmdMsg.msgType = TDMT_MND_CREATE_STB;
pCmdMsg.msgLen = tSerializeSMCreateStbReq(NULL, 0, &pReq);
pCmdMsg.pMsg = taosMemoryMalloc(pCmdMsg.msgLen);
if (NULL == pCmdMsg.pMsg) {
- tFreeSMCreateStbReq(&pReq);
code = TSDB_CODE_OUT_OF_MEMORY;
goto end;
}
@@ -442,7 +423,10 @@ end:
}
static int32_t smlModifyDBSchemas(SSmlHandle *info) {
- int32_t code = 0;
+ int32_t code = 0;
+ SHashObj *hashTmp = NULL;
+ STableMeta *pTableMeta = NULL;
+
SName pName = {TSDB_TABLE_NAME_T, info->taos->acctId, {0}, {0}};
strcpy(pName.dbname, info->pRequest->pDb);
@@ -455,7 +439,6 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) {
SSmlSTableMeta **tableMetaSml = (SSmlSTableMeta **)taosHashIterate(info->superTables, NULL);
while (tableMetaSml) {
SSmlSTableMeta *sTableData = *tableMetaSml;
- STableMeta *pTableMeta = NULL;
bool needCheckMeta = false; // for multi thread
size_t superTableLen = 0;
@@ -466,14 +449,19 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) {
code = catalogGetSTableMeta(info->pCatalog, &conn, &pName, &pTableMeta);
if (code == TSDB_CODE_PAR_TABLE_NOT_EXIST || code == TSDB_CODE_MND_STB_NOT_EXIST) {
- code = smlSendMetaMsg(info, &pName, sTableData, NULL, SCHEMA_ACTION_NULL);
+ SArray* pColumns = taosArrayInit(taosArrayGetSize(sTableData->cols), sizeof(SField));
+ SArray* pTags = taosArrayInit(taosArrayGetSize(sTableData->tags), sizeof(SField));
+ smlBuildFieldsList(info, NULL, NULL, sTableData->tags, pTags, 0, true);
+ smlBuildFieldsList(info, NULL, NULL, sTableData->cols, pColumns, 0, false);
+
+ code = smlSendMetaMsg(info, &pName, pColumns, pTags, NULL, SCHEMA_ACTION_CREATE_STABLE);
if (code != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " smlSendMetaMsg failed. can not create %s", info->id, superTable);
goto end;
}
info->cost.numOfCreateSTables++;
} else if (code == TSDB_CODE_SUCCESS) {
- SHashObj *hashTmp = taosHashInit(pTableMeta->tableInfo.numOfTags,
+ hashTmp = taosHashInit(pTableMeta->tableInfo.numOfTags,
taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_NO_LOCK);
for (uint16_t i = pTableMeta->tableInfo.numOfColumns;
i < pTableMeta->tableInfo.numOfColumns + pTableMeta->tableInfo.numOfTags; i++) {
@@ -483,34 +471,70 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) {
ESchemaAction action = SCHEMA_ACTION_NULL;
code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->tags, &action, true);
if (code != TSDB_CODE_SUCCESS) {
- taosHashCleanup(hashTmp);
goto end;
}
- if (action == SCHEMA_ACTION_TAG){
- code = smlSendMetaMsg(info, &pName, sTableData, pTableMeta, action);
+ if (action != SCHEMA_ACTION_NULL){
+ SArray* pColumns = taosArrayInit(taosArrayGetSize(sTableData->cols) + pTableMeta->tableInfo.numOfColumns, sizeof(SField));
+ SArray* pTags = taosArrayInit(taosArrayGetSize(sTableData->tags) + pTableMeta->tableInfo.numOfTags, sizeof(SField));
+
+ for (uint16_t i = 0; i < pTableMeta->tableInfo.numOfColumns + pTableMeta->tableInfo.numOfTags; i++) {
+ SField field = {0};
+ field.type = pTableMeta->schema[i].type;
+ field.bytes = pTableMeta->schema[i].bytes;
+ strcpy(field.name, pTableMeta->schema[i].name);
+ if(i < pTableMeta->tableInfo.numOfColumns){
+ taosArrayPush(pColumns, &field);
+ }else{
+ taosArrayPush(pTags, &field);
+ }
+ }
+ smlBuildFieldsList(info, pTableMeta->schema, hashTmp, sTableData->tags, pTags, pTableMeta->tableInfo.numOfColumns, true);
+
+ code = smlSendMetaMsg(info, &pName, pColumns, pTags, pTableMeta, action);
if (code != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " smlSendMetaMsg failed. can not create %s", info->id, superTable);
goto end;
}
}
+ taosMemoryFreeClear(pTableMeta);
code = catalogRefreshTableMeta(info->pCatalog, &conn, &pName, -1);
if (code != TSDB_CODE_SUCCESS) {
goto end;
}
+ code = catalogGetSTableMeta(info->pCatalog, &conn, &pName, &pTableMeta);
+ if (code != TSDB_CODE_SUCCESS) {
+ goto end;
+ }
taosHashClear(hashTmp);
- for (uint16_t i = 1; i < pTableMeta->tableInfo.numOfColumns; i++) {
+ for (uint16_t i = 0; i < pTableMeta->tableInfo.numOfColumns; i++) {
taosHashPut(hashTmp, pTableMeta->schema[i].name, strlen(pTableMeta->schema[i].name), &i, SHORT_BYTES);
}
action = SCHEMA_ACTION_NULL;
code = smlProcessSchemaAction(info, pTableMeta->schema, hashTmp, sTableData->cols, &action, false);
- taosHashCleanup(hashTmp);
if (code != TSDB_CODE_SUCCESS) {
goto end;
}
- if (action == SCHEMA_ACTION_COLUMN){
- code = smlSendMetaMsg(info, &pName, sTableData, pTableMeta, action);
+ if (action != SCHEMA_ACTION_NULL){
+ SArray* pColumns = taosArrayInit(taosArrayGetSize(sTableData->cols) + pTableMeta->tableInfo.numOfColumns, sizeof(SField));
+ SArray* pTags = taosArrayInit(taosArrayGetSize(sTableData->tags) + pTableMeta->tableInfo.numOfTags, sizeof(SField));
+
+ for (uint16_t i = 0; i < pTableMeta->tableInfo.numOfColumns + pTableMeta->tableInfo.numOfTags; i++) {
+ SField field = {0};
+ field.type = pTableMeta->schema[i].type;
+ field.bytes = pTableMeta->schema[i].bytes;
+ strcpy(field.name, pTableMeta->schema[i].name);
+ if(i < pTableMeta->tableInfo.numOfColumns){
+ taosArrayPush(pColumns, &field);
+ }else{
+ taosArrayPush(pTags, &field);
+ }
+ }
+
+ smlBuildFieldsList(info, pTableMeta->schema, hashTmp, sTableData->cols, pColumns, pTableMeta->tableInfo.numOfColumns, false);
+
+ code = smlSendMetaMsg(info, &pName, pColumns, pTags, pTableMeta, action);
if (code != TSDB_CODE_SUCCESS) {
uError("SML:0x%" PRIx64 " smlSendMetaMsg failed. can not create %s", info->id, superTable);
goto end;
@@ -526,7 +550,7 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) {
uError("SML:0x%" PRIx64 " load table meta error: %s", info->id, tstrerror(code));
goto end;
}
- if (pTableMeta) taosMemoryFree(pTableMeta);
+ taosMemoryFreeClear(pTableMeta);
code = catalogGetSTableMeta(info->pCatalog, &conn, &pName, &pTableMeta);
if (code != TSDB_CODE_SUCCESS) {
@@ -551,10 +575,13 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) {
sTableData->tableMeta = pTableMeta;
tableMetaSml = (SSmlSTableMeta **)taosHashIterate(info->superTables, tableMetaSml);
+ taosHashCleanup(hashTmp);
}
return 0;
end:
+ taosHashCleanup(hashTmp);
+ taosMemoryFreeClear(pTableMeta);
catalogRefreshTableMeta(info->pCatalog, &conn, &pName, 1);
return code;
}
@@ -2057,10 +2084,6 @@ static int32_t smlParseInfluxLine(SSmlHandle *info, const char *sql) {
if (info->dataFormat) taosArrayDestroy(cols);
return ret;
}
- if (taosArrayGetSize(cols) > TSDB_MAX_COLUMNS) {
- smlBuildInvalidDataMsg(&info->msgBuf, "too many columns than 4096", NULL);
- return TSDB_CODE_PAR_TOO_MANY_COLUMNS;
- }
bool hasTable = true;
SSmlTableInfo *tinfo = NULL;
@@ -2094,6 +2117,11 @@ static int32_t smlParseInfluxLine(SSmlHandle *info, const char *sql) {
return TSDB_CODE_PAR_INVALID_TAGS_NUM;
}
+ if (taosArrayGetSize(cols) + taosArrayGetSize((*oneTable)->tags) > TSDB_MAX_COLUMNS) {
+    smlBuildInvalidDataMsg(&info->msgBuf, "number of columns exceeds 4096", NULL);
+ return TSDB_CODE_PAR_TOO_MANY_COLUMNS;
+ }
+
(*oneTable)->sTableName = elements.measure;
(*oneTable)->sTableNameLen = elements.measureLen;
if (strlen((*oneTable)->childTableName) == 0) {
diff --git a/source/client/src/taosx.c b/source/client/src/taosx.c
index 677567e38f..f016120a1f 100644
--- a/source/client/src/taosx.c
+++ b/source/client/src/taosx.c
@@ -765,6 +765,29 @@ static int32_t taosCreateTable(TAOS* taos, void* meta, int32_t metaLen) {
}
taosArrayPush(pRequest->tableList, &pName);
+ // change tag cid to new cid
+ if(pCreateReq->type == TSDB_CHILD_TABLE){
+ STableMeta* pTableMeta = NULL;
+ SName sName = {0};
+ toName(pTscObj->acctId, pRequest->pDb, pCreateReq->ctb.name, &sName);
+ code = catalogGetTableMeta(pCatalog, &conn, &sName, &pTableMeta);
+ if(code != TSDB_CODE_SUCCESS){
+ uError("taosCreateTable:catalogGetTableMeta failed. table name: %s", pCreateReq->ctb.name);
+ goto end;
+ }
+
+ for(int32_t i = 0; i < taosArrayGetSize(pCreateReq->ctb.tagName); i++){
+ char* tName = taosArrayGet(pCreateReq->ctb.tagName, i);
+ for(int32_t j = pTableMeta->tableInfo.numOfColumns; j < pTableMeta->tableInfo.numOfColumns + pTableMeta->tableInfo.numOfTags; j++){
+ SSchema *tag = &pTableMeta->schema[j];
+ if(strcmp(tag->name, tName) == 0 && tag->type != TSDB_DATA_TYPE_JSON){
+ tTagSetCid((STag *)pCreateReq->ctb.pTag, i, tag->colId);
+ }
+ }
+ }
+ taosMemoryFreeClear(pTableMeta);
+ }
+
SVgroupCreateTableBatch* pTableBatch = taosHashGet(pVgroupHashmap, &pInfo.vgId, sizeof(pInfo.vgId));
if (pTableBatch == NULL) {
SVgroupCreateTableBatch tBatch = {0};
@@ -1305,6 +1328,7 @@ static int32_t tmqWriteRaw(TAOS* taos, void* data, int32_t dataLen) {
SQuery* pQuery = NULL;
SMqRspObj rspObj = {0};
SDecoder decoder = {0};
+ STableMeta* pTableMeta = NULL;
terrno = TSDB_CODE_SUCCESS;
SRequestObj* pRequest = (SRequestObj*)createRequest(*(int64_t*)taos, TSDB_SQL_INSERT);
@@ -1361,24 +1385,6 @@ static int32_t tmqWriteRaw(TAOS* taos, void* data, int32_t dataLen) {
goto end;
}
- uint16_t fLen = 0;
- int32_t rowSize = 0;
- int16_t nVar = 0;
- for (int i = 0; i < pSW->nCols; i++) {
- SSchema* schema = pSW->pSchema + i;
- fLen += TYPE_BYTES[schema->type];
- rowSize += schema->bytes;
- if (IS_VAR_DATA_TYPE(schema->type)) {
- nVar++;
- }
- }
-
- int32_t rows = rspObj.resInfo.numOfRows;
- int32_t extendedRowSize = rowSize + TD_ROW_HEAD_LEN - sizeof(TSKEY) + nVar * sizeof(VarDataOffsetT) +
- (int32_t)TD_BITMAP_BYTES(pSW->nCols - 1);
- int32_t schemaLen = 0;
- int32_t submitLen = sizeof(SSubmitBlk) + schemaLen + rows * extendedRowSize;
-
const char* tbName = (const char*)taosArrayGetP(rspObj.rsp.blockTbName, rspObj.resIter);
if (!tbName) {
uError("WriteRaw: tbname is null");
@@ -1398,6 +1404,35 @@ static int32_t tmqWriteRaw(TAOS* taos, void* data, int32_t dataLen) {
goto end;
}
+ code = catalogGetTableMeta(pCatalog, &conn, &pName, &pTableMeta);
+ if (code == TSDB_CODE_PAR_TABLE_NOT_EXIST){
+ uError("WriteRaw:catalogGetTableMeta table not exist. table name: %s", tbName);
+ code = TSDB_CODE_SUCCESS;
+ continue;
+ }
+ if (code != TSDB_CODE_SUCCESS) {
+ uError("WriteRaw:catalogGetTableMeta failed. table name: %s", tbName);
+ goto end;
+ }
+
+ uint16_t fLen = 0;
+ int32_t rowSize = 0;
+ int16_t nVar = 0;
+ for (int i = 0; i < pTableMeta->tableInfo.numOfColumns; i++) {
+ SSchema* schema = &pTableMeta->schema[i];
+ fLen += TYPE_BYTES[schema->type];
+ rowSize += schema->bytes;
+ if (IS_VAR_DATA_TYPE(schema->type)) {
+ nVar++;
+ }
+ }
+
+ int32_t rows = rspObj.resInfo.numOfRows;
+ int32_t extendedRowSize = rowSize + TD_ROW_HEAD_LEN - sizeof(TSKEY) + nVar * sizeof(VarDataOffsetT) +
+ (int32_t)TD_BITMAP_BYTES(pTableMeta->tableInfo.numOfColumns - 1);
+ int32_t schemaLen = 0;
+ int32_t submitLen = sizeof(SSubmitBlk) + schemaLen + rows * extendedRowSize;
+
SSubmitReq* subReq = NULL;
SSubmitBlk* blk = NULL;
void* hData = taosHashGet(pVgHash, &vgData.vg.vgId, sizeof(vgData.vg.vgId));
@@ -1430,23 +1465,25 @@ static int32_t tmqWriteRaw(TAOS* taos, void* data, int32_t dataLen) {
blk = POINTER_SHIFT(vgData.data, sizeof(SSubmitReq));
}
- STableMeta* pTableMeta = NULL;
- code = catalogGetTableMeta(pCatalog, &conn, &pName, &pTableMeta);
- if (code != TSDB_CODE_SUCCESS) {
- uError("WriteRaw:catalogGetTableMeta failed. table name: %s", tbName);
- goto end;
- }
+ // pSW->pSchema should be same as pTableMeta->schema
+// ASSERT(pSW->nCols == pTableMeta->tableInfo.numOfColumns);
uint64_t suid = (TSDB_NORMAL_TABLE == pTableMeta->tableType ? 0 : pTableMeta->suid);
uint64_t uid = pTableMeta->uid;
- taosMemoryFreeClear(pTableMeta);
+ int16_t sver = pTableMeta->sversion;
void* blkSchema = POINTER_SHIFT(blk, sizeof(SSubmitBlk));
STSRow* rowData = POINTER_SHIFT(blkSchema, schemaLen);
SRowBuilder rb = {0};
- tdSRowInit(&rb, pSW->version);
- tdSRowSetTpInfo(&rb, pSW->nCols, fLen);
- int32_t dataLen = 0;
+ tdSRowInit(&rb, sver);
+ tdSRowSetTpInfo(&rb, pTableMeta->tableInfo.numOfColumns, fLen);
+ int32_t totalLen = 0;
+
+ SHashObj* schemaHash = taosHashInit(16, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), false, HASH_NO_LOCK);
+ for (int i = 0; i < pSW->nCols; i++) {
+ SSchema* schema = &pSW->pSchema[i];
+ taosHashPut(schemaHash, schema->name, strlen(schema->name), &i, sizeof(int32_t));
+ }
for (int32_t j = 0; j < rows; j++) {
tdSRowResetBuf(&rb, rowData);
@@ -1455,33 +1492,41 @@ static int32_t tmqWriteRaw(TAOS* taos, void* data, int32_t dataLen) {
rspObj.resInfo.current += 1;
int32_t offset = 0;
- for (int32_t k = 0; k < pSW->nCols; k++) {
- const SSchema* pColumn = &pSW->pSchema[k];
- char* data = rspObj.resInfo.row[k];
- if (!data) {
+ for (int32_t k = 0; k < pTableMeta->tableInfo.numOfColumns; k++) {
+ const SSchema* pColumn = &pTableMeta->schema[k];
+ int32_t* index = taosHashGet(schemaHash, pColumn->name, strlen(pColumn->name));
+ if(!index){
tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NULL, NULL, false, offset, k);
- } else {
- if (IS_VAR_DATA_TYPE(pColumn->type)) {
- data -= VARSTR_HEADER_SIZE;
+ }else{
+ char* colData = rspObj.resInfo.row[*index];
+ if (!colData) {
+ tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NULL, NULL, false, offset, k);
+ } else {
+ if (IS_VAR_DATA_TYPE(pColumn->type)) {
+ colData -= VARSTR_HEADER_SIZE;
+ }
+ tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NORM, colData, true, offset, k);
}
- tdAppendColValToRow(&rb, pColumn->colId, pColumn->type, TD_VTYPE_NORM, data, true, offset, k);
}
+
offset += TYPE_BYTES[pColumn->type];
}
tdSRowEnd(&rb);
int32_t rowLen = TD_ROW_LEN(rowData);
rowData = POINTER_SHIFT(rowData, rowLen);
- dataLen += rowLen;
+ totalLen += rowLen;
}
+ taosHashCleanup(schemaHash);
blk->uid = htobe64(uid);
blk->suid = htobe64(suid);
- blk->sversion = htonl(pSW->version);
+ blk->sversion = htonl(sver);
blk->schemaLen = htonl(schemaLen);
blk->numOfRows = htonl(rows);
- blk->dataLen = htonl(dataLen);
- subReq->length += sizeof(SSubmitBlk) + schemaLen + dataLen;
+ blk->dataLen = htonl(totalLen);
+ subReq->length += sizeof(SSubmitBlk) + schemaLen + totalLen;
subReq->numOfBlocks++;
+ taosMemoryFreeClear(pTableMeta);
}
pQuery = (SQuery*)nodesMakeNode(QUERY_NODE_QUERY);
@@ -1535,6 +1580,7 @@ end:
qDestroyQuery(pQuery);
destroyRequest(pRequest);
taosHashCleanup(pVgHash);
+ taosMemoryFreeClear(pTableMeta);
return code;
}
diff --git a/source/client/src/tmq.c b/source/client/src/tmq.c
index 7637ffbc80..fa657fcb10 100644
--- a/source/client/src/tmq.c
+++ b/source/client/src/tmq.c
@@ -1132,7 +1132,10 @@ int32_t tmqPollCb(void* param, SDataBuf* pMsg, int32_t code) {
memcpy(&pRspWrapper->dataRsp, pMsg->pData, sizeof(SMqRspHead));
} else {
ASSERT(rspType == TMQ_MSG_TYPE__POLL_META_RSP);
- tDecodeSMqMetaRsp(POINTER_SHIFT(pMsg->pData, sizeof(SMqRspHead)), &pRspWrapper->metaRsp);
+ SDecoder decoder;
+ tDecoderInit(&decoder, POINTER_SHIFT(pMsg->pData, sizeof(SMqRspHead)), pMsg->len - sizeof(SMqRspHead));
+ tDecodeSMqMetaRsp(&decoder, &pRspWrapper->metaRsp);
+ tDecoderClear(&decoder);
memcpy(&pRspWrapper->metaRsp, pMsg->pData, sizeof(SMqRspHead));
}
@@ -1581,8 +1584,7 @@ void* tmqHandleAllRsp(tmq_t* tmq, int64_t timeout, bool pollIfReset) {
SMqClientVg* pVg = pollRspWrapper->vgHandle;
/*printf("vgId:%d, offset %" PRId64 " up to %" PRId64 "\n", pVg->vgId, pVg->currentOffset,
* rspMsg->msg.rspOffset);*/
- pVg->currentOffset.version = pollRspWrapper->metaRsp.rspOffset;
- pVg->currentOffset.type = TMQ_OFFSET__LOG;
+ pVg->currentOffset = pollRspWrapper->metaRsp.rspOffset;
atomic_store_32(&pVg->vgStatus, TMQ_VG_STATUS__IDLE);
// build rsp
SMqMetaRspObj* pRsp = tmqBuildMetaRspFromWrapper(pollRspWrapper);
diff --git a/source/client/test/smlTest.cpp b/source/client/test/smlTest.cpp
index 68a8b9d336..b62238ccf2 100644
--- a/source/client/test/smlTest.cpp
+++ b/source/client/test/smlTest.cpp
@@ -692,3 +692,52 @@ TEST(testCase, smlParseTelnetLine_diff_json_type2_Test) {
ASSERT_NE(ret, 0);
smlDestroyInfo(info);
}
+
+TEST(testCase, sml_col_4096_Test) {
+ SSmlHandle *info = smlBuildSmlInfo(NULL, NULL, TSDB_SML_LINE_PROTOCOL, TSDB_SML_TIMESTAMP_NANO_SECONDS);
+ ASSERT_NE(info, nullptr);
+
+ const char *sql[] = {
+ "spgwgvldxv,id=spgwgvldxv_1,t0=f c0=t,c1=t,c2=t,c3=t,c4=t,c5=t,c6=t,c7=t,c8=t,c9=t,c10=t,c11=t,c12=t,c13=t,c14=t,c15=t,c16=t,c17=t,c18=t,c19=t,c20=t,c21=t,c22=t,c23=t,c24=t,c25=t,c26=t,c27=t,c28=t,c29=t,c30=t,c31=t,c32=t,c33=t,c34=t,c35=t,c36=t,c37=t,c38=t,c39=t,c40=t,c41=t,c42=t,c43=t,c44=t,c45=t,c46=t,c47=t,c48=t,c49=t,c50=t,c51=t,c52=t,c53=t,c54=t,c55=t,c56=t,c57=t,c58=t,c59=t,c60=t,c61=t,c62=t,c63=t,c64=t,c65=t,c66=t,c67=t,c68=t,c69=t,c70=t,c71=t,c72=t,c73=t,c74=t,c75=t,c76=t,c77=t,c78=t,c79=t,c80=t,c81=t,c82=t,c83=t,c84=t,c85=t,c86=t,c87=t,c88=t,c89=t,c90=t,c91=t,c92=t,c93=t,c94=t,c95=t,c96=t,c97=t,c98=t,c99=t,c100=t,"
+ "c101=t,c102=t,c103=t,c104=t,c105=t,c106=t,c107=t,c108=t,c109=t,c110=t,c111=t,c112=t,c113=t,c114=t,c115=t,c116=t,c117=t,c118=t,c119=t,c120=t,c121=t,c122=t,c123=t,c124=t,c125=t,c126=t,c127=t,c128=t,c129=t,c130=t,c131=t,c132=t,c133=t,c134=t,c135=t,c136=t,c137=t,c138=t,c139=t,c140=t,c141=t,c142=t,c143=t,c144=t,c145=t,c146=t,c147=t,c148=t,c149=t,c150=t,c151=t,c152=t,c153=t,c154=t,c155=t,c156=t,c157=t,c158=t,c159=t,c160=t,c161=t,c162=t,c163=t,c164=t,c165=t,c166=t,c167=t,c168=t,c169=t,c170=t,c171=t,c172=t,c173=t,c174=t,c175=t,c176=t,c177=t,c178=t,c179=t,c180=t,c181=t,c182=t,c183=t,c184=t,c185=t,c186=t,c187=t,c188=t,c189=t,"
+ "c190=t,c191=t,c192=t,c193=t,c194=t,c195=t,c196=t,c197=t,c198=t,c199=t,c200=t,c201=t,c202=t,c203=t,c204=t,c205=t,c206=t,c207=t,c208=t,c209=t,c210=t,c211=t,c212=t,c213=t,c214=t,c215=t,c216=t,c217=t,c218=t,c219=t,c220=t,c221=t,c222=t,c223=t,c224=t,c225=t,c226=t,c227=t,c228=t,c229=t,c230=t,c231=t,c232=t,c233=t,c234=t,c235=t,c236=t,c237=t,c238=t,c239=t,c240=t,c241=t,c242=t,c243=t,c244=t,c245=t,c246=t,c247=t,c248=t,c249=t,c250=t,c251=t,c252=t,c253=t,c254=t,c255=t,c256=t,c257=t,c258=t,c259=t,c260=t,c261=t,c262=t,c263=t,c264=t,c265=t,c266=t,c267=t,c268=t,c269=t,c270=t,c271=t,c272=t,c273=t,c274=t,c275=t,c276=t,c277=t,c278=t,"
+ "c279=t,c280=t,c281=t,c282=t,c283=t,c284=t,c285=t,c286=t,c287=t,c288=t,c289=t,c290=t,c291=t,c292=t,c293=t,c294=t,c295=t,c296=t,c297=t,c298=t,c299=t,c300=t,c301=t,c302=t,c303=t,c304=t,c305=t,c306=t,c307=t,c308=t,c309=t,c310=t,c311=t,c312=t,c313=t,c314=t,c315=t,c316=t,c317=t,c318=t,c319=t,c320=t,c321=t,c322=t,c323=t,c324=t,c325=t,c326=t,c327=t,c328=t,c329=t,c330=t,c331=t,c332=t,c333=t,c334=t,c335=t,c336=t,c337=t,c338=t,c339=t,c340=t,c341=t,c342=t,c343=t,c344=t,c345=t,c346=t,c347=t,c348=t,c349=t,c350=t,c351=t,c352=t,c353=t,c354=t,c355=t,c356=t,c357=t,c358=t,c359=t,c360=t,c361=t,c362=t,c363=t,c364=t,c365=t,c366=t,c367=t,c368=t,c369=t,c370=t,c371=t,c372=t,c373=t,c374=t,c375=t,c376=t,c377=t,c378=t,c379=t,c380=t,c381=t,c382=t,c383=t,c384=t,c385=t,c386=t,c387=t,c388=t,c389=t,c390=t,c391=t,c392=t,c393=t,c394=t,c395=t,c396=t,c397=t,c398=t,c399=t,c400=t,c401=t,c402=t,c403=t,c404=t,c405=t,c406=t,c407=t,c408=t,c409=t,c410=t,c411=t,c412=t,c413=t,c414=t,c415=t,c416=t,c417=t,c418=t,c419=t,c420=t,c421=t,c422=t,c423=t,c424=t,c425=t,c426=t,c427=t,c428=t,c429=t,c430=t,c431=t,c432=t,c433=t,c434=t,c435=t,c436=t,c437=t,c438=t,c439=t,c440=t,c441=t,c442=t,c443=t,c444=t,c445=t,c446=t,"
+ "c447=t,c448=t,c449=t,c450=t,c451=t,c452=t,c453=t,c454=t,c455=t,c456=t,c457=t,c458=t,c459=t,c460=t,c461=t,c462=t,c463=t,c464=t,c465=t,c466=t,c467=t,c468=t,c469=t,c470=t,c471=t,c472=t,c473=t,c474=t,c475=t,c476=t,c477=t,c478=t,c479=t,c480=t,c481=t,c482=t,c483=t,c484=t,c485=t,c486=t,c487=t,c488=t,c489=t,c490=t,c491=t,c492=t,c493=t,c494=t,c495=t,c496=t,c497=t,c498=t,c499=t,c500=t,c501=t,c502=t,c503=t,c504=t,c505=t,c506=t,c507=t,c508=t,c509=t,c510=t,c511=t,c512=t,c513=t,c514=t,c515=t,c516=t,c517=t,c518=t,c519=t,c520=t,c521=t,c522=t,c523=t,c524=t,c525=t,c526=t,c527=t,c528=t,c529=t,c530=t,c531=t,c532=t,c533=t,c534=t,c535=t,c536=t,c537=t,c538=t,c539=t,c540=t,c541=t,c542=t,c543=t,c544=t,c545=t,c546=t,c547=t,c548=t,c549=t,c550=t,c551=t,c552=t,c553=t,c554=t,c555=t,c556=t,c557=t,c558=t,c559=t,c560=t,c561=t,c562=t,c563=t,c564=t,c565=t,c566=t,c567=t,c568=t,c569=t,c570=t,c571=t,c572=t,c573=t,c574=t,c575=t,c576=t,c577=t,c578=t,c579=t,c580=t,c581=t,c582=t,c583=t,c584=t,c585=t,c586=t,c587=t,c588=t,c589=t,c590=t,c591=t,c592=t,c593=t,c594=t,c595=t,c596=t,c597=t,c598=t,c599=t,c600=t,c601=t,c602=t,c603=t,c604=t,c605=t,c606=t,c607=t,c608=t,c609=t,c610=t,c611=t,c612=t,c613=t,c614=t,"
+ "c615=t,c616=t,c617=t,c618=t,c619=t,c620=t,c621=t,c622=t,c623=t,c624=t,c625=t,c626=t,c627=t,c628=t,c629=t,c630=t,c631=t,c632=t,c633=t,c634=t,c635=t,c636=t,c637=t,c638=t,c639=t,c640=t,c641=t,c642=t,c643=t,c644=t,c645=t,c646=t,c647=t,c648=t,c649=t,c650=t,c651=t,c652=t,c653=t,c654=t,c655=t,c656=t,c657=t,c658=t,c659=t,c660=t,c661=t,c662=t,c663=t,c664=t,c665=t,c666=t,c667=t,c668=t,c669=t,c670=t,c671=t,c672=t,c673=t,c674=t,c675=t,c676=t,c677=t,c678=t,c679=t,c680=t,c681=t,c682=t,c683=t,c684=t,c685=t,c686=t,c687=t,c688=t,c689=t,c690=t,c691=t,c692=t,c693=t,c694=t,c695=t,c696=t,c697=t,c698=t,c699=t,c700=t,c701=t,c702=t,c703=t,c704=t,c705=t,c706=t,c707=t,c708=t,c709=t,c710=t,c711=t,c712=t,c713=t,c714=t,c715=t,c716=t,c717=t,c718=t,c719=t,c720=t,c721=t,c722=t,c723=t,c724=t,c725=t,c726=t,c727=t,c728=t,c729=t,c730=t,c731=t,c732=t,c733=t,c734=t,c735=t,c736=t,c737=t,c738=t,c739=t,c740=t,c741=t,c742=t,c743=t,c744=t,c745=t,c746=t,c747=t,c748=t,c749=t,c750=t,c751=t,c752=t,c753=t,c754=t,c755=t,c756=t,c757=t,c758=t,c759=t,c760=t,c761=t,c762=t,c763=t,c764=t,c765=t,c766=t,c767=t,c768=t,c769=t,c770=t,c771=t,c772=t,c773=t,c774=t,c775=t,c776=t,c777=t,c778=t,c779=t,c780=t,c781=t,c782=t,"
+ "c783=t,c784=t,c785=t,c786=t,c787=t,c788=t,c789=t,c790=t,c791=t,c792=t,c793=t,c794=t,c795=t,c796=t,c797=t,c798=t,c799=t,c800=t,c801=t,c802=t,c803=t,c804=t,c805=t,c806=t,c807=t,c808=t,c809=t,c810=t,c811=t,c812=t,c813=t,"
+ "c814=t,c815=t,c816=t,c817=t,c818=t,c819=t,c820=t,c821=t,c822=t,c823=t,c824=t,c825=t,c826=t,c827=t,c828=t,c829=t,c830=t,c831=t,c832=t,c833=t,c834=t,c835=t,c836=t,c837=t,c838=t,c839=t,c840=t,c841=t,c842=t,c843=t,c844=t,c845=t,c846=t,c847=t,c848=t,c849=t,c850=t,c851=t,c852=t,c853=t,c854=t,c855=t,c856=t,c857=t,c858=t,c859=t,c860=t,c861=t,c862=t,"
+ "c863=t,c864=t,c865=t,c866=t,c867=t,c868=t,c869=t,c870=t,c871=t,c872=t,c873=t,c874=t,c875=t,c876=t,c877=t,c878=t,c879=t,c880=t,c881=t,c882=t,c883=t,c884=t,c885=t,c886=t,c887=t,c888=t,c889=t,c890=t,c891=t,c892=t,c893=t,c894=t,c895=t,c896=t,c897=t,c898=t,c899=t,c900=t,c901=t,c902=t,c903=t,c904=t,c905=t,c906=t,c907=t,c908=t,c909=t,c910=t,c911=t,c912=t,c913=t,c914=t,c915=t,c916=t,c917=t,c918=t,c919=t,c920=t,c921=t,c922=t,c923=t,c924=t,c925=t,c926=t,c927=t,c928=t,c929=t,c930=t,c931=t,c932=t,c933=t,c934=t,c935=t,c936=t,c937=t,c938=t,c939=t,c940=t,c941=t,c942=t,c943=t,c944=t,c945=t,c946=t,c947=t,c948=t,c949=t,c950=t,c951=t,c952=t,c953=t,c954=t,c955=t,c956=t,c957=t,c958=t,c959=t,c960=t,c961=t,c962=t,c963=t,c964=t,c965=t,c966=t,c967=t,c968=t,c969=t,c970=t,c971=t,c972=t,c973=t,c974=t,c975=t,c976=t,c977=t,c978=t,c979=t,c980=t,c981=t,c982=t,c983=t,c984=t,c985=t,c986=t,c987=t,c988=t,c989=t,c990=t,c991=t,c992=t,c993=t,c994=t,c995=t,c996=t,c997=t,c998=t,c999=t,c1000=t,c1001=t,c1002=t,c1003=t,c1004=t,c1005=t,c1006=t,c1007=t,c1008=t,c1009=t,c1010=t,c1011=t,c1012=t,c1013=t,c1014=t,c1015=t,c1016=t,c1017=t,c1018=t,c1019=t,c1020=t,c1021=t,c1022=t,c1023=t,c1024=t,c1025=t,c1026=t,"
+ "c1027=t,c1028=t,c1029=t,c1030=t,c1031=t,c1032=t,c1033=t,c1034=t,c1035=t,c1036=t,c1037=t,c1038=t,c1039=t,c1040=t,c1041=t,c1042=t,c1043=t,c1044=t,c1045=t,c1046=t,c1047=t,c1048=t,c1049=t,c1050=t,c1051=t,c1052=t,c1053=t,c1054=t,c1055=t,c1056=t,c1057=t,c1058=t,c1059=t,c1060=t,c1061=t,c1062=t,c1063=t,c1064=t,c1065=t,c1066=t,c1067=t,c1068=t,c1069=t,c1070=t,c1071=t,c1072=t,c1073=t,c1074=t,c1075=t,c1076=t,c1077=t,c1078=t,c1079=t,c1080=t,c1081=t,c1082=t,c1083=t,c1084=t,c1085=t,c1086=t,c1087=t,c1088=t,c1089=t,c1090=t,c1091=t,c1092=t,c1093=t,c1094=t,c1095=t,c1096=t,c1097=t,c1098=t,c1099=t,c1100=t,c1101=t,c1102=t,c1103=t,c1104=t,c1105=t,c1106=t,c1107=t,c1108=t,c1109=t,c1110=t,c1111=t,c1112=t,c1113=t,c1114=t,c1115=t,c1116=t,c1117=t,c1118=t,c1119=t,c1120=t,c1121=t,c1122=t,c1123=t,c1124=t,c1125=t,c1126=t,c1127=t,c1128=t,c1129=t,c1130=t,c1131=t,c1132=t,c1133=t,c1134=t,c1135=t,c1136=t,c1137=t,c1138=t,c1139=t,c1140=t,c1141=t,c1142=t,c1143=t,c1144=t,c1145=t,c1146=t,c1147=t,c1148=t,c1149=t,c1150=t,c1151=t,c1152=t,c1153=t,c1154=t,c1155=t,c1156=t,c1157=t,c1158=t,c1159=t,c1160=t,c1161=t,c1162=t,c1163=t,c1164=t,c1165=t,c1166=t,c1167=t,c1168=t,c1169=t,c1170=t,c1171=t,c1172=t,c1173=t,"
+ "c1174=t,c1175=t,c1176=t,c1177=t,c1178=t,c1179=t,c1180=t,c1181=t,c1182=t,c1183=t,c1184=t,c1185=t,c1186=t,c1187=t,c1188=t,c1189=t,c1190=t,c1191=t,c1192=t,c1193=t,c1194=t,c1195=t,c1196=t,c1197=t,c1198=t,c1199=t,c1200=t,c1201=t,c1202=t,c1203=t,c1204=t,c1205=t,c1206=t,c1207=t,c1208=t,c1209=t,c1210=t,c1211=t,c1212=t,c1213=t,c1214=t,c1215=t,c1216=t,c1217=t,c1218=t,c1219=t,c1220=t,c1221=t,c1222=t,c1223=t,c1224=t,c1225=t,c1226=t,c1227=t,c1228=t,c1229=t,c1230=t,c1231=t,c1232=t,c1233=t,c1234=t,c1235=t,c1236=t,c1237=t,c1238=t,c1239=t,c1240=t,c1241=t,c1242=t,c1243=t,c1244=t,c1245=t,c1246=t,c1247=t,c1248=t,c1249=t,c1250=t,c1251=t,c1252=t,c1253=t,c1254=t,c1255=t,c1256=t,c1257=t,c1258=t,c1259=t,c1260=t,c1261=t,c1262=t,c1263=t,c1264=t,c1265=t,c1266=t,c1267=t,c1268=t,c1269=t,c1270=t,c1271=t,c1272=t,c1273=t,c1274=t,c1275=t,c1276=t,c1277=t,c1278=t,c1279=t,c1280=t,c1281=t,c1282=t,c1283=t,c1284=t,c1285=t,c1286=t,c1287=t,c1288=t,c1289=t,c1290=t,c1291=t,c1292=t,c1293=t,c1294=t,c1295=t,c1296=t,c1297=t,c1298=t,c1299=t,c1300=t,c1301=t,c1302=t,c1303=t,c1304=t,c1305=t,c1306=t,c1307=t,c1308=t,c1309=t,c1310=t,c1311=t,c1312=t,c1313=t,c1314=t,c1315=t,c1316=t,c1317=t,c1318=t,c1319=t,c1320=t,"
+ "c1321=t,c1322=t,c1323=t,c1324=t,c1325=t,c1326=t,c1327=t,c1328=t,c1329=t,c1330=t,c1331=t,c1332=t,c1333=t,c1334=t,c1335=t,c1336=t,c1337=t,c1338=t,c1339=t,c1340=t,c1341=t,c1342=t,c1343=t,c1344=t,c1345=t,c1346=t,c1347=t,"
+ "c1348=t,c1349=t,c1350=t,c1351=t,c1352=t,c1353=t,c1354=t,c1355=t,c1356=t,c1357=t,c1358=t,c1359=t,c1360=t,c1361=t,c1362=t,c1363=t,c1364=t,c1365=t,c1366=t,c1367=t,c1368=t,c1369=t,c1370=t,c1371=t,c1372=t,c1373=t,c1374=t,c1375=t,c1376=t,c1377=t,c1378=t,c1379=t,c1380=t,c1381=t,c1382=t,c1383=t,c1384=t,c1385=t,c1386=t,c1387=t,c1388=t,c1389=t,c1390=t,c1391=t,c1392=t,c1393=t,c1394=t,c1395=t,c1396=t,c1397=t,c1398=t,c1399=t,c1400=t,c1401=t,c1402=t,c1403=t,c1404=t,c1405=t,c1406=t,c1407=t,c1408=t,c1409=t,c1410=t,c1411=t,c1412=t,c1413=t,c1414=t,c1415=t,c1416=t,c1417=t,c1418=t,c1419=t,c1420=t,c1421=t,c1422=t,c1423=t,c1424=t,c1425=t,c1426=t,c1427=t,c1428=t,c1429=t,c1430=t,c1431=t,c1432=t,c1433=t,c1434=t,c1435=t,c1436=t,c1437=t,c1438=t,c1439=t,c1440=t,c1441=t,c1442=t,c1443=t,c1444=t,c1445=t,c1446=t,c1447=t,c1448=t,c1449=t,c1450=t,c1451=t,c1452=t,c1453=t,c1454=t,c1455=t,c1456=t,c1457=t,c1458=t,c1459=t,c1460=t,c1461=t,c1462=t,c1463=t,c1464=t,c1465=t,c1466=t,c1467=t,c1468=t,c1469=t,c1470=t,c1471=t,c1472=t,c1473=t,c1474=t,c1475=t,c1476=t,c1477=t,c1478=t,c1479=t,c1480=t,c1481=t,c1482=t,c1483=t,c1484=t,c1485=t,c1486=t,c1487=t,c1488=t,c1489=t,c1490=t,c1491=t,c1492=t,c1493=t,c1494=t,"
+ "c1495=t,c1496=t,c1497=t,c1498=t,c1499=t,c1500=t,c1501=t,c1502=t,c1503=t,c1504=t,c1505=t,c1506=t,c1507=t,c1508=t,c1509=t,c1510=t,c1511=t,c1512=t,c1513=t,c1514=t,c1515=t,c1516=t,c1517=t,c1518=t,c1519=t,c1520=t,c1521=t,c1522=t,c1523=t,c1524=t,c1525=t,c1526=t,c1527=t,c1528=t,c1529=t,c1530=t,c1531=t,c1532=t,c1533=t,c1534=t,c1535=t,c1536=t,c1537=t,c1538=t,c1539=t,c1540=t,c1541=t,c1542=t,c1543=t,c1544=t,c1545=t,c1546=t,c1547=t,c1548=t,c1549=t,c1550=t,c1551=t,c1552=t,c1553=t,c1554=t,c1555=t,c1556=t,c1557=t,c1558=t,c1559=t,c1560=t,c1561=t,c1562=t,c1563=t,c1564=t,c1565=t,c1566=t,c1567=t,c1568=t,c1569=t,c1570=t,c1571=t,c1572=t,c1573=t,c1574=t,c1575=t,c1576=t,c1577=t,c1578=t,c1579=t,c1580=t,c1581=t,c1582=t,c1583=t,c1584=t,c1585=t,c1586=t,c1587=t,c1588=t,c1589=t,c1590=t,c1591=t,c1592=t,c1593=t,c1594=t,c1595=t,c1596=t,c1597=t,c1598=t,c1599=t,c1600=t,c1601=t,c1602=t,c1603=t,c1604=t,c1605=t,c1606=t,c1607=t,c1608=t,c1609=t,c1610=t,c1611=t,c1612=t,c1613=t,c1614=t,c1615=t,c1616=t,c1617=t,c1618=t,c1619=t,c1620=t,c1621=t,c1622=t,c1623=t,c1624=t,c1625=t,c1626=t,c1627=t,c1628=t,c1629=t,c1630=t,c1631=t,c1632=t,c1633=t,c1634=t,c1635=t,c1636=t,c1637=t,c1638=t,c1639=t,c1640=t,c1641=t,"
+ "c1642=t,c1643=t,c1644=t,c1645=t,c1646=t,c1647=t,c1648=t,c1649=t,c1650=t,c1651=t,c1652=t,c1653=t,c1654=t,c1655=t,c1656=t,c1657=t,c1658=t,c1659=t,c1660=t,c1661=t,c1662=t,c1663=t,c1664=t,c1665=t,c1666=t,c1667=t,c1668=t,c1669=t,c1670=t,c1671=t,c1672=t,c1673=t,c1674=t,c1675=t,c1676=t,c1677=t,c1678=t,c1679=t,c1680=t,c1681=t,c1682=t,c1683=t,c1684=t,c1685=t,c1686=t,c1687=t,c1688=t,c1689=t,c1690=t,c1691=t,c1692=t,c1693=t,c1694=t,c1695=t,c1696=t,c1697=t,c1698=t,c1699=t,c1700=t,c1701=t,c1702=t,c1703=t,c1704=t,c1705=t,c1706=t,c1707=t,c1708=t,c1709=t,c1710=t,c1711=t,c1712=t,c1713=t,c1714=t,c1715=t,c1716=t,c1717=t,c1718=t,c1719=t,c1720=t,c1721=t,c1722=t,c1723=t,c1724=t,c1725=t,c1726=t,c1727=t,c1728=t,c1729=t,c1730=t,c1731=t,c1732=t,c1733=t,c1734=t,c1735=t,c1736=t,c1737=t,c1738=t,c1739=t,c1740=t,c1741=t,c1742=t,c1743=t,c1744=t,c1745=t,c1746=t,c1747=t,c1748=t,c1749=t,c1750=t,c1751=t,c1752=t,c1753=t,c1754=t,c1755=t,c1756=t,c1757=t,c1758=t,c1759=t,c1760=t,c1761=t,c1762=t,c1763=t,c1764=t,c1765=t,c1766=t,c1767=t,c1768=t,c1769=t,c1770=t,c1771=t,c1772=t,c1773=t,c1774=t,c1775=t,c1776=t,c1777=t,c1778=t,c1779=t,c1780=t,c1781=t,c1782=t,c1783=t,c1784=t,c1785=t,c1786=t,c1787=t,c1788=t,"
+ "c1789=t,c1790=t,c1791=t,c1792=t,c1793=t,c1794=t,c1795=t,c1796=t,c1797=t,c1798=t,c1799=t,c1800=t,c1801=t,c1802=t,c1803=t,c1804=t,c1805=t,c1806=t,c1807=t,c1808=t,c1809=t,c1810=t,c1811=t,c1812=t,c1813=t,c1814=t,c1815=t,"
+ "c1816=t,c1817=t,c1818=t,c1819=t,c1820=t,c1821=t,c1822=t,c1823=t,c1824=t,c1825=t,c1826=t,c1827=t,c1828=t,c1829=t,c1830=t,c1831=t,c1832=t,c1833=t,c1834=t,c1835=t,c1836=t,c1837=t,c1838=t,c1839=t,c1840=t,c1841=t,c1842=t,c1843=t,c1844=t,c1845=t,c1846=t,c1847=t,c1848=t,c1849=t,c1850=t,c1851=t,c1852=t,c1853=t,c1854=t,c1855=t,c1856=t,c1857=t,c1858=t,c1859=t,c1860=t,c1861=t,c1862=t,c1863=t,c1864=t,c1865=t,c1866=t,c1867=t,c1868=t,c1869=t,c1870=t,c1871=t,c1872=t,c1873=t,c1874=t,c1875=t,c1876=t,c1877=t,c1878=t,c1879=t,c1880=t,c1881=t,c1882=t,c1883=t,c1884=t,c1885=t,c1886=t,c1887=t,c1888=t,c1889=t,c1890=t,c1891=t,c1892=t,c1893=t,c1894=t,c1895=t,c1896=t,c1897=t,c1898=t,c1899=t,c1900=t,c1901=t,c1902=t,c1903=t,c1904=t,c1905=t,c1906=t,c1907=t,c1908=t,c1909=t,c1910=t,c1911=t,c1912=t,c1913=t,c1914=t,c1915=t,c1916=t,c1917=t,c1918=t,c1919=t,c1920=t,c1921=t,c1922=t,c1923=t,c1924=t,c1925=t,c1926=t,c1927=t,c1928=t,c1929=t,c1930=t,c1931=t,c1932=t,c1933=t,c1934=t,c1935=t,c1936=t,c1937=t,c1938=t,c1939=t,c1940=t,c1941=t,c1942=t,c1943=t,c1944=t,c1945=t,c1946=t,c1947=t,c1948=t,c1949=t,c1950=t,c1951=t,c1952=t,c1953=t,c1954=t,c1955=t,c1956=t,c1957=t,c1958=t,c1959=t,c1960=t,c1961=t,c1962=t,"
+ "c1963=t,c1964=t,c1965=t,c1966=t,c1967=t,c1968=t,c1969=t,c1970=t,c1971=t,c1972=t,c1973=t,c1974=t,c1975=t,c1976=t,c1977=t,c1978=t,c1979=t,c1980=t,c1981=t,c1982=t,c1983=t,c1984=t,c1985=t,c1986=t,c1987=t,c1988=t,c1989=t,c1990=t,c1991=t,c1992=t,c1993=t,c1994=t,c1995=t,c1996=t,c1997=t,c1998=t,c1999=t,c2000=t,c2001=t,c2002=t,c2003=t,c2004=t,c2005=t,c2006=t,c2007=t,c2008=t,c2009=t,c2010=t,c2011=t,c2012=t,c2013=t,c2014=t,c2015=t,c2016=t,c2017=t,c2018=t,c2019=t,c2020=t,c2021=t,c2022=t,c2023=t,c2024=t,c2025=t,c2026=t,c2027=t,c2028=t,c2029=t,c2030=t,c2031=t,c2032=t,c2033=t,c2034=t,c2035=t,c2036=t,c2037=t,c2038=t,c2039=t,c2040=t,c2041=t,c2042=t,c2043=t,c2044=t,c2045=t,c2046=t,c2047=t,c2048=t,c2049=t,c2050=t,c2051=t,c2052=t,c2053=t,c2054=t,c2055=t,c2056=t,c2057=t,c2058=t,c2059=t,c2060=t,c2061=t,c2062=t,c2063=t,c2064=t,c2065=t,c2066=t,c2067=t,c2068=t,c2069=t,c2070=t,c2071=t,c2072=t,c2073=t,c2074=t,c2075=t,c2076=t,c2077=t,c2078=t,c2079=t,c2080=t,c2081=t,c2082=t,c2083=t,c2084=t,c2085=t,c2086=t,c2087=t,c2088=t,c2089=t,c2090=t,c2091=t,c2092=t,c2093=t,c2094=t,c2095=t,c2096=t,c2097=t,c2098=t,c2099=t,c2100=t,c2101=t,c2102=t,c2103=t,c2104=t,c2105=t,c2106=t,c2107=t,c2108=t,c2109=t,"
+ "c2110=t,c2111=t,c2112=t,c2113=t,c2114=t,c2115=t,c2116=t,c2117=t,c2118=t,c2119=t,c2120=t,c2121=t,c2122=t,c2123=t,c2124=t,c2125=t,c2126=t,c2127=t,c2128=t,c2129=t,c2130=t,c2131=t,c2132=t,c2133=t,c2134=t,c2135=t,c2136=t,c2137=t,c2138=t,c2139=t,c2140=t,c2141=t,c2142=t,c2143=t,c2144=t,c2145=t,c2146=t,c2147=t,c2148=t,c2149=t,c2150=t,c2151=t,c2152=t,c2153=t,c2154=t,c2155=t,c2156=t,c2157=t,c2158=t,c2159=t,c2160=t,c2161=t,c2162=t,c2163=t,c2164=t,c2165=t,c2166=t,c2167=t,c2168=t,c2169=t,c2170=t,c2171=t,c2172=t,c2173=t,c2174=t,c2175=t,c2176=t,c2177=t,c2178=t,c2179=t,c2180=t,c2181=t,c2182=t,c2183=t,c2184=t,c2185=t,c2186=t,c2187=t,c2188=t,c2189=t,c2190=t,c2191=t,c2192=t,c2193=t,c2194=t,c2195=t,c2196=t,c2197=t,c2198=t,c2199=t,c2200=t,c2201=t,c2202=t,c2203=t,c2204=t,c2205=t,c2206=t,c2207=t,c2208=t,c2209=t,c2210=t,c2211=t,c2212=t,c2213=t,c2214=t,c2215=t,c2216=t,c2217=t,c2218=t,c2219=t,c2220=t,c2221=t,c2222=t,c2223=t,c2224=t,c2225=t,c2226=t,c2227=t,c2228=t,c2229=t,c2230=t,c2231=t,c2232=t,c2233=t,c2234=t,c2235=t,c2236=t,c2237=t,c2238=t,c2239=t,c2240=t,c2241=t,c2242=t,c2243=t,c2244=t,c2245=t,c2246=t,c2247=t,c2248=t,c2249=t,c2250=t,c2251=t,c2252=t,c2253=t,c2254=t,c2255=t,c2256=t,"
+ "c2257=t,c2258=t,c2259=t,c2260=t,c2261=t,c2262=t,c2263=t,c2264=t,c2265=t,c2266=t,c2267=t,c2268=t,c2269=t,c2270=t,c2271=t,c2272=t,c2273=t,c2274=t,c2275=t,c2276=t,c2277=t,c2278=t,c2279=t,c2280=t,c2281=t,c2282=t,c2283=t,"
+ "c2284=t,c2285=t,c2286=t,c2287=t,c2288=t,c2289=t,c2290=t,c2291=t,c2292=t,c2293=t,c2294=t,c2295=t,c2296=t,c2297=t,c2298=t,c2299=t,c2300=t,c2301=t,c2302=t,c2303=t,c2304=t,c2305=t,c2306=t,c2307=t,c2308=t,c2309=t,c2310=t,c2311=t,c2312=t,c2313=t,c2314=t,c2315=t,c2316=t,c2317=t,c2318=t,c2319=t,c2320=t,c2321=t,c2322=t,c2323=t,c2324=t,c2325=t,c2326=t,c2327=t,c2328=t,c2329=t,c2330=t,c2331=t,c2332=t,c2333=t,c2334=t,c2335=t,c2336=t,c2337=t,c2338=t,c2339=t,c2340=t,c2341=t,c2342=t,c2343=t,c2344=t,c2345=t,c2346=t,c2347=t,c2348=t,c2349=t,c2350=t,c2351=t,c2352=t,c2353=t,c2354=t,c2355=t,c2356=t,c2357=t,c2358=t,c2359=t,c2360=t,c2361=t,c2362=t,c2363=t,c2364=t,c2365=t,c2366=t,c2367=t,c2368=t,c2369=t,c2370=t,c2371=t,c2372=t,c2373=t,c2374=t,c2375=t,c2376=t,c2377=t,c2378=t,c2379=t,c2380=t,c2381=t,c2382=t,c2383=t,c2384=t,c2385=t,c2386=t,c2387=t,c2388=t,c2389=t,c2390=t,c2391=t,c2392=t,c2393=t,c2394=t,c2395=t,c2396=t,c2397=t,c2398=t,c2399=t,c2400=t,c2401=t,c2402=t,c2403=t,c2404=t,c2405=t,c2406=t,c2407=t,c2408=t,c2409=t,c2410=t,c2411=t,c2412=t,c2413=t,c2414=t,c2415=t,c2416=t,c2417=t,c2418=t,c2419=t,c2420=t,c2421=t,c2422=t,c2423=t,c2424=t,c2425=t,c2426=t,c2427=t,c2428=t,c2429=t,c2430=t,"
+ "c2431=t,c2432=t,c2433=t,c2434=t,c2435=t,c2436=t,c2437=t,c2438=t,c2439=t,c2440=t,c2441=t,c2442=t,c2443=t,c2444=t,c2445=t,c2446=t,c2447=t,c2448=t,c2449=t,c2450=t,c2451=t,c2452=t,c2453=t,c2454=t,c2455=t,c2456=t,c2457=t,c2458=t,c2459=t,c2460=t,c2461=t,c2462=t,c2463=t,c2464=t,c2465=t,c2466=t,c2467=t,c2468=t,c2469=t,c2470=t,c2471=t,c2472=t,c2473=t,c2474=t,c2475=t,c2476=t,c2477=t,c2478=t,c2479=t,c2480=t,c2481=t,c2482=t,c2483=t,c2484=t,c2485=t,c2486=t,c2487=t,c2488=t,c2489=t,c2490=t,c2491=t,c2492=t,c2493=t,c2494=t,c2495=t,c2496=t,c2497=t,c2498=t,c2499=t,c2500=t,c2501=t,c2502=t,c2503=t,c2504=t,c2505=t,c2506=t,c2507=t,c2508=t,c2509=t,c2510=t,c2511=t,c2512=t,c2513=t,c2514=t,c2515=t,c2516=t,c2517=t,c2518=t,c2519=t,c2520=t,c2521=t,c2522=t,c2523=t,c2524=t,c2525=t,c2526=t,c2527=t,c2528=t,c2529=t,c2530=t,c2531=t,c2532=t,c2533=t,c2534=t,c2535=t,c2536=t,c2537=t,c2538=t,c2539=t,c2540=t,c2541=t,c2542=t,c2543=t,c2544=t,c2545=t,c2546=t,c2547=t,c2548=t,c2549=t,c2550=t,c2551=t,c2552=t,c2553=t,c2554=t,c2555=t,c2556=t,c2557=t,c2558=t,c2559=t,c2560=t,c2561=t,c2562=t,c2563=t,c2564=t,c2565=t,c2566=t,c2567=t,c2568=t,c2569=t,c2570=t,c2571=t,c2572=t,c2573=t,c2574=t,c2575=t,c2576=t,c2577=t,"
+ "c2578=t,c2579=t,c2580=t,c2581=t,c2582=t,c2583=t,c2584=t,c2585=t,c2586=t,c2587=t,c2588=t,c2589=t,c2590=t,c2591=t,c2592=t,c2593=t,c2594=t,c2595=t,c2596=t,c2597=t,c2598=t,c2599=t,c2600=t,c2601=t,c2602=t,c2603=t,c2604=t,c2605=t,c2606=t,c2607=t,c2608=t,c2609=t,c2610=t,c2611=t,c2612=t,c2613=t,c2614=t,c2615=t,c2616=t,c2617=t,c2618=t,c2619=t,c2620=t,c2621=t,c2622=t,c2623=t,c2624=t,c2625=t,c2626=t,c2627=t,c2628=t,c2629=t,c2630=t,c2631=t,c2632=t,c2633=t,c2634=t,c2635=t,c2636=t,c2637=t,c2638=t,c2639=t,c2640=t,c2641=t,c2642=t,c2643=t,c2644=t,c2645=t,c2646=t,c2647=t,c2648=t,c2649=t,c2650=t,c2651=t,c2652=t,c2653=t,c2654=t,c2655=t,c2656=t,c2657=t,c2658=t,c2659=t,c2660=t,c2661=t,c2662=t,c2663=t,c2664=t,c2665=t,c2666=t,c2667=t,c2668=t,c2669=t,c2670=t,c2671=t,c2672=t,c2673=t,c2674=t,c2675=t,c2676=t,c2677=t,c2678=t,c2679=t,c2680=t,c2681=t,c2682=t,c2683=t,c2684=t,c2685=t,c2686=t,c2687=t,c2688=t,c2689=t,c2690=t,c2691=t,c2692=t,c2693=t,c2694=t,c2695=t,c2696=t,c2697=t,c2698=t,c2699=t,c2700=t,c2701=t,c2702=t,c2703=t,c2704=t,c2705=t,c2706=t,c2707=t,c2708=t,c2709=t,c2710=t,c2711=t,c2712=t,c2713=t,c2714=t,c2715=t,c2716=t,c2717=t,c2718=t,c2719=t,c2720=t,c2721=t,c2722=t,c2723=t,c2724=t,"
+ "c2725=t,c2726=t,c2727=t,c2728=t,c2729=t,c2730=t,c2731=t,c2732=t,c2733=t,c2734=t,c2735=t,c2736=t,c2737=t,c2738=t,c2739=t,c2740=t,c2741=t,c2742=t,c2743=t,c2744=t,c2745=t,c2746=t,c2747=t,c2748=t,c2749=t,c2750=t,c2751=t,c2752=t,c2753=t,c2754=t,c2755=t,c2756=t,c2757=t,c2758=t,c2759=t,c2760=t,c2761=t,c2762=t,c2763=t,c2764=t,c2765=t,c2766=t,c2767=t,c2768=t,c2769=t,c2770=t,c2771=t,c2772=t,c2773=t,c2774=t,c2775=t,c2776=t,c2777=t,c2778=t,c2779=t,c2780=t,c2781=t,c2782=t,c2783=t,c2784=t,c2785=t,c2786=t,c2787=t,c2788=t,c2789=t,c2790=t,c2791=t,c2792=t,c2793=t,c2794=t,c2795=t,c2796=t,c2797=t,c2798=t,c2799=t,c2800=t,c2801=t,c2802=t,c2803=t,c2804=t,c2805=t,c2806=t,c2807=t,c2808=t,c2809=t,c2810=t,c2811=t,c2812=t,c2813=t,c2814=t,c2815=t,c2816=t,c2817=t,c2818=t,c2819=t,c2820=t,c2821=t,c2822=t,c2823=t,c2824=t,c2825=t,c2826=t,c2827=t,c2828=t,c2829=t,c2830=t,c2831=t,c2832=t,c2833=t,c2834=t,c2835=t,c2836=t,c2837=t,c2838=t,c2839=t,c2840=t,c2841=t,c2842=t,c2843=t,c2844=t,c2845=t,c2846=t,c2847=t,c2848=t,c2849=t,c2850=t,c2851=t,c2852=t,c2853=t,c2854=t,c2855=t,c2856=t,c2857=t,c2858=t,c2859=t,c2860=t,c2861=t,c2862=t,c2863=t,c2864=t,c2865=t,c2866=t,c2867=t,c2868=t,c2869=t,c2870=t,c2871=t,"
+ "c2872=t,c2873=t,c2874=t,c2875=t,c2876=t,c2877=t,c2878=t,c2879=t,c2880=t,c2881=t,c2882=t,c2883=t,c2884=t,c2885=t,c2886=t,c2887=t,c2888=t,c2889=t,c2890=t,c2891=t,c2892=t,c2893=t,c2894=t,c2895=t,c2896=t,c2897=t,c2898=t,c2899=t,c2900=t,c2901=t,c2902=t,c2903=t,c2904=t,c2905=t,c2906=t,c2907=t,c2908=t,c2909=t,c2910=t,c2911=t,c2912=t,c2913=t,c2914=t,c2915=t,c2916=t,c2917=t,c2918=t,c2919=t,c2920=t,c2921=t,c2922=t,c2923=t,c2924=t,c2925=t,c2926=t,c2927=t,c2928=t,c2929=t,c2930=t,c2931=t,c2932=t,c2933=t,c2934=t,c2935=t,c2936=t,c2937=t,c2938=t,c2939=t,c2940=t,c2941=t,c2942=t,c2943=t,c2944=t,c2945=t,c2946=t,c2947=t,c2948=t,c2949=t,c2950=t,c2951=t,c2952=t,c2953=t,c2954=t,c2955=t,c2956=t,c2957=t,c2958=t,c2959=t,c2960=t,c2961=t,c2962=t,c2963=t,c2964=t,c2965=t,c2966=t,c2967=t,c2968=t,c2969=t,c2970=t,c2971=t,c2972=t,c2973=t,c2974=t,c2975=t,c2976=t,c2977=t,c2978=t,c2979=t,c2980=t,c2981=t,c2982=t,c2983=t,c2984=t,c2985=t,c2986=t,c2987=t,c2988=t,c2989=t,c2990=t,c2991=t,c2992=t,c2993=t,c2994=t,c2995=t,c2996=t,c2997=t,c2998=t,c2999=t,c3000=t,c3001=t,c3002=t,c3003=t,c3004=t,c3005=t,c3006=t,c3007=t,c3008=t,c3009=t,c3010=t,c3011=t,c3012=t,c3013=t,c3014=t,c3015=t,c3016=t,c3017=t,c3018=t,"
+ "c3019=t,c3020=t,c3021=t,c3022=t,c3023=t,c3024=t,c3025=t,c3026=t,c3027=t,c3028=t,c3029=t,c3030=t,c3031=t,c3032=t,c3033=t,c3034=t,c3035=t,c3036=t,c3037=t,c3038=t,c3039=t,c3040=t,c3041=t,c3042=t,c3043=t,c3044=t,c3045=t,c3046=t,c3047=t,c3048=t,c3049=t,c3050=t,c3051=t,c3052=t,c3053=t,c3054=t,c3055=t,c3056=t,c3057=t,c3058=t,c3059=t,c3060=t,c3061=t,c3062=t,c3063=t,c3064=t,c3065=t,c3066=t,c3067=t,c3068=t,c3069=t,c3070=t,c3071=t,c3072=t,c3073=t,c3074=t,c3075=t,c3076=t,c3077=t,c3078=t,c3079=t,c3080=t,c3081=t,c3082=t,c3083=t,c3084=t,c3085=t,c3086=t,c3087=t,c3088=t,c3089=t,c3090=t,c3091=t,c3092=t,c3093=t,c3094=t,c3095=t,c3096=t,c3097=t,c3098=t,c3099=t,c3100=t,c3101=t,c3102=t,c3103=t,c3104=t,c3105=t,c3106=t,c3107=t,c3108=t,c3109=t,c3110=t,c3111=t,c3112=t,c3113=t,c3114=t,c3115=t,c3116=t,c3117=t,c3118=t,c3119=t,c3120=t,c3121=t,c3122=t,c3123=t,c3124=t,c3125=t,c3126=t,c3127=t,c3128=t,c3129=t,c3130=t,c3131=t,c3132=t,c3133=t,c3134=t,c3135=t,c3136=t,c3137=t,c3138=t,c3139=t,c3140=t,c3141=t,c3142=t,c3143=t,c3144=t,c3145=t,c3146=t,c3147=t,c3148=t,c3149=t,c3150=t,c3151=t,c3152=t,c3153=t,c3154=t,c3155=t,c3156=t,c3157=t,c3158=t,c3159=t,c3160=t,c3161=t,c3162=t,c3163=t,c3164=t,c3165=t,"
+ "c3166=t,c3167=t,c3168=t,c3169=t,c3170=t,c3171=t,c3172=t,c3173=t,c3174=t,c3175=t,c3176=t,c3177=t,c3178=t,c3179=t,c3180=t,c3181=t,c3182=t,c3183=t,c3184=t,c3185=t,c3186=t,c3187=t,c3188=t,c3189=t,c3190=t,c3191=t,c3192=t,c3193=t,c3194=t,c3195=t,c3196=t,c3197=t,c3198=t,c3199=t,c3200=t,c3201=t,c3202=t,c3203=t,c3204=t,c3205=t,c3206=t,c3207=t,c3208=t,c3209=t,c3210=t,c3211=t,c3212=t,c3213=t,c3214=t,c3215=t,c3216=t,c3217=t,c3218=t,c3219=t,c3220=t,c3221=t,c3222=t,c3223=t,c3224=t,c3225=t,c3226=t,c3227=t,c3228=t,c3229=t,c3230=t,c3231=t,c3232=t,c3233=t,c3234=t,c3235=t,c3236=t,c3237=t,c3238=t,c3239=t,c3240=t,c3241=t,c3242=t,c3243=t,c3244=t,c3245=t,c3246=t,c3247=t,c3248=t,c3249=t,c3250=t,c3251=t,c3252=t,c3253=t,c3254=t,c3255=t,c3256=t,c3257=t,c3258=t,c3259=t,c3260=t,c3261=t,c3262=t,c3263=t,c3264=t,c3265=t,c3266=t,c3267=t,c3268=t,c3269=t,c3270=t,c3271=t,c3272=t,c3273=t,c3274=t,c3275=t,c3276=t,c3277=t,c3278=t,c3279=t,c3280=t,c3281=t,c3282=t,c3283=t,c3284=t,c3285=t,c3286=t,c3287=t,c3288=t,c3289=t,c3290=t,c3291=t,c3292=t,c3293=t,c3294=t,c3295=t,c3296=t,c3297=t,c3298=t,c3299=t,c3300=t,c3301=t,c3302=t,c3303=t,c3304=t,c3305=t,c3306=t,c3307=t,c3308=t,c3309=t,c3310=t,c3311=t,c3312=t,"
+ "c3313=t,c3314=t,c3315=t,c3316=t,c3317=t,c3318=t,c3319=t,c3320=t,c3321=t,c3322=t,c3323=t,c3324=t,c3325=t,c3326=t,c3327=t,c3328=t,c3329=t,c3330=t,c3331=t,c3332=t,c3333=t,c3334=t,c3335=t,c3336=t,c3337=t,c3338=t,c3339=t,c3340=t,c3341=t,c3342=t,c3343=t,c3344=t,c3345=t,c3346=t,c3347=t,c3348=t,c3349=t,c3350=t,c3351=t,c3352=t,c3353=t,c3354=t,c3355=t,c3356=t,c3357=t,c3358=t,c3359=t,c3360=t,c3361=t,c3362=t,c3363=t,c3364=t,c3365=t,c3366=t,c3367=t,c3368=t,c3369=t,c3370=t,c3371=t,c3372=t,c3373=t,c3374=t,c3375=t,c3376=t,c3377=t,c3378=t,c3379=t,c3380=t,c3381=t,c3382=t,c3383=t,c3384=t,c3385=t,c3386=t,c3387=t,c3388=t,c3389=t,c3390=t,c3391=t,c3392=t,c3393=t,c3394=t,c3395=t,c3396=t,c3397=t,c3398=t,c3399=t,c3400=t,c3401=t,c3402=t,c3403=t,c3404=t,c3405=t,c3406=t,c3407=t,c3408=t,c3409=t,c3410=t,c3411=t,c3412=t,c3413=t,c3414=t,c3415=t,c3416=t,c3417=t,c3418=t,c3419=t,c3420=t,c3421=t,c3422=t,c3423=t,c3424=t,c3425=t,c3426=t,c3427=t,c3428=t,c3429=t,c3430=t,c3431=t,c3432=t,c3433=t,c3434=t,c3435=t,c3436=t,c3437=t,c3438=t,c3439=t,c3440=t,c3441=t,c3442=t,c3443=t,c3444=t,c3445=t,c3446=t,c3447=t,c3448=t,c3449=t,c3450=t,c3451=t,c3452=t,c3453=t,c3454=t,c3455=t,c3456=t,c3457=t,c3458=t,c3459=t,"
+ "c3460=t,c3461=t,c3462=t,c3463=t,c3464=t,c3465=t,c3466=t,c3467=t,c3468=t,c3469=t,c3470=t,c3471=t,c3472=t,c3473=t,c3474=t,c3475=t,c3476=t,c3477=t,c3478=t,c3479=t,c3480=t,c3481=t,c3482=t,c3483=t,c3484=t,c3485=t,c3486=t,c3487=t,c3488=t,c3489=t,c3490=t,c3491=t,c3492=t,c3493=t,c3494=t,c3495=t,c3496=t,c3497=t,c3498=t,c3499=t,c3500=t,c3501=t,c3502=t,c3503=t,c3504=t,c3505=t,c3506=t,c3507=t,c3508=t,c3509=t,c3510=t,c3511=t,c3512=t,c3513=t,"
+ "c3514=t,c3515=t,c3516=t,c3517=t,c3518=t,c3519=t,c3520=t,c3521=t,c3522=t,c3523=t,c3524=t,c3525=t,c3526=t,c3527=t,c3528=t,c3529=t,c3530=t,c3531=t,c3532=t,c3533=t,c3534=t,c3535=t,c3536=t,c3537=t,c3538=t,c3539=t,c3540=t,c3541=t,c3542=t,c3543=t,c3544=t,c3545=t,c3546=t,c3547=t,c3548=t,c3549=t,c3550=t,c3551=t,c3552=t,c3553=t,c3554=t,c3555=t,c3556=t,c3557=t,c3558=t,c3559=t,c3560=t,c3561=t,c3562=t,c3563=t,c3564=t,c3565=t,c3566=t,c3567=t,c3568=t,c3569=t,c3570=t,c3571=t,c3572=t,c3573=t,c3574=t,c3575=t,c3576=t,c3577=t,c3578=t,c3579=t,c3580=t,c3581=t,c3582=t,c3583=t,c3584=t,c3585=t,c3586=t,c3587=t,c3588=t,c3589=t,c3590=t,c3591=t,c3592=t,c3593=t,c3594=t,c3595=t,c3596=t,c3597=t,c3598=t,c3599=t,c3600=t,c3601=t,c3602=t,c3603=t,c3604=t,c3605=t,c3606=t,c3607=t,c3608=t,c3609=t,c3610=t,c3611=t,c3612=t,c3613=t,c3614=t,c3615=t,c3616=t,c3617=t,c3618=t,c3619=t,c3620=t,c3621=t,c3622=t,c3623=t,c3624=t,c3625=t,c3626=t,c3627=t,c3628=t,c3629=t,c3630=t,c3631=t,c3632=t,c3633=t,c3634=t,c3635=t,c3636=t,c3637=t,c3638=t,c3639=t,c3640=t,c3641=t,c3642=t,c3643=t,c3644=t,c3645=t,c3646=t,c3647=t,c3648=t,c3649=t,c3650=t,c3651=t,c3652=t,c3653=t,c3654=t,c3655=t,c3656=t,c3657=t,c3658=t,c3659=t,c3660=t,"
+ "c3661=t,c3662=t,c3663=t,c3664=t,c3665=t,c3666=t,c3667=t,c3668=t,c3669=t,c3670=t,c3671=t,c3672=t,c3673=t,c3674=t,c3675=t,c3676=t,c3677=t,c3678=t,c3679=t,c3680=t,c3681=t,c3682=t,c3683=t,c3684=t,c3685=t,c3686=t,c3687=t,c3688=t,c3689=t,c3690=t,c3691=t,c3692=t,c3693=t,c3694=t,c3695=t,c3696=t,c3697=t,c3698=t,c3699=t,c3700=t,c3701=t,c3702=t,c3703=t,c3704=t,c3705=t,c3706=t,c3707=t,c3708=t,c3709=t,c3710=t,c3711=t,c3712=t,c3713=t,c3714=t,c3715=t,c3716=t,c3717=t,c3718=t,c3719=t,c3720=t,c3721=t,c3722=t,c3723=t,c3724=t,c3725=t,c3726=t,c3727=t,c3728=t,c3729=t,c3730=t,c3731=t,c3732=t,c3733=t,c3734=t,c3735=t,c3736=t,c3737=t,c3738=t,c3739=t,c3740=t,c3741=t,c3742=t,c3743=t,c3744=t,c3745=t,c3746=t,c3747=t,c3748=t,c3749=t,c3750=t,c3751=t,c3752=t,c3753=t,c3754=t,c3755=t,c3756=t,c3757=t,c3758=t,c3759=t,c3760=t,c3761=t,c3762=t,c3763=t,c3764=t,c3765=t,c3766=t,c3767=t,c3768=t,c3769=t,c3770=t,c3771=t,c3772=t,c3773=t,c3774=t,c3775=t,c3776=t,c3777=t,c3778=t,c3779=t,c3780=t,c3781=t,c3782=t,c3783=t,c3784=t,c3785=t,c3786=t,c3787=t,c3788=t,c3789=t,c3790=t,c3791=t,c3792=t,c3793=t,c3794=t,c3795=t,c3796=t,c3797=t,c3798=t,c3799=t,c3800=t,c3801=t,c3802=t,c3803=t,c3804=t,c3805=t,c3806=t,c3807=t,"
+ "c3808=t,c3809=t,c3810=t,c3811=t,c3812=t,c3813=t,c3814=t,c3815=t,c3816=t,c3817=t,c3818=t,c3819=t,c3820=t,c3821=t,c3822=t,c3823=t,c3824=t,c3825=t,c3826=t,c3827=t,c3828=t,c3829=t,c3830=t,c3831=t,c3832=t,c3833=t,c3834=t,c3835=t,c3836=t,c3837=t,c3838=t,c3839=t,c3840=t,c3841=t,c3842=t,c3843=t,c3844=t,c3845=t,c3846=t,c3847=t,c3848=t,c3849=t,c3850=t,c3851=t,c3852=t,c3853=t,c3854=t,c3855=t,c3856=t,c3857=t,c3858=t,c3859=t,c3860=t,c3861=t,c3862=t,c3863=t,c3864=t,c3865=t,c3866=t,c3867=t,c3868=t,c3869=t,c3870=t,c3871=t,c3872=t,c3873=t,c3874=t,c3875=t,c3876=t,c3877=t,c3878=t,c3879=t,c3880=t,c3881=t,c3882=t,c3883=t,c3884=t,c3885=t,c3886=t,c3887=t,c3888=t,c3889=t,c3890=t,c3891=t,c3892=t,c3893=t,c3894=t,c3895=t,c3896=t,c3897=t,c3898=t,c3899=t,c3900=t,c3901=t,c3902=t,c3903=t,c3904=t,c3905=t,c3906=t,c3907=t,c3908=t,c3909=t,c3910=t,c3911=t,c3912=t,c3913=t,c3914=t,c3915=t,c3916=t,c3917=t,c3918=t,c3919=t,c3920=t,c3921=t,c3922=t,c3923=t,c3924=t,c3925=t,c3926=t,c3927=t,c3928=t,c3929=t,c3930=t,c3931=t,c3932=t,c3933=t,c3934=t,c3935=t,c3936=t,c3937=t,c3938=t,c3939=t,c3940=t,c3941=t,c3942=t,c3943=t,c3944=t,c3945=t,c3946=t,c3947=t,c3948=t,c3949=t,c3950=t,c3951=t,c3952=t,c3953=t,c3954=t,"
+ "c3955=t,c3956=t,c3957=t,c3958=t,c3959=t,c3960=t,c3961=t,c3962=t,c3963=t,c3964=t,c3965=t,c3966=t,c3967=t,c3968=t,c3969=t,c3970=t,c3971=t,c3972=t,c3973=t,c3974=t,c3975=t,c3976=t,c3977=t,c3978=t,c3979=t,c3980=t,c3981=t,c3982=t,c3983=t,c3984=t,c3985=t,c3986=t,c3987=t,c3988=t,c3989=t,c3990=t,c3991=t,c3992=t,c3993=t,c3994=t,c3995=t,c3996=t,c3997=t,c3998=t,c3999=t,c4000=t,c4001=t,c4002=t,c4003=t,c4004=t,c4005=t,c4006=t,c4007=t,c4008=t,c4009=t,c4010=t,c4011=t,c4012=t,c4013=t,c4014=t,c4015=t,c4016=t,c4017=t,c4018=t,c4019=t,c4020=t,c4021=t,c4022=t,c4023=t,c4024=t,c4025=t,c4026=t,c4027=t,c4028=t,c4029=t,c4030=t,c4031=t,c4032=t,c4033=t,c4034=t,c4035=t,c4036=t,c4037=t,c4038=t,c4039=t,c4040=t,c4041=t,c4042=t,c4043=t,c4044=t,c4045=t,c4046=t,c4047=t,c4048=t,c4049=t,c4050=t,c4051=t,c4052=t,c4053=t,c4054=t,c4055=t,c4056=t,c4057=t,c4058=t,c4059=t,c4060=t,c4061=t,c4062=t,c4063=t,c4064=t,c4065=t,c4066=t,c4067=t,c4068=t,c4069=t,c4070=t,c4071=t,c4072=t,c4073=t,c4074=t,c4075=t,c4076=t,c4077=t,c4078=t,c4079=t,c4080=t,c4081=t,c4082=t,c4083=t,c4084=t,c4085=t,c4086=t,c4087=t,c4088=t,c4089=t,c4090=t,c4091=t,c4092=t,c4093=t 1626006833640000000"
+ };
+
+ int ret = TSDB_CODE_SUCCESS;
+ for (int i = 0; i < sizeof(sql) / sizeof(sql[0]); i++) {
+ ret = smlParseInfluxLine(info, sql[i]);
+ if (ret != TSDB_CODE_SUCCESS) break;
+ }
+ ASSERT_NE(ret, 0);
+ smlDestroyInfo(info);
+}
diff --git a/source/common/src/tdatablock.c b/source/common/src/tdatablock.c
index c65e966046..c7f372f17b 100644
--- a/source/common/src/tdatablock.c
+++ b/source/common/src/tdatablock.c
@@ -1228,6 +1228,7 @@ void blockDataFreeRes(SSDataBlock* pBlock) {
}
taosArrayDestroy(pBlock->pDataBlock);
+ pBlock->pDataBlock = NULL;
taosMemoryFreeClear(pBlock->pBlockAgg);
memset(&pBlock->info, 0, sizeof(SDataBlockInfo));
}
@@ -1706,8 +1707,8 @@ static char* formatTimestamp(char* buf, int64_t val, int precision) {
}
void blockDebugShowDataBlock(SSDataBlock* pBlock, const char* flag) {
- SArray* dataBlocks = taosArrayInit(1, sizeof(SSDataBlock));
- taosArrayPush(dataBlocks, pBlock);
+ SArray* dataBlocks = taosArrayInit(1, sizeof(SSDataBlock*));
+ taosArrayPush(dataBlocks, &pBlock);
blockDebugShowDataBlocks(dataBlocks, flag);
taosArrayDestroy(dataBlocks);
}
diff --git a/source/common/src/tdataformat.c b/source/common/src/tdataformat.c
index 8eeab77a15..b40f449a05 100644
--- a/source/common/src/tdataformat.c
+++ b/source/common/src/tdataformat.c
@@ -1064,6 +1064,26 @@ _err:
return code;
}
+void tTagSetCid(const STag *pTag, int16_t iTag, int16_t cid) {
+ uint8_t *p = NULL;
+ int8_t isLarge = pTag->flags & TD_TAG_LARGE;
+ int16_t offset = 0;
+
+ if (isLarge) {
+ p = (uint8_t *)&((int16_t *)pTag->idx)[pTag->nTag];
+ } else {
+ p = (uint8_t *)&pTag->idx[pTag->nTag];
+ }
+
+ if (isLarge) {
+ offset = ((int16_t *)pTag->idx)[iTag];
+ } else {
+ offset = pTag->idx[iTag];
+ }
+
+ tPutI16v(p + offset, cid);
+}
+
#if 1 // ===================================================================================================================
int tdInitTSchemaBuilder(STSchemaBuilder *pBuilder, schema_ver_t version) {
if (pBuilder == NULL) return -1;
diff --git a/source/common/src/tmsg.c b/source/common/src/tmsg.c
index 058f26d145..8dc4931573 100644
--- a/source/common/src/tmsg.c
+++ b/source/common/src/tmsg.c
@@ -5675,7 +5675,7 @@ void tFreeSMCreateStbRsp(SMCreateStbRsp *pRsp) {
int32_t tEncodeSTqOffsetVal(SEncoder *pEncoder, const STqOffsetVal *pOffsetVal) {
if (tEncodeI8(pEncoder, pOffsetVal->type) < 0) return -1;
- if (pOffsetVal->type == TMQ_OFFSET__SNAPSHOT_DATA) {
+ if (pOffsetVal->type == TMQ_OFFSET__SNAPSHOT_DATA || pOffsetVal->type == TMQ_OFFSET__SNAPSHOT_META) {
if (tEncodeI64(pEncoder, pOffsetVal->uid) < 0) return -1;
if (tEncodeI64(pEncoder, pOffsetVal->ts) < 0) return -1;
} else if (pOffsetVal->type == TMQ_OFFSET__LOG) {
@@ -5690,7 +5690,7 @@ int32_t tEncodeSTqOffsetVal(SEncoder *pEncoder, const STqOffsetVal *pOffsetVal)
int32_t tDecodeSTqOffsetVal(SDecoder *pDecoder, STqOffsetVal *pOffsetVal) {
if (tDecodeI8(pDecoder, &pOffsetVal->type) < 0) return -1;
- if (pOffsetVal->type == TMQ_OFFSET__SNAPSHOT_DATA) {
+ if (pOffsetVal->type == TMQ_OFFSET__SNAPSHOT_DATA || pOffsetVal->type == TMQ_OFFSET__SNAPSHOT_META) {
if (tDecodeI64(pDecoder, &pOffsetVal->uid) < 0) return -1;
if (tDecodeI64(pDecoder, &pOffsetVal->ts) < 0) return -1;
} else if (pOffsetVal->type == TMQ_OFFSET__LOG) {
@@ -5712,10 +5712,8 @@ int32_t tFormatOffset(char *buf, int32_t maxLen, const STqOffsetVal *pVal) {
snprintf(buf, maxLen, "offset(reset to latest)");
} else if (pVal->type == TMQ_OFFSET__LOG) {
snprintf(buf, maxLen, "offset(log) ver:%" PRId64, pVal->version);
- } else if (pVal->type == TMQ_OFFSET__SNAPSHOT_DATA) {
+ } else if (pVal->type == TMQ_OFFSET__SNAPSHOT_DATA || pVal->type == TMQ_OFFSET__SNAPSHOT_META) {
snprintf(buf, maxLen, "offset(ss data) uid:%" PRId64 ", ts:%" PRId64, pVal->uid, pVal->ts);
- } else if (pVal->type == TMQ_OFFSET__SNAPSHOT_META) {
- snprintf(buf, maxLen, "offset(ss meta) uid:%" PRId64 ", ts:%" PRId64, pVal->uid, pVal->ts);
} else {
ASSERT(0);
}
@@ -5729,9 +5727,7 @@ bool tOffsetEqual(const STqOffsetVal *pLeft, const STqOffsetVal *pRight) {
} else if (pLeft->type == TMQ_OFFSET__SNAPSHOT_DATA) {
return pLeft->uid == pRight->uid && pLeft->ts == pRight->ts;
} else if (pLeft->type == TMQ_OFFSET__SNAPSHOT_META) {
- ASSERT(0);
- // TODO
- return pLeft->uid == pRight->uid && pLeft->ts == pRight->ts;
+ return pLeft->uid == pRight->uid;
} else {
ASSERT(0);
/*ASSERT(pLeft->type == TMQ_OFFSET__RESET_NONE || pLeft->type == TMQ_OFFSET__RESET_EARLIEAST ||*/
@@ -5816,6 +5812,21 @@ int32_t tDecodeDeleteRes(SDecoder *pCoder, SDeleteRes *pRes) {
if (tDecodeCStrTo(pCoder, pRes->tsColName) < 0) return -1;
return 0;
}
+
+int32_t tEncodeSMqMetaRsp(SEncoder* pEncoder, const SMqMetaRsp* pRsp) {
+ if (tEncodeSTqOffsetVal(pEncoder, &pRsp->rspOffset) < 0) return -1;
+ if (tEncodeI16(pEncoder, pRsp->resMsgType) < 0) return -1;
+ if (tEncodeBinary(pEncoder, pRsp->metaRsp, pRsp->metaRspLen) < 0) return -1;
+ return 0;
+}
+
+int32_t tDecodeSMqMetaRsp(SDecoder* pDecoder, SMqMetaRsp* pRsp) {
+ if (tDecodeSTqOffsetVal(pDecoder, &pRsp->rspOffset) < 0) return -1;
+ if (tDecodeI16(pDecoder, &pRsp->resMsgType) < 0) return -1;
+ if (tDecodeBinaryAlloc(pDecoder, &pRsp->metaRsp, (uint64_t*)&pRsp->metaRspLen) < 0) return -1;
+ return 0;
+}
+
int32_t tEncodeSMqDataRsp(SEncoder *pEncoder, const SMqDataRsp *pRsp) {
if (tEncodeSTqOffsetVal(pEncoder, &pRsp->reqOffset) < 0) return -1;
if (tEncodeSTqOffsetVal(pEncoder, &pRsp->rspOffset) < 0) return -1;
diff --git a/source/dnode/mnode/impl/src/mndMnode.c b/source/dnode/mnode/impl/src/mndMnode.c
index 4f07d9e014..71bda4d4f3 100644
--- a/source/dnode/mnode/impl/src/mndMnode.c
+++ b/source/dnode/mnode/impl/src/mndMnode.c
@@ -89,14 +89,14 @@ static int32_t mndCreateDefaultMnode(SMnode *pMnode) {
if (pRaw == NULL) return -1;
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
- mDebug("mnode:%d, will be created when deploying, raw:%p", mnodeObj.id, pRaw);
+ mInfo("mnode:%d, will be created when deploying, raw:%p", mnodeObj.id, pRaw);
STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, NULL);
if (pTrans == NULL) {
mError("mnode:%d, failed to create since %s", mnodeObj.id, terrstr());
return -1;
}
- mDebug("trans:%d, used to create mnode:%d", pTrans->id, mnodeObj.id);
+ mInfo("trans:%d, used to create mnode:%d", pTrans->id, mnodeObj.id);
if (mndTransAppendCommitlog(pTrans, pRaw) != 0) {
mError("trans:%d, failed to append commit log since %s", pTrans->id, terrstr());
@@ -365,7 +365,7 @@ static int32_t mndCreateMnode(SMnode *pMnode, SRpcMsg *pReq, SDnodeObj *pDnode,
STrans *pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, pReq);
if (pTrans == NULL) goto _OVER;
mndTransSetSerial(pTrans);
- mDebug("trans:%d, used to create mnode:%d", pTrans->id, pCreate->dnodeId);
+ mInfo("trans:%d, used to create mnode:%d", pTrans->id, pCreate->dnodeId);
if (mndSetCreateMnodeRedoLogs(pMnode, pTrans, &mnodeObj) != 0) goto _OVER;
if (mndSetCreateMnodeCommitLogs(pMnode, pTrans, &mnodeObj) != 0) goto _OVER;
@@ -392,7 +392,7 @@ static int32_t mndProcessCreateMnodeReq(SRpcMsg *pReq) {
goto _OVER;
}
- mDebug("mnode:%d, start to create", createReq.dnodeId);
+ mInfo("mnode:%d, start to create", createReq.dnodeId);
if (mndCheckOperPrivilege(pMnode, pReq->info.conn.user, MND_OPER_CREATE_MNODE) != 0) {
goto _OVER;
}
@@ -574,7 +574,7 @@ static int32_t mndDropMnode(SMnode *pMnode, SRpcMsg *pReq, SMnodeObj *pObj) {
pTrans = mndTransCreate(pMnode, TRN_POLICY_RETRY, TRN_CONFLICT_GLOBAL, pReq);
if (pTrans == NULL) goto _OVER;
mndTransSetSerial(pTrans);
- mDebug("trans:%d, used to drop mnode:%d", pTrans->id, pObj->id);
+ mInfo("trans:%d, used to drop mnode:%d", pTrans->id, pObj->id);
if (mndSetDropMnodeInfoToTrans(pMnode, pTrans, pObj) != 0) goto _OVER;
if (mndTransPrepare(pMnode, pTrans) != 0) goto _OVER;
@@ -597,7 +597,7 @@ static int32_t mndProcessDropMnodeReq(SRpcMsg *pReq) {
goto _OVER;
}
- mDebug("mnode:%d, start to drop", dropReq.dnodeId);
+ mInfo("mnode:%d, start to drop", dropReq.dnodeId);
if (mndCheckOperPrivilege(pMnode, pReq->info.conn.user, MND_OPER_DROP_MNODE) != 0) {
goto _OVER;
}
@@ -732,7 +732,7 @@ static int32_t mndProcessAlterMnodeReq(SRpcMsg *pReq) {
}
}
- mTrace("trans:-1, sync reconfig will be proposed");
+ mInfo("trans:-1, sync reconfig will be proposed");
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
pMgmt->standby = 0;
diff --git a/source/dnode/mnode/impl/src/mndStb.c b/source/dnode/mnode/impl/src/mndStb.c
index 81ede6de90..dc8285740a 100644
--- a/source/dnode/mnode/impl/src/mndStb.c
+++ b/source/dnode/mnode/impl/src/mndStb.c
@@ -536,7 +536,7 @@ int32_t mndCheckCreateStbReq(SMCreateStbReq *pCreate) {
return -1;
}
- if (pCreate->numOfColumns < TSDB_MIN_COLUMNS || pCreate->numOfColumns > TSDB_MAX_COLUMNS) {
+ if (pCreate->numOfColumns < TSDB_MIN_COLUMNS || pCreate->numOfTags + pCreate->numOfColumns > TSDB_MAX_COLUMNS) {
terrno = TSDB_CODE_PAR_INVALID_COLUMNS_NUM;
return -1;
}
diff --git a/source/dnode/mnode/impl/src/mndSync.c b/source/dnode/mnode/impl/src/mndSync.c
index b7129cf56e..e8b75e6a94 100644
--- a/source/dnode/mnode/impl/src/mndSync.c
+++ b/source/dnode/mnode/impl/src/mndSync.c
@@ -50,7 +50,7 @@ void mndSyncCommitMsg(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbM
int32_t transId = sdbGetIdFromRaw(pMnode->pSdb, pRaw);
pMgmt->errCode = cbMeta.code;
- mDebug("trans:%d, is proposed, saved:%d code:0x%x, apply index:%" PRId64 " term:%" PRIu64 " config:%" PRId64
+ mInfo("trans:%d, is proposed, saved:%d code:0x%x, apply index:%" PRId64 " term:%" PRIu64 " config:%" PRId64
" role:%s raw:%p",
transId, pMgmt->transId, cbMeta.code, cbMeta.index, cbMeta.term, cbMeta.lastConfigIndex, syncStr(cbMeta.state),
pRaw);
@@ -68,7 +68,7 @@ void mndSyncCommitMsg(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbM
if (pMgmt->errCode != 0) {
mError("trans:%d, failed to propose since %s, post sem", transId, tstrerror(pMgmt->errCode));
} else {
- mDebug("trans:%d, is proposed and post sem", transId, tstrerror(pMgmt->errCode));
+ mInfo("trans:%d, is proposed and post sem", transId);
}
pMgmt->transId = 0;
taosWUnLockLatch(&pMgmt->lock);
@@ -88,7 +88,7 @@ void mndSyncCommitMsg(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbM
}
int32_t mndSyncGetSnapshot(struct SSyncFSM *pFsm, SSnapshot *pSnapshot, void *pReaderParam, void **ppReader) {
- mDebug("start to read snapshot from sdb in atomic way");
+ mInfo("start to read snapshot from sdb in atomic way");
SMnode *pMnode = pFsm->data;
return sdbStartRead(pMnode->pSdb, (SSdbIter **)ppReader, &pSnapshot->lastApplyIndex, &pSnapshot->lastApplyTerm,
&pSnapshot->lastConfigIndex);
@@ -118,7 +118,7 @@ void mndReConfig(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SReConfigCbMeta cbM
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
pMgmt->errCode = cbMeta.code;
- mDebug("trans:-1, sync reconfig is proposed, saved:%d code:0x%x, index:%" PRId64 " term:%" PRId64, pMgmt->transId,
+ mInfo("trans:-1, sync reconfig is proposed, saved:%d code:0x%x, index:%" PRId64 " term:%" PRId64, pMgmt->transId,
cbMeta.code, cbMeta.index, cbMeta.term);
taosWLockLatch(&pMgmt->lock);
@@ -126,7 +126,7 @@ void mndReConfig(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SReConfigCbMeta cbM
if (pMgmt->errCode != 0) {
mError("trans:-1, failed to propose sync reconfig since %s, post sem", tstrerror(pMgmt->errCode));
} else {
- mDebug("trans:-1, sync reconfig is proposed, saved:%d code:0x%x, index:%" PRId64 " term:%" PRId64 " post sem",
+ mInfo("trans:-1, sync reconfig is proposed, saved:%d code:0x%x, index:%" PRId64 " term:%" PRId64 " post sem",
pMgmt->transId, cbMeta.code, cbMeta.index, cbMeta.term);
}
pMgmt->transId = 0;
@@ -136,13 +136,13 @@ void mndReConfig(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SReConfigCbMeta cbM
}
int32_t mndSnapshotStartRead(struct SSyncFSM *pFsm, void *pParam, void **ppReader) {
- mDebug("start to read snapshot from sdb");
+ mInfo("start to read snapshot from sdb");
SMnode *pMnode = pFsm->data;
return sdbStartRead(pMnode->pSdb, (SSdbIter **)ppReader, NULL, NULL, NULL);
}
int32_t mndSnapshotStopRead(struct SSyncFSM *pFsm, void *pReader) {
- mDebug("stop to read snapshot from sdb");
+ mInfo("stop to read snapshot from sdb");
SMnode *pMnode = pFsm->data;
return sdbStopRead(pMnode->pSdb, pReader);
}
@@ -174,12 +174,12 @@ int32_t mndSnapshotDoWrite(struct SSyncFSM *pFsm, void *pWriter, void *pBuf, int
void mndLeaderTransfer(struct SSyncFSM *pFsm, const SRpcMsg *pMsg, SFsmCbMeta cbMeta) {
SMnode *pMnode = pFsm->data;
atomic_store_8(&(pMnode->syncMgmt.leaderTransferFinish), 1);
- mDebug("vgId:1, mnode leader transfer finish");
+ mInfo("vgId:1, mnode leader transfer finish");
}
static void mndBecomeFollower(struct SSyncFSM *pFsm) {
SMnode *pMnode = pFsm->data;
- mDebug("vgId:1, become follower and post sem");
+ mInfo("vgId:1, become follower and post sem");
taosWLockLatch(&pMnode->syncMgmt.lock);
if (pMnode->syncMgmt.transId != 0) {
@@ -190,7 +190,7 @@ static void mndBecomeFollower(struct SSyncFSM *pFsm) {
}
static void mndBecomeLeader(struct SSyncFSM *pFsm) {
- mDebug("vgId:1, become leader");
+ mInfo("vgId:1, become leader");
SMnode *pMnode = pFsm->data;
}
@@ -228,7 +228,7 @@ int32_t mndInitSync(SMnode *pMnode) {
syncInfo.isStandBy = pMgmt->standby;
syncInfo.snapshotStrategy = SYNC_STRATEGY_STANDARD_SNAPSHOT;
- mDebug("start to open mnode sync, standby:%d", pMgmt->standby);
+ mInfo("start to open mnode sync, standby:%d", pMgmt->standby);
if (pMgmt->standby || pMgmt->replica.id > 0) {
SSyncCfg *pCfg = &syncInfo.syncCfg;
pCfg->replicaNum = 1;
@@ -236,7 +236,7 @@ int32_t mndInitSync(SMnode *pMnode) {
SNodeInfo *pNode = &pCfg->nodeInfo[0];
tstrncpy(pNode->nodeFqdn, pMgmt->replica.fqdn, sizeof(pNode->nodeFqdn));
pNode->nodePort = pMgmt->replica.port;
- mDebug("mnode ep:%s:%u", pNode->nodeFqdn, pNode->nodePort);
+ mInfo("mnode ep:%s:%u", pNode->nodeFqdn, pNode->nodePort);
}
tsem_init(&pMgmt->syncSem, 0, 0);
@@ -284,7 +284,7 @@ int32_t mndSyncPropose(SMnode *pMnode, SSdbRaw *pRaw, int32_t transId) {
return -1;
} else {
pMgmt->transId = transId;
- mDebug("trans:%d, will be proposed", pMgmt->transId);
+ mInfo("trans:%d, will be proposed", pMgmt->transId);
taosWUnLockLatch(&pMgmt->lock);
}
@@ -314,7 +314,7 @@ void mndSyncStart(SMnode *pMnode) {
SSyncMgmt *pMgmt = &pMnode->syncMgmt;
syncSetMsgCb(pMgmt->sync, &pMnode->msgCb);
syncStart(pMgmt->sync);
- mDebug("mnode sync started, id:%" PRId64 " standby:%d", pMgmt->sync, pMgmt->standby);
+ mInfo("mnode sync started, id:%" PRId64 " standby:%d", pMgmt->sync, pMgmt->standby);
}
void mndSyncStop(SMnode *pMnode) {
diff --git a/source/dnode/mnode/impl/src/mndTrans.c b/source/dnode/mnode/impl/src/mndTrans.c
index 1d8d62e534..9c4a5afb03 100644
--- a/source/dnode/mnode/impl/src/mndTrans.c
+++ b/source/dnode/mnode/impl/src/mndTrans.c
@@ -456,11 +456,11 @@ static const char *mndTransStr(ETrnStage stage) {
}
static void mndTransTestStartFunc(SMnode *pMnode, void *param, int32_t paramLen) {
- mDebug("test trans start, param:%s, len:%d", (char *)param, paramLen);
+ mInfo("test trans start, param:%s, len:%d", (char *)param, paramLen);
}
static void mndTransTestStopFunc(SMnode *pMnode, void *param, int32_t paramLen) {
- mDebug("test trans stop, param:%s, len:%d", (char *)param, paramLen);
+ mInfo("test trans stop, param:%s, len:%d", (char *)param, paramLen);
}
static TransCbFp mndTransGetCbFp(ETrnFunc ftype) {
@@ -707,7 +707,7 @@ int32_t mndSetRpcInfoForDbTrans(SMnode *pMnode, SRpcMsg *pMsg, EOperType oper, c
if (pTrans->oper == oper) {
if (strcasecmp(dbname, pTrans->dbname1) == 0) {
- mDebug("trans:%d, db:%s oper:%d matched with input", pTrans->id, dbname, oper);
+ mInfo("trans:%d, db:%s oper:%d matched with input", pTrans->id, dbname, oper);
if (pTrans->pRpcArray == NULL) {
pTrans->pRpcArray = taosArrayInit(1, sizeof(SRpcHandleInfo));
}
@@ -746,7 +746,7 @@ static int32_t mndTransSync(SMnode *pMnode, STrans *pTrans) {
}
sdbSetRawStatus(pRaw, SDB_STATUS_READY);
- mDebug("trans:%d, sync to other mnodes, stage:%s", pTrans->id, mndTransStr(pTrans->stage));
+ mInfo("trans:%d, sync to other mnodes, stage:%s", pTrans->id, mndTransStr(pTrans->stage));
int32_t code = mndSyncPropose(pMnode, pRaw, pTrans->id);
if (code != 0) {
mError("trans:%d, failed to sync since %s", pTrans->id, terrstr());
@@ -755,7 +755,7 @@ static int32_t mndTransSync(SMnode *pMnode, STrans *pTrans) {
}
sdbFreeRaw(pRaw);
- mDebug("trans:%d, sync finished", pTrans->id);
+ mInfo("trans:%d, sync finished", pTrans->id);
return 0;
}
@@ -821,12 +821,12 @@ int32_t mndTransPrepare(SMnode *pMnode, STrans *pTrans) {
return -1;
}
- mDebug("trans:%d, prepare transaction", pTrans->id);
+ mInfo("trans:%d, prepare transaction", pTrans->id);
if (mndTransSync(pMnode, pTrans) != 0) {
mError("trans:%d, failed to prepare since %s", pTrans->id, terrstr());
return -1;
}
- mDebug("trans:%d, prepare finished", pTrans->id);
+ mInfo("trans:%d, prepare finished", pTrans->id);
STrans *pNew = mndAcquireTrans(pMnode, pTrans->id);
if (pNew == NULL) {
@@ -847,22 +847,22 @@ int32_t mndTransPrepare(SMnode *pMnode, STrans *pTrans) {
}
static int32_t mndTransCommit(SMnode *pMnode, STrans *pTrans) {
- mDebug("trans:%d, commit transaction", pTrans->id);
+ mInfo("trans:%d, commit transaction", pTrans->id);
if (mndTransSync(pMnode, pTrans) != 0) {
mError("trans:%d, failed to commit since %s", pTrans->id, terrstr());
return -1;
}
- mDebug("trans:%d, commit finished", pTrans->id);
+ mInfo("trans:%d, commit finished", pTrans->id);
return 0;
}
static int32_t mndTransRollback(SMnode *pMnode, STrans *pTrans) {
- mDebug("trans:%d, rollback transaction", pTrans->id);
+ mInfo("trans:%d, rollback transaction", pTrans->id);
if (mndTransSync(pMnode, pTrans) != 0) {
mError("trans:%d, failed to rollback since %s", pTrans->id, terrstr());
return -1;
}
- mDebug("trans:%d, rollback finished", pTrans->id);
+ mInfo("trans:%d, rollback finished", pTrans->id);
return 0;
}
@@ -894,7 +894,7 @@ static void mndTransSendRpcRsp(SMnode *pMnode, STrans *pTrans) {
for (int32_t i = 0; i < size; ++i) {
SRpcHandleInfo *pInfo = taosArrayGet(pTrans->pRpcArray, i);
if (pInfo->handle != NULL) {
- mDebug("trans:%d, send rsp, code:0x%x stage:%s app:%p", pTrans->id, code, mndTransStr(pTrans->stage),
+ mInfo("trans:%d, send rsp, code:0x%x stage:%s app:%p", pTrans->id, code, mndTransStr(pTrans->stage),
pInfo->ahandle);
if (code == TSDB_CODE_RPC_NETWORK_UNAVAIL) {
code = TSDB_CODE_MND_TRANS_NETWORK_UNAVAILL;
@@ -902,13 +902,13 @@ static void mndTransSendRpcRsp(SMnode *pMnode, STrans *pTrans) {
SRpcMsg rspMsg = {.code = code, .info = *pInfo};
if (pTrans->originRpcType == TDMT_MND_CREATE_DB) {
- mDebug("trans:%d, origin msgtype:%s", pTrans->id, TMSG_INFO(pTrans->originRpcType));
+ mInfo("trans:%d, origin msgtype:%s", pTrans->id, TMSG_INFO(pTrans->originRpcType));
SDbObj *pDb = mndAcquireDb(pMnode, pTrans->dbname1);
if (pDb != NULL) {
for (int32_t j = 0; j < 12; j++) {
bool ready = mndIsDbReady(pMnode, pDb);
if (!ready) {
- mDebug("trans:%d, db:%s not ready yet, wait %d times", pTrans->id, pTrans->dbname1, j);
+ mInfo("trans:%d, db:%s not ready yet, wait %d times", pTrans->id, pTrans->dbname1, j);
taosMsleep(1000);
} else {
break;
@@ -978,7 +978,7 @@ int32_t mndTransProcessRsp(SRpcMsg *pRsp) {
pAction->errCode = pRsp->code;
}
- mDebug("trans:%d, %s:%d response is received, code:0x%x, accept:0x%x retry:0x%x", transId,
+ mInfo("trans:%d, %s:%d response is received, code:0x%x, accept:0x%x retry:0x%x", transId,
mndTransStr(pAction->stage), action, pRsp->code, pAction->acceptableCode, pAction->retryCode);
mndTransExecute(pMnode, pTrans);
@@ -994,10 +994,10 @@ static void mndTransResetAction(SMnode *pMnode, STrans *pTrans, STransAction *pA
if (pAction->errCode == TSDB_CODE_RPC_REDIRECT || pAction->errCode == TSDB_CODE_SYN_NEW_CONFIG_ERROR ||
pAction->errCode == TSDB_CODE_SYN_INTERNAL_ERROR || pAction->errCode == TSDB_CODE_SYN_NOT_LEADER) {
pAction->epSet.inUse = (pAction->epSet.inUse + 1) % pAction->epSet.numOfEps;
- mDebug("trans:%d, %s:%d execute status is reset and set epset inuse:%d", pTrans->id, mndTransStr(pAction->stage),
+ mInfo("trans:%d, %s:%d execute status is reset and set epset inuse:%d", pTrans->id, mndTransStr(pAction->stage),
pAction->id, pAction->epSet.inUse);
} else {
- mDebug("trans:%d, %s:%d execute status is reset", pTrans->id, mndTransStr(pAction->stage), pAction->id);
+ mInfo("trans:%d, %s:%d execute status is reset", pTrans->id, mndTransStr(pAction->stage), pAction->id);
}
pAction->errCode = 0;
}
@@ -1024,7 +1024,7 @@ static int32_t mndTransWriteSingleLog(SMnode *pMnode, STrans *pTrans, STransActi
pAction->rawWritten = true;
pAction->errCode = 0;
code = 0;
- mDebug("trans:%d, %s:%d write to sdb, type:%s status:%s", pTrans->id, mndTransStr(pAction->stage), pAction->id,
+ mInfo("trans:%d, %s:%d write to sdb, type:%s status:%s", pTrans->id, mndTransStr(pAction->stage), pAction->id,
sdbTableName(pAction->pRaw->type), sdbStatusName(pAction->pRaw->status));
pTrans->lastAction = pAction->id;
@@ -1073,7 +1073,7 @@ static int32_t mndTransSendSingleMsg(SMnode *pMnode, STrans *pTrans, STransActio
pAction->msgSent = 1;
pAction->msgReceived = 0;
pAction->errCode = 0;
- mDebug("trans:%d, %s:%d is sent, %s", pTrans->id, mndTransStr(pAction->stage), pAction->id, detail);
+ mInfo("trans:%d, %s:%d is sent, %s", pTrans->id, mndTransStr(pAction->stage), pAction->id, detail);
pTrans->lastAction = pAction->id;
pTrans->lastMsgType = pAction->msgType;
@@ -1100,7 +1100,7 @@ static int32_t mndTransSendSingleMsg(SMnode *pMnode, STrans *pTrans, STransActio
static int32_t mndTransExecNullMsg(SMnode *pMnode, STrans *pTrans, STransAction *pAction) {
pAction->rawWritten = 0;
pAction->errCode = 0;
- mDebug("trans:%d, %s:%d confirm action executed", pTrans->id, mndTransStr(pAction->stage), pAction->id);
+ mInfo("trans:%d, %s:%d confirm action executed", pTrans->id, mndTransStr(pAction->stage), pAction->id);
pTrans->lastAction = pAction->id;
pTrans->lastMsgType = pAction->msgType;
@@ -1160,7 +1160,7 @@ static int32_t mndTransExecuteActions(SMnode *pMnode, STrans *pTrans, SArray *pA
pTrans->lastMsgType = 0;
memset(&pTrans->lastEpset, 0, sizeof(pTrans->lastEpset));
pTrans->lastErrorNo = 0;
- mDebug("trans:%d, all %d actions execute successfully", pTrans->id, numOfActions);
+ mInfo("trans:%d, all %d actions execute successfully", pTrans->id, numOfActions);
return 0;
} else {
mError("trans:%d, all %d actions executed, code:0x%x", pTrans->id, numOfActions, errCode & 0XFFFF);
@@ -1175,7 +1175,7 @@ static int32_t mndTransExecuteActions(SMnode *pMnode, STrans *pTrans, SArray *pA
return errCode;
}
} else {
- mDebug("trans:%d, %d of %d actions executed", pTrans->id, numOfExecuted, numOfActions);
+ mInfo("trans:%d, %d of %d actions executed", pTrans->id, numOfExecuted, numOfActions);
return TSDB_CODE_ACTION_IN_PROGRESS;
}
}
@@ -1221,7 +1221,7 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans)
code = pAction->errCode;
mndTransResetAction(pMnode, pTrans, pAction);
} else {
- mDebug("trans:%d, %s:%d execute successfully", pTrans->id, mndTransStr(pAction->stage), action);
+ mInfo("trans:%d, %s:%d execute successfully", pTrans->id, mndTransStr(pAction->stage), action);
}
} else {
code = TSDB_CODE_ACTION_IN_PROGRESS;
@@ -1230,7 +1230,7 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans)
if (pAction->errCode != 0 && pAction->errCode != pAction->acceptableCode) {
code = pAction->errCode;
} else {
- mDebug("trans:%d, %s:%d write successfully", pTrans->id, mndTransStr(pAction->stage), action);
+ mInfo("trans:%d, %s:%d write successfully", pTrans->id, mndTransStr(pAction->stage), action);
}
} else {
}
@@ -1254,7 +1254,7 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans)
if (code == 0) {
pTrans->code = 0;
pTrans->redoActionPos++;
- mDebug("trans:%d, %s:%d is executed and need sync to other mnodes", pTrans->id, mndTransStr(pAction->stage),
+ mInfo("trans:%d, %s:%d is executed and need sync to other mnodes", pTrans->id, mndTransStr(pAction->stage),
pAction->id);
code = mndTransSync(pMnode, pTrans);
if (code != 0) {
@@ -1263,17 +1263,17 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans)
mndTransStr(pAction->stage), pAction->id, terrstr());
}
} else if (code == TSDB_CODE_ACTION_IN_PROGRESS) {
- mDebug("trans:%d, %s:%d is in progress and wait it finish", pTrans->id, mndTransStr(pAction->stage), pAction->id);
+ mInfo("trans:%d, %s:%d is in progress and wait it finish", pTrans->id, mndTransStr(pAction->stage), pAction->id);
break;
} else if (code == pAction->retryCode) {
- mDebug("trans:%d, %s:%d receive code:0x%x and retry", pTrans->id, mndTransStr(pAction->stage), pAction->id, code);
+ mInfo("trans:%d, %s:%d receive code:0x%x and retry", pTrans->id, mndTransStr(pAction->stage), pAction->id, code);
taosMsleep(300);
action--;
continue;
} else {
terrno = code;
pTrans->code = code;
- mDebug("trans:%d, %s:%d receive code:0x%x and wait another schedule, failedTimes:%d", pTrans->id,
+ mInfo("trans:%d, %s:%d receive code:0x%x and wait another schedule, failedTimes:%d", pTrans->id,
mndTransStr(pAction->stage), pAction->id, code, pTrans->failedTimes);
break;
}
@@ -1285,7 +1285,7 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans)
static bool mndTransPerformPrepareStage(SMnode *pMnode, STrans *pTrans) {
bool continueExec = true;
pTrans->stage = TRN_STAGE_REDO_ACTION;
- mDebug("trans:%d, stage from prepare to redoAction", pTrans->id);
+ mInfo("trans:%d, stage from prepare to redoAction", pTrans->id);
return continueExec;
}
@@ -1304,10 +1304,10 @@ static bool mndTransPerformRedoActionStage(SMnode *pMnode, STrans *pTrans) {
if (code == 0) {
pTrans->code = 0;
pTrans->stage = TRN_STAGE_COMMIT;
- mDebug("trans:%d, stage from redoAction to commit", pTrans->id);
+ mInfo("trans:%d, stage from redoAction to commit", pTrans->id);
continueExec = true;
} else if (code == TSDB_CODE_ACTION_IN_PROGRESS) {
- mDebug("trans:%d, stage keep on redoAction since %s", pTrans->id, tstrerror(code));
+ mInfo("trans:%d, stage keep on redoAction since %s", pTrans->id, tstrerror(code));
continueExec = false;
} else {
pTrans->failedTimes++;
@@ -1347,7 +1347,7 @@ static bool mndTransPerformCommitStage(SMnode *pMnode, STrans *pTrans) {
if (code == 0) {
pTrans->code = 0;
pTrans->stage = TRN_STAGE_COMMIT_ACTION;
- mDebug("trans:%d, stage from commit to commitAction", pTrans->id);
+ mInfo("trans:%d, stage from commit to commitAction", pTrans->id);
continueExec = true;
} else {
pTrans->code = terrno;
@@ -1366,7 +1366,7 @@ static bool mndTransPerformCommitActionStage(SMnode *pMnode, STrans *pTrans) {
if (code == 0) {
pTrans->code = 0;
pTrans->stage = TRN_STAGE_FINISHED;
- mDebug("trans:%d, stage from commitAction to finished", pTrans->id);
+ mInfo("trans:%d, stage from commitAction to finished", pTrans->id);
continueExec = true;
} else {
pTrans->code = terrno;
@@ -1384,10 +1384,10 @@ static bool mndTransPerformUndoActionStage(SMnode *pMnode, STrans *pTrans) {
if (code == 0) {
pTrans->stage = TRN_STAGE_FINISHED;
- mDebug("trans:%d, stage from undoAction to finished", pTrans->id);
+ mInfo("trans:%d, stage from undoAction to finished", pTrans->id);
continueExec = true;
} else if (code == TSDB_CODE_ACTION_IN_PROGRESS) {
- mDebug("trans:%d, stage keep on undoAction since %s", pTrans->id, tstrerror(code));
+ mInfo("trans:%d, stage keep on undoAction since %s", pTrans->id, tstrerror(code));
continueExec = false;
} else {
pTrans->failedTimes++;
@@ -1406,7 +1406,7 @@ static bool mndTransPerformRollbackStage(SMnode *pMnode, STrans *pTrans) {
if (code == 0) {
pTrans->stage = TRN_STAGE_UNDO_ACTION;
- mDebug("trans:%d, stage from rollback to undoAction", pTrans->id);
+ mInfo("trans:%d, stage from rollback to undoAction", pTrans->id);
continueExec = true;
} else {
pTrans->failedTimes++;
@@ -1431,7 +1431,7 @@ static bool mndTransPerfromFinishedStage(SMnode *pMnode, STrans *pTrans) {
mError("trans:%d, failed to write sdb since %s", pTrans->id, terrstr());
}
- mDebug("trans:%d, execute finished, code:0x%x, failedTimes:%d", pTrans->id, pTrans->code, pTrans->failedTimes);
+ mInfo("trans:%d, execute finished, code:0x%x, failedTimes:%d", pTrans->id, pTrans->code, pTrans->failedTimes);
return continueExec;
}
@@ -1439,7 +1439,7 @@ void mndTransExecute(SMnode *pMnode, STrans *pTrans) {
bool continueExec = true;
while (continueExec) {
- mDebug("trans:%d, continue to execute, stage:%s", pTrans->id, mndTransStr(pTrans->stage));
+ mInfo("trans:%d, continue to execute, stage:%s", pTrans->id, mndTransStr(pTrans->stage));
pTrans->lastExecTime = taosGetTimestampMs();
switch (pTrans->stage) {
case TRN_STAGE_PREPARE:
diff --git a/source/dnode/vnode/inc/vnode.h b/source/dnode/vnode/inc/vnode.h
index a475e5409a..3a3cbe72ba 100644
--- a/source/dnode/vnode/inc/vnode.h
+++ b/source/dnode/vnode/inc/vnode.h
@@ -128,8 +128,10 @@ typedef struct STsdbReader STsdbReader;
#define TIMEWINDOW_RANGE_CONTAINED 1
#define TIMEWINDOW_RANGE_EXTERNAL 2
-#define LASTROW_RETRIEVE_TYPE_ALL 0x1
-#define LASTROW_RETRIEVE_TYPE_SINGLE 0x2
+#define CACHESCAN_RETRIEVE_TYPE_ALL 0x1
+#define CACHESCAN_RETRIEVE_TYPE_SINGLE 0x2
+#define CACHESCAN_RETRIEVE_LAST_ROW 0x4
+#define CACHESCAN_RETRIEVE_LAST 0x8
int32_t tsdbSetTableId(STsdbReader *pReader, int64_t uid);
int32_t tsdbReaderOpen(SVnode *pVnode, SQueryTableDataCond *pCond, SArray *pTableList, STsdbReader **ppReader,
@@ -146,15 +148,40 @@ void *tsdbGetIdx(SMeta *pMeta);
void *tsdbGetIvtIdx(SMeta *pMeta);
uint64_t getReaderMaxVersion(STsdbReader *pReader);
-int32_t tsdbLastRowReaderOpen(void *pVnode, int32_t type, SArray *pTableIdList, int32_t numOfCols, void **pReader);
-int32_t tsdbRetrieveLastRow(void *pReader, SSDataBlock *pResBlock, const int32_t *slotIds, SArray *pTableUids);
-int32_t tsdbLastrowReaderClose(void *pReader);
+int32_t tsdbCacherowsReaderOpen(void *pVnode, int32_t type, SArray *pTableIdList, int32_t numOfCols, void **pReader);
+int32_t tsdbRetrieveCacheRows(void *pReader, SSDataBlock *pResBlock, const int32_t *slotIds, SArray *pTableUids);
+int32_t tsdbCacherowsReaderClose(void *pReader);
int32_t tsdbGetTableSchema(SVnode *pVnode, int64_t uid, STSchema **pSchema, int64_t *suid);
void tsdbCacheSetCapacity(SVnode *pVnode, size_t capacity);
size_t tsdbCacheGetCapacity(SVnode *pVnode);
// tq
+typedef struct SMetaTableInfo {
+ int64_t suid;
+ int64_t uid;
+ SSchemaWrapper *schema;
+ char tbName[TSDB_TABLE_NAME_LEN];
+} SMetaTableInfo;
+
+typedef struct SIdInfo {
+ int64_t version;
+ int32_t index;
+} SIdInfo;
+
+typedef struct SSnapContext {
+ SMeta *pMeta;
+ int64_t snapVersion;
+ TBC *pCur;
+ int64_t suid;
+ int8_t subType;
+ SHashObj *idVersion;
+ SHashObj *suidInfo;
+ SArray *idList;
+ int32_t index;
+ bool withMeta;
+ bool queryMetaOrData; // true: get meta, false: get data
+} SSnapContext;
typedef struct STqReader {
int64_t ver;
@@ -205,6 +232,12 @@ int32_t vnodeSnapWriterOpen(SVnode *pVnode, int64_t sver, int64_t ever, SVSnapWr
int32_t vnodeSnapWriterClose(SVSnapWriter *pWriter, int8_t rollback, SSnapshot *pSnapshot);
int32_t vnodeSnapWrite(SVSnapWriter *pWriter, uint8_t *pData, uint32_t nData);
+int32_t buildSnapContext(SMeta* pMeta, int64_t snapVersion, int64_t suid, int8_t subType, bool withMeta, SSnapContext** ctxRet);
+int32_t getMetafromSnapShot(SSnapContext* ctx, void **pBuf, int32_t *contLen, int16_t *type, int64_t *uid);
+SMetaTableInfo getUidfromSnapShot(SSnapContext* ctx);
+int32_t setForSnapShot(SSnapContext* ctx, int64_t uid);
+int32_t destroySnapContext(SSnapContext* ctx);
+
// structs
struct STsdbCfg {
int8_t precision;
diff --git a/source/dnode/vnode/src/inc/tq.h b/source/dnode/vnode/src/inc/tq.h
index cb5ec7aabe..7c394c4baf 100644
--- a/source/dnode/vnode/src/inc/tq.h
+++ b/source/dnode/vnode/src/inc/tq.h
@@ -68,27 +68,27 @@ typedef struct {
typedef struct {
char* qmsg;
- qTaskInfo_t task;
} STqExecCol;
typedef struct {
- int64_t suid;
+ int64_t suid;
} STqExecTb;
typedef struct {
- SHashObj* pFilterOutTbUid;
+ SHashObj* pFilterOutTbUid;
} STqExecDb;
typedef struct {
int8_t subType;
STqReader* pExecReader;
+ qTaskInfo_t task;
union {
STqExecCol execCol;
STqExecTb execTb;
STqExecDb execDb;
};
- int32_t numOfCols; // number of out pout column, temporarily used
+// int32_t numOfCols; // number of output columns, temporarily used
SSchemaWrapper* pSchemaWrapper; // columns that are involved in query
} STqExecHandle;
@@ -101,7 +101,6 @@ typedef struct {
int64_t snapshotVer;
- // TODO remove
SWalReader* pWalReader;
SWalRef* pRef;
@@ -141,7 +140,7 @@ int32_t tEncodeSTqHandle(SEncoder* pEncoder, const STqHandle* pHandle);
int32_t tDecodeSTqHandle(SDecoder* pDecoder, STqHandle* pHandle);
// tqRead
-int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, STqOffsetVal* offset);
+int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, SMqMetaRsp* pMetaRsp, STqOffsetVal* offset);
int64_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalCkHead** pHeadWithCkSum);
// tqExec
@@ -182,6 +181,11 @@ static FORCE_INLINE void tqOffsetResetToData(STqOffsetVal* pOffsetVal, int64_t u
pOffsetVal->ts = ts;
}
+static FORCE_INLINE void tqOffsetResetToMeta(STqOffsetVal* pOffsetVal, int64_t uid) {
+ pOffsetVal->type = TMQ_OFFSET__SNAPSHOT_META;
+ pOffsetVal->uid = uid;
+}
+
static FORCE_INLINE void tqOffsetResetToLog(STqOffsetVal* pOffsetVal, int64_t ver) {
pOffsetVal->type = TMQ_OFFSET__LOG;
pOffsetVal->version = ver;
diff --git a/source/dnode/vnode/src/meta/metaQuery.c b/source/dnode/vnode/src/meta/metaQuery.c
index 04e4c52c49..9d3b4d82eb 100644
--- a/source/dnode/vnode/src/meta/metaQuery.c
+++ b/source/dnode/vnode/src/meta/metaQuery.c
@@ -887,6 +887,37 @@ const void *metaGetTableTagVal(void *pTag, int16_t type, STagVal *val) {
if (!find) {
return NULL;
}
+
+#ifdef TAG_FILTER_DEBUG
+ if (IS_VAR_DATA_TYPE(val->type)) {
+ char* buf = taosMemoryCalloc(val->nData + 1, 1);
+ memcpy(buf, val->pData, val->nData);
+ metaDebug("metaTag table val varchar index:%d cid:%d type:%d value:%s", 1, val->cid, val->type, buf);
+ taosMemoryFree(buf);
+ } else {
+ double dval = 0;
+ GET_TYPED_DATA(dval, double, val->type, &val->i64);
+ metaDebug("metaTag table val number index:%d cid:%d type:%d value:%f", 1, val->cid, val->type, dval);
+ }
+
+ SArray* pTagVals = NULL;
+ tTagToValArray((STag*)pTag, &pTagVals);
+ for (int i = 0; i < taosArrayGetSize(pTagVals); i++) {
+ STagVal* pTagVal = (STagVal*)taosArrayGet(pTagVals, i);
+
+ if (IS_VAR_DATA_TYPE(pTagVal->type)) {
+ char* buf = taosMemoryCalloc(pTagVal->nData + 1, 1);
+ memcpy(buf, pTagVal->pData, pTagVal->nData);
+ metaDebug("metaTag table varchar index:%d cid:%d type:%d value:%s", i, pTagVal->cid, pTagVal->type, buf);
+ taosMemoryFree(buf);
+ } else {
+ double dval = 0;
+ GET_TYPED_DATA(dval, double, pTagVal->type, &pTagVal->i64);
+ metaDebug("metaTag table number index:%d cid:%d type:%d value:%f", i, pTagVal->cid, pTagVal->type, dval);
+ }
+ }
+#endif
+
return val;
}
diff --git a/source/dnode/vnode/src/meta/metaSnapshot.c b/source/dnode/vnode/src/meta/metaSnapshot.c
index 973c381407..0edbd092e6 100644
--- a/source/dnode/vnode/src/meta/metaSnapshot.c
+++ b/source/dnode/vnode/src/meta/metaSnapshot.c
@@ -195,3 +195,434 @@ _err:
metaError("vgId:%d, vnode snapshot meta write failed since %s", TD_VID(pMeta->pVnode), tstrerror(code));
return code;
}
+
+typedef struct STableInfoForChildTable{
+ char *tableName;
+ SSchemaWrapper *schemaRow;
+ SSchemaWrapper *tagRow;
+}STableInfoForChildTable;
+
+static void destroySTableInfoForChildTable(void* data) {
+ STableInfoForChildTable* pData = (STableInfoForChildTable*)data;
+ taosMemoryFree(pData->tableName);
+ tDeleteSSchemaWrapper(pData->schemaRow);
+ tDeleteSSchemaWrapper(pData->tagRow);
+}
+
+static void MoveToSnapShotVersion(SSnapContext* ctx){
+ tdbTbcClose(ctx->pCur);
+ tdbTbcOpen(ctx->pMeta->pTbDb, &ctx->pCur, NULL);
+ STbDbKey key = {.version = ctx->snapVersion, .uid = INT64_MAX};
+ int c = 0;
+ tdbTbcMoveTo(ctx->pCur, &key, sizeof(key), &c);
+ if(c < 0){
+ tdbTbcMoveToPrev(ctx->pCur);
+ }
+}
+
+static int32_t MoveToPosition(SSnapContext* ctx, int64_t ver, int64_t uid){
+ tdbTbcClose(ctx->pCur);
+ tdbTbcOpen(ctx->pMeta->pTbDb, &ctx->pCur, NULL);
+ STbDbKey key = {.version = ver, .uid = uid};
+ int c = 0;
+ tdbTbcMoveTo(ctx->pCur, &key, sizeof(key), &c);
+ return c;
+}
+
+static void MoveToFirst(SSnapContext* ctx){
+ tdbTbcClose(ctx->pCur);
+ tdbTbcOpen(ctx->pMeta->pTbDb, &ctx->pCur, NULL);
+ tdbTbcMoveToFirst(ctx->pCur);
+}
+
+static void saveSuperTableInfoForChildTable(SMetaEntry *me, SHashObj *suidInfo){
+ STableInfoForChildTable* data = (STableInfoForChildTable*)taosHashGet(suidInfo, &me->uid, sizeof(tb_uid_t));
+ if(data){
+ return;
+ }
+ STableInfoForChildTable dataTmp = {0};
+ dataTmp.tableName = strdup(me->name);
+
+ dataTmp.schemaRow = tCloneSSchemaWrapper(&me->stbEntry.schemaRow);
+ dataTmp.tagRow = tCloneSSchemaWrapper(&me->stbEntry.schemaTag);
+ taosHashPut(suidInfo, &me->uid, sizeof(tb_uid_t), &dataTmp, sizeof(STableInfoForChildTable));
+}
+
+int32_t buildSnapContext(SMeta* pMeta, int64_t snapVersion, int64_t suid, int8_t subType, bool withMeta, SSnapContext** ctxRet){
+ SSnapContext* ctx = taosMemoryCalloc(1, sizeof(SSnapContext));
+ if(ctx == NULL) return -1;
+ *ctxRet = ctx;
+ ctx->pMeta = pMeta;
+ ctx->snapVersion = snapVersion;
+ ctx->suid = suid;
+ ctx->subType = subType;
+ ctx->queryMetaOrData = withMeta;
+ ctx->withMeta = withMeta;
+ ctx->idVersion = taosHashInit(100, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_NO_LOCK);
+ if(ctx->idVersion == NULL){
+ return -1;
+ }
+
+ ctx->suidInfo = taosHashInit(100, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_NO_LOCK);
+ if(ctx->suidInfo == NULL){
+ return -1;
+ }
+ taosHashSetFreeFp(ctx->suidInfo, destroySTableInfoForChildTable);
+
+ ctx->index = 0;
+ ctx->idList = taosArrayInit(100, sizeof(int64_t));
+ void *pKey = NULL;
+ void *pVal = NULL;
+ int vLen = 0, kLen = 0;
+
+ metaDebug("tmqsnap init snapVersion:%" PRIi64, ctx->snapVersion);
+ MoveToFirst(ctx);
+ while(1){
+ int32_t ret = tdbTbcNext(ctx->pCur, &pKey, &kLen, &pVal, &vLen);
+ if (ret < 0) break;
+ STbDbKey *tmp = (STbDbKey*)pKey;
+ if (tmp->version > ctx->snapVersion) break;
+
+ SIdInfo* idData = (SIdInfo*)taosHashGet(ctx->idVersion, &tmp->uid, sizeof(tb_uid_t));
+ if(idData) {
+ continue;
+ }
+
+    if (tdbTbGet(pMeta->pUidIdx, &tmp->uid, sizeof(tb_uid_t), NULL, NULL) < 0) { // check whether the table still exists; needs optimization later
+ continue;
+ }
+
+ SDecoder dc = {0};
+ SMetaEntry me = {0};
+ tDecoderInit(&dc, pVal, vLen);
+ metaDecodeEntry(&dc, &me);
+ if(ctx->subType == TOPIC_SUB_TYPE__TABLE){
+ if ((me.uid != ctx->suid && me.type == TSDB_SUPER_TABLE) ||
+ (me.ctbEntry.suid != ctx->suid && me.type == TSDB_CHILD_TABLE)){
+ tDecoderClear(&dc);
+ continue;
+ }
+ }
+
+ taosArrayPush(ctx->idList, &tmp->uid);
+ metaDebug("tmqsnap init idlist name:%s, uid:%" PRIi64, me.name, tmp->uid);
+ SIdInfo info = {0};
+ taosHashPut(ctx->idVersion, &tmp->uid, sizeof(tb_uid_t), &info, sizeof(SIdInfo));
+
+ tDecoderClear(&dc);
+ }
+ taosHashClear(ctx->idVersion);
+
+ MoveToSnapShotVersion(ctx);
+ while(1){
+ int32_t ret = tdbTbcPrev(ctx->pCur, &pKey, &kLen, &pVal, &vLen);
+ if (ret < 0) break;
+
+ STbDbKey *tmp = (STbDbKey*)pKey;
+ SIdInfo* idData = (SIdInfo*)taosHashGet(ctx->idVersion, &tmp->uid, sizeof(tb_uid_t));
+ if(idData){
+ continue;
+ }
+ SIdInfo info = {.version = tmp->version, .index = 0};
+ taosHashPut(ctx->idVersion, &tmp->uid, sizeof(tb_uid_t), &info, sizeof(SIdInfo));
+
+ SDecoder dc = {0};
+ SMetaEntry me = {0};
+ tDecoderInit(&dc, pVal, vLen);
+ metaDecodeEntry(&dc, &me);
+ if(ctx->subType == TOPIC_SUB_TYPE__TABLE){
+ if ((me.uid != ctx->suid && me.type == TSDB_SUPER_TABLE) ||
+ (me.ctbEntry.suid != ctx->suid && me.type == TSDB_CHILD_TABLE)){
+ tDecoderClear(&dc);
+ continue;
+ }
+ }
+
+ if ((ctx->subType == TOPIC_SUB_TYPE__DB && me.type == TSDB_SUPER_TABLE)
+ || (ctx->subType == TOPIC_SUB_TYPE__TABLE && me.uid == ctx->suid)) {
+ saveSuperTableInfoForChildTable(&me, ctx->suidInfo);
+ }
+ tDecoderClear(&dc);
+ }
+
+ for(int i = 0; i < taosArrayGetSize(ctx->idList); i++){
+ int64_t *uid = taosArrayGet(ctx->idList, i);
+ SIdInfo* idData = (SIdInfo*)taosHashGet(ctx->idVersion, uid, sizeof(int64_t));
+ ASSERT(idData);
+ idData->index = i;
+ metaDebug("tmqsnap init idVersion uid:%" PRIi64 " version:%" PRIi64 " index:%d", *uid, idData->version, idData->index);
+ }
+
+ return TDB_CODE_SUCCESS;
+}
+
+int32_t destroySnapContext(SSnapContext* ctx){
+ tdbTbcClose(ctx->pCur);
+ taosArrayDestroy(ctx->idList);
+ taosHashCleanup(ctx->idVersion);
+ taosHashCleanup(ctx->suidInfo);
+ taosMemoryFree(ctx);
+ return 0;
+}
+
+static int32_t buildNormalChildTableInfo(SVCreateTbReq *req, void **pBuf, int32_t *contLen){
+ int32_t ret = 0;
+ SVCreateTbBatchReq reqs = {0};
+
+ reqs.pArray = taosArrayInit(1, sizeof(struct SVCreateTbReq));
+ if (NULL == reqs.pArray){
+ ret = -1;
+ goto end;
+ }
+ taosArrayPush(reqs.pArray, req);
+ reqs.nReqs = 1;
+
+ tEncodeSize(tEncodeSVCreateTbBatchReq, &reqs, *contLen, ret);
+ if(ret < 0){
+ ret = -1;
+ goto end;
+ }
+ *contLen += sizeof(SMsgHead);
+ *pBuf = taosMemoryMalloc(*contLen);
+ if (NULL == *pBuf) {
+ ret = -1;
+ goto end;
+ }
+ SEncoder coder = {0};
+ tEncoderInit(&coder, POINTER_SHIFT(*pBuf, sizeof(SMsgHead)), *contLen);
+ if (tEncodeSVCreateTbBatchReq(&coder, &reqs) < 0) {
+ taosMemoryFreeClear(*pBuf);
+ tEncoderClear(&coder);
+ ret = -1;
+ goto end;
+ }
+ tEncoderClear(&coder);
+
+end:
+ taosArrayDestroy(reqs.pArray);
+ return ret;
+}
+
+static int32_t buildSuperTableInfo(SVCreateStbReq *req, void **pBuf, int32_t *contLen){
+ int32_t ret = 0;
+ tEncodeSize(tEncodeSVCreateStbReq, req, *contLen, ret);
+ if (ret < 0) {
+ return -1;
+ }
+
+ *contLen += sizeof(SMsgHead);
+ *pBuf = taosMemoryMalloc(*contLen);
+ if (NULL == *pBuf) {
+ return -1;
+ }
+
+ SEncoder encoder = {0};
+ tEncoderInit(&encoder, POINTER_SHIFT(*pBuf, sizeof(SMsgHead)), *contLen);
+ if (tEncodeSVCreateStbReq(&encoder, req) < 0) {
+ taosMemoryFreeClear(*pBuf);
+ tEncoderClear(&encoder);
+ return -1;
+ }
+ tEncoderClear(&encoder);
+ return 0;
+}
+
+int32_t setForSnapShot(SSnapContext* ctx, int64_t uid){
+ int c = 0;
+
+ if(uid == 0){
+ ctx->index = 0;
+ return c;
+ }
+
+ SIdInfo* idInfo = (SIdInfo*)taosHashGet(ctx->idVersion, &uid, sizeof(tb_uid_t));
+ if(!idInfo){
+ return -1;
+ }
+
+ ctx->index = idInfo->index;
+
+ return c;
+}
+
+int32_t getMetafromSnapShot(SSnapContext* ctx, void **pBuf, int32_t *contLen, int16_t *type, int64_t *uid){
+ int32_t ret = 0;
+ void *pKey = NULL;
+ void *pVal = NULL;
+ int vLen = 0, kLen = 0;
+
+ while(1){
+ if(ctx->index >= taosArrayGetSize(ctx->idList)){
+ metaDebug("tmqsnap get meta end");
+ ctx->index = 0;
+      ctx->queryMetaOrData = false; // switch to fetching data
+ return 0;
+ }
+
+ int64_t* uidTmp = taosArrayGet(ctx->idList, ctx->index);
+ ctx->index++;
+ SIdInfo* idInfo = (SIdInfo*)taosHashGet(ctx->idVersion, uidTmp, sizeof(tb_uid_t));
+ ASSERT(idInfo);
+
+ *uid = *uidTmp;
+ ret = MoveToPosition(ctx, idInfo->version, *uidTmp);
+ if(ret == 0){
+ break;
+ }
+ metaDebug("tmqsnap get meta not exist uid:%" PRIi64 " version:%" PRIi64, *uid, idInfo->version);
+ }
+
+ tdbTbcGet(ctx->pCur, (const void**)&pKey, &kLen, (const void**)&pVal, &vLen);
+ SDecoder dc = {0};
+ SMetaEntry me = {0};
+ tDecoderInit(&dc, pVal, vLen);
+ metaDecodeEntry(&dc, &me);
+ metaDebug("tmqsnap get meta uid:%" PRIi64 " name:%s index:%d", *uid, me.name, ctx->index-1);
+
+ if ((ctx->subType == TOPIC_SUB_TYPE__DB && me.type == TSDB_SUPER_TABLE)
+ || (ctx->subType == TOPIC_SUB_TYPE__TABLE && me.uid == ctx->suid)) {
+ SVCreateStbReq req = {0};
+ req.name = me.name;
+ req.suid = me.uid;
+ req.schemaRow = me.stbEntry.schemaRow;
+ req.schemaTag = me.stbEntry.schemaTag;
+ req.schemaRow.version = 1;
+ req.schemaTag.version = 1;
+
+ ret = buildSuperTableInfo(&req, pBuf, contLen);
+ *type = TDMT_VND_CREATE_STB;
+
+ } else if ((ctx->subType == TOPIC_SUB_TYPE__DB && me.type == TSDB_CHILD_TABLE)
+ || (ctx->subType == TOPIC_SUB_TYPE__TABLE && me.type == TSDB_CHILD_TABLE && me.ctbEntry.suid == ctx->suid)) {
+ STableInfoForChildTable* data = (STableInfoForChildTable*)taosHashGet(ctx->suidInfo, &me.ctbEntry.suid, sizeof(tb_uid_t));
+ ASSERT(data);
+ SVCreateTbReq req = {0};
+
+ req.type = TSDB_CHILD_TABLE;
+ req.name = me.name;
+ req.uid = me.uid;
+ req.commentLen = -1;
+ req.ctb.suid = me.ctbEntry.suid;
+ req.ctb.tagNum = data->tagRow->nCols;
+ req.ctb.name = data->tableName;
+
+ SArray* tagName = taosArrayInit(req.ctb.tagNum, TSDB_COL_NAME_LEN);
+ STag* p = (STag*)me.ctbEntry.pTags;
+ if(tTagIsJson(p)){
+ if (p->nTag != 0) {
+ SSchema* schema = &data->tagRow->pSchema[0];
+ taosArrayPush(tagName, schema->name);
+ }
+ }else{
+ SArray* pTagVals = NULL;
+ if (tTagToValArray((const STag*)p, &pTagVals) != 0) {
+ ASSERT(0);
+ }
+ int16_t nCols = taosArrayGetSize(pTagVals);
+ for (int j = 0; j < nCols; ++j) {
+ STagVal* pTagVal = (STagVal*)taosArrayGet(pTagVals, j);
+ for(int i = 0; i < data->tagRow->nCols; i++){
+ SSchema *schema = &data->tagRow->pSchema[i];
+ if(schema->colId == pTagVal->cid){
+ taosArrayPush(tagName, schema->name);
+ }
+ }
+ }
+ }
+// SIdInfo* sidInfo = (SIdInfo*)taosHashGet(ctx->idVersion, &me.ctbEntry.suid, sizeof(tb_uid_t));
+// if(sidInfo->version >= idInfo->version){
+// // need parse tag
+// STag* p = (STag*)me.ctbEntry.pTags;
+// SArray* pTagVals = NULL;
+// if (tTagToValArray((const STag*)p, &pTagVals) != 0) {
+// }
+//
+// int16_t nCols = taosArrayGetSize(pTagVals);
+// for (int j = 0; j < nCols; ++j) {
+// STagVal* pTagVal = (STagVal*)taosArrayGet(pTagVals, j);
+// }
+// }else{
+ req.ctb.pTag = me.ctbEntry.pTags;
+// }
+
+ req.ctb.tagName = tagName;
+ ret = buildNormalChildTableInfo(&req, pBuf, contLen);
+ *type = TDMT_VND_CREATE_TABLE;
+ taosArrayDestroy(tagName);
+ } else if(ctx->subType == TOPIC_SUB_TYPE__DB){
+ SVCreateTbReq req = {0};
+ req.type = TSDB_NORMAL_TABLE;
+ req.name = me.name;
+ req.uid = me.uid;
+ req.commentLen = -1;
+ req.ntb.schemaRow = me.ntbEntry.schemaRow;
+ ret = buildNormalChildTableInfo(&req, pBuf, contLen);
+ *type = TDMT_VND_CREATE_TABLE;
+ } else{
+ ASSERT(0);
+ }
+ tDecoderClear(&dc);
+
+ return ret;
+}
+
+SMetaTableInfo getUidfromSnapShot(SSnapContext* ctx){
+ SMetaTableInfo result = {0};
+ void *pKey = NULL;
+ void *pVal = NULL;
+  int vLen = 0, kLen = 0;
+
+ while(1){
+ if(ctx->index >= taosArrayGetSize(ctx->idList)){
+ metaDebug("tmqsnap get uid info end");
+ return result;
+ }
+ int64_t* uidTmp = taosArrayGet(ctx->idList, ctx->index);
+ ctx->index++;
+ SIdInfo* idInfo = (SIdInfo*)taosHashGet(ctx->idVersion, uidTmp, sizeof(tb_uid_t));
+ ASSERT(idInfo);
+
+ int32_t ret = MoveToPosition(ctx, idInfo->version, *uidTmp);
+ if(ret != 0) {
+ metaDebug("tmqsnap getUidfromSnapShot not exist uid:%" PRIi64 " version:%" PRIi64, *uidTmp, idInfo->version);
+ continue;
+ }
+ tdbTbcGet(ctx->pCur, (const void**)&pKey, &kLen, (const void**)&pVal, &vLen);
+ SDecoder dc = {0};
+ SMetaEntry me = {0};
+ tDecoderInit(&dc, pVal, vLen);
+ metaDecodeEntry(&dc, &me);
+ metaDebug("tmqsnap get uid info uid:%" PRIi64 " name:%s index:%d", me.uid, me.name, ctx->index-1);
+
+ if (ctx->subType == TOPIC_SUB_TYPE__DB && me.type == TSDB_CHILD_TABLE){
+ STableInfoForChildTable* data = (STableInfoForChildTable*)taosHashGet(ctx->suidInfo, &me.ctbEntry.suid, sizeof(tb_uid_t));
+ result.uid = me.uid;
+ result.suid = me.ctbEntry.suid;
+ result.schema = tCloneSSchemaWrapper(data->schemaRow);
+ strcpy(result.tbName, me.name);
+ tDecoderClear(&dc);
+ break;
+ } else if (ctx->subType == TOPIC_SUB_TYPE__DB && me.type == TSDB_NORMAL_TABLE) {
+ result.uid = me.uid;
+ result.suid = 0;
+ strcpy(result.tbName, me.name);
+ result.schema = tCloneSSchemaWrapper(&me.ntbEntry.schemaRow);
+ tDecoderClear(&dc);
+ break;
+ } else if(ctx->subType == TOPIC_SUB_TYPE__TABLE && me.type == TSDB_CHILD_TABLE && me.ctbEntry.suid == ctx->suid) {
+ STableInfoForChildTable* data = (STableInfoForChildTable*)taosHashGet(ctx->suidInfo, &me.ctbEntry.suid, sizeof(tb_uid_t));
+ result.uid = me.uid;
+ result.suid = me.ctbEntry.suid;
+ strcpy(result.tbName, me.name);
+ result.schema = tCloneSSchemaWrapper(data->schemaRow);
+ tDecoderClear(&dc);
+ break;
+ } else{
+ metaDebug("tmqsnap get uid continue");
+ tDecoderClear(&dc);
+ continue;
+ }
+ }
+
+ return result;
+}
diff --git a/source/dnode/vnode/src/meta/metaTable.c b/source/dnode/vnode/src/meta/metaTable.c
index fef0ff49ac..583a2e098f 100644
--- a/source/dnode/vnode/src/meta/metaTable.c
+++ b/source/dnode/vnode/src/meta/metaTable.c
@@ -99,6 +99,7 @@ static int metaSaveJsonVarToIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry, const
memcpy(val, (uint16_t *)&len, VARSTR_HEADER_SIZE);
type = TSDB_DATA_TYPE_VARCHAR;
term = indexTermCreate(suid, ADD_VALUE, type, key, nKey, val, len);
+ taosMemoryFree(val);
} else if (pTagVal->nData == 0) {
term = indexTermCreate(suid, ADD_VALUE, TSDB_DATA_TYPE_VARCHAR, key, nKey, pTagVal->pData, 0);
}
@@ -115,6 +116,7 @@ static int metaSaveJsonVarToIdx(SMeta *pMeta, const SMetaEntry *pCtbEntry, const
indexMultiTermAdd(terms, term);
}
}
+ taosArrayDestroy(pTagVals);
indexJsonPut(pMeta->pTagIvtIdx, terms, tuid);
indexMultiTermDestroy(terms);
#endif
@@ -413,6 +415,25 @@ int metaCreateTable(SMeta *pMeta, int64_t version, SVCreateTbReq *pReq, STableMe
me.ctbEntry.suid = pReq->ctb.suid;
me.ctbEntry.pTags = pReq->ctb.pTag;
+#ifdef TAG_FILTER_DEBUG
+ SArray* pTagVals = NULL;
+ int32_t code = tTagToValArray((STag*)pReq->ctb.pTag, &pTagVals);
+ for (int i = 0; i < taosArrayGetSize(pTagVals); i++) {
+ STagVal* pTagVal = (STagVal*)taosArrayGet(pTagVals, i);
+
+ if (IS_VAR_DATA_TYPE(pTagVal->type)) {
+ char* buf = taosMemoryCalloc(pTagVal->nData + 1, 1);
+ memcpy(buf, pTagVal->pData, pTagVal->nData);
+ metaDebug("metaTag table:%s varchar index:%d cid:%d type:%d value:%s", pReq->name, i, pTagVal->cid, pTagVal->type, buf);
+ taosMemoryFree(buf);
+ } else {
+ double val = 0;
+ GET_TYPED_DATA(val, double, pTagVal->type, &pTagVal->i64);
+ metaDebug("metaTag table:%s number index:%d cid:%d type:%d value:%f", pReq->name, i, pTagVal->cid, pTagVal->type, val);
+ }
+ }
+#endif
+
++pMeta->pVnode->config.vndStats.numOfCTables;
} else {
me.ntbEntry.ctime = pReq->ctime;
diff --git a/source/dnode/vnode/src/tq/tq.c b/source/dnode/vnode/src/tq/tq.c
index 3ff59ac2c0..26db68a1d4 100644
--- a/source/dnode/vnode/src/tq/tq.c
+++ b/source/dnode/vnode/src/tq/tq.c
@@ -100,7 +100,13 @@ void tqClose(STQ* pTq) {
}
int32_t tqSendMetaPollRsp(STQ* pTq, const SRpcMsg* pMsg, const SMqPollReq* pReq, const SMqMetaRsp* pRsp) {
- int32_t tlen = sizeof(SMqRspHead) + tEncodeSMqMetaRsp(NULL, pRsp);
+ int32_t len = 0;
+ int32_t code = 0;
+ tEncodeSize(tEncodeSMqMetaRsp, pRsp, len, code);
+ if (code < 0) {
+ return -1;
+ }
+ int32_t tlen = sizeof(SMqRspHead) + len;
void* buf = rpcMallocCont(tlen);
if (buf == NULL) {
return -1;
@@ -111,7 +117,11 @@ int32_t tqSendMetaPollRsp(STQ* pTq, const SRpcMsg* pMsg, const SMqPollReq* pReq,
((SMqRspHead*)buf)->consumerId = pReq->consumerId;
void* abuf = POINTER_SHIFT(buf, sizeof(SMqRspHead));
- tEncodeSMqMetaRsp(&abuf, pRsp);
+
+ SEncoder encoder = {0};
+ tEncoderInit(&encoder, abuf, len);
+ tEncodeSMqMetaRsp(&encoder, pRsp);
+ tEncoderClear(&encoder);
SRpcMsg resp = {
.info = pMsg->info,
@@ -121,9 +131,8 @@ int32_t tqSendMetaPollRsp(STQ* pTq, const SRpcMsg* pMsg, const SMqPollReq* pReq,
};
tmsgSendRsp(&resp);
- tqDebug("vgId:%d, from consumer:%" PRId64 ", (epoch %d) send rsp, res msg type %d, reqOffset:%" PRId64
- ", rspOffset:%" PRId64,
- TD_VID(pTq->pVnode), pReq->consumerId, pReq->epoch, pRsp->resMsgType, pRsp->reqOffset, pRsp->rspOffset);
+ tqDebug("vgId:%d, from consumer:%" PRId64 ", (epoch %d) send rsp, res msg type %d, offset type:%d",
+ TD_VID(pTq->pVnode), pReq->consumerId, pReq->epoch, pRsp->resMsgType, pRsp->rspOffset.type);
return 0;
}
@@ -202,7 +211,7 @@ int32_t tqProcessOffsetCommitReq(STQ* pTq, int64_t version, char* msg, int32_t m
}
tDecoderClear(&decoder);
- if (offset.val.type == TMQ_OFFSET__SNAPSHOT_DATA) {
+ if (offset.val.type == TMQ_OFFSET__SNAPSHOT_DATA || offset.val.type == TMQ_OFFSET__SNAPSHOT_META) {
tqDebug("receive offset commit msg to %s on vgId:%d, offset(type:snapshot) uid:%" PRId64 ", ts:%" PRId64,
offset.subKey, TD_VID(pTq->pVnode), offset.val.uid, offset.val.ts);
} else if (offset.val.type == TMQ_OFFSET__LOG) {
@@ -297,7 +306,6 @@ static int32_t tqInitDataRsp(SMqDataRsp* pRsp, const SMqPollReq* pReq, int8_t su
int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
SMqPollReq* pReq = pMsg->pCont;
int64_t consumerId = pReq->consumerId;
- int64_t timeout = pReq->timeout;
int32_t reqEpoch = pReq->epoch;
int32_t code = 0;
STqOffsetVal reqOffset = pReq->reqOffset;
@@ -349,12 +357,11 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
TD_VID(pTq->pVnode), formatBuf);
} else {
if (reqOffset.type == TMQ_OFFSET__RESET_EARLIEAST) {
- if (pReq->useSnapshot && pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
- if (!pHandle->fetchMeta) {
- tqOffsetResetToData(&fetchOffsetNew, 0, 0);
+ if (pReq->useSnapshot){
+ if (pHandle->fetchMeta){
+ tqOffsetResetToMeta(&fetchOffsetNew, 0);
} else {
- // reset to meta
- ASSERT(0);
+ tqOffsetResetToData(&fetchOffsetNew, 0, 0);
}
} else {
tqOffsetResetToLog(&fetchOffsetNew, walGetFirstVer(pTq->pVnode->pWal));
@@ -378,28 +385,34 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
}
}
- // 3.query
- if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
- /*if (fetchOffsetNew.type == TMQ_OFFSET__LOG) {*/
- /*fetchOffsetNew.version++;*/
- /*}*/
- if (tqScan(pTq, pHandle, &dataRsp, &fetchOffsetNew) < 0) {
- ASSERT(0);
- code = -1;
+ if(pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN || fetchOffsetNew.type != TMQ_OFFSET__LOG){
+ SMqMetaRsp metaRsp = {0};
+ tqScan(pTq, pHandle, &dataRsp, &metaRsp, &fetchOffsetNew);
+
+ if(metaRsp.metaRspLen > 0){
+ if (tqSendMetaPollRsp(pTq, pMsg, pReq, &metaRsp) < 0) {
+ code = -1;
+ }
+      tqDebug("tmq poll: consumer:%" PRId64 ", subkey %s, vg %d, send meta offset type:%d, uid:%" PRId64 ", version:%" PRId64,
+              consumerId, pHandle->subKey, TD_VID(pTq->pVnode), metaRsp.rspOffset.type, metaRsp.rspOffset.uid, metaRsp.rspOffset.version);
+ taosMemoryFree(metaRsp.metaRsp);
goto OVER;
}
- if (dataRsp.blockNum == 0) {
- // TODO add to async task pool
- /*dataRsp.rspOffset.version--;*/
+
+ if (dataRsp.blockNum > 0){
+ if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
+ code = -1;
+ }
+ goto OVER;
+ }else{
+ fetchOffsetNew = dataRsp.rspOffset;
}
- if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
- code = -1;
- }
- goto OVER;
+
+    tqDebug("tmq poll: consumer:%" PRId64 ", subkey %s, vg %d, send data blockNum:%d, offset type:%d, uid:%" PRId64 ", version:%" PRId64,
+            consumerId, pHandle->subKey, TD_VID(pTq->pVnode), dataRsp.blockNum, dataRsp.rspOffset.type, dataRsp.rspOffset.uid, dataRsp.rspOffset.version);
}
- if (pHandle->execHandle.subType != TOPIC_SUB_TYPE__COLUMN) {
- ASSERT(fetchOffsetNew.type == TMQ_OFFSET__LOG);
+ if (pHandle->execHandle.subType != TOPIC_SUB_TYPE__COLUMN && fetchOffsetNew.type == TMQ_OFFSET__LOG) {
int64_t fetchVer = fetchOffsetNew.version + 1;
pCkHead = taosMemoryMalloc(sizeof(SWalCkHead) + 2048);
if (pCkHead == NULL) {
@@ -413,7 +426,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
consumerEpoch = atomic_load_32(&pHandle->epoch);
if (consumerEpoch > reqEpoch) {
tqWarn("tmq poll: consumer %" PRId64 " (epoch %d), subkey %s, vg %d offset %" PRId64
- ", found new consumer epoch %d, discard req epoch %d",
+ ", found new consumer epoch %d, discard req epoch %d",
consumerId, pReq->epoch, pHandle->subKey, TD_VID(pTq->pVnode), fetchVer, consumerEpoch, reqEpoch);
break;
}
@@ -422,7 +435,6 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
// TODO add push mgr
tqOffsetResetToLog(&dataRsp.rspOffset, fetchVer);
- ASSERT(dataRsp.rspOffset.version >= dataRsp.reqOffset.version);
if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
code = -1;
}
@@ -444,8 +456,6 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
// TODO continue scan until meeting batch requirement
if (dataRsp.blockNum > 0 /* threshold */) {
tqOffsetResetToLog(&dataRsp.rspOffset, fetchVer);
- ASSERT(dataRsp.rspOffset.version >= dataRsp.reqOffset.version);
-
if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
code = -1;
}
@@ -459,11 +469,7 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
ASSERT(IS_META_MSG(pHead->msgType));
tqDebug("fetch meta msg, ver:%" PRId64 ", type:%d", pHead->version, pHead->msgType);
SMqMetaRsp metaRsp = {0};
- /*metaRsp.reqOffset = pReq->reqOffset.version;*/
- metaRsp.rspOffset = fetchVer;
- /*metaRsp.rspOffsetNew.version = fetchVer;*/
- tqOffsetResetToLog(&metaRsp.reqOffsetNew, pReq->reqOffset.version);
- tqOffsetResetToLog(&metaRsp.rspOffsetNew, fetchVer);
+ tqOffsetResetToLog(&metaRsp.rspOffset, fetchVer);
metaRsp.resMsgType = pHead->msgType;
metaRsp.metaRspLen = pHead->bodyLen;
metaRsp.metaRsp = pHead->body;
@@ -477,6 +483,11 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
}
}
+  // send an empty response to the client
+ if (tqSendDataRsp(pTq, pMsg, pReq, &dataRsp) < 0) {
+ code = -1;
+ }
+
OVER:
if (pCkHead) taosMemoryFree(pCkHead);
// TODO wrap in destroy func
@@ -561,6 +572,7 @@ int32_t tqProcessVgChangeReq(STQ* pTq, int64_t version, char* msg, int32_t msgLe
pHandle->execHandle.subType = req.subType;
pHandle->fetchMeta = req.withMeta;
+
// TODO version should be assigned and refed during preprocess
SWalRef* pRef = walRefCommittedVer(pTq->pVnode->pWal);
if (pRef == NULL) {
@@ -570,36 +582,42 @@ int32_t tqProcessVgChangeReq(STQ* pTq, int64_t version, char* msg, int32_t msgLe
int64_t ver = pRef->refVer;
pHandle->pRef = pRef;
+ SReadHandle handle = {
+ .meta = pTq->pVnode->pMeta,
+ .vnode = pTq->pVnode,
+ .initTableReader = true,
+ .initTqReader = true,
+ .version = ver,
+ };
+ pHandle->snapshotVer = ver;
+
if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
pHandle->execHandle.execCol.qmsg = req.qmsg;
- pHandle->snapshotVer = ver;
req.qmsg = NULL;
- SReadHandle handle = {
- .meta = pTq->pVnode->pMeta,
- .vnode = pTq->pVnode,
- .initTableReader = true,
- .initTqReader = true,
- .version = ver,
- };
- pHandle->execHandle.execCol.task =
- qCreateQueueExecTaskInfo(pHandle->execHandle.execCol.qmsg, &handle, &pHandle->execHandle.numOfCols,
+
+ pHandle->execHandle.task =
+ qCreateQueueExecTaskInfo(pHandle->execHandle.execCol.qmsg, &handle, NULL,
&pHandle->execHandle.pSchemaWrapper);
- ASSERT(pHandle->execHandle.execCol.task);
+ ASSERT(pHandle->execHandle.task);
void* scanner = NULL;
- qExtractStreamScanner(pHandle->execHandle.execCol.task, &scanner);
+ qExtractStreamScanner(pHandle->execHandle.task, &scanner);
ASSERT(scanner);
pHandle->execHandle.pExecReader = qExtractReaderFromStreamScanner(scanner);
ASSERT(pHandle->execHandle.pExecReader);
} else if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__DB) {
pHandle->pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
-
pHandle->execHandle.pExecReader = tqOpenReader(pTq->pVnode);
pHandle->execHandle.execDb.pFilterOutTbUid =
taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
+ buildSnapContext(handle.meta, handle.version, 0, pHandle->execHandle.subType, pHandle->fetchMeta, (SSnapContext **)(&handle.sContext));
+
+ pHandle->execHandle.task =
+ qCreateQueueExecTaskInfo(NULL, &handle, NULL, NULL);
} else if (pHandle->execHandle.subType == TOPIC_SUB_TYPE__TABLE) {
pHandle->pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
pHandle->execHandle.execTb.suid = req.suid;
+
SArray* tbUidList = taosArrayInit(0, sizeof(int64_t));
vnodeGetCtbIdList(pTq->pVnode, req.suid, tbUidList);
tqDebug("vgId:%d, tq try to get all ctb, suid:%" PRId64, pTq->pVnode->config.vgId, req.suid);
@@ -610,6 +628,10 @@ int32_t tqProcessVgChangeReq(STQ* pTq, int64_t version, char* msg, int32_t msgLe
pHandle->execHandle.pExecReader = tqOpenReader(pTq->pVnode);
tqReaderSetTbUidList(pHandle->execHandle.pExecReader, tbUidList);
taosArrayDestroy(tbUidList);
+
+ buildSnapContext(handle.meta, handle.version, req.suid, pHandle->execHandle.subType, pHandle->fetchMeta, (SSnapContext **)(&handle.sContext));
+ pHandle->execHandle.task =
+ qCreateQueueExecTaskInfo(NULL, &handle, NULL, NULL);
}
taosHashPut(pTq->pHandle, req.subKey, strlen(req.subKey), pHandle, sizeof(STqHandle));
tqDebug("try to persist handle %s consumer %" PRId64, req.subKey, pHandle->consumerId);
diff --git a/source/dnode/vnode/src/tq/tqExec.c b/source/dnode/vnode/src/tq/tqExec.c
index 435bbb77b8..e21125f3a4 100644
--- a/source/dnode/vnode/src/tq/tqExec.c
+++ b/source/dnode/vnode/src/tq/tqExec.c
@@ -60,18 +60,18 @@ static int32_t tqAddTbNameToRsp(const STQ* pTq, int64_t uid, SMqDataRsp* pRsp) {
return 0;
}
-int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, STqOffsetVal* pOffset) {
+int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, SMqMetaRsp* pMetaRsp, STqOffsetVal* pOffset) {
const STqExecHandle* pExec = &pHandle->execHandle;
- qTaskInfo_t task = pExec->execCol.task;
+ qTaskInfo_t task = pExec->task;
- if (qStreamPrepareScan(task, pOffset) < 0) {
+ if (qStreamPrepareScan(task, pOffset, pHandle->execHandle.subType) < 0) {
tqDebug("prepare scan failed, return");
if (pOffset->type == TMQ_OFFSET__LOG) {
pRsp->rspOffset = *pOffset;
return 0;
} else {
tqOffsetResetToLog(pOffset, pHandle->snapshotVer);
- if (qStreamPrepareScan(task, pOffset) < 0) {
+ if (qStreamPrepareScan(task, pOffset, pHandle->execHandle.subType) < 0) {
tqDebug("prepare scan failed, return");
pRsp->rspOffset = *pOffset;
return 0;
@@ -83,24 +83,34 @@ int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, STqOffsetVa
while (1) {
SSDataBlock* pDataBlock = NULL;
uint64_t ts = 0;
- tqDebug("task start to execute");
+ tqDebug("tmqsnap task start to execute");
if (qExecTask(task, &pDataBlock, &ts) < 0) {
ASSERT(0);
}
- tqDebug("task execute end, get %p", pDataBlock);
+ tqDebug("tmqsnap task execute end, get %p", pDataBlock);
if (pDataBlock != NULL) {
if (pRsp->withTbName) {
+ int64_t uid = 0;
if (pOffset->type == TMQ_OFFSET__LOG) {
- int64_t uid = pExec->pExecReader->msgIter.uid;
+ uid = pExec->pExecReader->msgIter.uid;
if (tqAddTbNameToRsp(pTq, uid, pRsp) < 0) {
continue;
}
} else {
- pRsp->withTbName = 0;
+ char* tbName = strdup(qExtractTbnameFromTask(task));
+ taosArrayPush(pRsp->blockTbName, &tbName);
}
}
- tqAddBlockDataToRsp(pDataBlock, pRsp, pExec->numOfCols);
+ if(pRsp->withSchema){
+ if (pOffset->type == TMQ_OFFSET__LOG) {
+ tqAddBlockSchemaToRsp(pExec, pRsp);
+ }else{
+ SSchemaWrapper* pSW = tCloneSSchemaWrapper(qExtractSchemaFromTask(task));
+ taosArrayPush(pRsp->blockSchema, &pSW);
+ }
+ }
+ tqAddBlockDataToRsp(pDataBlock, pRsp, taosArrayGetSize(pDataBlock->pDataBlock));
pRsp->blockNum++;
if (pOffset->type == TMQ_OFFSET__LOG) {
continue;
@@ -110,39 +120,51 @@ int64_t tqScan(STQ* pTq, const STqHandle* pHandle, SMqDataRsp* pRsp, STqOffsetVa
}
}
- if (pRsp->blockNum == 0 && pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA) {
- tqDebug("vgId: %d, tsdb consume over, switch to wal, ver %" PRId64, TD_VID(pTq->pVnode),
- pHandle->snapshotVer + 1);
- tqOffsetResetToLog(pOffset, pHandle->snapshotVer);
- qStreamPrepareScan(task, pOffset);
- continue;
- }
-
- void* meta = qStreamExtractMetaMsg(task);
- if (meta != NULL) {
- // tq add meta to rsp
- }
-
- if (qStreamExtractOffset(task, &pRsp->rspOffset) < 0) {
- ASSERT(0);
- }
-
- ASSERT(pRsp->rspOffset.type != 0);
-
-#if 0
- if (pRsp->reqOffset.type == TMQ_OFFSET__LOG) {
- if (pRsp->blockNum > 0) {
- ASSERT(pRsp->rspOffset.version > pRsp->reqOffset.version);
- } else {
- ASSERT(pRsp->rspOffset.version >= pRsp->reqOffset.version);
+ if(pHandle->execHandle.subType == TOPIC_SUB_TYPE__COLUMN){
+ if (pRsp->blockNum == 0 && pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA) {
+ tqDebug("vgId: %d, tsdb consume over, switch to wal, ver %" PRId64, TD_VID(pTq->pVnode),
+ pHandle->snapshotVer + 1);
+ tqOffsetResetToLog(pOffset, pHandle->snapshotVer);
+ qStreamPrepareScan(task, pOffset, pHandle->execHandle.subType);
+ continue;
}
+ }else{
+ if (pDataBlock == NULL && pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA){
+ if(qStreamExtractPrepareUid(task) != 0){
+ continue;
+ }
+ tqDebug("tmqsnap vgId: %d, tsdb consume over, switch to wal, ver %" PRId64, TD_VID(pTq->pVnode),
+ pHandle->snapshotVer + 1);
+ break;
+ }
+
+      if (pRsp->blockNum > 0) {
+ tqDebug("tmqsnap task exec exited, get data");
+ break;
+ }
+
+ SMqMetaRsp* tmp = qStreamExtractMetaMsg(task);
+      if (tmp->rspOffset.type == TMQ_OFFSET__SNAPSHOT_DATA) {
+ tqOffsetResetToData(pOffset, tmp->rspOffset.uid, tmp->rspOffset.ts);
+ qStreamPrepareScan(task, pOffset, pHandle->execHandle.subType);
+ tmp->rspOffset.type = TMQ_OFFSET__SNAPSHOT_META;
+ tqDebug("tmqsnap task exec change to get data");
+ continue;
+ }
+
+ *pMetaRsp = *tmp;
+ tqDebug("tmqsnap task exec exited, get meta");
}
-#endif
tqDebug("task exec exited");
break;
}
+ if (qStreamExtractOffset(task, &pRsp->rspOffset) < 0) {
+ ASSERT(0);
+ }
+
+ ASSERT(pRsp->rspOffset.type != 0);
return 0;
}
diff --git a/source/dnode/vnode/src/tq/tqMeta.c b/source/dnode/vnode/src/tq/tqMeta.c
index 405bc669bd..6b6717ff57 100644
--- a/source/dnode/vnode/src/tq/tqMeta.c
+++ b/source/dnode/vnode/src/tq/tqMeta.c
@@ -249,27 +249,34 @@ int32_t tqMetaRestoreHandle(STQ* pTq) {
}
walRefVer(handle.pRef, handle.snapshotVer);
- if (handle.execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
- SReadHandle reader = {
- .meta = pTq->pVnode->pMeta,
- .vnode = pTq->pVnode,
- .initTableReader = true,
- .initTqReader = true,
- .version = handle.snapshotVer,
- };
+ SReadHandle reader = {
+ .meta = pTq->pVnode->pMeta,
+ .vnode = pTq->pVnode,
+ .initTableReader = true,
+ .initTqReader = true,
+ .version = handle.snapshotVer,
+ };
- handle.execHandle.execCol.task = qCreateQueueExecTaskInfo(
- handle.execHandle.execCol.qmsg, &reader, &handle.execHandle.numOfCols, &handle.execHandle.pSchemaWrapper);
- ASSERT(handle.execHandle.execCol.task);
+ if (handle.execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
+
+ handle.execHandle.task = qCreateQueueExecTaskInfo(
+ handle.execHandle.execCol.qmsg, &reader, NULL, &handle.execHandle.pSchemaWrapper);
+ ASSERT(handle.execHandle.task);
void* scanner = NULL;
- qExtractStreamScanner(handle.execHandle.execCol.task, &scanner);
+ qExtractStreamScanner(handle.execHandle.task, &scanner);
ASSERT(scanner);
handle.execHandle.pExecReader = qExtractReaderFromStreamScanner(scanner);
ASSERT(handle.execHandle.pExecReader);
} else {
+
handle.pWalReader = walOpenReader(pTq->pVnode->pWal, NULL);
handle.execHandle.execDb.pFilterOutTbUid =
taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
+// handle.execHandle.pExecReader = tqOpenReader(pTq->pVnode);
+ buildSnapContext(reader.meta, reader.version, 0, handle.execHandle.subType, handle.fetchMeta, (SSnapContext **)(&reader.sContext));
+
+ handle.execHandle.task =
+ qCreateQueueExecTaskInfo(NULL, &reader, NULL, NULL);
}
tqDebug("tq restore %s consumer %" PRId64 " vgId:%d", handle.subKey, handle.consumerId, TD_VID(pTq->pVnode));
taosHashPut(pTq->pHandle, pKey, kLen, &handle, sizeof(STqHandle));
diff --git a/source/dnode/vnode/src/tq/tqRead.c b/source/dnode/vnode/src/tq/tqRead.c
index e6a331f20e..6e2a6fdb71 100644
--- a/source/dnode/vnode/src/tq/tqRead.c
+++ b/source/dnode/vnode/src/tq/tqRead.c
@@ -68,7 +68,7 @@ int64_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalCkHea
offset++;
}
}
-END:
+ END:
taosThreadMutexUnlock(&pHandle->pWalReader->mutex);
return code;
}
@@ -398,7 +398,7 @@ int32_t tqUpdateTbUidList(STQ* pTq, const SArray* tbUidList, bool isAdd) {
if (pIter == NULL) break;
STqHandle* pExec = (STqHandle*)pIter;
if (pExec->execHandle.subType == TOPIC_SUB_TYPE__COLUMN) {
- int32_t code = qUpdateQualifiedTableId(pExec->execHandle.execCol.task, tbUidList, isAdd);
+ int32_t code = qUpdateQualifiedTableId(pExec->execHandle.task, tbUidList, isAdd);
ASSERT(code == 0);
} else if (pExec->execHandle.subType == TOPIC_SUB_TYPE__DB) {
if (!isAdd) {
diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c
index b9f3897674..61c6877555 100644
--- a/source/dnode/vnode/src/tsdb/tsdbCache.c
+++ b/source/dnode/vnode/src/tsdb/tsdbCache.c
@@ -476,7 +476,7 @@ static int32_t getNextRowFromFSLast(void *iter, TSDBROW **ppRow) {
if (code) goto _err;
if (!state->aBlockL) {
- state->aBlockL = taosArrayInit(0, sizeof(SBlockIdx));
+ state->aBlockL = taosArrayInit(0, sizeof(SBlockL));
} else {
taosArrayClear(state->aBlockL);
}
diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c
index 66843d9a28..ea9a7ec7d9 100644
--- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c
+++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c
@@ -18,7 +18,7 @@
#include "tcommon.h"
#include "tsdb.h"
-typedef struct SLastrowReader {
+typedef struct SCacheRowsReader {
SVnode* pVnode;
STSchema* pSchema;
uint64_t uid;
@@ -27,9 +27,9 @@ typedef struct SLastrowReader {
int32_t type;
  int32_t tableIndex; // index of the table whose result is currently being returned
SArray* pTableList; // table id list
-} SLastrowReader;
+} SCacheRowsReader;
-static void saveOneRow(STSRow* pRow, SSDataBlock* pBlock, SLastrowReader* pReader, const int32_t* slotIds) {
+static void saveOneRow(STSRow* pRow, SSDataBlock* pBlock, SCacheRowsReader* pReader, const int32_t* slotIds) {
ASSERT(pReader->numOfCols <= taosArrayGetSize(pBlock->pDataBlock));
int32_t numOfRows = pBlock->info.rows;
@@ -61,8 +61,10 @@ static void saveOneRow(STSRow* pRow, SSDataBlock* pBlock, SLastrowReader* pReade
pBlock->info.rows += 1;
}
-int32_t tsdbLastRowReaderOpen(void* pVnode, int32_t type, SArray* pTableIdList, int32_t numOfCols, void** pReader) {
- SLastrowReader* p = taosMemoryCalloc(1, sizeof(SLastrowReader));
+int32_t tsdbCacherowsReaderOpen(void* pVnode, int32_t type, SArray* pTableIdList, int32_t numOfCols, void** pReader) {
+ *pReader = NULL;
+
+ SCacheRowsReader* p = taosMemoryCalloc(1, sizeof(SCacheRowsReader));
if (p == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
@@ -81,9 +83,17 @@ int32_t tsdbLastRowReaderOpen(void* pVnode, int32_t type, SArray* pTableIdList,
p->pTableList = pTableIdList;
p->transferBuf = taosMemoryCalloc(p->pSchema->numOfCols, POINTER_BYTES);
+ if (p->transferBuf == NULL) {
+ return TSDB_CODE_OUT_OF_MEMORY;
+ }
+
for (int32_t i = 0; i < p->pSchema->numOfCols; ++i) {
if (IS_VAR_DATA_TYPE(p->pSchema->columns[i].type)) {
p->transferBuf[i] = taosMemoryMalloc(p->pSchema->columns[i].bytes);
+ if (p->transferBuf[i] == NULL) {
+ tsdbCacherowsReaderClose(p);
+ return TSDB_CODE_OUT_OF_MEMORY;
+ }
}
}
@@ -91,8 +101,8 @@ int32_t tsdbLastRowReaderOpen(void* pVnode, int32_t type, SArray* pTableIdList,
return TSDB_CODE_SUCCESS;
}
-int32_t tsdbLastrowReaderClose(void* pReader) {
- SLastrowReader* p = pReader;
+int32_t tsdbCacherowsReaderClose(void* pReader) {
+ SCacheRowsReader* p = pReader;
if (p->pSchema != NULL) {
for (int32_t i = 0; i < p->pSchema->numOfCols; ++i) {
@@ -107,28 +117,56 @@ int32_t tsdbLastrowReaderClose(void* pReader) {
return TSDB_CODE_SUCCESS;
}
-int32_t tsdbRetrieveLastRow(void* pReader, SSDataBlock* pResBlock, const int32_t* slotIds, SArray* pTableUidList) {
+static int32_t doExtractCacheRow(SCacheRowsReader* pr, SLRUCache* lruCache, uint64_t uid, STSRow** pRow, LRUHandle** h) {
+ int32_t code = TSDB_CODE_SUCCESS;
+ if ((pr->type & CACHESCAN_RETRIEVE_LAST_ROW) == CACHESCAN_RETRIEVE_LAST_ROW) {
+ code = tsdbCacheGetLastrowH(lruCache, uid, pr->pVnode->pTsdb, h);
+ if (code != TSDB_CODE_SUCCESS) {
+ return code;
+ }
+
+    // h stays NULL when the table has no cached last row for this uid
+ if (*h != NULL) {
+ *pRow = (STSRow*)taosLRUCacheValue(lruCache, *h);
+ }
+ } else {
+ code = tsdbCacheGetLastH(lruCache, uid, pr->pVnode->pTsdb, h);
+ if (code != TSDB_CODE_SUCCESS) {
+ return code;
+ }
+
+    // h stays NULL when the table has no cached last values for this uid
+ if (*h != NULL) {
+ SArray* pLast = (SArray*)taosLRUCacheValue(lruCache, *h);
+ tsdbCacheLastArray2Row(pLast, pRow, pr->pSchema);
+ }
+ }
+
+ return code;
+}
+
+int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32_t* slotIds, SArray* pTableUidList) {
if (pReader == NULL || pResBlock == NULL) {
return TSDB_CODE_INVALID_PARA;
}
- SLastrowReader* pr = pReader;
+ SCacheRowsReader* pr = pReader;
+ int32_t code = TSDB_CODE_SUCCESS;
SLRUCache* lruCache = pr->pVnode->pTsdb->lruCache;
LRUHandle* h = NULL;
STSRow* pRow = NULL;
size_t numOfTables = taosArrayGetSize(pr->pTableList);
  // retrieve only the single latest row across all tables in the uid list.
- if (pr->type == LASTROW_RETRIEVE_TYPE_SINGLE) {
+ if ((pr->type & CACHESCAN_RETRIEVE_TYPE_SINGLE) == CACHESCAN_RETRIEVE_TYPE_SINGLE) {
int64_t lastKey = INT64_MIN;
bool internalResult = false;
for (int32_t i = 0; i < numOfTables; ++i) {
STableKeyInfo* pKeyInfo = taosArrayGet(pr->pTableList, i);
- int32_t code = tsdbCacheGetLastrowH(lruCache, pKeyInfo->uid, pr->pVnode->pTsdb, &h);
- // int32_t code = tsdbCacheGetLastH(lruCache, pKeyInfo->uid, pr->pVnode->pTsdb, &h);
- if (code != TSDB_CODE_SUCCESS) {
+ code = doExtractCacheRow(pr, lruCache, pKeyInfo->uid, &pRow, &h);
+ if (code != TSDB_CODE_SUCCESS) {
return code;
}
@@ -136,9 +174,6 @@ int32_t tsdbRetrieveLastRow(void* pReader, SSDataBlock* pResBlock, const int32_t
continue;
}
- pRow = (STSRow*)taosLRUCacheValue(lruCache, h);
- // SArray* pLast = (SArray*)taosLRUCacheValue(lruCache, h);
- // tsdbCacheLastArray2Row(pLast, &pRow, pr->pSchema);
if (pRow->ts > lastKey) {
        // The result row is written into the same rowIndex repeatedly, so we need to check whether the internal
        // result row has already been appended.
@@ -155,25 +190,18 @@ int32_t tsdbRetrieveLastRow(void* pReader, SSDataBlock* pResBlock, const int32_t
tsdbCacheRelease(lruCache, h);
}
- } else if (pr->type == LASTROW_RETRIEVE_TYPE_ALL) {
+ } else if ((pr->type & CACHESCAN_RETRIEVE_TYPE_ALL) == CACHESCAN_RETRIEVE_TYPE_ALL) {
for (int32_t i = pr->tableIndex; i < numOfTables; ++i) {
STableKeyInfo* pKeyInfo = taosArrayGet(pr->pTableList, i);
-
- int32_t code = tsdbCacheGetLastrowH(lruCache, pKeyInfo->uid, pr->pVnode->pTsdb, &h);
- // int32_t code = tsdbCacheGetLastH(lruCache, pKeyInfo->uid, pr->pVnode->pTsdb, &h);
- if (code != TSDB_CODE_SUCCESS) {
+ code = doExtractCacheRow(pr, lruCache, pKeyInfo->uid, &pRow, &h);
+ if (code != TSDB_CODE_SUCCESS) {
return code;
}
- // no data in the table of Uid
if (h == NULL) {
continue;
}
- pRow = (STSRow*)taosLRUCacheValue(lruCache, h);
- // SArray* pLast = (SArray*)taosLRUCacheValue(lruCache, h);
- // tsdbCacheLastArray2Row(pLast, &pRow, pr->pSchema);
-
saveOneRow(pRow, pResBlock, pr, slotIds);
taosArrayPush(pTableUidList, &pKeyInfo->uid);
diff --git a/source/dnode/vnode/src/tsdb/tsdbRead.c b/source/dnode/vnode/src/tsdb/tsdbRead.c
index 0a0ef11774..a92e8189a1 100644
--- a/source/dnode/vnode/src/tsdb/tsdbRead.c
+++ b/source/dnode/vnode/src/tsdb/tsdbRead.c
@@ -16,9 +16,9 @@
#include "osDef.h"
#include "tsdb.h"
-#define ASCENDING_TRAVERSE(o) (o == TSDB_ORDER_ASC)
+#define ASCENDING_TRAVERSE(o) (o == TSDB_ORDER_ASC)
#define ALL_ROWS_CHECKED_INDEX (INT16_MIN)
-#define DEFAULT_ROW_INDEX_VAL (-1)
+#define INITIAL_ROW_INDEX_VAL (-1)
typedef enum {
EXTERNAL_ROWS_PREV = 0x1,
@@ -234,7 +234,7 @@ static SHashObj* createDataBlockScanInfo(STsdbReader* pTsdbReader, const STableK
}
for (int32_t j = 0; j < numOfTables; ++j) {
- STableBlockScanInfo info = {.lastKey = 0, .uid = idList[j].uid, .indexInBlockL = DEFAULT_ROW_INDEX_VAL};
+ STableBlockScanInfo info = {.lastKey = 0, .uid = idList[j].uid, .indexInBlockL = INITIAL_ROW_INDEX_VAL};
if (ASCENDING_TRAVERSE(pTsdbReader->order)) {
if (info.lastKey == INT64_MIN || info.lastKey < pTsdbReader->window.skey) {
info.lastKey = pTsdbReader->window.skey;
@@ -266,7 +266,9 @@ static void resetDataBlockScanInfo(SHashObj* pTableMap) {
p->iter.iter = tsdbTbDataIterDestroy(p->iter.iter);
}
- p->delSkyline = taosArrayDestroy(p->delSkyline);
+ p->fileDelIndex = -1;
+ p->delSkyline = taosArrayDestroy(p->delSkyline);
+ p->lastBlockDelIndex = INITIAL_ROW_INDEX_VAL;
}
}
@@ -414,7 +416,7 @@ _err:
return false;
}
-static void resetDataBlockIterator(SDataBlockIter* pIter, int32_t order, SHashObj* pTableMap) {
+static void resetDataBlockIterator(SDataBlockIter* pIter, int32_t order) {
pIter->order = order;
pIter->index = -1;
pIter->numOfBlocks = 0;
@@ -423,7 +425,6 @@ static void resetDataBlockIterator(SDataBlockIter* pIter, int32_t order, SHashOb
} else {
taosArrayClear(pIter->blockList);
}
- pIter->pTableMap = pTableMap;
}
static void cleanupDataBlockIterator(SDataBlockIter* pIter) { taosArrayDestroy(pIter->blockList); }
@@ -579,7 +580,7 @@ static void cleanupTableScanInfo(SHashObj* pTableMap) {
}
    // reset the index in last block when handling a new file
- px->indexInBlockL = DEFAULT_ROW_INDEX_VAL;
+ px->indexInBlockL = INITIAL_ROW_INDEX_VAL;
tMapDataClear(&px->mapData);
taosArrayClear(px->pBlockList);
}
@@ -887,6 +888,7 @@ static int32_t initBlockIterator(STsdbReader* pReader, SDataBlockIter* pBlockIte
pBlockIter->numOfBlocks = numOfBlocks;
taosArrayClear(pBlockIter->blockList);
+ pBlockIter->pTableMap = pReader->status.pTableMap;
// access data blocks according to the offset of each block in asc/desc order.
int32_t numOfTables = (int32_t)taosHashGetSize(pReader->status.pTableMap);
@@ -2403,7 +2405,7 @@ static int32_t doLoadLastBlockSequentially(STsdbReader* pReader) {
initLastBlockReader(pLastBlockReader, pScanInfo->uid, &pScanInfo->indexInBlockL);
int32_t index = pScanInfo->indexInBlockL;
- if (index == DEFAULT_ROW_INDEX_VAL || index == pLastBlockReader->lastBlockData.nRow) {
+ if (index == INITIAL_ROW_INDEX_VAL || index == pLastBlockReader->lastBlockData.nRow) {
bool hasData = nextRowInLastBlock(pLastBlockReader, pScanInfo);
if (!hasData) { // current table does not have rows in last block, try next table
bool hasNexTable = moveToNextTable(pOrderedCheckInfo, pStatus);
@@ -2470,7 +2472,7 @@ static int32_t doBuildDataBlock(STsdbReader* pReader) {
// note: the lastblock may be null here
initLastBlockReader(pLastBlockReader, pScanInfo->uid, &pScanInfo->indexInBlockL);
- if (pScanInfo->indexInBlockL == DEFAULT_ROW_INDEX_VAL || pScanInfo->indexInBlockL == pLastBlockReader->lastBlockData.nRow) {
+ if (pScanInfo->indexInBlockL == INITIAL_ROW_INDEX_VAL || pScanInfo->indexInBlockL == pLastBlockReader->lastBlockData.nRow) {
bool hasData = nextRowInLastBlock(pLastBlockReader, pScanInfo);
}
}
@@ -2582,7 +2584,7 @@ static int32_t initForFirstBlockInFile(STsdbReader* pReader, SDataBlockIter* pBl
code = initBlockIterator(pReader, pBlockIter, num.numOfBlocks);
} else { // no block data, only last block exists
tBlockDataReset(&pReader->status.fileBlockData);
- resetDataBlockIterator(pBlockIter, pReader->order, pReader->status.pTableMap);
+ resetDataBlockIterator(pBlockIter, pReader->order);
}
SLastBlockReader* pLReader = pReader->status.fileIter.pLastBlockReader;
@@ -2654,7 +2656,7 @@ static int32_t buildBlockFromFiles(STsdbReader* pReader) {
initBlockDumpInfo(pReader, pBlockIter);
} else if (taosArrayGetSize(pReader->status.fileIter.pLastBlockReader->pBlockL) > 0) { // data blocks in current file are exhausted, let's try the next file now
tBlockDataReset(&pReader->status.fileBlockData);
- resetDataBlockIterator(pBlockIter, pReader->order, pReader->status.pTableMap);
+ resetDataBlockIterator(pBlockIter, pReader->order);
goto _begin;
} else {
code = initForFirstBlockInFile(pReader, pBlockIter);
@@ -3276,7 +3278,7 @@ int32_t tsdbSetTableId(STsdbReader* pReader, int64_t uid) {
ASSERT(pReader != NULL);
taosHashClear(pReader->status.pTableMap);
- STableBlockScanInfo info = {.lastKey = 0, .uid = uid, .indexInBlockL = DEFAULT_ROW_INDEX_VAL};
+ STableBlockScanInfo info = {.lastKey = 0, .uid = uid, .indexInBlockL = INITIAL_ROW_INDEX_VAL};
taosHashPut(pReader->status.pTableMap, &info.uid, sizeof(uint64_t), &info, sizeof(info));
return TDB_CODE_SUCCESS;
}
@@ -3347,10 +3349,10 @@ int32_t tsdbReaderOpen(SVnode* pVnode, SQueryTableDataCond* pCond, SArray* pTabl
}
if (pCond->suid != 0) {
- pReader->pSchema = metaGetTbTSchema(pReader->pTsdb->pVnode->pMeta, pReader->suid, -1);
+ pReader->pSchema = metaGetTbTSchema(pReader->pTsdb->pVnode->pMeta, pReader->suid, pCond->schemaVersion);
} else if (taosArrayGetSize(pTableList) > 0) {
STableKeyInfo* pKey = taosArrayGet(pTableList, 0);
- pReader->pSchema = metaGetTbTSchema(pReader->pTsdb->pVnode->pMeta, pKey->uid, -1);
+ pReader->pSchema = metaGetTbTSchema(pReader->pTsdb->pVnode->pMeta, pKey->uid, pCond->schemaVersion);
}
int32_t numOfTables = taosArrayGetSize(pTableList);
@@ -3372,7 +3374,7 @@ int32_t tsdbReaderOpen(SVnode* pVnode, SQueryTableDataCond* pCond, SArray* pTabl
SDataBlockIter* pBlockIter = &pReader->status.blockIter;
initFilesetIterator(&pReader->status.fileIter, pReader->pReadSnap->fs.aDFileSet, pReader);
- resetDataBlockIterator(&pReader->status.blockIter, pReader->order, pReader->status.pTableMap);
+ resetDataBlockIterator(&pReader->status.blockIter, pReader->order);
// no data in files, let's try buffer in memory
if (pReader->status.fileIter.numOfFiles == 0) {
@@ -3393,7 +3395,7 @@ int32_t tsdbReaderOpen(SVnode* pVnode, SQueryTableDataCond* pCond, SArray* pTabl
}
initFilesetIterator(&pPrevReader->status.fileIter, pPrevReader->pReadSnap->fs.aDFileSet, pPrevReader);
- resetDataBlockIterator(&pPrevReader->status.blockIter, pPrevReader->order, pReader->status.pTableMap);
+ resetDataBlockIterator(&pPrevReader->status.blockIter, pPrevReader->order);
// no data in files, let's try buffer in memory
if (pPrevReader->status.fileIter.numOfFiles == 0) {
@@ -3696,7 +3698,7 @@ int32_t tsdbReaderReset(STsdbReader* pReader, SQueryTableDataCond* pCond) {
tsdbDataFReaderClose(&pReader->pFileReader);
initFilesetIterator(&pReader->status.fileIter, pReader->pReadSnap->fs.aDFileSet, pReader);
- resetDataBlockIterator(&pReader->status.blockIter, pReader->order, pReader->status.pTableMap);
+ resetDataBlockIterator(&pReader->status.blockIter, pReader->order);
resetDataBlockScanInfo(pReader->status.pTableMap);
int32_t code = 0;
diff --git a/source/libs/executor/inc/executil.h b/source/libs/executor/inc/executil.h
index a25933d15e..9e7fcc2227 100644
--- a/source/libs/executor/inc/executil.h
+++ b/source/libs/executor/inc/executil.h
@@ -22,6 +22,7 @@
#include "tbuffer.h"
#include "tcommon.h"
#include "tpagedbuf.h"
+#include "tsimplehash.h"
#define T_LONG_JMP(_obj, _c) \
do { \
@@ -106,7 +107,7 @@ static FORCE_INLINE void setResultBufPageDirty(SDiskbasedBuf* pBuf, SResultRowPo
setBufPageDirty(pPage, true);
}
-void initGroupedResultInfo(SGroupResInfo* pGroupResInfo, SHashObj* pHashmap, int32_t order);
+void initGroupedResultInfo(SGroupResInfo* pGroupResInfo, SSHashObj* pHashmap, int32_t order);
void cleanupGroupResInfo(SGroupResInfo* pGroupResInfo);
void initMultiResInfoFromArrayList(SGroupResInfo* pGroupResInfo, SArray* pArrayList);
diff --git a/source/libs/executor/inc/executorimpl.h b/source/libs/executor/inc/executorimpl.h
index f059a90c9c..7eb02308de 100644
--- a/source/libs/executor/inc/executorimpl.h
+++ b/source/libs/executor/inc/executorimpl.h
@@ -142,7 +142,9 @@ typedef struct {
//TODO remove prepareStatus
STqOffsetVal prepareStatus; // for tmq
STqOffsetVal lastStatus; // for tmq
- void* metaBlk; // for tmq fetching meta
+ SMqMetaRsp metaRsp; // for tmq fetching meta
+ SSchemaWrapper *schema;
+ char tbName[TSDB_TABLE_NAME_LEN];
SSDataBlock* pullOverBlk; // for streaming
SWalFilterCond cond;
int64_t lastScanUid;
@@ -297,10 +299,10 @@ enum {
};
typedef struct SAggSupporter {
- SHashObj* pResultRowHashTable; // quick locate the window object for each result
- char* keyBuf; // window key buffer
- SDiskbasedBuf* pResultBuf; // query result buffer based on blocked-wised disk file
- int32_t resultRowSize; // the result buffer size for each result row, with the meta data size for each row
+ SSHashObj* pResultRowHashTable; // quick locate the window object for each result
+ char* keyBuf; // window key buffer
+  SDiskbasedBuf* pResultBuf;        // query result buffer based on block-wise disk files
+ int32_t resultRowSize; // the result buffer size for each result row, with the meta data size for each row
} SAggSupporter;
typedef struct {
@@ -489,6 +491,19 @@ typedef struct SStreamScanInfo {
SNode* pTagIndexCond;
} SStreamScanInfo;
+typedef struct SStreamRawScanInfo {
+// int8_t subType;
+// bool withMeta;
+// int64_t suid;
+// int64_t snapVersion;
+// void *metaInfo;
+// void *dataInfo;
+ SVnode* vnode;
+ SSDataBlock pRes; // result SSDataBlock
+ STsdbReader* dataReader;
+ SSnapContext* sContext;
+} SStreamRawScanInfo;
+
typedef struct SSysTableScanInfo {
SRetrieveMetaTableRsp* pRsp;
SRetrieveTableReq req;
@@ -909,7 +924,7 @@ SOperatorInfo* createIndefinitOutputOperatorInfo(SOperatorInfo* downstream, SPhy
SOperatorInfo* createProjectOperatorInfo(SOperatorInfo* downstream, SProjectPhysiNode* pProjPhyNode, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createSortOperatorInfo(SOperatorInfo* downstream, SSortPhysiNode* pSortNode, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createMultiwayMergeOperatorInfo(SOperatorInfo** dowStreams, size_t numStreams, SMergePhysiNode* pMergePhysiNode, SExecTaskInfo* pTaskInfo);
-SOperatorInfo* createLastrowScanOperator(SLastRowScanPhysiNode* pTableScanNode, SReadHandle* readHandle, SExecTaskInfo* pTaskInfo);
+SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pTableScanNode, SReadHandle* readHandle, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo* pExprInfo, int32_t numOfCols,
SSDataBlock* pResBlock, SInterval* pInterval, int32_t primaryTsSlotId,
@@ -931,6 +946,8 @@ SOperatorInfo* createDataBlockInfoScanOperator(void* dataReader, SReadHandle* re
SOperatorInfo* createStreamScanOperatorInfo(SReadHandle* pHandle, STableScanPhysiNode* pTableScanNode, SNode* pTagCond,
SExecTaskInfo* pTaskInfo);
+SOperatorInfo* createRawScanOperatorInfo(SReadHandle* pHandle, SExecTaskInfo* pTaskInfo);
+
SOperatorInfo* createFillOperatorInfo(SOperatorInfo* downstream, SFillPhysiNode* pPhyFillNode, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createStatewindowOperatorInfo(SOperatorInfo* downstream, SStateWinodwPhysiNode* pStateNode, SExecTaskInfo* pTaskInfo);
SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartitionPhysiNode* pPartNode, SExecTaskInfo* pTaskInfo);
diff --git a/source/libs/executor/inc/tsimplehash.h b/source/libs/executor/inc/tsimplehash.h
index 4c5a80e2f1..27191e3b7e 100644
--- a/source/libs/executor/inc/tsimplehash.h
+++ b/source/libs/executor/inc/tsimplehash.h
@@ -28,7 +28,7 @@ typedef void (*_hash_free_fn_t)(void *);
/**
* @brief single thread hash
- *
+ *
*/
typedef struct SSHashObj SSHashObj;
@@ -52,13 +52,13 @@ int32_t tSimpleHashPrint(const SSHashObj *pHashObj);
/**
* @brief put element into hash table, if the element with the same key exists, update it
- *
- * @param pHashObj
- * @param key
- * @param keyLen
- * @param data
- * @param dataLen
- * @return int32_t
+ *
+ * @param pHashObj
+ * @param key
+ * @param keyLen
+ * @param data
+ * @param dataLen
+ * @return int32_t
*/
int32_t tSimpleHashPut(SSHashObj *pHashObj, const void *key, size_t keyLen, const void *data, size_t dataLen);
@@ -80,6 +80,18 @@ void *tSimpleHashGet(SSHashObj *pHashObj, const void *key, size_t keyLen);
*/
int32_t tSimpleHashRemove(SSHashObj *pHashObj, const void *key, size_t keyLen);
+/**
+ * @brief remove the item with the specified key while iterating over the hash table
+ *
+ * @param pHashObj
+ * @param key
+ * @param keyLen
+ * @param pIter
+ * @param iter
+ * @return int32_t
+ */
+int32_t tSimpleHashIterateRemove(SSHashObj *pHashObj, const void *key, size_t keyLen, void **pIter, int32_t *iter);
+
/**
* Clear the hash table.
* @param pHashObj
@@ -99,13 +111,27 @@ void tSimpleHashCleanup(SSHashObj *pHashObj);
*/
size_t tSimpleHashGetMemSize(const SSHashObj *pHashObj);
+#pragma pack(push, 4)
+typedef struct SHNode{
+ struct SHNode *next;
+ uint32_t keyLen : 20;
+ uint32_t dataLen : 12;
+ char data[];
+} SHNode;
+#pragma pack(pop)
+
/**
* Get the corresponding key information for a given data in hash table
* @param data
* @param keyLen
* @return
*/
-void *tSimpleHashGetKey(void *data, size_t* keyLen);
+static FORCE_INLINE void *tSimpleHashGetKey(void *data, size_t *keyLen) {
+ SHNode *node = (SHNode *)((char *)data - offsetof(SHNode, data));
+ if (keyLen) *keyLen = node->keyLen;
+
+ return POINTER_SHIFT(data, node->dataLen);
+}
/**
* Create the hash table iterator
@@ -116,17 +142,6 @@ void *tSimpleHashGetKey(void *data, size_t* keyLen);
*/
void *tSimpleHashIterate(const SSHashObj *pHashObj, void *data, int32_t *iter);
-/**
- * Create the hash table iterator
- *
- * @param pHashObj
- * @param data
- * @param key
- * @param iter
- * @return void*
- */
-void *tSimpleHashIterateKV(const SSHashObj *pHashObj, void *data, void **key, int32_t *iter);
-
#ifdef __cplusplus
}
#endif
diff --git a/source/libs/executor/src/cachescanoperator.c b/source/libs/executor/src/cachescanoperator.c
index b31fa279e5..94d9d0cadb 100644
--- a/source/libs/executor/src/cachescanoperator.c
+++ b/source/libs/executor/src/cachescanoperator.c
@@ -25,24 +25,27 @@
#include "thash.h"
#include "ttypes.h"
-static SSDataBlock* doScanLastrow(SOperatorInfo* pOperator);
+static SSDataBlock* doScanCache(SOperatorInfo* pOperator);
static void destroyLastrowScanOperator(void* param);
static int32_t extractTargetSlotId(const SArray* pColMatchInfo, SExecTaskInfo* pTaskInfo, int32_t** pSlotIds);
-SOperatorInfo* createLastrowScanOperator(SLastRowScanPhysiNode* pScanNode, SReadHandle* readHandle, SExecTaskInfo* pTaskInfo) {
+SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pScanNode, SReadHandle* readHandle,
+ SExecTaskInfo* pTaskInfo) {
+ int32_t code = TSDB_CODE_SUCCESS;
SLastrowScanInfo* pInfo = taosMemoryCalloc(1, sizeof(SLastrowScanInfo));
SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
if (pInfo == NULL || pOperator == NULL) {
+ code = TSDB_CODE_OUT_OF_MEMORY;
goto _error;
}
pInfo->readHandle = *readHandle;
- pInfo->pRes = createResDataBlock(pScanNode->scan.node.pOutputDataBlockDesc);
+ pInfo->pRes = createResDataBlock(pScanNode->scan.node.pOutputDataBlockDesc);
int32_t numOfCols = 0;
pInfo->pColMatchInfo = extractColMatchInfo(pScanNode->scan.pScanCols, pScanNode->scan.node.pOutputDataBlockDesc, &numOfCols,
COL_MATCH_FROM_COL_ID);
- int32_t code = extractTargetSlotId(pInfo->pColMatchInfo, pTaskInfo, &pInfo->pSlotIds);
+ code = extractTargetSlotId(pInfo->pColMatchInfo, pTaskInfo, &pInfo->pSlotIds);
if (code != TSDB_CODE_SUCCESS) {
goto _error;
}
@@ -55,13 +58,17 @@ SOperatorInfo* createLastrowScanOperator(SLastRowScanPhysiNode* pScanNode, SRead
// partition by tbname
if (taosArrayGetSize(pTableList->pGroupList) == taosArrayGetSize(pTableList->pTableList)) {
- pInfo->retrieveType = LASTROW_RETRIEVE_TYPE_ALL;
- tsdbLastRowReaderOpen(pInfo->readHandle.vnode, pInfo->retrieveType, pTableList->pTableList,
- taosArrayGetSize(pInfo->pColMatchInfo), &pInfo->pLastrowReader);
+ pInfo->retrieveType = CACHESCAN_RETRIEVE_TYPE_ALL|CACHESCAN_RETRIEVE_LAST_ROW;
+ code = tsdbCacherowsReaderOpen(pInfo->readHandle.vnode, pInfo->retrieveType, pTableList->pTableList,
+ taosArrayGetSize(pInfo->pColMatchInfo), &pInfo->pLastrowReader);
+ if (code != TSDB_CODE_SUCCESS) {
+ goto _error;
+ }
+
pInfo->pBufferredRes = createOneDataBlock(pInfo->pRes, false);
blockDataEnsureCapacity(pInfo->pBufferredRes, pOperator->resultInfo.capacity);
} else { // by tags
- pInfo->retrieveType = LASTROW_RETRIEVE_TYPE_SINGLE;
+ pInfo->retrieveType = CACHESCAN_RETRIEVE_TYPE_SINGLE|CACHESCAN_RETRIEVE_LAST_ROW;
}
if (pScanNode->scan.pScanPseudoCols != NULL) {
@@ -80,19 +87,19 @@ SOperatorInfo* createLastrowScanOperator(SLastRowScanPhysiNode* pScanNode, SRead
pOperator->exprSupp.numOfExprs = taosArrayGetSize(pInfo->pRes->pDataBlock);
pOperator->fpSet =
- createOperatorFpSet(operatorDummyOpenFn, doScanLastrow, NULL, NULL, destroyLastrowScanOperator, NULL, NULL, NULL);
+ createOperatorFpSet(operatorDummyOpenFn, doScanCache, NULL, NULL, destroyLastrowScanOperator, NULL, NULL, NULL);
pOperator->cost.openCost = 0;
return pOperator;
_error:
- pTaskInfo->code = TSDB_CODE_OUT_OF_MEMORY;
- taosMemoryFree(pInfo);
+ pTaskInfo->code = code;
+ destroyLastrowScanOperator(pInfo);
taosMemoryFree(pOperator);
return NULL;
}
-SSDataBlock* doScanLastrow(SOperatorInfo* pOperator) {
+SSDataBlock* doScanCache(SOperatorInfo* pOperator) {
if (pOperator->status == OP_EXEC_DONE) {
return NULL;
}
@@ -109,14 +116,14 @@ SSDataBlock* doScanLastrow(SOperatorInfo* pOperator) {
blockDataCleanup(pInfo->pRes);
// check if it is a group by tbname
- if (pInfo->retrieveType == LASTROW_RETRIEVE_TYPE_ALL) {
+ if ((pInfo->retrieveType & CACHESCAN_RETRIEVE_TYPE_ALL) == CACHESCAN_RETRIEVE_TYPE_ALL) {
if (pInfo->indexOfBufferedRes >= pInfo->pBufferredRes->info.rows) {
blockDataCleanup(pInfo->pBufferredRes);
taosArrayClear(pInfo->pUidList);
- int32_t code = tsdbRetrieveLastRow(pInfo->pLastrowReader, pInfo->pBufferredRes, pInfo->pSlotIds, pInfo->pUidList);
+ int32_t code = tsdbRetrieveCacheRows(pInfo->pLastrowReader, pInfo->pBufferredRes, pInfo->pSlotIds, pInfo->pUidList);
if (code != TSDB_CODE_SUCCESS) {
- longjmp(pTaskInfo->env, code);
+ T_LONG_JMP(pTaskInfo->env, code);
}
// check for tag values
@@ -172,11 +179,11 @@ SSDataBlock* doScanLastrow(SOperatorInfo* pOperator) {
while (pInfo->currentGroupIndex < totalGroups) {
SArray* pGroupTableList = taosArrayGetP(pTableList->pGroupList, pInfo->currentGroupIndex);
- tsdbLastRowReaderOpen(pInfo->readHandle.vnode, pInfo->retrieveType, pGroupTableList,
+ tsdbCacherowsReaderOpen(pInfo->readHandle.vnode, pInfo->retrieveType, pGroupTableList,
taosArrayGetSize(pInfo->pColMatchInfo), &pInfo->pLastrowReader);
taosArrayClear(pInfo->pUidList);
- int32_t code = tsdbRetrieveLastRow(pInfo->pLastrowReader, pInfo->pRes, pInfo->pSlotIds, pInfo->pUidList);
+ int32_t code = tsdbRetrieveCacheRows(pInfo->pLastrowReader, pInfo->pRes, pInfo->pSlotIds, pInfo->pUidList);
if (code != TSDB_CODE_SUCCESS) {
longjmp(pTaskInfo->env, code);
}
@@ -200,7 +207,7 @@ SSDataBlock* doScanLastrow(SOperatorInfo* pOperator) {
}
}
- tsdbLastrowReaderClose(pInfo->pLastrowReader);
+ tsdbCacherowsReaderClose(pInfo->pLastrowReader);
return pInfo->pRes;
}
}
diff --git a/source/libs/executor/src/executil.c b/source/libs/executor/src/executil.c
index b89579a017..3b3ef9e3de 100644
--- a/source/libs/executor/src/executil.c
+++ b/source/libs/executor/src/executil.c
@@ -83,7 +83,7 @@ int32_t resultrowComparAsc(const void* p1, const void* p2) {
static int32_t resultrowComparDesc(const void* p1, const void* p2) { return resultrowComparAsc(p2, p1); }
-void initGroupedResultInfo(SGroupResInfo* pGroupResInfo, SHashObj* pHashmap, int32_t order) {
+void initGroupedResultInfo(SGroupResInfo* pGroupResInfo, SSHashObj* pHashmap, int32_t order) {
if (pGroupResInfo->pRows != NULL) {
taosArrayDestroy(pGroupResInfo->pRows);
}
@@ -92,9 +92,10 @@ void initGroupedResultInfo(SGroupResInfo* pGroupResInfo, SHashObj* pHashmap, int
void* pData = NULL;
pGroupResInfo->pRows = taosArrayInit(10, POINTER_BYTES);
- size_t keyLen = 0;
- while ((pData = taosHashIterate(pHashmap, pData)) != NULL) {
- void* key = taosHashGetKey(pData, &keyLen);
+ size_t keyLen = 0;
+ int32_t iter = 0;
+ while ((pData = tSimpleHashIterate(pHashmap, pData, &iter)) != NULL) {
+ void* key = tSimpleHashGetKey(pData, &keyLen);
SResKeyPos* p = taosMemoryMalloc(keyLen + sizeof(SResultRowPosition));
@@ -348,7 +349,7 @@ static int32_t createResultData(SDataType* pType, int32_t numOfRows, SScalarPara
int32_t code = colInfoDataEnsureCapacity(pColumnData, numOfRows);
if (code != TSDB_CODE_SUCCESS) {
- terrno = TSDB_CODE_OUT_OF_MEMORY;
+ terrno = code;
taosMemoryFree(pColumnData);
return terrno;
}
@@ -366,6 +367,7 @@ static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray
SScalarParam output = {0};
tagFilterAssist ctx = {0};
+
ctx.colHash = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_SMALLINT), false, HASH_NO_LOCK);
if (ctx.colHash == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
@@ -473,6 +475,7 @@ static SColumnInfoData* getColInfoResult(void* metaHandle, uint64_t suid, SArray
if (code != TSDB_CODE_SUCCESS) {
terrno = code;
qError("failed to create result, reason:%s", tstrerror(code));
goto end;
}
@@ -1216,7 +1219,6 @@ SqlFunctionCtx* createSqlFunctionCtx(SExprInfo* pExprInfo, int32_t numOfOutput,
pCtx->start.key = INT64_MIN;
pCtx->end.key = INT64_MIN;
pCtx->numOfParams = pExpr->base.numOfParams;
- pCtx->increase = false;
pCtx->isStream = false;
pCtx->param = pFunct->pParam;
@@ -1298,6 +1300,7 @@ int32_t initQueryTableDataCond(SQueryTableDataCond* pCond, const STableScanPhysi
pCond->type = TIMEWINDOW_RANGE_CONTAINED;
pCond->startVersion = -1;
pCond->endVersion = -1;
+ pCond->schemaVersion = -1;
// pCond->type = pTableScanNode->scanFlag;
int32_t j = 0;
diff --git a/source/libs/executor/src/executor.c b/source/libs/executor/src/executor.c
index fe1f4911ca..271a65647d 100644
--- a/source/libs/executor/src/executor.c
+++ b/source/libs/executor/src/executor.c
@@ -140,7 +140,23 @@ int32_t qSetMultiStreamInput(qTaskInfo_t tinfo, const void* pBlocks, size_t numO
qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* numOfCols, SSchemaWrapper** pSchema) {
if (msg == NULL) {
// TODO create raw scan
- return NULL;
+
+ SExecTaskInfo* pTaskInfo = taosMemoryCalloc(1, sizeof(SExecTaskInfo));
+ if (NULL == pTaskInfo) {
+ terrno = TSDB_CODE_OUT_OF_MEMORY;
+ return NULL;
+ }
+ setTaskStatus(pTaskInfo, TASK_NOT_COMPLETED);
+
+ pTaskInfo->cost.created = taosGetTimestampMs();
+ pTaskInfo->execModel = OPTR_EXEC_MODEL_QUEUE;
+ pTaskInfo->pRoot = createRawScanOperatorInfo(readers, pTaskInfo);
+  if (NULL == pTaskInfo->pRoot) {
+ terrno = TSDB_CODE_OUT_OF_MEMORY;
+ taosMemoryFree(pTaskInfo);
+ return NULL;
+ }
+ return pTaskInfo;
}
struct SSubplan* pPlan = NULL;
@@ -161,13 +177,13 @@ qTaskInfo_t qCreateQueueExecTaskInfo(void* msg, SReadHandle* readers, int32_t* n
// extract the number of output columns
SDataBlockDescNode* pDescNode = pPlan->pNode->pOutputDataBlockDesc;
- *numOfCols = 0;
+  if (numOfCols) *numOfCols = 0;
SNode* pNode;
FOREACH(pNode, pDescNode->pSlots) {
SSlotDescNode* pSlotDesc = (SSlotDescNode*)pNode;
if (pSlotDesc->output) {
- ++(*numOfCols);
+      if (numOfCols) ++(*numOfCols);
}
}
@@ -669,15 +685,26 @@ void* qExtractReaderFromStreamScanner(void* scanner) {
return (void*)pInfo->tqReader;
}
-const SSchemaWrapper* qExtractSchemaFromStreamScanner(void* scanner) {
- SStreamScanInfo* pInfo = scanner;
- return pInfo->tqReader->pSchemaWrapper;
+const SSchemaWrapper* qExtractSchemaFromTask(qTaskInfo_t tinfo) {
+ SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
+ return pTaskInfo->streamInfo.schema;
}
-void* qStreamExtractMetaMsg(qTaskInfo_t tinfo) {
+const char* qExtractTbnameFromTask(qTaskInfo_t tinfo) {
+ SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
+ return pTaskInfo->streamInfo.tbName;
+}
+
+SMqMetaRsp* qStreamExtractMetaMsg(qTaskInfo_t tinfo) {
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
ASSERT(pTaskInfo->execModel == OPTR_EXEC_MODEL_QUEUE);
- return pTaskInfo->streamInfo.metaBlk;
+ return &pTaskInfo->streamInfo.metaRsp;
+}
+
+int64_t qStreamExtractPrepareUid(qTaskInfo_t tinfo) {
+ SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
+ ASSERT(pTaskInfo->execModel == OPTR_EXEC_MODEL_QUEUE);
+ return pTaskInfo->streamInfo.prepareStatus.uid;
}
int32_t qStreamExtractOffset(qTaskInfo_t tinfo, STqOffsetVal* pOffset) {
@@ -687,102 +714,164 @@ int32_t qStreamExtractOffset(qTaskInfo_t tinfo, STqOffsetVal* pOffset) {
return 0;
}
-int32_t qStreamPrepareScan(qTaskInfo_t tinfo, const STqOffsetVal* pOffset) {
+int32_t initQueryTableDataCondForTmq(SQueryTableDataCond* pCond, SSnapContext* sContext, SMetaTableInfo mtInfo) {
+ memset(pCond, 0, sizeof(SQueryTableDataCond));
+ pCond->order = TSDB_ORDER_ASC;
+ pCond->numOfCols = mtInfo.schema->nCols;
+ pCond->colList = taosMemoryCalloc(pCond->numOfCols, sizeof(SColumnInfo));
+ if (pCond->colList == NULL) {
+ terrno = TSDB_CODE_QRY_OUT_OF_MEMORY;
+ return terrno;
+ }
+
+ pCond->twindows = (STimeWindow){.skey = INT64_MIN, .ekey = INT64_MAX};
+ pCond->suid = mtInfo.suid;
+ pCond->type = TIMEWINDOW_RANGE_CONTAINED;
+ pCond->startVersion = -1;
+ pCond->endVersion = sContext->snapVersion;
+ pCond->schemaVersion = sContext->snapVersion;
+
+ for (int32_t i = 0; i < pCond->numOfCols; ++i) {
+ pCond->colList[i].type = mtInfo.schema->pSchema[i].type;
+ pCond->colList[i].bytes = mtInfo.schema->pSchema[i].bytes;
+ pCond->colList[i].colId = mtInfo.schema->pSchema[i].colId;
+ }
+
+ return TSDB_CODE_SUCCESS;
+}
+
+int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subType) {
SExecTaskInfo* pTaskInfo = (SExecTaskInfo*)tinfo;
SOperatorInfo* pOperator = pTaskInfo->pRoot;
ASSERT(pTaskInfo->execModel == OPTR_EXEC_MODEL_QUEUE);
pTaskInfo->streamInfo.prepareStatus = *pOffset;
- if (!tOffsetEqual(pOffset, &pTaskInfo->streamInfo.lastStatus)) {
- while (1) {
- uint16_t type = pOperator->operatorType;
- pOperator->status = OP_OPENED;
- // TODO add more check
- if (type != QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
- ASSERT(pOperator->numOfDownstream == 1);
- pOperator = pOperator->pDownstream[0];
- }
+ if (tOffsetEqual(pOffset, &pTaskInfo->streamInfo.lastStatus)) {
+ return 0;
+ }
+ if (subType == TOPIC_SUB_TYPE__COLUMN) {
+ uint16_t type = pOperator->operatorType;
+ pOperator->status = OP_OPENED;
+ // TODO add more check
+ if (type != QUERY_NODE_PHYSICAL_PLAN_STREAM_SCAN) {
+ ASSERT(pOperator->numOfDownstream == 1);
+ pOperator = pOperator->pDownstream[0];
+ }
- SStreamScanInfo* pInfo = pOperator->info;
- if (pOffset->type == TMQ_OFFSET__LOG) {
- STableScanInfo* pTSInfo = pInfo->pTableScanOp->info;
- tsdbReaderClose(pTSInfo->dataReader);
- pTSInfo->dataReader = NULL;
+ SStreamScanInfo* pInfo = pOperator->info;
+ if (pOffset->type == TMQ_OFFSET__LOG) {
+ STableScanInfo* pTSInfo = pInfo->pTableScanOp->info;
+ tsdbReaderClose(pTSInfo->dataReader);
+ pTSInfo->dataReader = NULL;
#if 0
- if (tOffsetEqual(pOffset, &pTaskInfo->streamInfo.lastStatus) &&
- pInfo->tqReader->pWalReader->curVersion != pOffset->version) {
- qError("prepare scan ver %" PRId64 " actual ver %" PRId64 ", last %" PRId64, pOffset->version,
- pInfo->tqReader->pWalReader->curVersion, pTaskInfo->streamInfo.lastStatus.version);
- ASSERT(0);
- }
-#endif
- if (tqSeekVer(pInfo->tqReader, pOffset->version + 1) < 0) {
- return -1;
- }
- ASSERT(pInfo->tqReader->pWalReader->curVersion == pOffset->version + 1);
- } else if (pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA) {
- /*pInfo->blockType = STREAM_INPUT__TABLE_SCAN;*/
- int64_t uid = pOffset->uid;
- int64_t ts = pOffset->ts;
-
- if (uid == 0) {
- if (taosArrayGetSize(pTaskInfo->tableqinfoList.pTableList) != 0) {
- STableKeyInfo* pTableInfo = taosArrayGet(pTaskInfo->tableqinfoList.pTableList, 0);
- uid = pTableInfo->uid;
- ts = INT64_MIN;
- } else {
- return -1;
- }
- }
-
- /*if (pTaskInfo->streamInfo.lastStatus.type != TMQ_OFFSET__SNAPSHOT_DATA ||*/
- /*pTaskInfo->streamInfo.lastStatus.uid != uid || pTaskInfo->streamInfo.lastStatus.ts != ts) {*/
- STableScanInfo* pTableScanInfo = pInfo->pTableScanOp->info;
- int32_t tableSz = taosArrayGetSize(pTaskInfo->tableqinfoList.pTableList);
-
-#ifndef NDEBUG
-
- qDebug("switch to next table %" PRId64 " (cursor %d), %" PRId64 " rows returned", uid,
- pTableScanInfo->currentTable, pInfo->pTableScanOp->resultInfo.totalRows);
- pInfo->pTableScanOp->resultInfo.totalRows = 0;
-#endif
-
- bool found = false;
- for (int32_t i = 0; i < tableSz; i++) {
- STableKeyInfo* pTableInfo = taosArrayGet(pTaskInfo->tableqinfoList.pTableList, i);
- if (pTableInfo->uid == uid) {
- found = true;
- pTableScanInfo->currentTable = i;
- break;
- }
- }
-
- // TODO after dropping table, table may be not found
- ASSERT(found);
-
- if (pTableScanInfo->dataReader == NULL) {
- if (tsdbReaderOpen(pTableScanInfo->readHandle.vnode, &pTableScanInfo->cond,
- pTaskInfo->tableqinfoList.pTableList, &pTableScanInfo->dataReader, NULL) < 0 ||
- pTableScanInfo->dataReader == NULL) {
- ASSERT(0);
- }
- }
-
- tsdbSetTableId(pTableScanInfo->dataReader, uid);
- int64_t oldSkey = pTableScanInfo->cond.twindows.skey;
- pTableScanInfo->cond.twindows.skey = ts + 1;
- tsdbReaderReset(pTableScanInfo->dataReader, &pTableScanInfo->cond);
- pTableScanInfo->cond.twindows.skey = oldSkey;
- pTableScanInfo->scanTimes = 0;
-
- qDebug("tsdb reader offset seek to uid %" PRId64 " ts %" PRId64 ", table cur set to %d , all table num %d", uid,
- ts, pTableScanInfo->currentTable, tableSz);
- /*}*/
-
- } else {
+ if (tOffsetEqual(pOffset, &pTaskInfo->streamInfo.lastStatus) &&
+ pInfo->tqReader->pWalReader->curVersion != pOffset->version) {
+ qError("prepare scan ver %" PRId64 " actual ver %" PRId64 ", last %" PRId64, pOffset->version,
+ pInfo->tqReader->pWalReader->curVersion, pTaskInfo->streamInfo.lastStatus.version);
ASSERT(0);
}
- return 0;
+#endif
+ if (tqSeekVer(pInfo->tqReader, pOffset->version + 1) < 0) {
+ return -1;
+ }
+ ASSERT(pInfo->tqReader->pWalReader->curVersion == pOffset->version + 1);
+ } else if (pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA) {
+ /*pInfo->blockType = STREAM_INPUT__TABLE_SCAN;*/
+ int64_t uid = pOffset->uid;
+ int64_t ts = pOffset->ts;
+
+ if (uid == 0) {
+ if (taosArrayGetSize(pTaskInfo->tableqinfoList.pTableList) != 0) {
+ STableKeyInfo* pTableInfo = taosArrayGet(pTaskInfo->tableqinfoList.pTableList, 0);
+ uid = pTableInfo->uid;
+ ts = INT64_MIN;
+ } else {
+ return -1;
+ }
+ }
+
+ /*if (pTaskInfo->streamInfo.lastStatus.type != TMQ_OFFSET__SNAPSHOT_DATA ||*/
+ /*pTaskInfo->streamInfo.lastStatus.uid != uid || pTaskInfo->streamInfo.lastStatus.ts != ts) {*/
+ STableScanInfo* pTableScanInfo = pInfo->pTableScanOp->info;
+ int32_t tableSz = taosArrayGetSize(pTaskInfo->tableqinfoList.pTableList);
+
+#ifndef NDEBUG
+ qDebug("switch to next table %" PRId64 " (cursor %d), %" PRId64 " rows returned", uid,
+ pTableScanInfo->currentTable, pInfo->pTableScanOp->resultInfo.totalRows);
+ pInfo->pTableScanOp->resultInfo.totalRows = 0;
+#endif
+
+ bool found = false;
+ for (int32_t i = 0; i < tableSz; i++) {
+ STableKeyInfo* pTableInfo = taosArrayGet(pTaskInfo->tableqinfoList.pTableList, i);
+ if (pTableInfo->uid == uid) {
+ found = true;
+ pTableScanInfo->currentTable = i;
+ break;
+ }
+ }
+
+ // TODO after dropping table, table may be not found
+ ASSERT(found);
+
+ if (pTableScanInfo->dataReader == NULL) {
+ if (tsdbReaderOpen(pTableScanInfo->readHandle.vnode, &pTableScanInfo->cond,
+ pTaskInfo->tableqinfoList.pTableList, &pTableScanInfo->dataReader, NULL) < 0 ||
+ pTableScanInfo->dataReader == NULL) {
+ ASSERT(0);
+ }
+ }
+
+ tsdbSetTableId(pTableScanInfo->dataReader, uid);
+ int64_t oldSkey = pTableScanInfo->cond.twindows.skey;
+ pTableScanInfo->cond.twindows.skey = ts + 1;
+ tsdbReaderReset(pTableScanInfo->dataReader, &pTableScanInfo->cond);
+ pTableScanInfo->cond.twindows.skey = oldSkey;
+ pTableScanInfo->scanTimes = 0;
+
+ qDebug("tsdb reader offset seek to uid %" PRId64 " ts %" PRId64 ", table cur set to %d , all table num %d", uid,
+ ts, pTableScanInfo->currentTable, tableSz);
+ /*}*/
+ } else {
+ ASSERT(0);
}
+  } else if (pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA) {
+    SStreamRawScanInfo* pInfo = pOperator->info;
+    SSnapContext* sContext = pInfo->sContext;
+    if (setForSnapShot(sContext, pOffset->uid) != 0) {
+      qError("setForSnapShot error. uid:%" PRIi64, pOffset->uid);
+ return -1;
+ }
+
+ SMetaTableInfo mtInfo = getUidfromSnapShot(sContext);
+ tsdbReaderClose(pInfo->dataReader);
+ pInfo->dataReader = NULL;
+ cleanupQueryTableDataCond(&pTaskInfo->streamInfo.tableCond);
+ taosArrayDestroy(pTaskInfo->tableqinfoList.pTableList);
+    if (mtInfo.uid == 0) return 0;  // no data
+
+ initQueryTableDataCondForTmq(&pTaskInfo->streamInfo.tableCond, sContext, mtInfo);
+ pTaskInfo->streamInfo.tableCond.twindows.skey = pOffset->ts;
+ pTaskInfo->tableqinfoList.pTableList = taosArrayInit(1, sizeof(STableKeyInfo));
+ taosArrayPush(pTaskInfo->tableqinfoList.pTableList, &(STableKeyInfo){.uid = mtInfo.uid, .groupId = 0});
+ tsdbReaderOpen(pInfo->vnode, &pTaskInfo->streamInfo.tableCond, pTaskInfo->tableqinfoList.pTableList, &pInfo->dataReader, NULL);
+
+ strcpy(pTaskInfo->streamInfo.tbName, mtInfo.tbName);
+ tDeleteSSchemaWrapper(pTaskInfo->streamInfo.schema);
+ pTaskInfo->streamInfo.schema = mtInfo.schema;
+    qDebug("tmqsnap qStreamPrepareScan snapshot data uid %" PRId64 " ts %" PRId64, mtInfo.uid, pOffset->ts);
+  } else if (pOffset->type == TMQ_OFFSET__SNAPSHOT_META) {
+    SStreamRawScanInfo* pInfo = pOperator->info;
+    SSnapContext* sContext = pInfo->sContext;
+    if (setForSnapShot(sContext, pOffset->uid) != 0) {
+      qError("setForSnapShot error. uid:%" PRIi64, pOffset->uid);
+      return -1;
+    }
+    qDebug("tmqsnap qStreamPrepareScan snapshot meta uid %" PRId64, pOffset->uid);
+  } else if (pOffset->type == TMQ_OFFSET__LOG) {
+ SStreamRawScanInfo* pInfo = pOperator->info;
+ tsdbReaderClose(pInfo->dataReader);
+ pInfo->dataReader = NULL;
+ qDebug("tmqsnap qStreamPrepareScan snapshot log");
}
return 0;
}
diff --git a/source/libs/executor/src/executorimpl.c b/source/libs/executor/src/executorimpl.c
index 6f4c84f9c0..e79a9fa16e 100644
--- a/source/libs/executor/src/executorimpl.c
+++ b/source/libs/executor/src/executorimpl.c
@@ -184,7 +184,7 @@ SResultRow* getNewResultRow(SDiskbasedBuf* pResultBuf, int64_t tableGroupId, int
// in the first scan, new space needed for results
int32_t pageId = -1;
- SIDList list = getDataBufPagesIdList(pResultBuf, tableGroupId);
+ SIDList list = getDataBufPagesIdList(pResultBuf);
if (taosArrayGetSize(list) == 0) {
pData = getNewBufPage(pResultBuf, tableGroupId, &pageId);
@@ -234,7 +234,7 @@ SResultRow* doSetResultOutBufByKey(SDiskbasedBuf* pResultBuf, SResultRowInfo* pR
SET_RES_WINDOW_KEY(pSup->keyBuf, pData, bytes, groupId);
SResultRowPosition* p1 =
- (SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
+ (SResultRowPosition*)tSimpleHashGet(pSup->pResultRowHashTable, pSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
SResultRow* pResult = NULL;
@@ -273,7 +273,7 @@ SResultRow* doSetResultOutBufByKey(SDiskbasedBuf* pResultBuf, SResultRowInfo* pR
// add a new result set for a new group
SResultRowPosition pos = {.pageId = pResult->pageId, .offset = pResult->offset};
- taosHashPut(pSup->pResultRowHashTable, pSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes), &pos,
+ tSimpleHashPut(pSup->pResultRowHashTable, pSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes), &pos,
sizeof(SResultRowPosition));
}
@@ -282,7 +282,7 @@ SResultRow* doSetResultOutBufByKey(SDiskbasedBuf* pResultBuf, SResultRowInfo* pR
// too many time window in query
if (pTaskInfo->execModel == OPTR_EXEC_MODEL_BATCH &&
- taosHashGetSize(pSup->pResultRowHashTable) > MAX_INTERVAL_TIME_WINDOW) {
+ tSimpleHashGetSize(pSup->pResultRowHashTable) > MAX_INTERVAL_TIME_WINDOW) {
T_LONG_JMP(pTaskInfo->env, TSDB_CODE_QRY_TOO_MANY_TIMEWINDOW);
}
@@ -299,7 +299,7 @@ static int32_t addNewWindowResultBuf(SResultRow* pWindowRes, SDiskbasedBuf* pRes
// in the first scan, new space needed for results
int32_t pageId = -1;
- SIDList list = getDataBufPagesIdList(pResultBuf, tid);
+ SIDList list = getDataBufPagesIdList(pResultBuf);
if (taosArrayGetSize(list) == 0) {
pData = getNewBufPage(pResultBuf, tid, &pageId);
@@ -1565,16 +1565,8 @@ int32_t doCopyToSDataBlock(SExecTaskInfo* pTaskInfo, SSDataBlock* pBlock, SExprS
// the _wstart needs to copy to 20 following rows, since the results of top-k expands to 20 different rows.
SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, slotId);
char* in = GET_ROWCELL_INTERBUF(pCtx[j].resultInfo);
- if (pCtx[j].increase) {
- int64_t ts = *(int64_t*)in;
- for (int32_t k = 0; k < pRow->numOfRows; ++k) {
- colDataAppend(pColInfoData, pBlock->info.rows + k, (const char*)&ts, pCtx[j].resultInfo->isNullRes);
- ts++;
- }
- } else {
- for (int32_t k = 0; k < pRow->numOfRows; ++k) {
- colDataAppend(pColInfoData, pBlock->info.rows + k, in, pCtx[j].resultInfo->isNullRes);
- }
+ for (int32_t k = 0; k < pRow->numOfRows; ++k) {
+ colDataAppend(pColInfoData, pBlock->info.rows + k, in, pCtx[j].resultInfo->isNullRes);
}
}
}
@@ -3011,7 +3003,7 @@ int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* len
}
SOptrBasicInfo* pInfo = (SOptrBasicInfo*)(pOperator->info);
SAggSupporter* pSup = (SAggSupporter*)POINTER_SHIFT(pOperator->info, sizeof(SOptrBasicInfo));
- int32_t size = taosHashGetSize(pSup->pResultRowHashTable);
+ int32_t size = tSimpleHashGetSize(pSup->pResultRowHashTable);
size_t keyLen = sizeof(uint64_t) * 2; // estimate the key length
int32_t totalSize =
sizeof(int32_t) + sizeof(int32_t) + size * (sizeof(int32_t) + keyLen + sizeof(int32_t) + pSup->resultRowSize);
@@ -3038,10 +3030,11 @@ int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* len
SResultRow* pRow = (SResultRow*)((char*)pPage + pos->offset);
setBufPageDirty(pPage, true);
releaseBufPage(pSup->pResultBuf, pPage);
-
- void* pIter = taosHashIterate(pSup->pResultRowHashTable, NULL);
- while (pIter) {
- void* key = taosHashGetKey(pIter, &keyLen);
+
+ int32_t iter = 0;
+ void* pIter = NULL;
+ while ((pIter = tSimpleHashIterate(pSup->pResultRowHashTable, pIter, &iter))) {
+ void* key = tSimpleHashGetKey(pIter, &keyLen);
SResultRowPosition* p1 = (SResultRowPosition*)pIter;
pPage = (SFilePage*)getBufPage(pSup->pResultBuf, p1->pageId);
@@ -3072,8 +3065,6 @@ int32_t aggEncodeResultRow(SOperatorInfo* pOperator, char** result, int32_t* len
offset += sizeof(int32_t);
memcpy(*result + offset, pRow, pSup->resultRowSize);
offset += pSup->resultRowSize;
-
- pIter = taosHashIterate(pSup->pResultRowHashTable, pIter);
}
*(int32_t*)(*result) = offset;
@@ -3108,7 +3099,7 @@ int32_t aggDecodeResultRow(SOperatorInfo* pOperator, char* result) {
// add a new result set for a new group
SResultRowPosition pos = {.pageId = resultRow->pageId, .offset = resultRow->offset};
- taosHashPut(pSup->pResultRowHashTable, result + offset, keyLen, &pos, sizeof(SResultRowPosition));
+ tSimpleHashPut(pSup->pResultRowHashTable, result + offset, keyLen, &pos, sizeof(SResultRowPosition));
offset += keyLen;
int32_t valueLen = *(int32_t*)(result + offset);
@@ -3225,6 +3216,7 @@ static void doHandleRemainBlockForNewGroupImpl(SOperatorInfo* pOperator, SFillOp
Q_STATUS_EQUAL(pTaskInfo->status, TASK_COMPLETED) ? pInfo->win.ekey : pInfo->existNewGroupBlock->info.window.ekey;
taosResetFillInfo(pInfo->pFillInfo, getFillInfoStart(pInfo->pFillInfo));
+ blockDataCleanup(pInfo->pRes);
doApplyScalarCalculation(pOperator, pInfo->existNewGroupBlock, order, scanFlag);
taosFillSetStartInfo(pInfo->pFillInfo, pInfo->pRes->info.rows, ekey);
@@ -3287,7 +3279,6 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
SSDataBlock* pResBlock = pInfo->pFinalRes;
blockDataCleanup(pResBlock);
- blockDataCleanup(pInfo->pRes);
int32_t order = TSDB_ORDER_ASC;
int32_t scanFlag = MAIN_SCAN;
@@ -3311,6 +3302,8 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
taosFillSetStartInfo(pInfo->pFillInfo, 0, pInfo->win.ekey);
} else {
blockDataUpdateTsWindow(pBlock, pInfo->primarySrcSlotId);
+
+ blockDataCleanup(pInfo->pRes);
doApplyScalarCalculation(pOperator, pBlock, order, scanFlag);
if (pInfo->curGroupId == 0 || pInfo->curGroupId == pInfo->pRes->info.groupId) {
@@ -3353,7 +3346,6 @@ static SSDataBlock* doFillImpl(SOperatorInfo* pOperator) {
assert(pBlock != NULL);
blockDataCleanup(pResBlock);
- blockDataCleanup(pInfo->pRes);
doHandleRemainBlockForNewGroupImpl(pOperator, pInfo, pResultInfo, pTaskInfo);
if (pResBlock->info.rows > pResultInfo->threshold) {
@@ -3452,7 +3444,7 @@ int32_t doInitAggInfoSup(SAggSupporter* pAggSup, SqlFunctionCtx* pCtx, int32_t n
pAggSup->resultRowSize = getResultRowSize(pCtx, numOfOutput);
pAggSup->keyBuf = taosMemoryCalloc(1, keyBufSize + POINTER_BYTES + sizeof(int64_t));
- pAggSup->pResultRowHashTable = taosHashInit(10, hashFn, true, HASH_NO_LOCK);
+ pAggSup->pResultRowHashTable = tSimpleHashInit(10, hashFn);
if (pAggSup->keyBuf == NULL || pAggSup->pResultRowHashTable == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
@@ -3479,7 +3471,7 @@ int32_t doInitAggInfoSup(SAggSupporter* pAggSup, SqlFunctionCtx* pCtx, int32_t n
void cleanupAggSup(SAggSupporter* pAggSup) {
taosMemoryFreeClear(pAggSup->keyBuf);
- taosHashCleanup(pAggSup->pResultRowHashTable);
+ tSimpleHashCleanup(pAggSup->pResultRowHashTable);
destroyDiskbasedBuf(pAggSup->pResultBuf);
}
@@ -4005,6 +3997,7 @@ static int32_t initTableblockDistQueryCond(uint64_t uid, SQueryTableDataCond* pC
pCond->type = TIMEWINDOW_RANGE_CONTAINED;
pCond->startVersion = -1;
pCond->endVersion = -1;
+ pCond->schemaVersion = -1;
return TSDB_CODE_SUCCESS;
}
@@ -4138,7 +4131,7 @@ SOperatorInfo* createOperatorTree(SPhysiNode* pPhyNode, SExecTaskInfo* pTaskInfo
return NULL;
}
- pOperator = createLastrowScanOperator(pScanNode, pHandle, pTaskInfo);
+ pOperator = createCacherowsScanOperator(pScanNode, pHandle, pTaskInfo);
} else if (QUERY_NODE_PHYSICAL_PLAN_PROJECT == type) {
pOperator = createProjectOperatorInfo(NULL, (SProjectPhysiNode*)pPhyNode, pTaskInfo);
} else {
diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c
index fc36d740a9..d4c98adb7c 100644
--- a/source/libs/executor/src/scanoperator.c
+++ b/source/libs/executor/src/scanoperator.c
@@ -178,8 +178,8 @@ static SResultRow* getTableGroupOutputBuf(SOperatorInfo* pOperator, uint64_t gro
STableScanInfo* pTableScanInfo = pOperator->info;
- SResultRowPosition* p1 = (SResultRowPosition*)taosHashGet(pTableScanInfo->pdInfo.pAggSup->pResultRowHashTable, buf,
- GET_RES_WINDOW_KEY_LEN(sizeof(groupId)));
+ SResultRowPosition* p1 = (SResultRowPosition*)tSimpleHashGet(pTableScanInfo->pdInfo.pAggSup->pResultRowHashTable, buf,
+ GET_RES_WINDOW_KEY_LEN(sizeof(groupId)));
if (p1 == NULL) {
return NULL;
@@ -1334,9 +1334,9 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
}
} else if (ret.fetchType == FETCH_TYPE__META) {
ASSERT(0);
- pTaskInfo->streamInfo.lastStatus = ret.offset;
- pTaskInfo->streamInfo.metaBlk = ret.meta;
- return NULL;
+// pTaskInfo->streamInfo.lastStatus = ret.offset;
+// pTaskInfo->streamInfo.metaBlk = ret.meta;
+// return NULL;
} else if (ret.fetchType == FETCH_TYPE__NONE) {
pTaskInfo->streamInfo.lastStatus = ret.offset;
ASSERT(pTaskInfo->streamInfo.lastStatus.version >= pTaskInfo->streamInfo.prepareStatus.version);
@@ -1357,10 +1357,6 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
}
qDebug("stream scan tsdb return null");
return NULL;
- } else if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__SNAPSHOT_META) {
- // TODO scan meta
- ASSERT(0);
- return NULL;
}
if (pTaskInfo->streamInfo.recoverStep == STREAM_RECOVER_STEP__PREPARE) {
@@ -1545,11 +1541,6 @@ static SSDataBlock* doStreamScan(SOperatorInfo* pOperator) {
}
}
-static SSDataBlock* doRawScan(SOperatorInfo* pInfo) {
- //
- return NULL;
-}
-
static SArray* extractTableIdList(const STableListInfo* pTableGroupInfo) {
SArray* tableIdList = taosArrayInit(4, sizeof(uint64_t));
@@ -1562,17 +1553,160 @@ static SArray* extractTableIdList(const STableListInfo* pTableGroupInfo) {
return tableIdList;
}
+static SSDataBlock* doRawScan(SOperatorInfo* pOperator) {
+  // NOTE: this operator never checks whether the current status is done
+ SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
+ SStreamRawScanInfo* pInfo = pOperator->info;
+  pTaskInfo->streamInfo.metaRsp.metaRspLen = 0;  // metaRspLen != 0 indicates that the response carries metadata
+ pTaskInfo->streamInfo.metaRsp.metaRsp = NULL;
+
+ qDebug("tmqsnap doRawScan called");
+  if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__SNAPSHOT_DATA) {
+ SSDataBlock* pBlock = &pInfo->pRes;
+
+ if (pInfo->dataReader && tsdbNextDataBlock(pInfo->dataReader)) {
+ if (isTaskKilled(pTaskInfo)) {
+ longjmp(pTaskInfo->env, TSDB_CODE_TSC_QUERY_CANCELLED);
+ }
+
+ tsdbRetrieveDataBlockInfo(pInfo->dataReader, &pBlock->info);
+
+ SArray* pCols = tsdbRetrieveDataBlock(pInfo->dataReader, NULL);
+ pBlock->pDataBlock = pCols;
+ if (pCols == NULL) {
+ longjmp(pTaskInfo->env, terrno);
+ }
+
+      qDebug("tmqsnap doRawScan get data uid:%" PRId64, pBlock->info.uid);
+ pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__SNAPSHOT_DATA;
+ pTaskInfo->streamInfo.lastStatus.uid = pBlock->info.uid;
+ pTaskInfo->streamInfo.lastStatus.ts = pBlock->info.window.ekey;
+ return pBlock;
+ }
+
+ SMetaTableInfo mtInfo = getUidfromSnapShot(pInfo->sContext);
+    if (mtInfo.uid == 0) {  // snapshot read is done, switch to reading data from the WAL
+ qDebug("tmqsnap read snapshot done, change to get data from wal");
+ pTaskInfo->streamInfo.prepareStatus.uid = mtInfo.uid;
+ pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__LOG;
+ pTaskInfo->streamInfo.lastStatus.version = pInfo->sContext->snapVersion;
+ tDeleteSSchemaWrapper(pTaskInfo->streamInfo.schema);
+    } else {
+ pTaskInfo->streamInfo.prepareStatus.uid = mtInfo.uid;
+ pTaskInfo->streamInfo.prepareStatus.ts = INT64_MIN;
+      qDebug("tmqsnap change get data uid:%" PRId64, mtInfo.uid);
+ qStreamPrepareScan(pTaskInfo, &pTaskInfo->streamInfo.prepareStatus, pInfo->sContext->subType);
+ strcpy(pTaskInfo->streamInfo.tbName, mtInfo.tbName);
+ tDeleteSSchemaWrapper(pTaskInfo->streamInfo.schema);
+ pTaskInfo->streamInfo.schema = mtInfo.schema;
+ }
+ qDebug("tmqsnap stream scan tsdb return null");
+ return NULL;
+  } else if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__SNAPSHOT_META) {
+ SSnapContext *sContext = pInfo->sContext;
+ void* data = NULL;
+ int32_t dataLen = 0;
+ int16_t type = 0;
+ int64_t uid = 0;
+    if (getMetafromSnapShot(sContext, &data, &dataLen, &type, &uid) < 0) {
+ qError("tmqsnap getMetafromSnapShot error");
+ taosMemoryFreeClear(data);
+ return NULL;
+ }
+
+    if (!sContext->queryMetaOrData) {  // switch to fetching data on the next poll request
+ pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__SNAPSHOT_META;
+ pTaskInfo->streamInfo.lastStatus.uid = uid;
+ pTaskInfo->streamInfo.metaRsp.rspOffset.type = TMQ_OFFSET__SNAPSHOT_DATA;
+ pTaskInfo->streamInfo.metaRsp.rspOffset.uid = 0;
+ pTaskInfo->streamInfo.metaRsp.rspOffset.ts = INT64_MIN;
+    } else {
+ pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__SNAPSHOT_META;
+ pTaskInfo->streamInfo.lastStatus.uid = uid;
+ pTaskInfo->streamInfo.metaRsp.rspOffset = pTaskInfo->streamInfo.lastStatus;
+ pTaskInfo->streamInfo.metaRsp.resMsgType = type;
+ pTaskInfo->streamInfo.metaRsp.metaRspLen = dataLen;
+ pTaskInfo->streamInfo.metaRsp.metaRsp = data;
+ }
+
+ return NULL;
+ }
+// else if (pTaskInfo->streamInfo.prepareStatus.type == TMQ_OFFSET__LOG) {
+// int64_t fetchVer = pTaskInfo->streamInfo.prepareStatus.version + 1;
+//
+// while(1){
+// if (tqFetchLog(pInfo->tqReader->pWalReader, pInfo->sContext->withMeta, &fetchVer, &pInfo->pCkHead) < 0) {
+// qDebug("tmqsnap tmq poll: consumer log end. offset %" PRId64, fetchVer);
+// pTaskInfo->streamInfo.lastStatus.version = fetchVer;
+// pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__LOG;
+// return NULL;
+// }
+// SWalCont* pHead = &pInfo->pCkHead->head;
+// qDebug("tmqsnap tmq poll: consumer log offset %" PRId64 " msgType %d", fetchVer, pHead->msgType);
+//
+// if (pHead->msgType == TDMT_VND_SUBMIT) {
+// SSubmitReq* pCont = (SSubmitReq*)&pHead->body;
+// tqReaderSetDataMsg(pInfo->tqReader, pCont, 0);
+// SSDataBlock* block = tqLogScanExec(pInfo->sContext->subType, pInfo->tqReader, pInfo->pFilterOutTbUid, &pInfo->pRes);
+// if(block){
+// pTaskInfo->streamInfo.lastStatus.type = TMQ_OFFSET__LOG;
+// pTaskInfo->streamInfo.lastStatus.version = fetchVer;
+// qDebug("tmqsnap fetch data msg, ver:%" PRId64 ", type:%d", pHead->version, pHead->msgType);
+// return block;
+// }else{
+// fetchVer++;
+// }
+// } else{
+// ASSERT(pInfo->sContext->withMeta);
+// ASSERT(IS_META_MSG(pHead->msgType));
+// qDebug("tmqsnap fetch meta msg, ver:%" PRId64 ", type:%d", pHead->version, pHead->msgType);
+// pTaskInfo->streamInfo.metaRsp.rspOffset.version = fetchVer;
+// pTaskInfo->streamInfo.metaRsp.rspOffset.type = TMQ_OFFSET__LOG;
+// pTaskInfo->streamInfo.metaRsp.resMsgType = pHead->msgType;
+// pTaskInfo->streamInfo.metaRsp.metaRspLen = pHead->bodyLen;
+// pTaskInfo->streamInfo.metaRsp.metaRsp = taosMemoryMalloc(pHead->bodyLen);
+// memcpy(pTaskInfo->streamInfo.metaRsp.metaRsp, pHead->body, pHead->bodyLen);
+// return NULL;
+// }
+// }
+ return NULL;
+}
+
+static void destroyRawScanOperatorInfo(void* param) {
+ SStreamRawScanInfo* pRawScan = (SStreamRawScanInfo*)param;
+ tsdbReaderClose(pRawScan->dataReader);
+ destroySnapContext(pRawScan->sContext);
+ taosMemoryFree(pRawScan);
+}
+
// for subscribing db or stb (not including column),
// if this scan is used, meta data can be return
// and schemas are decided when scanning
-SOperatorInfo* createRawScanOperatorInfo(SReadHandle* pHandle, STableScanPhysiNode* pTableScanNode,
- SExecTaskInfo* pTaskInfo, STimeWindowAggSupp* pTwSup) {
+SOperatorInfo* createRawScanOperatorInfo(SReadHandle* pHandle, SExecTaskInfo* pTaskInfo) {
// create operator
// create tb reader
// create meta reader
// create tq reader
- return NULL;
+ SStreamRawScanInfo* pInfo = taosMemoryCalloc(1, sizeof(SStreamRawScanInfo));
+ SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo));
+  if (pInfo == NULL || pOperator == NULL) {
+    taosMemoryFreeClear(pInfo);
+    taosMemoryFreeClear(pOperator);
+    terrno = TSDB_CODE_QRY_OUT_OF_MEMORY;
+    return NULL;
+  }
+
+ pInfo->vnode = pHandle->vnode;
+
+ pInfo->sContext = pHandle->sContext;
+ pOperator->name = "RawStreamScanOperator";
+// pOperator->blocking = false;
+// pOperator->status = OP_NOT_OPENED;
+ pOperator->info = pInfo;
+ pOperator->pTaskInfo = pTaskInfo;
+
+ pOperator->fpSet = createOperatorFpSet(NULL, doRawScan, NULL, NULL, destroyRawScanOperatorInfo,
+ NULL, NULL, NULL);
+ return pOperator;
}
static void destroyStreamScanOperatorInfo(void* param) {
diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c
index ebafa91046..b97970aeef 100644
--- a/source/libs/executor/src/timewindowoperator.c
+++ b/source/libs/executor/src/timewindowoperator.c
@@ -33,11 +33,16 @@ typedef struct SPullWindowInfo {
uint64_t groupId;
} SPullWindowInfo;
+typedef struct SOpenWindowInfo {
+ SResultRowPosition pos;
+ uint64_t groupId;
+} SOpenWindowInfo;
+
static SSDataBlock* doStreamSessionAgg(SOperatorInfo* pOperator);
static int64_t* extractTsCol(SSDataBlock* pBlock, const SIntervalAggOperatorInfo* pInfo);
-static SResultRowPosition addToOpenWindowList(SResultRowInfo* pResultRowInfo, const SResultRow* pResult);
+static SResultRowPosition addToOpenWindowList(SResultRowInfo* pResultRowInfo, const SResultRow* pResult, uint64_t groupId);
static void doCloseWindow(SResultRowInfo* pResultRowInfo, const SIntervalAggOperatorInfo* pInfo, SResultRow* pResult);
///*
@@ -598,14 +603,14 @@ static void doInterpUnclosedTimeWindow(SOperatorInfo* pOperatorInfo, int32_t num
int32_t startPos = 0;
int32_t numOfOutput = pSup->numOfExprs;
- uint64_t groupId = pBlock->info.groupId;
SResultRow* pResult = NULL;
while (1) {
SListNode* pn = tdListGetHead(pResultRowInfo->openWindow);
-
- SResultRowPosition* p1 = (SResultRowPosition*)pn->data;
+ SOpenWindowInfo* pOpenWin = (SOpenWindowInfo *)pn->data;
+ uint64_t groupId = pOpenWin->groupId;
+ SResultRowPosition* p1 = &pOpenWin->pos;
if (p->pageId == p1->pageId && p->offset == p1->offset) {
break;
}
@@ -631,12 +636,15 @@ static void doInterpUnclosedTimeWindow(SOperatorInfo* pOperatorInfo, int32_t num
SGroupKeys* pTsKey = taosArrayGet(pInfo->pPrevValues, 0);
int64_t prevTs = *(int64_t*)pTsKey->pData;
- doTimeWindowInterpolation(pInfo->pPrevValues, pBlock->pDataBlock, prevTs, -1, tsCols[startPos], startPos, w.ekey,
- RESULT_ROW_END_INTERP, pSup);
+ if (groupId == pBlock->info.groupId) {
+ doTimeWindowInterpolation(pInfo->pPrevValues, pBlock->pDataBlock, prevTs, -1, tsCols[startPos], startPos, w.ekey,
+ RESULT_ROW_END_INTERP, pSup);
+ }
setResultRowInterpo(pResult, RESULT_ROW_END_INTERP);
setNotInterpoWindowKey(pSup->pCtx, numOfExprs, RESULT_ROW_START_INTERP);
+ updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &w, true);
doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, 0, pBlock->info.rows,
numOfExprs);
@@ -965,7 +973,7 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
// prev time window not interpolation yet.
if (pInfo->timeWindowInterpo) {
- SResultRowPosition pos = addToOpenWindowList(pResultRowInfo, pResult);
+ SResultRowPosition pos = addToOpenWindowList(pResultRowInfo, pResult, tableGroupId);
doInterpUnclosedTimeWindow(pOperatorInfo, numOfOutput, pResultRowInfo, pBlock, scanFlag, tsCols, &pos);
// restore current time window
@@ -1017,10 +1025,18 @@ static void hashIntervalAgg(SOperatorInfo* pOperatorInfo, SResultRowInfo* pResul
ekey = ascScan ? nextWin.ekey : nextWin.skey;
forwardRows =
getNumOfRowsInTimeWindow(&pBlock->info, tsCols, startPos, ekey, binarySearchForKey, NULL, pInfo->inputOrder);
-
// window start(end) key interpolation
doWindowBorderInterpolation(pInfo, pBlock, pResult, &nextWin, startPos, forwardRows, pSup);
-
+  // TODO: add to open window? How to close the open windows after the input blocks are exhausted?
+#if 0
+ if ((ascScan && ekey <= pBlock->info.window.ekey) ||
+ (!ascScan && ekey >= pBlock->info.window.skey)) {
+ // window start(end) key interpolation
+ doWindowBorderInterpolation(pInfo, pBlock, pResult, &nextWin, startPos, forwardRows, pSup);
+ } else if (pInfo->timeWindowInterpo) {
+ addToOpenWindowList(pResultRowInfo, pResult, tableGroupId);
+ }
+#endif
updateTimeWindowInfo(&pInfo->twAggSup.timeWindowData, &nextWin, true);
doApplyFunctions(pTaskInfo, pSup->pCtx, &pInfo->twAggSup.timeWindowData, startPos, forwardRows, pBlock->info.rows,
numOfOutput);
@@ -1040,20 +1056,23 @@ void doCloseWindow(SResultRowInfo* pResultRowInfo, const SIntervalAggOperatorInf
}
}
-SResultRowPosition addToOpenWindowList(SResultRowInfo* pResultRowInfo, const SResultRow* pResult) {
- SResultRowPosition pos = (SResultRowPosition){.pageId = pResult->pageId, .offset = pResult->offset};
+SResultRowPosition addToOpenWindowList(SResultRowInfo* pResultRowInfo, const SResultRow* pResult, uint64_t groupId) {
+ SOpenWindowInfo openWin = {0};
+ openWin.pos.pageId = pResult->pageId;
+ openWin.pos.offset = pResult->offset;
+ openWin.groupId = groupId;
SListNode* pn = tdListGetTail(pResultRowInfo->openWindow);
if (pn == NULL) {
- tdListAppend(pResultRowInfo->openWindow, &pos);
- return pos;
+ tdListAppend(pResultRowInfo->openWindow, &openWin);
+ return openWin.pos;
}
- SResultRowPosition* px = (SResultRowPosition*)pn->data;
- if (px->pageId != pos.pageId || px->offset != pos.offset) {
- tdListAppend(pResultRowInfo->openWindow, &pos);
+ SOpenWindowInfo * px = (SOpenWindowInfo *)pn->data;
+ if (px->pos.pageId != openWin.pos.pageId || px->pos.offset != openWin.pos.offset || px->groupId != openWin.groupId) {
+ tdListAppend(pResultRowInfo->openWindow, &openWin);
}
- return pos;
+ return openWin.pos;
}
int64_t* extractTsCol(SSDataBlock* pBlock, const SIntervalAggOperatorInfo* pInfo) {
@@ -1380,7 +1399,7 @@ bool doClearWindow(SAggSupporter* pAggSup, SExprSupp* pSup, char* pData, int16_t
int32_t numOfOutput) {
SET_RES_WINDOW_KEY(pAggSup->keyBuf, pData, bytes, groupId);
SResultRowPosition* p1 =
- (SResultRowPosition*)taosHashGet(pAggSup->pResultRowHashTable, pAggSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
+ (SResultRowPosition*)tSimpleHashGet(pAggSup->pResultRowHashTable, pAggSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
if (!p1) {
// window has been closed
return false;
@@ -1393,14 +1412,14 @@ bool doDeleteIntervalWindow(SAggSupporter* pAggSup, TSKEY ts, uint64_t groupId)
size_t bytes = sizeof(TSKEY);
SET_RES_WINDOW_KEY(pAggSup->keyBuf, &ts, bytes, groupId);
SResultRowPosition* p1 =
- (SResultRowPosition*)taosHashGet(pAggSup->pResultRowHashTable, pAggSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
+ (SResultRowPosition*)tSimpleHashGet(pAggSup->pResultRowHashTable, pAggSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
if (!p1) {
// window has been closed
return false;
}
// SFilePage* bufPage = getBufPage(pAggSup->pResultBuf, p1->pageId);
// dBufSetBufPageRecycled(pAggSup->pResultBuf, bufPage);
- taosHashRemove(pAggSup->pResultRowHashTable, pAggSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
+ tSimpleHashRemove(pAggSup->pResultRowHashTable, pAggSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
return true;
}
@@ -1450,11 +1469,13 @@ static void doClearWindows(SAggSupporter* pAggSup, SExprSupp* pSup1, SInterval*
}
}
-static int32_t getAllIntervalWindow(SHashObj* pHashMap, SHashObj* resWins) {
- void* pIte = NULL;
- size_t keyLen = 0;
- while ((pIte = taosHashIterate(pHashMap, pIte)) != NULL) {
- void* key = taosHashGetKey(pIte, &keyLen);
+static int32_t getAllIntervalWindow(SSHashObj* pHashMap, SHashObj* resWins) {
+
+ void* pIte = NULL;
+ size_t keyLen = 0;
+ int32_t iter = 0;
+ while ((pIte = tSimpleHashIterate(pHashMap, pIte, &iter)) != NULL) {
+ void* key = tSimpleHashGetKey(pIte, &keyLen);
uint64_t groupId = *(uint64_t*)key;
ASSERT(keyLen == GET_RES_WINDOW_KEY_LEN(sizeof(TSKEY)));
TSKEY ts = *(int64_t*)((char*)key + sizeof(uint64_t));
@@ -1467,14 +1488,15 @@ static int32_t getAllIntervalWindow(SHashObj* pHashMap, SHashObj* resWins) {
return TSDB_CODE_SUCCESS;
}
-static int32_t closeIntervalWindow(SHashObj* pHashMap, STimeWindowAggSupp* pSup, SInterval* pInterval,
+static int32_t closeIntervalWindow(SSHashObj* pHashMap, STimeWindowAggSupp* pSup, SInterval* pInterval,
SHashObj* pPullDataMap, SHashObj* closeWins, SArray* pRecyPages,
SDiskbasedBuf* pDiscBuf) {
qDebug("===stream===close interval window");
- void* pIte = NULL;
- size_t keyLen = 0;
- while ((pIte = taosHashIterate(pHashMap, pIte)) != NULL) {
- void* key = taosHashGetKey(pIte, &keyLen);
+ void* pIte = NULL;
+ size_t keyLen = 0;
+ int32_t iter = 0;
+ while ((pIte = tSimpleHashIterate(pHashMap, pIte, &iter)) != NULL) {
+ void* key = tSimpleHashGetKey(pIte, &keyLen);
uint64_t groupId = *(uint64_t*)key;
ASSERT(keyLen == GET_RES_WINDOW_KEY_LEN(sizeof(TSKEY)));
TSKEY ts = *(int64_t*)((char*)key + sizeof(uint64_t));
@@ -1512,7 +1534,7 @@ static int32_t closeIntervalWindow(SHashObj* pHashMap, STimeWindowAggSupp* pSup,
}
char keyBuf[GET_RES_WINDOW_KEY_LEN(sizeof(TSKEY))];
SET_RES_WINDOW_KEY(keyBuf, &ts, sizeof(TSKEY), groupId);
- taosHashRemove(pHashMap, keyBuf, keyLen);
+ tSimpleHashIterateRemove(pHashMap, keyBuf, keyLen, &pIte, &iter);
}
}
return TSDB_CODE_SUCCESS;
@@ -1808,7 +1830,7 @@ static bool timeWindowinterpNeeded(SqlFunctionCtx* pCtx, int32_t numOfCols, SInt
void increaseTs(SqlFunctionCtx* pCtx) {
if (pCtx[0].pExpr->pExpr->_function.pFunctNode->funcType == FUNCTION_TYPE_WSTART) {
- pCtx[0].increase = true;
+// pCtx[0].increase = true;
}
}
@@ -1884,7 +1906,7 @@ SOperatorInfo* createIntervalOperatorInfo(SOperatorInfo* downstream, SExprInfo*
pInfo->timeWindowInterpo = timeWindowinterpNeeded(pSup->pCtx, numOfCols, pInfo);
if (pInfo->timeWindowInterpo) {
- pInfo->binfo.resultRowInfo.openWindow = tdListNew(sizeof(SResultRowPosition));
+ pInfo->binfo.resultRowInfo.openWindow = tdListNew(sizeof(SOpenWindowInfo));
if (pInfo->binfo.resultRowInfo.openWindow == NULL) {
goto _error;
}
@@ -2855,7 +2877,7 @@ bool hasIntervalWindow(SAggSupporter* pSup, TSKEY ts, uint64_t groupId) {
int32_t bytes = sizeof(TSKEY);
SET_RES_WINDOW_KEY(pSup->keyBuf, &ts, bytes, groupId);
SResultRowPosition* p1 =
- (SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
+ (SResultRowPosition*)tSimpleHashGet(pSup->pResultRowHashTable, pSup->keyBuf, GET_RES_WINDOW_KEY_LEN(bytes));
return p1 != NULL;
}
@@ -2896,7 +2918,7 @@ static void rebuildIntervalWindow(SStreamFinalIntervalOperatorInfo* pInfo, SExpr
bool isDeletedWindow(STimeWindow* pWin, uint64_t groupId, SAggSupporter* pSup) {
SET_RES_WINDOW_KEY(pSup->keyBuf, &pWin->skey, sizeof(int64_t), groupId);
- SResultRowPosition* p1 = (SResultRowPosition*)taosHashGet(pSup->pResultRowHashTable, pSup->keyBuf,
+ SResultRowPosition* p1 = (SResultRowPosition*)tSimpleHashGet(pSup->pResultRowHashTable, pSup->keyBuf,
GET_RES_WINDOW_KEY_LEN(sizeof(int64_t)));
return p1 == NULL;
}
@@ -3025,7 +3047,7 @@ static void doHashInterval(SOperatorInfo* pOperatorInfo, SSDataBlock* pSDataBloc
}
static void clearStreamIntervalOperator(SStreamFinalIntervalOperatorInfo* pInfo) {
- taosHashClear(pInfo->aggSup.pResultRowHashTable);
+ tSimpleHashClear(pInfo->aggSup.pResultRowHashTable);
clearDiskbasedBuf(pInfo->aggSup.pResultBuf);
initResultRowInfo(&pInfo->binfo.resultRowInfo);
}
@@ -4926,14 +4948,14 @@ static int32_t outputMergeAlignedIntervalResult(SOperatorInfo* pOperatorInfo, ui
SExprSupp* pSup = &pOperatorInfo->exprSupp;
SET_RES_WINDOW_KEY(iaInfo->aggSup.keyBuf, &wstartTs, TSDB_KEYSIZE, tableGroupId);
- SResultRowPosition* p1 = (SResultRowPosition*)taosHashGet(iaInfo->aggSup.pResultRowHashTable, iaInfo->aggSup.keyBuf,
+ SResultRowPosition* p1 = (SResultRowPosition*)tSimpleHashGet(iaInfo->aggSup.pResultRowHashTable, iaInfo->aggSup.keyBuf,
GET_RES_WINDOW_KEY_LEN(TSDB_KEYSIZE));
ASSERT(p1 != NULL);
finalizeResultRowIntoResultDataBlock(iaInfo->aggSup.pResultBuf, p1, pSup->pCtx, pSup->pExprInfo, pSup->numOfExprs,
pSup->rowEntryInfoOffset, pResultBlock, pTaskInfo);
- taosHashRemove(iaInfo->aggSup.pResultRowHashTable, iaInfo->aggSup.keyBuf, GET_RES_WINDOW_KEY_LEN(TSDB_KEYSIZE));
- ASSERT(taosHashGetSize(iaInfo->aggSup.pResultRowHashTable) == 0);
+ tSimpleHashRemove(iaInfo->aggSup.pResultRowHashTable, iaInfo->aggSup.keyBuf, GET_RES_WINDOW_KEY_LEN(TSDB_KEYSIZE));
+ ASSERT(tSimpleHashGetSize(iaInfo->aggSup.pResultRowHashTable) == 0);
return TSDB_CODE_SUCCESS;
}
@@ -4956,7 +4978,7 @@ static void doMergeAlignedIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultR
// there is an result exists
if (miaInfo->curTs != INT64_MIN) {
- ASSERT(taosHashGetSize(iaInfo->aggSup.pResultRowHashTable) == 1);
+ ASSERT(tSimpleHashGetSize(iaInfo->aggSup.pResultRowHashTable) == 1);
if (ts != miaInfo->curTs) {
outputMergeAlignedIntervalResult(pOperatorInfo, tableGroupId, pResultBlock, miaInfo->curTs);
@@ -4964,7 +4986,7 @@ static void doMergeAlignedIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultR
}
} else {
miaInfo->curTs = ts;
- ASSERT(taosHashGetSize(iaInfo->aggSup.pResultRowHashTable) == 0);
+ ASSERT(tSimpleHashGetSize(iaInfo->aggSup.pResultRowHashTable) == 0);
}
STimeWindow win = {0};
@@ -5040,7 +5062,7 @@ static void doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
if (pBlock == NULL) {
// close last unfinalized time window
if (miaInfo->curTs != INT64_MIN) {
- ASSERT(taosHashGetSize(iaInfo->aggSup.pResultRowHashTable) == 1);
+ ASSERT(tSimpleHashGetSize(iaInfo->aggSup.pResultRowHashTable) == 1);
outputMergeAlignedIntervalResult(pOperator, miaInfo->groupId, pRes, miaInfo->curTs);
miaInfo->curTs = INT64_MIN;
}
@@ -5157,7 +5179,7 @@ SOperatorInfo* createMergeAlignedIntervalOperatorInfo(SOperatorInfo* downstream,
iaInfo->timeWindowInterpo = timeWindowinterpNeeded(pSup->pCtx, num, iaInfo);
if (iaInfo->timeWindowInterpo) {
- iaInfo->binfo.resultRowInfo.openWindow = tdListNew(sizeof(SResultRowPosition));
+ iaInfo->binfo.resultRowInfo.openWindow = tdListNew(sizeof(SOpenWindowInfo));
}
initResultRowInfo(&iaInfo->binfo.resultRowInfo);
@@ -5221,12 +5243,12 @@ static int32_t finalizeWindowResult(SOperatorInfo* pOperatorInfo, uint64_t table
SExprSupp* pExprSup = &pOperatorInfo->exprSupp;
SET_RES_WINDOW_KEY(iaInfo->aggSup.keyBuf, &win->skey, TSDB_KEYSIZE, tableGroupId);
- SResultRowPosition* p1 = (SResultRowPosition*)taosHashGet(iaInfo->aggSup.pResultRowHashTable, iaInfo->aggSup.keyBuf,
+ SResultRowPosition* p1 = (SResultRowPosition*)tSimpleHashGet(iaInfo->aggSup.pResultRowHashTable, iaInfo->aggSup.keyBuf,
GET_RES_WINDOW_KEY_LEN(TSDB_KEYSIZE));
ASSERT(p1 != NULL);
finalizeResultRowIntoResultDataBlock(iaInfo->aggSup.pResultBuf, p1, pExprSup->pCtx, pExprSup->pExprInfo,
pExprSup->numOfExprs, pExprSup->rowEntryInfoOffset, pResultBlock, pTaskInfo);
- taosHashRemove(iaInfo->aggSup.pResultRowHashTable, iaInfo->aggSup.keyBuf, GET_RES_WINDOW_KEY_LEN(TSDB_KEYSIZE));
+ tSimpleHashRemove(iaInfo->aggSup.pResultRowHashTable, iaInfo->aggSup.keyBuf, GET_RES_WINDOW_KEY_LEN(TSDB_KEYSIZE));
return TSDB_CODE_SUCCESS;
}
@@ -5292,7 +5314,7 @@ static void doMergeIntervalAggImpl(SOperatorInfo* pOperatorInfo, SResultRowInfo*
// prev time window not interpolation yet.
if (iaInfo->timeWindowInterpo) {
- SResultRowPosition pos = addToOpenWindowList(pResultRowInfo, pResult);
+ SResultRowPosition pos = addToOpenWindowList(pResultRowInfo, pResult, tableGroupId);
doInterpUnclosedTimeWindow(pOperatorInfo, numOfOutput, pResultRowInfo, pBlock, scanFlag, tsCols, &pos);
// restore current time window
@@ -5467,9 +5489,10 @@ SOperatorInfo* createMergeIntervalOperatorInfo(SOperatorInfo* downstream, SMerge
initBasicInfo(&pIntervalInfo->binfo, pResBlock);
initExecTimeWindowInfo(&pIntervalInfo->twAggSup.timeWindowData, &pIntervalInfo->win);
+
pIntervalInfo->timeWindowInterpo = timeWindowinterpNeeded(pExprSupp->pCtx, num, pIntervalInfo);
if (pIntervalInfo->timeWindowInterpo) {
- pIntervalInfo->binfo.resultRowInfo.openWindow = tdListNew(sizeof(SResultRowPosition));
+ pIntervalInfo->binfo.resultRowInfo.openWindow = tdListNew(sizeof(SOpenWindowInfo));
if (pIntervalInfo->binfo.resultRowInfo.openWindow == NULL) {
goto _error;
}
diff --git a/source/libs/executor/src/tsimplehash.c b/source/libs/executor/src/tsimplehash.c
index 6b2edf0d5e..8cd376e092 100644
--- a/source/libs/executor/src/tsimplehash.c
+++ b/source/libs/executor/src/tsimplehash.c
@@ -31,21 +31,12 @@
taosMemoryFreeClear(_n); \
} while (0);
-#pragma pack(push, 4)
-typedef struct SHNode {
- struct SHNode *next;
- uint32_t keyLen : 20;
- uint32_t dataLen : 12;
- char data[];
-} SHNode;
-#pragma pack(pop)
-
struct SSHashObj {
SHNode **hashList;
size_t capacity; // number of slots
- int64_t size; // number of elements in hash table
- _hash_fn_t hashFp; // hash function
- _equal_fn_t equalFp; // equal function
+ int64_t size; // number of elements in hash table
+ _hash_fn_t hashFp; // hash function
+ _equal_fn_t equalFp; // equal function
};
static FORCE_INLINE int32_t taosHashCapacity(int32_t length) {
@@ -76,7 +67,6 @@ SSHashObj *tSimpleHashInit(size_t capacity, _hash_fn_t fn) {
pHashObj->hashFp = fn;
ASSERT((pHashObj->capacity & (pHashObj->capacity - 1)) == 0);
-
pHashObj->hashList = (SHNode **)taosMemoryCalloc(pHashObj->capacity, sizeof(void *));
if (!pHashObj->hashList) {
taosMemoryFree(pHashObj);
@@ -285,6 +275,44 @@ int32_t tSimpleHashRemove(SSHashObj *pHashObj, const void *key, size_t keyLen) {
return TSDB_CODE_SUCCESS;
}
+int32_t tSimpleHashIterateRemove(SSHashObj *pHashObj, const void *key, size_t keyLen, void **pIter, int32_t *iter) {
+ if (!pHashObj || !key) {
+ return TSDB_CODE_FAILED;
+ }
+
+ uint32_t hashVal = (*pHashObj->hashFp)(key, (uint32_t)keyLen);
+
+ int32_t slot = HASH_INDEX(hashVal, pHashObj->capacity);
+
+ SHNode *pNode = pHashObj->hashList[slot];
+ SHNode *pPrev = NULL;
+ while (pNode) {
+ if ((*(pHashObj->equalFp))(GET_SHASH_NODE_KEY(pNode, pNode->dataLen), key, keyLen) == 0) {
+ if (!pPrev) {
+ pHashObj->hashList[slot] = pNode->next;
+ } else {
+ pPrev->next = pNode->next;
+ }
+
+ if (*pIter == (void *)GET_SHASH_NODE_DATA(pNode)) {
+ if (!pPrev) {
+ *pIter = NULL;
+ } else {
+ *pIter = GET_SHASH_NODE_DATA(pPrev);
+ }
+ }
+
+ FREE_HASH_NODE(pNode);
+ atomic_sub_fetch_64(&pHashObj->size, 1);
+ break;
+ }
+ pPrev = pNode;
+ pNode = pNode->next;
+ }
+
+ return TSDB_CODE_SUCCESS;
+}
+
void tSimpleHashClear(SSHashObj *pHashObj) {
if (!pHashObj || taosHashTableEmpty(pHashObj)) {
return;
@@ -302,6 +330,7 @@ void tSimpleHashClear(SSHashObj *pHashObj) {
FREE_HASH_NODE(pNode);
pNode = pNext;
}
+ pHashObj->hashList[i] = NULL;
}
atomic_store_64(&pHashObj->size, 0);
}
@@ -324,15 +353,6 @@ size_t tSimpleHashGetMemSize(const SSHashObj *pHashObj) {
return (pHashObj->capacity * sizeof(void *)) + sizeof(SHNode) * tSimpleHashGetSize(pHashObj) + sizeof(SSHashObj);
}
-void *tSimpleHashGetKey(void *data, size_t *keyLen) {
- SHNode *node = (SHNode *)((char *)data - offsetof(SHNode, data));
- if (keyLen) {
- *keyLen = node->keyLen;
- }
-
- return POINTER_SHIFT(data, node->dataLen);
-}
-
void *tSimpleHashIterate(const SSHashObj *pHashObj, void *data, int32_t *iter) {
if (!pHashObj) {
return NULL;
@@ -341,7 +361,7 @@ void *tSimpleHashIterate(const SSHashObj *pHashObj, void *data, int32_t *iter) {
SHNode *pNode = NULL;
if (!data) {
- for (int32_t i = 0; i < pHashObj->capacity; ++i) {
+ for (int32_t i = *iter; i < pHashObj->capacity; ++i) {
pNode = pHashObj->hashList[i];
if (!pNode) {
continue;
@@ -368,52 +388,5 @@ void *tSimpleHashIterate(const SSHashObj *pHashObj, void *data, int32_t *iter) {
return GET_SHASH_NODE_DATA(pNode);
}
- return NULL;
-}
-
-void *tSimpleHashIterateKV(const SSHashObj *pHashObj, void *data, void **key, int32_t *iter) {
- if (!pHashObj) {
- return NULL;
- }
-
- SHNode *pNode = NULL;
-
- if (!data) {
- for (int32_t i = 0; i < pHashObj->capacity; ++i) {
- pNode = pHashObj->hashList[i];
- if (!pNode) {
- continue;
- }
- *iter = i;
- if (key) {
- *key = GET_SHASH_NODE_KEY(pNode, pNode->dataLen);
- }
- return GET_SHASH_NODE_DATA(pNode);
- }
- return NULL;
- }
-
- pNode = (SHNode *)((char *)data - offsetof(SHNode, data));
-
- if (pNode->next) {
- if (key) {
- *key = GET_SHASH_NODE_KEY(pNode->next, pNode->next->dataLen);
- }
- return GET_SHASH_NODE_DATA(pNode->next);
- }
-
- ++(*iter);
- for (int32_t i = *iter; i < pHashObj->capacity; ++i) {
- pNode = pHashObj->hashList[i];
- if (!pNode) {
- continue;
- }
- *iter = i;
- if (key) {
- *key = GET_SHASH_NODE_KEY(pNode, pNode->dataLen);
- }
- return GET_SHASH_NODE_DATA(pNode);
- }
-
return NULL;
}
\ No newline at end of file
diff --git a/source/libs/executor/src/tsort.c b/source/libs/executor/src/tsort.c
index 48af951773..fc411e850a 100644
--- a/source/libs/executor/src/tsort.c
+++ b/source/libs/executor/src/tsort.c
@@ -97,7 +97,7 @@ SSortHandle* tsortCreateSortHandle(SArray* pSortInfo, int32_t type, int32_t page
return pSortHandle;
}
-static int32_t sortComparClearup(SMsortComparParam* cmpParam) {
+static int32_t sortComparCleanup(SMsortComparParam* cmpParam) {
for(int32_t i = 0; i < cmpParam->numOfSources; ++i) {
SSortSource* pSource = cmpParam->pSources[i]; // NOTICE: pSource may be SGenericSource *, if it is SORT_MULTISOURCE_MERGE
blockDataDestroy(pSource->src.pBlock);
@@ -134,15 +134,14 @@ int32_t tsortAddSource(SSortHandle* pSortHandle, void* pSource) {
return TSDB_CODE_SUCCESS;
}
-static int32_t doAddNewExternalMemSource(SDiskbasedBuf *pBuf, SArray* pAllSources, SSDataBlock* pBlock, int32_t* sourceId) {
+static int32_t doAddNewExternalMemSource(SDiskbasedBuf *pBuf, SArray* pAllSources, SSDataBlock* pBlock, int32_t* sourceId, SArray* pPageIdList) {
SSortSource* pSource = taosMemoryCalloc(1, sizeof(SSortSource));
if (pSource == NULL) {
return TSDB_CODE_QRY_OUT_OF_MEMORY;
}
- pSource->pageIdList = getDataBufPagesIdList(pBuf, (*sourceId));
pSource->src.pBlock = pBlock;
-
+ pSource->pageIdList = pPageIdList;
taosArrayPush(pAllSources, &pSource);
(*sourceId) += 1;
@@ -171,6 +170,7 @@ static int32_t doAddToBuf(SSDataBlock* pDataBlock, SSortHandle* pHandle) {
}
}
+ SArray* pPageIdList = taosArrayInit(4, sizeof(int32_t));
while(start < pDataBlock->info.rows) {
int32_t stop = 0;
blockDataSplitRows(pDataBlock, pDataBlock->info.hasVarCol, start, &stop, pHandle->pageSize);
@@ -186,6 +186,8 @@ static int32_t doAddToBuf(SSDataBlock* pDataBlock, SSortHandle* pHandle) {
return terrno;
}
+ taosArrayPush(pPageIdList, &pageId);
+
int32_t size = blockDataGetSize(p) + sizeof(int32_t) + taosArrayGetSize(p->pDataBlock) * sizeof(int32_t);
assert(size <= getBufPageSize(pHandle->pBuf));
@@ -201,7 +203,7 @@ static int32_t doAddToBuf(SSDataBlock* pDataBlock, SSortHandle* pHandle) {
blockDataCleanup(pDataBlock);
SSDataBlock* pBlock = createOneDataBlock(pDataBlock, false);
- return doAddNewExternalMemSource(pHandle->pBuf, pHandle->pOrderedSource, pBlock, &pHandle->sourceId);
+ return doAddNewExternalMemSource(pHandle->pBuf, pHandle->pOrderedSource, pBlock, &pHandle->sourceId, pPageIdList);
}
static void setCurrentSourceIsDone(SSortSource* pSource, SSortHandle* pHandle) {
@@ -502,6 +504,7 @@ static int32_t doInternalMergeSort(SSortHandle* pHandle) {
return code;
}
+ SArray* pPageIdList = taosArrayInit(4, sizeof(int32_t));
while (1) {
SSDataBlock* pDataBlock = getSortedBlockDataInner(pHandle, &pHandle->cmpParam, numOfRows);
if (pDataBlock == NULL) {
@@ -514,6 +517,8 @@ static int32_t doInternalMergeSort(SSortHandle* pHandle) {
return terrno;
}
+ taosArrayPush(pPageIdList, &pageId);
+
int32_t size = blockDataGetSize(pDataBlock) + sizeof(int32_t) + taosArrayGetSize(pDataBlock->pDataBlock) * sizeof(int32_t);
assert(size <= getBufPageSize(pHandle->pBuf));
@@ -525,12 +530,12 @@ static int32_t doInternalMergeSort(SSortHandle* pHandle) {
blockDataCleanup(pDataBlock);
}
- sortComparClearup(&pHandle->cmpParam);
+ sortComparCleanup(&pHandle->cmpParam);
tMergeTreeDestroy(pHandle->pMergeTree);
pHandle->numOfCompletedSources = 0;
SSDataBlock* pBlock = createOneDataBlock(pHandle->pDataBlock, false);
- code = doAddNewExternalMemSource(pHandle->pBuf, pResList, pBlock, &pHandle->sourceId);
+ code = doAddNewExternalMemSource(pHandle->pBuf, pResList, pBlock, &pHandle->sourceId, pPageIdList);
if (code != 0) {
return code;
}
diff --git a/source/libs/executor/test/tSimpleHashTests.cpp b/source/libs/executor/test/tSimpleHashTests.cpp
index acb6d434b4..3bf339ef90 100644
--- a/source/libs/executor/test/tSimpleHashTests.cpp
+++ b/source/libs/executor/test/tSimpleHashTests.cpp
@@ -30,7 +30,7 @@
// return RUN_ALL_TESTS();
// }
-TEST(testCase, tSimpleHashTest) {
+TEST(testCase, tSimpleHashTest_intKey) {
SSHashObj *pHashObj =
tSimpleHashInit(8, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT));
@@ -57,12 +57,14 @@ TEST(testCase, tSimpleHashTest) {
int32_t iter = 0;
int64_t keySum = 0;
int64_t dataSum = 0;
+ size_t kLen = 0;
while ((data = tSimpleHashIterate(pHashObj, data, &iter))) {
- void *key = tSimpleHashGetKey(data, NULL);
+ void *key = tSimpleHashGetKey(data, &kLen);
+ ASSERT_EQ(keyLen, kLen);
keySum += *(int64_t *)key;
dataSum += *(int64_t *)data;
}
-
+
ASSERT_EQ(keySum, dataSum);
ASSERT_EQ(keySum, originKeySum);
@@ -74,4 +76,69 @@ TEST(testCase, tSimpleHashTest) {
tSimpleHashCleanup(pHashObj);
}
+
+TEST(testCase, tSimpleHashTest_binaryKey) {
+ SSHashObj *pHashObj =
+ tSimpleHashInit(8, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT));
+
+ assert(pHashObj != nullptr);
+
+ ASSERT_EQ(0, tSimpleHashGetSize(pHashObj));
+
+ typedef struct {
+ int64_t suid;
+ int64_t uid;
+ } SCombineKey;
+
+ size_t keyLen = sizeof(SCombineKey);
+ size_t dataLen = sizeof(int64_t);
+
+ int64_t originDataSum = 0;
+ SCombineKey combineKey = {0};
+ for (int64_t i = 1; i <= 100; ++i) {
+ combineKey.suid = i;
+ combineKey.uid = i + 1;
+ tSimpleHashPut(pHashObj, (const void *)&combineKey, keyLen, (const void *)&i, dataLen);
+ originDataSum += i;
+ ASSERT_EQ(i, tSimpleHashGetSize(pHashObj));
+ }
+
+ for (int64_t i = 1; i <= 100; ++i) {
+ combineKey.suid = i;
+ combineKey.uid = i + 1;
+ void *data = tSimpleHashGet(pHashObj, (const void *)&combineKey, keyLen);
+ ASSERT_EQ(i, *(int64_t *)data);
+ }
+
+ void *data = NULL;
+ int32_t iter = 0;
+ int64_t keySum = 0;
+ int64_t dataSum = 0;
+ size_t kLen = 0;
+ while ((data = tSimpleHashIterate(pHashObj, data, &iter))) {
+ void *key = tSimpleHashGetKey(data, &kLen);
+ ASSERT_EQ(keyLen, kLen);
+ dataSum += *(int64_t *)data;
+ }
+
+ ASSERT_EQ(originDataSum, dataSum);
+
+ tSimpleHashRemove(pHashObj, (const void *)&combineKey, keyLen);
+
+ while ((data = tSimpleHashIterate(pHashObj, data, &iter))) {
+ void *key = tSimpleHashGetKey(data, &kLen);
+ ASSERT_EQ(keyLen, kLen);
+ }
+
+ for (int64_t i = 1; i <= 99; ++i) {
+ combineKey.suid = i;
+ combineKey.uid = i + 1;
+ tSimpleHashRemove(pHashObj, (const void *)&combineKey, keyLen);
+ ASSERT_EQ(99 - i, tSimpleHashGetSize(pHashObj));
+ }
+
+ tSimpleHashCleanup(pHashObj);
+}
+
+
#pragma GCC diagnostic pop
\ No newline at end of file
diff --git a/source/libs/function/inc/tpercentile.h b/source/libs/function/inc/tpercentile.h
index dfb52f7694..554f9e567f 100644
--- a/source/libs/function/inc/tpercentile.h
+++ b/source/libs/function/inc/tpercentile.h
@@ -51,20 +51,20 @@ struct tMemBucket;
typedef int32_t (*__perc_hash_func_t)(struct tMemBucket *pBucket, const void *value);
typedef struct tMemBucket {
- int16_t numOfSlots;
- int16_t type;
- int16_t bytes;
- int32_t total;
- int32_t elemPerPage; // number of elements for each object
- int32_t maxCapacity; // maximum allowed number of elements that can be sort directly to get the result
- int32_t bufPageSize; // disk page size
- MinMaxEntry range; // value range
- int32_t times; // count that has been checked for deciding the correct data value buckets.
- __compar_fn_t comparFn;
-
- tMemBucketSlot * pSlots;
- SDiskbasedBuf *pBuffer;
- __perc_hash_func_t hashFunc;
+ int16_t numOfSlots;
+ int16_t type;
+ int16_t bytes;
+ int32_t total;
+ int32_t elemPerPage; // number of elements for each object
+ int32_t maxCapacity; // maximum allowed number of elements that can be sort directly to get the result
+ int32_t bufPageSize; // disk page size
+ MinMaxEntry range; // value range
+ int32_t times; // count that has been checked for deciding the correct data value buckets.
+ __compar_fn_t comparFn;
+ tMemBucketSlot* pSlots;
+ SDiskbasedBuf* pBuffer;
+ __perc_hash_func_t hashFunc;
+ SHashObj* groupPagesMap; // disk page map for different groups;
} tMemBucket;
tMemBucket *tMemBucketCreate(int16_t nElemSize, int16_t dataType, double minval, double maxval);
diff --git a/source/libs/function/src/builtins.c b/source/libs/function/src/builtins.c
index ed82e4cb50..b7cd02befd 100644
--- a/source/libs/function/src/builtins.c
+++ b/source/libs/function/src/builtins.c
@@ -303,7 +303,7 @@ static int32_t translateInOutStr(SFunctionNode* pFunc, char* pErrBuf, int32_t le
}
SExprNode* pPara1 = (SExprNode*)nodesListGetNode(pFunc->pParameterList, 0);
- if (!IS_VAR_DATA_TYPE(pPara1->resType.type)) {
+ if (!IS_STR_DATA_TYPE(pPara1->resType.type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -317,7 +317,7 @@ static int32_t translateTrimStr(SFunctionNode* pFunc, char* pErrBuf, int32_t len
}
SExprNode* pPara1 = (SExprNode*)nodesListGetNode(pFunc->pParameterList, 0);
- if (!IS_VAR_DATA_TYPE(pPara1->resType.type)) {
+ if (!IS_STR_DATA_TYPE(pPara1->resType.type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -546,7 +546,7 @@ static int32_t translateApercentile(SFunctionNode* pFunc, char* pErrBuf, int32_t
// param2
if (3 == numOfParams) {
uint8_t para3Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 2))->resType.type;
- if (!IS_VAR_DATA_TYPE(para3Type)) {
+ if (!IS_STR_DATA_TYPE(para3Type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -593,7 +593,7 @@ static int32_t translateApercentileImpl(SFunctionNode* pFunc, char* pErrBuf, int
// param2
if (3 == numOfParams) {
uint8_t para3Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 2))->resType.type;
- if (!IS_VAR_DATA_TYPE(para3Type)) {
+ if (!IS_STR_DATA_TYPE(para3Type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -1388,7 +1388,7 @@ static int32_t translateSample(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
}
// set result type
- if (IS_VAR_DATA_TYPE(colType)) {
+ if (IS_STR_DATA_TYPE(colType)) {
pFunc->node.resType = (SDataType){.bytes = pCol->resType.bytes, .type = colType};
} else {
pFunc->node.resType = (SDataType){.bytes = tDataTypes[colType].bytes, .type = colType};
@@ -1431,7 +1431,7 @@ static int32_t translateTail(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
}
// set result type
- if (IS_VAR_DATA_TYPE(colType)) {
+ if (IS_STR_DATA_TYPE(colType)) {
pFunc->node.resType = (SDataType){.bytes = pCol->resType.bytes, .type = colType};
} else {
pFunc->node.resType = (SDataType){.bytes = tDataTypes[colType].bytes, .type = colType};
@@ -1514,7 +1514,7 @@ static int32_t translateInterp(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
for (int32_t i = 1; i < 3; ++i) {
nodeType = nodeType(nodesListGetNode(pFunc->pParameterList, i));
paraType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, i))->resType.type;
- if (!IS_VAR_DATA_TYPE(paraType) || QUERY_NODE_VALUE != nodeType) {
+ if (!IS_STR_DATA_TYPE(paraType) || QUERY_NODE_VALUE != nodeType) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -1682,7 +1682,7 @@ static int32_t translateLength(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
- if (!IS_VAR_DATA_TYPE(((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type)) {
+ if (!IS_STR_DATA_TYPE(((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -1714,7 +1714,7 @@ static int32_t translateConcatImpl(SFunctionNode* pFunc, char* pErrBuf, int32_t
for (int32_t i = 0; i < numOfParams; ++i) {
SNode* pPara = nodesListGetNode(pFunc->pParameterList, i);
uint8_t paraType = ((SExprNode*)pPara)->resType.type;
- if (!IS_VAR_DATA_TYPE(paraType) && !IS_NULL_TYPE(paraType)) {
+ if (!IS_STR_DATA_TYPE(paraType) && !IS_NULL_TYPE(paraType)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
if (TSDB_DATA_TYPE_NCHAR == paraType) {
@@ -1770,7 +1770,7 @@ static int32_t translateSubstr(SFunctionNode* pFunc, char* pErrBuf, int32_t len)
uint8_t para0Type = pPara0->resType.type;
uint8_t para1Type = pPara1->resType.type;
- if (!IS_VAR_DATA_TYPE(para0Type) || !IS_INTEGER_TYPE(para1Type)) {
+ if (!IS_STR_DATA_TYPE(para0Type) || !IS_INTEGER_TYPE(para1Type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -1802,7 +1802,7 @@ static int32_t translateCast(SFunctionNode* pFunc, char* pErrBuf, int32_t len) {
uint8_t para2Type = pFunc->node.resType.type;
int32_t para2Bytes = pFunc->node.resType.bytes;
- if (IS_VAR_DATA_TYPE(para2Type)) {
+ if (IS_STR_DATA_TYPE(para2Type)) {
para2Bytes -= VARSTR_HEADER_SIZE;
}
if (para2Bytes <= 0 || para2Bytes > 4096) { // cast dst var type length limits to 4096 bytes
@@ -1859,7 +1859,7 @@ static int32_t translateToUnixtimestamp(SFunctionNode* pFunc, char* pErrBuf, int
return invaildFuncParaNumErrMsg(pErrBuf, len, pFunc->functionName);
}
- if (!IS_VAR_DATA_TYPE(((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type)) {
+ if (!IS_STR_DATA_TYPE(((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -1878,7 +1878,7 @@ static int32_t translateTimeTruncate(SFunctionNode* pFunc, char* pErrBuf, int32_
uint8_t para1Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 0))->resType.type;
uint8_t para2Type = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, 1))->resType.type;
- if ((!IS_VAR_DATA_TYPE(para1Type) && !IS_INTEGER_TYPE(para1Type) && TSDB_DATA_TYPE_TIMESTAMP != para1Type) ||
+ if ((!IS_STR_DATA_TYPE(para1Type) && !IS_INTEGER_TYPE(para1Type) && TSDB_DATA_TYPE_TIMESTAMP != para1Type) ||
!IS_INTEGER_TYPE(para2Type)) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
@@ -1911,7 +1911,7 @@ static int32_t translateTimeDiff(SFunctionNode* pFunc, char* pErrBuf, int32_t le
for (int32_t i = 0; i < 2; ++i) {
uint8_t paraType = ((SExprNode*)nodesListGetNode(pFunc->pParameterList, i))->resType.type;
- if (!IS_VAR_DATA_TYPE(paraType) && !IS_INTEGER_TYPE(paraType) && TSDB_DATA_TYPE_TIMESTAMP != paraType) {
+ if (!IS_STR_DATA_TYPE(paraType) && !IS_INTEGER_TYPE(paraType) && TSDB_DATA_TYPE_TIMESTAMP != paraType) {
return invaildFuncParaTypeErrMsg(pErrBuf, len, pFunc->functionName);
}
}
diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c
index 013c58cc45..32d0472a50 100644
--- a/source/libs/function/src/builtinsimpl.c
+++ b/source/libs/function/src/builtinsimpl.c
@@ -3643,7 +3643,7 @@ int32_t topBotFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) {
SResultRowEntryInfo* pEntryInfo = GET_RES_INFO(pCtx);
STopBotRes* pRes = getTopBotOutputInfo(pCtx);
- int16_t type = pCtx->input.pData[0]->info.type;
+ int16_t type = pCtx->pExpr->base.resSchema.type;
int32_t slotId = pCtx->pExpr->base.resSchema.slotId;
SColumnInfoData* pCol = taosArrayGet(pBlock->pDataBlock, slotId);
diff --git a/source/libs/function/src/tpercentile.c b/source/libs/function/src/tpercentile.c
index 517253dc01..dbe0b6bb3a 100644
--- a/source/libs/function/src/tpercentile.c
+++ b/source/libs/function/src/tpercentile.c
@@ -33,13 +33,13 @@ static SFilePage *loadDataFromFilePage(tMemBucket *pMemBucket, int32_t slotIdx)
SFilePage *buffer = (SFilePage *)taosMemoryCalloc(1, pMemBucket->bytes * pMemBucket->pSlots[slotIdx].info.size + sizeof(SFilePage));
int32_t groupId = getGroupId(pMemBucket->numOfSlots, slotIdx, pMemBucket->times);
- SIDList list = getDataBufPagesIdList(pMemBucket->pBuffer, groupId);
+ SArray* pIdList = *(SArray**)taosHashGet(pMemBucket->groupPagesMap, &groupId, sizeof(groupId));
int32_t offset = 0;
- for(int32_t i = 0; i < list->size; ++i) {
- struct SPageInfo* pgInfo = *(struct SPageInfo**) taosArrayGet(list, i);
+ for(int32_t i = 0; i < taosArrayGetSize(pIdList); ++i) {
+ int32_t* pageId = taosArrayGet(pIdList, i);
- SFilePage* pg = getBufPage(pMemBucket->pBuffer, getPageId(pgInfo));
+ SFilePage* pg = getBufPage(pMemBucket->pBuffer, *pageId);
memcpy(buffer->data + offset, pg->data, (size_t)(pg->num * pMemBucket->bytes));
offset += (int32_t)(pg->num * pMemBucket->bytes);
@@ -97,11 +97,11 @@ double findOnlyResult(tMemBucket *pMemBucket) {
}
int32_t groupId = getGroupId(pMemBucket->numOfSlots, i, pMemBucket->times);
- SIDList list = getDataBufPagesIdList(pMemBucket->pBuffer, groupId);
+ SArray* list = *(SArray**)taosHashGet(pMemBucket->groupPagesMap, &groupId, sizeof(groupId));
assert(list->size == 1);
- struct SPageInfo* pgInfo = (struct SPageInfo*) taosArrayGetP(list, 0);
- SFilePage* pPage = getBufPage(pMemBucket->pBuffer, getPageId(pgInfo));
+ int32_t* pageId = taosArrayGet(list, 0);
+ SFilePage* pPage = getBufPage(pMemBucket->pBuffer, *pageId);
assert(pPage->num == 1);
double v = 0;
@@ -233,7 +233,7 @@ tMemBucket *tMemBucketCreate(int16_t nElemSize, int16_t dataType, double minval,
pBucket->times = 1;
pBucket->maxCapacity = 200000;
-
+ pBucket->groupPagesMap = taosHashInit(128, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), false, HASH_NO_LOCK);
if (setBoundingBox(&pBucket->range, pBucket->type, minval, maxval) != 0) {
// qError("MemBucket:%p, invalid value range: %f-%f", pBucket, minval, maxval);
taosMemoryFree(pBucket);
@@ -280,8 +280,16 @@ void tMemBucketDestroy(tMemBucket *pBucket) {
return;
}
+ void* p = taosHashIterate(pBucket->groupPagesMap, NULL);
+ while(p) {
+ SArray** p1 = p;
+ p = taosHashIterate(pBucket->groupPagesMap, p);
+ taosArrayDestroy(*p1);
+ }
+
destroyDiskbasedBuf(pBucket->pBuffer);
taosMemoryFreeClear(pBucket->pSlots);
+ taosHashCleanup(pBucket->groupPagesMap);
taosMemoryFreeClear(pBucket);
}
@@ -357,8 +365,16 @@ int32_t tMemBucketPut(tMemBucket *pBucket, const void *data, size_t size) {
pSlot->info.data = NULL;
}
+  void* p = taosHashGet(pBucket->groupPagesMap, &groupId, sizeof(groupId));
+  SArray* pPageIdList = (p != NULL) ? *(SArray**)p : NULL;
+ if (pPageIdList == NULL) {
+ SArray* pList = taosArrayInit(4, sizeof(int32_t));
+ taosHashPut(pBucket->groupPagesMap, &groupId, sizeof(groupId), &pList, POINTER_BYTES);
+ pPageIdList = pList;
+ }
+
pSlot->info.data = getNewBufPage(pBucket->pBuffer, groupId, &pageId);
pSlot->info.pageId = pageId;
+ taosArrayPush(pPageIdList, &pageId);
}
memcpy(pSlot->info.data->data + pSlot->info.data->num * pBucket->bytes, d, pBucket->bytes);
@@ -476,7 +492,7 @@ double getPercentileImpl(tMemBucket *pMemBucket, int32_t count, double fraction)
resetSlotInfo(pMemBucket);
int32_t groupId = getGroupId(pMemBucket->numOfSlots, i, pMemBucket->times - 1);
- SIDList list = getDataBufPagesIdList(pMemBucket->pBuffer, groupId);
+    SArray* list = *(SArray**)taosHashGet(pMemBucket->groupPagesMap, &groupId, sizeof(groupId));
assert(list->size > 0);
for (int32_t f = 0; f < list->size; ++f) {
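The tpercentile.c changes replace the old `SIDList`/`SPageInfo` bookkeeping with a hash map from group id to a growable list of page ids, with the list created lazily on first insert. A stand-alone sketch of that pattern, using a small dense group-id range in place of `taosHash`/`taosArray`:

```c
#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for the groupPagesMap pattern in tMemBucketPut:
 * one growable page-id list per group, created on first use. */
#define MAX_GROUPS 8

typedef struct { int32_t *ids; int32_t n, cap; } PageIdList;
static PageIdList g_groups[MAX_GROUPS];

int32_t appendPageId(int32_t groupId, int32_t pageId) {
  PageIdList *l = &g_groups[groupId];
  if (l->n == l->cap) {  /* create on first use, then grow geometrically */
    int32_t cap = l->cap ? l->cap * 2 : 4;
    int32_t *p = realloc(l->ids, cap * sizeof(int32_t));
    if (p == NULL) return -1;
    l->ids = p;
    l->cap = cap;
  }
  l->ids[l->n++] = pageId;
  return l->n;  /* number of pages recorded for this group */
}
```

As in `tMemBucketDestroy`, the real code must also walk the map at teardown and free each list, since the map only stores pointers.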
diff --git a/source/libs/function/src/udfd.c b/source/libs/function/src/udfd.c
index 1cbc78df48..5b27e030b9 100644
--- a/source/libs/function/src/udfd.c
+++ b/source/libs/function/src/udfd.c
@@ -84,6 +84,7 @@ typedef struct SUdf {
TUdfAggStartFunc aggStartFunc;
TUdfAggProcessFunc aggProcFunc;
TUdfAggFinishFunc aggFinishFunc;
+ TUdfAggMergeFunc aggMergeFunc;
TUdfInitFunc initFunc;
TUdfDestroyFunc destroyFunc;
@@ -271,6 +272,15 @@ void udfdProcessCallRequest(SUvUdfWork *uvUdf, SUdfRequest *request) {
break;
}
+ case TSDB_UDF_CALL_AGG_MERGE: {
+ SUdfInterBuf outBuf = {.buf = taosMemoryMalloc(udf->bufSize), .bufLen = udf->bufSize, .numOfResult = 0};
+ code = udf->aggMergeFunc(&call->interBuf, &call->interBuf2, &outBuf);
+ freeUdfInterBuf(&call->interBuf);
+ freeUdfInterBuf(&call->interBuf2);
+ subRsp->resultBuf = outBuf;
+
+ break;
+ }
case TSDB_UDF_CALL_AGG_FIN: {
SUdfInterBuf outBuf = {.buf = taosMemoryMalloc(udf->bufSize), .bufLen = udf->bufSize, .numOfResult = 0};
code = udf->aggFinishFunc(&call->interBuf, &outBuf);
@@ -309,6 +319,10 @@ void udfdProcessCallRequest(SUvUdfWork *uvUdf, SUdfRequest *request) {
freeUdfInterBuf(&subRsp->resultBuf);
break;
}
+ case TSDB_UDF_CALL_AGG_MERGE: {
+ freeUdfInterBuf(&subRsp->resultBuf);
+ break;
+ }
case TSDB_UDF_CALL_AGG_FIN: {
freeUdfInterBuf(&subRsp->resultBuf);
break;
@@ -560,7 +574,11 @@ int32_t udfdLoadUdf(char *udfName, SUdf *udf) {
strncpy(finishFuncName, processFuncName, strlen(processFuncName));
strncat(finishFuncName, finishSuffix, strlen(finishSuffix));
uv_dlsym(&udf->lib, finishFuncName, (void **)(&udf->aggFinishFunc));
- // TODO: merge
+    char  mergeFuncName[TSDB_FUNC_NAME_LEN + 6] = {0};
+    char *mergeSuffix = "_merge";
+    strncpy(mergeFuncName, processFuncName, strlen(processFuncName));
+    strncat(mergeFuncName, mergeSuffix, strlen(mergeSuffix));
+    uv_dlsym(&udf->lib, mergeFuncName, (void **)(&udf->aggMergeFunc));
}
return 0;
}
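The udfd.c hunk resolves the aggregate merge entry point by gluing a `_merge` suffix onto the UDF's process-function name before passing it to `uv_dlsym`, just as `_start` and `_finish` are resolved. The name composition can be sketched as follows (the helper name and buffer size are illustrative, not part of the real code):

```c
#include <stdio.h>
#include <string.h>

/* Compose "<name><suffix>" into dst and return dst for convenience;
 * udfd uses the composed string as the symbol name looked up via uv_dlsym. */
char *makeUdfSymbolName(char *dst, size_t cap, const char *name, const char *suffix) {
  snprintf(dst, cap, "%s%s", name, suffix);
  return dst;
}
```

Using `snprintf` into a dedicated, zeroed buffer avoids the pitfall of reusing a buffer that still holds a previously composed symbol name.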
diff --git a/source/libs/scalar/inc/sclInt.h b/source/libs/scalar/inc/sclInt.h
index d423b92da7..36d2c5a49c 100644
--- a/source/libs/scalar/inc/sclInt.h
+++ b/source/libs/scalar/inc/sclInt.h
@@ -45,6 +45,8 @@ typedef struct SScalarCtx {
#define SCL_IS_CONST_CALC(_ctx) (NULL == (_ctx)->pBlockList)
//#define SCL_IS_NULL_VALUE_NODE(_node) ((QUERY_NODE_VALUE == nodeType(_node)) && (TSDB_DATA_TYPE_NULL == ((SValueNode *)_node)->node.resType.type) && (((SValueNode *)_node)->placeholderNo <= 0))
#define SCL_IS_NULL_VALUE_NODE(_node) ((QUERY_NODE_VALUE == nodeType(_node)) && (TSDB_DATA_TYPE_NULL == ((SValueNode *)_node)->node.resType.type))
+#define SCL_IS_COMPARISON_OPERATOR(_opType) ((_opType) >= OP_TYPE_GREATER_THAN && (_opType) < OP_TYPE_IS_NOT_UNKNOWN)
+#define SCL_DOWNGRADE_DATETYPE(_type) ((_type) == TSDB_DATA_TYPE_BIGINT || TSDB_DATA_TYPE_DOUBLE == (_type) || (_type) == TSDB_DATA_TYPE_UBIGINT)
#define sclFatal(...) qFatal(__VA_ARGS__)
#define sclError(...) qError(__VA_ARGS__)
diff --git a/source/libs/scalar/src/scalar.c b/source/libs/scalar/src/scalar.c
index 6634a29f40..cd1f6624bd 100644
--- a/source/libs/scalar/src/scalar.c
+++ b/source/libs/scalar/src/scalar.c
@@ -9,6 +9,7 @@
#include "scalar.h"
#include "tudf.h"
#include "ttime.h"
+#include "tcompare.h"
int32_t scalarGetOperatorParamNum(EOperatorType type) {
if (OP_TYPE_IS_NULL == type || OP_TYPE_IS_NOT_NULL == type || OP_TYPE_IS_TRUE == type || OP_TYPE_IS_NOT_TRUE == type
@@ -219,6 +220,82 @@ void sclFreeParamList(SScalarParam *param, int32_t paramNum) {
taosMemoryFree(param);
}
+void sclDowngradeValueType(SValueNode *valueNode) {
+ switch (valueNode->node.resType.type) {
+ case TSDB_DATA_TYPE_BIGINT: {
+ int8_t i8 = valueNode->datum.i;
+ if (i8 == valueNode->datum.i) {
+ valueNode->node.resType.type = TSDB_DATA_TYPE_TINYINT;
+ *(int8_t*)&valueNode->typeData = i8;
+ break;
+ }
+ int16_t i16 = valueNode->datum.i;
+ if (i16 == valueNode->datum.i) {
+ valueNode->node.resType.type = TSDB_DATA_TYPE_SMALLINT;
+ *(int16_t*)&valueNode->typeData = i16;
+ break;
+ }
+ int32_t i32 = valueNode->datum.i;
+ if (i32 == valueNode->datum.i) {
+ valueNode->node.resType.type = TSDB_DATA_TYPE_INT;
+ *(int32_t*)&valueNode->typeData = i32;
+ break;
+ }
+ break;
+ }
+ case TSDB_DATA_TYPE_UBIGINT:{
+ uint8_t u8 = valueNode->datum.i;
+ if (u8 == valueNode->datum.i) {
+ int8_t i8 = valueNode->datum.i;
+ if (i8 == valueNode->datum.i) {
+ valueNode->node.resType.type = TSDB_DATA_TYPE_TINYINT;
+ *(int8_t*)&valueNode->typeData = i8;
+ } else {
+ valueNode->node.resType.type = TSDB_DATA_TYPE_UTINYINT;
+ *(uint8_t*)&valueNode->typeData = u8;
+ }
+ break;
+ }
+ uint16_t u16 = valueNode->datum.i;
+ if (u16 == valueNode->datum.i) {
+ int16_t i16 = valueNode->datum.i;
+ if (i16 == valueNode->datum.i) {
+ valueNode->node.resType.type = TSDB_DATA_TYPE_SMALLINT;
+ *(int16_t*)&valueNode->typeData = i16;
+ } else {
+ valueNode->node.resType.type = TSDB_DATA_TYPE_USMALLINT;
+ *(uint16_t*)&valueNode->typeData = u16;
+ }
+ break;
+ }
+ uint32_t u32 = valueNode->datum.i;
+ if (u32 == valueNode->datum.i) {
+ int32_t i32 = valueNode->datum.i;
+ if (i32 == valueNode->datum.i) {
+ valueNode->node.resType.type = TSDB_DATA_TYPE_INT;
+ *(int32_t*)&valueNode->typeData = i32;
+ } else {
+ valueNode->node.resType.type = TSDB_DATA_TYPE_UINT;
+ *(uint32_t*)&valueNode->typeData = u32;
+ }
+ break;
+ }
+ break;
+ }
+ case TSDB_DATA_TYPE_DOUBLE: {
+ float f = valueNode->datum.d;
+ if (FLT_EQUAL(f, valueNode->datum.d)) {
+ valueNode->node.resType.type = TSDB_DATA_TYPE_FLOAT;
+ *(float*)&valueNode->typeData = f;
+ break;
+ }
+ break;
+ }
+ default:
+ break;
+ }
+}
+
int32_t sclInitParam(SNode* node, SScalarParam *param, SScalarCtx *ctx, int32_t *rowNum) {
switch (nodeType(node)) {
case QUERY_NODE_LEFT_VALUE: {
@@ -675,6 +752,10 @@ EDealRes sclRewriteNonConstOperator(SNode** pNode, SScalarCtx *ctx) {
return DEAL_RES_ERROR;
}
}
+
+ if (SCL_IS_COMPARISON_OPERATOR(node->opType) && SCL_DOWNGRADE_DATETYPE(valueNode->node.resType.type)) {
+ sclDowngradeValueType(valueNode);
+ }
}
if (node->pRight && (QUERY_NODE_VALUE == nodeType(node->pRight))) {
@@ -692,6 +773,10 @@ EDealRes sclRewriteNonConstOperator(SNode** pNode, SScalarCtx *ctx) {
return DEAL_RES_ERROR;
}
}
+
+ if (SCL_IS_COMPARISON_OPERATOR(node->opType) && SCL_DOWNGRADE_DATETYPE(valueNode->node.resType.type)) {
+ sclDowngradeValueType(valueNode);
+ }
}
if (node->pRight && (QUERY_NODE_NODE_LIST == nodeType(node->pRight))) {
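`sclDowngradeValueType` narrows a constant comparison operand to the smallest type that round-trips its value, so later comparison paths can avoid widening the column side. The core cast-and-compare check can be sketched as (the helper name is illustrative):

```c
#include <stdint.h>

/* Width in bits of the narrowest signed integer type that holds v exactly,
 * mirroring the round-trip checks in sclDowngradeValueType: cast down,
 * compare with the original, and keep the narrowest type that matches. */
int narrowestSignedWidth(int64_t v) {
  if ((int8_t)v == v)  return 8;
  if ((int16_t)v == v) return 16;
  if ((int32_t)v == v) return 32;
  return 64;
}
```

The unsigned branch in the patch additionally prefers a signed narrow type when the value fits, and the double branch uses `FLT_EQUAL` to decide whether a float round-trips; the downgrade is applied only for comparison operators, gated by `SCL_IS_COMPARISON_OPERATOR`.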
diff --git a/source/libs/stream/src/streamMeta.c b/source/libs/stream/src/streamMeta.c
index 20a2f7d332..1442ed2e05 100644
--- a/source/libs/stream/src/streamMeta.c
+++ b/source/libs/stream/src/streamMeta.c
@@ -265,6 +265,8 @@ int32_t streamLoadTasks(SStreamMeta* pMeta) {
}
}
+ tdbFree(pKey);
+ tdbFree(pVal);
if (tdbTbcClose(pCur) < 0) {
return -1;
}
diff --git a/source/os/src/osDir.c b/source/os/src/osDir.c
index b755a35815..30aaa01dae 100644
--- a/source/os/src/osDir.c
+++ b/source/os/src/osDir.c
@@ -133,6 +133,7 @@ int32_t taosMulMkDir(const char *dirname) {
code = mkdir(temp, 0755);
#endif
if (code < 0 && errno != EEXIST) {
+ terrno = TAOS_SYSTEM_ERROR(errno);
return code;
}
*pos = TD_DIRSEP[0];
@@ -146,6 +147,7 @@ int32_t taosMulMkDir(const char *dirname) {
code = mkdir(temp, 0755);
#endif
if (code < 0 && errno != EEXIST) {
+ terrno = TAOS_SYSTEM_ERROR(errno);
return code;
}
}
diff --git a/source/os/src/osFile.c b/source/os/src/osFile.c
index f9797f6319..fab933755a 100644
--- a/source/os/src/osFile.c
+++ b/source/os/src/osFile.c
@@ -313,6 +313,7 @@ TdFilePtr taosOpenFile(const char *path, int32_t tdFileOptions) {
assert(!(tdFileOptions & TD_FILE_EXCL));
fp = fopen(path, mode);
if (fp == NULL) {
+ terrno = TAOS_SYSTEM_ERROR(errno);
return NULL;
}
} else {
@@ -335,6 +336,7 @@ TdFilePtr taosOpenFile(const char *path, int32_t tdFileOptions) {
fd = open(path, access, S_IRWXU | S_IRWXG | S_IRWXO);
#endif
if (fd == -1) {
+ terrno = TAOS_SYSTEM_ERROR(errno);
return NULL;
}
}
diff --git a/source/os/src/osSemaphore.c b/source/os/src/osSemaphore.c
index a95503b5e5..8cc6f0ef2e 100644
--- a/source/os/src/osSemaphore.c
+++ b/source/os/src/osSemaphore.c
@@ -1,531 +1,531 @@
-/*
- * Copyright (c) 2019 TAOS Data, Inc.
- *
- * This program is free software: you can use, redistribute, and/or modify
- * it under the terms of the GNU Affero General Public License, version 3
- * or later ("AGPL"), as published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE.
- *
- * You should have received a copy of the GNU Affero General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- */
-
-#define ALLOW_FORBID_FUNC
-#define _DEFAULT_SOURCE
-#include "os.h"
-#include "pthread.h"
-#include "tdef.h"
-
-#ifdef WINDOWS
-
-/*
- * windows implementation
- */
-
-#include <windows.h>
-
-bool taosCheckPthreadValid(TdThread thread) { return thread.p != NULL; }
-
-void taosResetPthread(TdThread* thread) { thread->p = 0; }
-
-int64_t taosGetPthreadId(TdThread thread) {
-#ifdef PTW32_VERSION
- return pthread_getw32threadid_np(thread);
-#else
- return (int64_t)thread;
-#endif
-}
-
-int64_t taosGetSelfPthreadId() { return GetCurrentThreadId(); }
-
-bool taosComparePthread(TdThread first, TdThread second) { return first.p == second.p; }
-
-int32_t taosGetPId() { return GetCurrentProcessId(); }
-
-int32_t taosGetAppName(char* name, int32_t* len) {
- char filepath[1024] = {0};
-
- GetModuleFileName(NULL, filepath, MAX_PATH);
- char* sub = strrchr(filepath, '.');
- if (sub != NULL) {
- *sub = '\0';
- }
- char* end = strrchr(filepath, TD_DIRSEP[0]);
- if (end == NULL) {
- end = filepath;
- }
-
- tstrncpy(name, end, TSDB_APP_NAME_LEN);
-
- if (len != NULL) {
- *len = (int32_t)strlen(end);
- }
-
- return 0;
-}
-
-int32_t tsem_wait(tsem_t* sem) {
- int ret = 0;
- do {
- ret = sem_wait(sem);
- } while (ret != 0 && errno == EINTR);
- return ret;
-}
-
-int32_t tsem_timewait(tsem_t* sem, int64_t nanosecs) {
- struct timespec ts, rel;
- FILETIME ft_before, ft_after;
- int rc;
-
- rel.tv_sec = 0;
- rel.tv_nsec = nanosecs;
-
- GetSystemTimeAsFileTime(&ft_before);
- // errno = 0;
- rc = sem_timedwait(sem, pthread_win32_getabstime_np(&ts, &rel));
-
- /* This should have timed out */
- // assert(errno == ETIMEDOUT);
- // assert(rc != 0);
- // GetSystemTimeAsFileTime(&ft_after);
- // // We specified a non-zero wait. Time must advance.
- // if (ft_before.dwLowDateTime == ft_after.dwLowDateTime && ft_before.dwHighDateTime == ft_after.dwHighDateTime)
- // {
- // printf("nanoseconds: %d, rc: %d, code:0x%x. before filetime: %d, %d; after filetime: %d, %d\n",
- // nanosecs, rc, errno,
- // (int)ft_before.dwLowDateTime, (int)ft_before.dwHighDateTime,
- // (int)ft_after.dwLowDateTime, (int)ft_after.dwHighDateTime);
- // printf("time must advance during sem_timedwait.");
- // return 1;
- // }
- return rc;
-}
-
-#elif defined(_TD_DARWIN_64)
-
-/*
- * darwin implementation
- */
-
-#include <libproc.h>
-
-// #define SEM_USE_PTHREAD
-// #define SEM_USE_POSIX
-// #define SEM_USE_SEM
-
-// #ifdef SEM_USE_SEM
-// #include
-// #include
-// #include
-// #include
-
-// static TdThread sem_thread;
-// static TdThreadOnce sem_once;
-// static task_t sem_port;
-// static volatile int sem_inited = 0;
-// static semaphore_t sem_exit;
-
-// static void *sem_thread_routine(void *arg) {
-// (void)arg;
-// setThreadName("sem_thrd");
-
-// sem_port = mach_task_self();
-// kern_return_t ret = semaphore_create(sem_port, &sem_exit, SYNC_POLICY_FIFO, 0);
-// if (ret != KERN_SUCCESS) {
-// fprintf(stderr, "==%s[%d]%s()==failed to create sem_exit\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__);
-// sem_inited = -1;
-// return NULL;
-// }
-// sem_inited = 1;
-// semaphore_wait(sem_exit);
-// return NULL;
-// }
-
-// static void once_init(void) {
-// int r = 0;
-// r = taosThreadCreate(&sem_thread, NULL, sem_thread_routine, NULL);
-// if (r) {
-// fprintf(stderr, "==%s[%d]%s()==failed to create thread\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__);
-// return;
-// }
-// while (sem_inited == 0) {
-// ;
-// }
-// }
-// #endif
-
-// struct tsem_s {
-// #ifdef SEM_USE_PTHREAD
-// TdThreadMutex lock;
-// TdThreadCond cond;
-// volatile int64_t val;
-// #elif defined(SEM_USE_POSIX)
-// size_t id;
-// sem_t *sem;
-// #elif defined(SEM_USE_SEM)
-// semaphore_t sem;
-// #else // SEM_USE_PTHREAD
-// dispatch_semaphore_t sem;
-// #endif // SEM_USE_PTHREAD
-
-// volatile unsigned int valid : 1;
-// };
-
-// int tsem_init(tsem_t *sem, int pshared, unsigned int value) {
-// // fprintf(stderr, "==%s[%d]%s():[%p]==creating\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
-// if (*sem) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==already initialized\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem);
-// abort();
-// }
-// struct tsem_s *p = (struct tsem_s *)taosMemoryCalloc(1, sizeof(*p));
-// if (!p) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==out of memory\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
-// abort();
-// }
-
-// #ifdef SEM_USE_PTHREAD
-// int r = taosThreadMutexInit(&p->lock, NULL);
-// do {
-// if (r) break;
-// r = taosThreadCondInit(&p->cond, NULL);
-// if (r) {
-// taosThreadMutexDestroy(&p->lock);
-// break;
-// }
-// p->val = value;
-// } while (0);
-// if (r) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==not created\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
-// abort();
-// }
-// #elif defined(SEM_USE_POSIX)
-// static size_t tick = 0;
-// do {
-// size_t id = atomic_add_fetch_64(&tick, 1);
-// if (id == SEM_VALUE_MAX) {
-// atomic_store_64(&tick, 0);
-// id = 0;
-// }
-// char name[NAME_MAX - 4];
-// snprintf(name, sizeof(name), "/t" PRId64, id);
-// p->sem = sem_open(name, O_CREAT | O_EXCL, pshared, value);
-// p->id = id;
-// if (p->sem != SEM_FAILED) break;
-// int e = errno;
-// if (e == EEXIST) continue;
-// if (e == EINTR) continue;
-// fprintf(stderr, "==%s[%d]%s():[%p]==not created[%d]%s\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem,
-// e, strerror(e));
-// abort();
-// } while (p->sem == SEM_FAILED);
-// #elif defined(SEM_USE_SEM)
-// taosThreadOnce(&sem_once, once_init);
-// if (sem_inited != 1) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal resource init failed\n", taosDirEntryBaseName(__FILE__), __LINE__,
-// __func__, sem);
-// errno = ENOMEM;
-// return -1;
-// }
-// kern_return_t ret = semaphore_create(sem_port, &p->sem, SYNC_POLICY_FIFO, value);
-// if (ret != KERN_SUCCESS) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==semophore_create failed\n", taosDirEntryBaseName(__FILE__), __LINE__,
-// __func__,
-// sem);
-// // we fail-fast here, because we have less-doc about semaphore_create for the moment
-// abort();
-// }
-// #else // SEM_USE_PTHREAD
-// p->sem = dispatch_semaphore_create(value);
-// if (p->sem == NULL) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==not created\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
-// abort();
-// }
-// #endif // SEM_USE_PTHREAD
-
-// p->valid = 1;
-
-// *sem = p;
-
-// return 0;
-// }
-
-// int tsem_wait(tsem_t *sem) {
-// if (!*sem) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==not initialized\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
-// abort();
-// }
-// struct tsem_s *p = *sem;
-// if (!p->valid) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==already destroyed\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem); abort();
-// }
-// #ifdef SEM_USE_PTHREAD
-// if (taosThreadMutexLock(&p->lock)) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem);
-// abort();
-// }
-// p->val -= 1;
-// if (p->val < 0) {
-// if (taosThreadCondWait(&p->cond, &p->lock)) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__,
-// __func__,
-// sem);
-// abort();
-// }
-// }
-// if (taosThreadMutexUnlock(&p->lock)) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem);
-// abort();
-// }
-// return 0;
-// #elif defined(SEM_USE_POSIX)
-// return sem_wait(p->sem);
-// #elif defined(SEM_USE_SEM)
-// return semaphore_wait(p->sem);
-// #else // SEM_USE_PTHREAD
-// return dispatch_semaphore_wait(p->sem, DISPATCH_TIME_FOREVER);
-// #endif // SEM_USE_PTHREAD
-// }
-
-// int tsem_post(tsem_t *sem) {
-// if (!*sem) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==not initialized\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
-// abort();
-// }
-// struct tsem_s *p = *sem;
-// if (!p->valid) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==already destroyed\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem); abort();
-// }
-// #ifdef SEM_USE_PTHREAD
-// if (taosThreadMutexLock(&p->lock)) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem);
-// abort();
-// }
-// p->val += 1;
-// if (p->val <= 0) {
-// if (taosThreadCondSignal(&p->cond)) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__,
-// __func__,
-// sem);
-// abort();
-// }
-// }
-// if (taosThreadMutexUnlock(&p->lock)) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem);
-// abort();
-// }
-// return 0;
-// #elif defined(SEM_USE_POSIX)
-// return sem_post(p->sem);
-// #elif defined(SEM_USE_SEM)
-// return semaphore_signal(p->sem);
-// #else // SEM_USE_PTHREAD
-// return dispatch_semaphore_signal(p->sem);
-// #endif // SEM_USE_PTHREAD
-// }
-
-// int tsem_destroy(tsem_t *sem) {
-// // fprintf(stderr, "==%s[%d]%s():[%p]==destroying\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
-// if (!*sem) {
-// // fprintf(stderr, "==%s[%d]%s():[%p]==not initialized\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem);
-// // abort();
-// return 0;
-// }
-// struct tsem_s *p = *sem;
-// if (!p->valid) {
-// // fprintf(stderr, "==%s[%d]%s():[%p]==already destroyed\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// // sem); abort();
-// return 0;
-// }
-// #ifdef SEM_USE_PTHREAD
-// if (taosThreadMutexLock(&p->lock)) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem);
-// abort();
-// }
-// p->valid = 0;
-// if (taosThreadCondDestroy(&p->cond)) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem);
-// abort();
-// }
-// if (taosThreadMutexUnlock(&p->lock)) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem);
-// abort();
-// }
-// if (taosThreadMutexDestroy(&p->lock)) {
-// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem);
-// abort();
-// }
-// #elif defined(SEM_USE_POSIX)
-// char name[NAME_MAX - 4];
-// snprintf(name, sizeof(name), "/t" PRId64, p->id);
-// int r = sem_unlink(name);
-// if (r) {
-// int e = errno;
-// fprintf(stderr, "==%s[%d]%s():[%p]==unlink failed[%d]%s\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
-// sem,
-// e, strerror(e));
-// abort();
-// }
-// #elif defined(SEM_USE_SEM)
-// semaphore_destroy(sem_port, p->sem);
-// #else // SEM_USE_PTHREAD
-// #endif // SEM_USE_PTHREAD
-
-// p->valid = 0;
-// taosMemoryFree(p);
-
-// *sem = NULL;
-// return 0;
-// }
-
-int tsem_init(tsem_t *psem, int flags, unsigned int count) {
- *psem = dispatch_semaphore_create(count);
- if (*psem == NULL) return -1;
- return 0;
-}
-
-int tsem_destroy(tsem_t *psem) {
- return 0;
-}
-
-int tsem_post(tsem_t *psem) {
- if (psem == NULL || *psem == NULL) return -1;
- dispatch_semaphore_signal(*psem);
- return 0;
-}
-
-int tsem_wait(tsem_t *psem) {
- if (psem == NULL || *psem == NULL) return -1;
- dispatch_semaphore_wait(*psem, DISPATCH_TIME_FOREVER);
- return 0;
-}
-
-int tsem_timewait(tsem_t *psem, int64_t nanosecs) {
- if (psem == NULL || *psem == NULL) return -1;
- dispatch_semaphore_wait(*psem, nanosecs);
- return 0;
-}
-
-bool taosCheckPthreadValid(TdThread thread) {
- int32_t ret = taosThreadKill(thread, 0);
- if (ret == ESRCH) return false;
- if (ret == EINVAL) return false;
- // alive
- return true;
-}
-
-int64_t taosGetSelfPthreadId() {
- TdThread thread = taosThreadSelf();
- return (int64_t)thread;
-}
-
-int64_t taosGetPthreadId(TdThread thread) { return (int64_t)thread; }
-
-void taosResetPthread(TdThread *thread) { *thread = NULL; }
-
-bool taosComparePthread(TdThread first, TdThread second) { return taosThreadEqual(first, second) ? true : false; }
-
-int32_t taosGetPId() { return (int32_t)getpid(); }
-
-int32_t taosGetAppName(char *name, int32_t *len) {
- char buf[PATH_MAX + 1];
- buf[0] = '\0';
- proc_name(getpid(), buf, sizeof(buf) - 1);
- buf[PATH_MAX] = '\0';
- size_t n = strlen(buf);
- if (len) *len = n;
- if (name) tstrncpy(name, buf, TSDB_APP_NAME_LEN);
- return 0;
-}
-
-#else
-
-/*
- * linux implementation
- */
-
-#include <sys/syscall.h>
-#include <unistd.h>
-
-bool taosCheckPthreadValid(TdThread thread) { return thread != 0; }
-
-int64_t taosGetSelfPthreadId() {
- static __thread int id = 0;
- if (id != 0) return id;
- id = syscall(SYS_gettid);
- return id;
-}
-
-int64_t taosGetPthreadId(TdThread thread) { return (int64_t)thread; }
-void taosResetPthread(TdThread* thread) { *thread = 0; }
-bool taosComparePthread(TdThread first, TdThread second) { return first == second; }
-
-int32_t taosGetPId() {
- static int32_t pid;
- if (pid != 0) return pid;
- pid = getpid();
- return pid;
-}
-
-int32_t taosGetAppName(char* name, int32_t* len) {
- const char* self = "/proc/self/exe";
- char path[PATH_MAX] = {0};
-
- if (readlink(self, path, PATH_MAX) <= 0) {
- return -1;
- }
-
- path[PATH_MAX - 1] = 0;
- char* end = strrchr(path, '/');
- if (end == NULL) {
- return -1;
- }
-
- ++end;
-
- tstrncpy(name, end, TSDB_APP_NAME_LEN);
-
- if (len != NULL) {
- *len = strlen(name);
- }
-
- return 0;
-}
-
-int32_t tsem_wait(tsem_t* sem) {
- int ret = 0;
- do {
- ret = sem_wait(sem);
- } while (ret != 0 && errno == EINTR);
- return ret;
-}
-
-int32_t tsem_timewait(tsem_t* sem, int64_t nanosecs) {
- int ret = 0;
-
- struct timespec tv = {
- .tv_sec = 0,
- .tv_nsec = nanosecs,
- };
-
- while ((ret = sem_timedwait(sem, &tv)) == -1 && errno == EINTR) continue;
-
- return ret;
-}
-
-#endif
+/*
+ * Copyright (c) 2019 TAOS Data, Inc.
+ *
+ * This program is free software: you can use, redistribute, and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3
+ * or later ("AGPL"), as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#define ALLOW_FORBID_FUNC
+#define _DEFAULT_SOURCE
+#include "os.h"
+#include "pthread.h"
+#include "tdef.h"
+
+#ifdef WINDOWS
+
+/*
+ * windows implementation
+ */
+
+#include <windows.h>
+
+bool taosCheckPthreadValid(TdThread thread) { return thread.p != NULL; }
+
+void taosResetPthread(TdThread* thread) { thread->p = 0; }
+
+int64_t taosGetPthreadId(TdThread thread) {
+#ifdef PTW32_VERSION
+ return pthread_getw32threadid_np(thread);
+#else
+ return (int64_t)thread;
+#endif
+}
+
+int64_t taosGetSelfPthreadId() { return GetCurrentThreadId(); }
+
+bool taosComparePthread(TdThread first, TdThread second) { return first.p == second.p; }
+
+int32_t taosGetPId() { return GetCurrentProcessId(); }
+
+int32_t taosGetAppName(char* name, int32_t* len) {
+ char filepath[1024] = {0};
+
+ GetModuleFileName(NULL, filepath, MAX_PATH);
+ char* sub = strrchr(filepath, '.');
+ if (sub != NULL) {
+ *sub = '\0';
+ }
+ char* end = strrchr(filepath, TD_DIRSEP[0]);
+ if (end == NULL) {
+ end = filepath;
+ }
+
+ tstrncpy(name, end, TSDB_APP_NAME_LEN);
+
+ if (len != NULL) {
+ *len = (int32_t)strlen(end);
+ }
+
+ return 0;
+}
+
+int32_t tsem_wait(tsem_t* sem) {
+ int ret = 0;
+ do {
+ ret = sem_wait(sem);
+ } while (ret != 0 && errno == EINTR);
+ return ret;
+}
+
+int32_t tsem_timewait(tsem_t* sem, int64_t nanosecs) {
+ struct timespec ts, rel;
+ FILETIME ft_before, ft_after;
+ int rc;
+
+ rel.tv_sec = 0;
+ rel.tv_nsec = nanosecs;
+
+ GetSystemTimeAsFileTime(&ft_before);
+ // errno = 0;
+ rc = sem_timedwait(sem, pthread_win32_getabstime_np(&ts, &rel));
+
+ /* This should have timed out */
+ // assert(errno == ETIMEDOUT);
+ // assert(rc != 0);
+ // GetSystemTimeAsFileTime(&ft_after);
+ // // We specified a non-zero wait. Time must advance.
+ // if (ft_before.dwLowDateTime == ft_after.dwLowDateTime && ft_before.dwHighDateTime == ft_after.dwHighDateTime)
+ // {
+ // printf("nanoseconds: %d, rc: %d, code:0x%x. before filetime: %d, %d; after filetime: %d, %d\n",
+ // nanosecs, rc, errno,
+ // (int)ft_before.dwLowDateTime, (int)ft_before.dwHighDateTime,
+ // (int)ft_after.dwLowDateTime, (int)ft_after.dwHighDateTime);
+ // printf("time must advance during sem_timedwait.");
+ // return 1;
+ // }
+ return rc;
+}
+
+#elif defined(_TD_DARWIN_64)
+
+/*
+ * darwin implementation
+ */
+
+#include <libproc.h>
+
+// #define SEM_USE_PTHREAD
+// #define SEM_USE_POSIX
+// #define SEM_USE_SEM
+
+// #ifdef SEM_USE_SEM
+// #include
+// #include
+// #include
+// #include
+
+// static TdThread sem_thread;
+// static TdThreadOnce sem_once;
+// static task_t sem_port;
+// static volatile int sem_inited = 0;
+// static semaphore_t sem_exit;
+
+// static void *sem_thread_routine(void *arg) {
+// (void)arg;
+// setThreadName("sem_thrd");
+
+// sem_port = mach_task_self();
+// kern_return_t ret = semaphore_create(sem_port, &sem_exit, SYNC_POLICY_FIFO, 0);
+// if (ret != KERN_SUCCESS) {
+// fprintf(stderr, "==%s[%d]%s()==failed to create sem_exit\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__);
+// sem_inited = -1;
+// return NULL;
+// }
+// sem_inited = 1;
+// semaphore_wait(sem_exit);
+// return NULL;
+// }
+
+// static void once_init(void) {
+// int r = 0;
+// r = taosThreadCreate(&sem_thread, NULL, sem_thread_routine, NULL);
+// if (r) {
+// fprintf(stderr, "==%s[%d]%s()==failed to create thread\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__);
+// return;
+// }
+// while (sem_inited == 0) {
+// ;
+// }
+// }
+// #endif
+
+// struct tsem_s {
+// #ifdef SEM_USE_PTHREAD
+// TdThreadMutex lock;
+// TdThreadCond cond;
+// volatile int64_t val;
+// #elif defined(SEM_USE_POSIX)
+// size_t id;
+// sem_t *sem;
+// #elif defined(SEM_USE_SEM)
+// semaphore_t sem;
+// #else // SEM_USE_PTHREAD
+// dispatch_semaphore_t sem;
+// #endif // SEM_USE_PTHREAD
+
+// volatile unsigned int valid : 1;
+// };
+
+// int tsem_init(tsem_t *sem, int pshared, unsigned int value) {
+// // fprintf(stderr, "==%s[%d]%s():[%p]==creating\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
+// if (*sem) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==already initialized\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem);
+// abort();
+// }
+// struct tsem_s *p = (struct tsem_s *)taosMemoryCalloc(1, sizeof(*p));
+// if (!p) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==out of memory\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
+// abort();
+// }
+
+// #ifdef SEM_USE_PTHREAD
+// int r = taosThreadMutexInit(&p->lock, NULL);
+// do {
+// if (r) break;
+// r = taosThreadCondInit(&p->cond, NULL);
+// if (r) {
+// taosThreadMutexDestroy(&p->lock);
+// break;
+// }
+// p->val = value;
+// } while (0);
+// if (r) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==not created\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
+// abort();
+// }
+// #elif defined(SEM_USE_POSIX)
+// static size_t tick = 0;
+// do {
+// size_t id = atomic_add_fetch_64(&tick, 1);
+// if (id == SEM_VALUE_MAX) {
+// atomic_store_64(&tick, 0);
+// id = 0;
+// }
+// char name[NAME_MAX - 4];
+// snprintf(name, sizeof(name), "/t" PRId64, id);
+// p->sem = sem_open(name, O_CREAT | O_EXCL, pshared, value);
+// p->id = id;
+// if (p->sem != SEM_FAILED) break;
+// int e = errno;
+// if (e == EEXIST) continue;
+// if (e == EINTR) continue;
+// fprintf(stderr, "==%s[%d]%s():[%p]==not created[%d]%s\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem,
+// e, strerror(e));
+// abort();
+// } while (p->sem == SEM_FAILED);
+// #elif defined(SEM_USE_SEM)
+// taosThreadOnce(&sem_once, once_init);
+// if (sem_inited != 1) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal resource init failed\n", taosDirEntryBaseName(__FILE__), __LINE__,
+// __func__, sem);
+// errno = ENOMEM;
+// return -1;
+// }
+// kern_return_t ret = semaphore_create(sem_port, &p->sem, SYNC_POLICY_FIFO, value);
+// if (ret != KERN_SUCCESS) {
+//     fprintf(stderr, "==%s[%d]%s():[%p]==semaphore_create failed\n", taosDirEntryBaseName(__FILE__), __LINE__,
+// __func__,
+// sem);
+//     // fail fast here, since semaphore_create is not well documented at the moment
+// abort();
+// }
+// #else // SEM_USE_PTHREAD
+// p->sem = dispatch_semaphore_create(value);
+// if (p->sem == NULL) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==not created\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
+// abort();
+// }
+// #endif // SEM_USE_PTHREAD
+
+// p->valid = 1;
+
+// *sem = p;
+
+// return 0;
+// }
+
+// int tsem_wait(tsem_t *sem) {
+// if (!*sem) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==not initialized\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
+// abort();
+// }
+// struct tsem_s *p = *sem;
+// if (!p->valid) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==already destroyed\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem); abort();
+// }
+// #ifdef SEM_USE_PTHREAD
+// if (taosThreadMutexLock(&p->lock)) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem);
+// abort();
+// }
+// p->val -= 1;
+// if (p->val < 0) {
+// if (taosThreadCondWait(&p->cond, &p->lock)) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__,
+// __func__,
+// sem);
+// abort();
+// }
+// }
+// if (taosThreadMutexUnlock(&p->lock)) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem);
+// abort();
+// }
+// return 0;
+// #elif defined(SEM_USE_POSIX)
+// return sem_wait(p->sem);
+// #elif defined(SEM_USE_SEM)
+// return semaphore_wait(p->sem);
+// #else // SEM_USE_PTHREAD
+// return dispatch_semaphore_wait(p->sem, DISPATCH_TIME_FOREVER);
+// #endif // SEM_USE_PTHREAD
+// }
+
+// int tsem_post(tsem_t *sem) {
+// if (!*sem) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==not initialized\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
+// abort();
+// }
+// struct tsem_s *p = *sem;
+// if (!p->valid) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==already destroyed\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem); abort();
+// }
+// #ifdef SEM_USE_PTHREAD
+// if (taosThreadMutexLock(&p->lock)) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem);
+// abort();
+// }
+// p->val += 1;
+// if (p->val <= 0) {
+// if (taosThreadCondSignal(&p->cond)) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__,
+// __func__,
+// sem);
+// abort();
+// }
+// }
+// if (taosThreadMutexUnlock(&p->lock)) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem);
+// abort();
+// }
+// return 0;
+// #elif defined(SEM_USE_POSIX)
+// return sem_post(p->sem);
+// #elif defined(SEM_USE_SEM)
+// return semaphore_signal(p->sem);
+// #else // SEM_USE_PTHREAD
+// return dispatch_semaphore_signal(p->sem);
+// #endif // SEM_USE_PTHREAD
+// }
+
+// int tsem_destroy(tsem_t *sem) {
+// // fprintf(stderr, "==%s[%d]%s():[%p]==destroying\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__, sem);
+// if (!*sem) {
+// // fprintf(stderr, "==%s[%d]%s():[%p]==not initialized\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem);
+// // abort();
+// return 0;
+// }
+// struct tsem_s *p = *sem;
+// if (!p->valid) {
+// // fprintf(stderr, "==%s[%d]%s():[%p]==already destroyed\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// // sem); abort();
+// return 0;
+// }
+// #ifdef SEM_USE_PTHREAD
+// if (taosThreadMutexLock(&p->lock)) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem);
+// abort();
+// }
+// p->valid = 0;
+// if (taosThreadCondDestroy(&p->cond)) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem);
+// abort();
+// }
+// if (taosThreadMutexUnlock(&p->lock)) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem);
+// abort();
+// }
+// if (taosThreadMutexDestroy(&p->lock)) {
+// fprintf(stderr, "==%s[%d]%s():[%p]==internal logic error\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem);
+// abort();
+// }
+// #elif defined(SEM_USE_POSIX)
+// char name[NAME_MAX - 4];
+//   snprintf(name, sizeof(name), "/t%" PRId64, p->id);
+// int r = sem_unlink(name);
+// if (r) {
+// int e = errno;
+// fprintf(stderr, "==%s[%d]%s():[%p]==unlink failed[%d]%s\n", taosDirEntryBaseName(__FILE__), __LINE__, __func__,
+// sem,
+// e, strerror(e));
+// abort();
+// }
+// #elif defined(SEM_USE_SEM)
+// semaphore_destroy(sem_port, p->sem);
+// #else // SEM_USE_PTHREAD
+// #endif // SEM_USE_PTHREAD
+
+// p->valid = 0;
+// taosMemoryFree(p);
+
+// *sem = NULL;
+// return 0;
+// }
+
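The pthread variant that is commented out above builds a counting semaphore from a mutex and condition variable. A standalone sketch of that construction (using a `while` re-check instead of the signed-count trick, to stay robust against spurious wakeups; the `csem_*` names are illustrative, not TDengine APIs):

```c
#include <pthread.h>

// A counting semaphore built from a mutex and condition variable,
// mirroring the commented-out SEM_USE_PTHREAD path above.
typedef struct {
  pthread_mutex_t lock;
  pthread_cond_t  cond;
  long            val;  // number of available permits
} csem_t;

static int csem_init(csem_t *s, long value) {
  if (pthread_mutex_init(&s->lock, NULL)) return -1;
  if (pthread_cond_init(&s->cond, NULL)) {
    pthread_mutex_destroy(&s->lock);
    return -1;
  }
  s->val = value;
  return 0;
}

static int csem_wait(csem_t *s) {
  pthread_mutex_lock(&s->lock);
  // Re-check the predicate on every wakeup; pthread_cond_wait may wake spuriously.
  while (s->val <= 0) pthread_cond_wait(&s->cond, &s->lock);
  s->val -= 1;
  pthread_mutex_unlock(&s->lock);
  return 0;
}

static int csem_post(csem_t *s) {
  pthread_mutex_lock(&s->lock);
  s->val += 1;
  pthread_cond_signal(&s->cond);  // wake one waiter, if any
  pthread_mutex_unlock(&s->lock);
  return 0;
}
```

The commented-out code instead lets `val` go negative and waits when it does; both forms are equivalent for counting, but the `while`-loop form does not depend on exact signal/wakeup pairing.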
+int tsem_init(tsem_t *psem, int flags, unsigned int count) {
+ *psem = dispatch_semaphore_create(count);
+ if (*psem == NULL) return -1;
+ return 0;
+}
+
+int tsem_destroy(tsem_t *psem) {
+ return 0;
+}
+
+int tsem_post(tsem_t *psem) {
+ if (psem == NULL || *psem == NULL) return -1;
+ dispatch_semaphore_signal(*psem);
+ return 0;
+}
+
+int tsem_wait(tsem_t *psem) {
+ if (psem == NULL || *psem == NULL) return -1;
+ dispatch_semaphore_wait(*psem, DISPATCH_TIME_FOREVER);
+ return 0;
+}
+
+int tsem_timewait(tsem_t *psem, int64_t nanosecs) {
+  if (psem == NULL || *psem == NULL) return -1;
+  // dispatch_semaphore_wait expects a dispatch_time_t deadline, not raw nanoseconds.
+  dispatch_semaphore_wait(*psem, dispatch_time(DISPATCH_TIME_NOW, nanosecs));
+  return 0;
+}
+
+bool taosCheckPthreadValid(TdThread thread) {
+ int32_t ret = taosThreadKill(thread, 0);
+ if (ret == ESRCH) return false;
+ if (ret == EINVAL) return false;
+ // alive
+ return true;
+}
+
+int64_t taosGetSelfPthreadId() {
+ TdThread thread = taosThreadSelf();
+ return (int64_t)thread;
+}
+
+int64_t taosGetPthreadId(TdThread thread) { return (int64_t)thread; }
+
+void taosResetPthread(TdThread *thread) { *thread = NULL; }
+
+bool taosComparePthread(TdThread first, TdThread second) { return taosThreadEqual(first, second) ? true : false; }
+
+int32_t taosGetPId() { return (int32_t)getpid(); }
+
+int32_t taosGetAppName(char *name, int32_t *len) {
+ char buf[PATH_MAX + 1];
+ buf[0] = '\0';
+ proc_name(getpid(), buf, sizeof(buf) - 1);
+ buf[PATH_MAX] = '\0';
+ size_t n = strlen(buf);
+ if (len) *len = n;
+ if (name) tstrncpy(name, buf, TSDB_APP_NAME_LEN);
+ return 0;
+}
+
+#else
+
+/*
+ * linux implementation
+ */
+
+#include <sys/syscall.h>
+#include <unistd.h>
+
+bool taosCheckPthreadValid(TdThread thread) { return thread != 0; }
+
+int64_t taosGetSelfPthreadId() {
+ static __thread int id = 0;
+ if (id != 0) return id;
+ id = syscall(SYS_gettid);
+ return id;
+}
+
+int64_t taosGetPthreadId(TdThread thread) { return (int64_t)thread; }
+void taosResetPthread(TdThread* thread) { *thread = 0; }
+bool taosComparePthread(TdThread first, TdThread second) { return first == second; }
+
+int32_t taosGetPId() {
+ static int32_t pid;
+ if (pid != 0) return pid;
+ pid = getpid();
+ return pid;
+}
+
+int32_t taosGetAppName(char* name, int32_t* len) {
+ const char* self = "/proc/self/exe";
+ char path[PATH_MAX] = {0};
+
+ if (readlink(self, path, PATH_MAX) <= 0) {
+ return -1;
+ }
+
+ path[PATH_MAX - 1] = 0;
+ char* end = strrchr(path, '/');
+ if (end == NULL) {
+ return -1;
+ }
+
+ ++end;
+
+ tstrncpy(name, end, TSDB_APP_NAME_LEN);
+
+ if (len != NULL) {
+ *len = strlen(name);
+ }
+
+ return 0;
+}
+
+int32_t tsem_wait(tsem_t* sem) {
+ int ret = 0;
+ do {
+ ret = sem_wait(sem);
+ } while (ret != 0 && errno == EINTR);
+ return ret;
+}
+
+int32_t tsem_timewait(tsem_t* sem, int64_t nanosecs) {
+  int ret = 0;
+
+  // sem_timedwait takes an absolute CLOCK_REALTIME deadline, so convert the
+  // relative timeout and keep tv_nsec within [0, 1e9).
+  struct timespec tv;
+  clock_gettime(CLOCK_REALTIME, &tv);
+  tv.tv_sec += nanosecs / 1000000000;
+  tv.tv_nsec += nanosecs % 1000000000;
+  if (tv.tv_nsec >= 1000000000) {
+    tv.tv_sec += 1;
+    tv.tv_nsec -= 1000000000;
+  }
+
+  while ((ret = sem_timedwait(sem, &tv)) == -1 && errno == EINTR) continue;
+
+  return ret;
+}
+
+#endif
diff --git a/source/os/src/osSysinfo.c b/source/os/src/osSysinfo.c
index 3aa3f4f29e..19e9568bbe 100644
--- a/source/os/src/osSysinfo.c
+++ b/source/os/src/osSysinfo.c
@@ -595,6 +595,7 @@ int32_t taosGetDiskSize(char *dataDir, SDiskSize *diskSize) {
#else
struct statvfs info;
if (statvfs(dataDir, &info)) {
+ terrno = TAOS_SYSTEM_ERROR(errno);
return -1;
} else {
diskSize->total = info.f_blocks * info.f_frsize;
diff --git a/source/util/src/tlog.c b/source/util/src/tlog.c
index 2e8239c68f..a2d65d6a54 100644
--- a/source/util/src/tlog.c
+++ b/source/util/src/tlog.c
@@ -429,7 +429,7 @@ static inline int32_t taosBuildLogHead(char *buffer, const char *flags) {
}
static inline void taosPrintLogImp(ELogLevel level, int32_t dflag, const char *buffer, int32_t len) {
- if ((dflag & DEBUG_FILE) && tsLogObj.logHandle && tsLogObj.logHandle->pFile != NULL) {
+ if ((dflag & DEBUG_FILE) && tsLogObj.logHandle && tsLogObj.logHandle->pFile != NULL && osLogSpaceAvailable()) {
taosUpdateLogNums(level);
if (tsAsyncLog) {
taosPushLogBuffer(tsLogObj.logHandle, buffer, len);
@@ -451,7 +451,6 @@ static inline void taosPrintLogImp(ELogLevel level, int32_t dflag, const char *b
}
void taosPrintLog(const char *flags, ELogLevel level, int32_t dflag, const char *format, ...) {
- if (!osLogSpaceAvailable()) return;
if (!(dflag & DEBUG_FILE) && !(dflag & DEBUG_SCREEN)) return;
char buffer[LOG_MAX_LINE_BUFFER_SIZE];
diff --git a/source/util/src/tpagedbuf.c b/source/util/src/tpagedbuf.c
index ac2128dd70..4d5532b9a6 100644
--- a/source/util/src/tpagedbuf.c
+++ b/source/util/src/tpagedbuf.c
@@ -33,7 +33,7 @@ struct SDiskbasedBuf {
int32_t pageSize; // current used page size
int32_t inMemPages; // numOfPages that are allocated in memory
SList* freePgList; // free page list
- SHashObj* groupSet; // id hash table, todo remove it
+ SArray* pIdList; // page id list
SHashObj* all;
SList* lruList;
void* emptyDummyIdList; // dummy id list
@@ -241,26 +241,7 @@ static int32_t loadPageFromDisk(SDiskbasedBuf* pBuf, SPageInfo* pg) {
return 0;
}
-static SIDList addNewGroup(SDiskbasedBuf* pBuf, int32_t groupId) {
- assert(taosHashGet(pBuf->groupSet, (const char*)&groupId, sizeof(int32_t)) == NULL);
-
- SArray* pa = taosArrayInit(1, POINTER_BYTES);
- int32_t ret = taosHashPut(pBuf->groupSet, (const char*)&groupId, sizeof(int32_t), &pa, POINTER_BYTES);
- assert(ret == 0);
-
- return pa;
-}
-
-static SPageInfo* registerPage(SDiskbasedBuf* pBuf, int32_t groupId, int32_t pageId) {
- SIDList list = NULL;
-
- char** p = taosHashGet(pBuf->groupSet, (const char*)&groupId, sizeof(int32_t));
- if (p == NULL) { // it is a new group id
- list = addNewGroup(pBuf, groupId);
- } else {
- list = (SIDList)(*p);
- }
-
+static SPageInfo* registerPage(SDiskbasedBuf* pBuf, int32_t pageId) {
pBuf->numOfPages += 1;
SPageInfo* ppi = taosMemoryMalloc(sizeof(SPageInfo));
@@ -273,7 +254,7 @@ static SPageInfo* registerPage(SDiskbasedBuf* pBuf, int32_t groupId, int32_t pag
ppi->pn = NULL;
ppi->dirty = false;
- return *(SPageInfo**)taosArrayPush(list, &ppi);
+ return *(SPageInfo**)taosArrayPush(pBuf->pIdList, &ppi);
}
static SListNode* getEldestUnrefedPage(SDiskbasedBuf* pBuf) {
@@ -293,16 +274,6 @@ static SListNode* getEldestUnrefedPage(SDiskbasedBuf* pBuf) {
}
}
- // int32_t pos = listNEles(pBuf->lruList);
- // SListIter iter1 = {0};
- // tdListInitIter(pBuf->lruList, &iter1, TD_LIST_BACKWARD);
- // SListNode* pn1 = NULL;
- // while((pn1 = tdListNext(&iter1)) != NULL) {
- // SPageInfo* pageInfo = *(SPageInfo**) pn1->data;
- // printf("page %d is used, dirty:%d, pos:%d\n", pageInfo->pageId, pageInfo->dirty, pos - 1);
- // pos -= 1;
- // }
-
return pn;
}
@@ -382,7 +353,8 @@ int32_t createDiskbasedBuf(SDiskbasedBuf** pBuf, int32_t pagesize, int32_t inMem
// init id hash table
_hash_fn_t fn = taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT);
- pPBuf->groupSet = taosHashInit(10, fn, true, false);
+ pPBuf->pIdList = taosArrayInit(4, POINTER_BYTES);
+
pPBuf->assistBuf = taosMemoryMalloc(pPBuf->pageSize + 2); // EXTRA BYTES
pPBuf->all = taosHashInit(10, fn, true, false);
@@ -425,7 +397,7 @@ void* getNewBufPage(SDiskbasedBuf* pBuf, int32_t groupId, int32_t* pageId) {
*pageId = (++pBuf->allocateId);
// register page id info
- pi = registerPage(pBuf, groupId, *pageId);
+ pi = registerPage(pBuf, *pageId);
// add to hash map
taosHashPut(pBuf->all, pageId, sizeof(int32_t), &pi, POINTER_BYTES);
@@ -526,19 +498,11 @@ void releaseBufPageInfo(SDiskbasedBuf* pBuf, SPageInfo* pi) {
pBuf->statis.releasePages += 1;
}
-size_t getNumOfBufGroupId(const SDiskbasedBuf* pBuf) { return taosHashGetSize(pBuf->groupSet); }
-
size_t getTotalBufSize(const SDiskbasedBuf* pBuf) { return (size_t)pBuf->totalBufSize; }
-SIDList getDataBufPagesIdList(SDiskbasedBuf* pBuf, int32_t groupId) {
- assert(pBuf != NULL);
-
- char** p = taosHashGet(pBuf->groupSet, (const char*)&groupId, sizeof(int32_t));
- if (p == NULL) { // it is a new group id
- return pBuf->emptyDummyIdList;
- } else {
- return (SArray*)(*p);
- }
+SIDList getDataBufPagesIdList(SDiskbasedBuf* pBuf) {
+ ASSERT(pBuf != NULL);
+ return pBuf->pIdList;
}
void destroyDiskbasedBuf(SDiskbasedBuf* pBuf) {
@@ -578,26 +542,21 @@ void destroyDiskbasedBuf(SDiskbasedBuf* pBuf) {
taosRemoveFile(pBuf->path);
taosMemoryFreeClear(pBuf->path);
- SArray** p = taosHashIterate(pBuf->groupSet, NULL);
- while (p) {
- size_t n = taosArrayGetSize(*p);
- for (int32_t i = 0; i < n; ++i) {
- SPageInfo* pi = taosArrayGetP(*p, i);
- taosMemoryFreeClear(pi->pData);
- taosMemoryFreeClear(pi);
- }
-
- taosArrayDestroy(*p);
- p = taosHashIterate(pBuf->groupSet, p);
+ size_t n = taosArrayGetSize(pBuf->pIdList);
+ for (int32_t i = 0; i < n; ++i) {
+ SPageInfo* pi = taosArrayGetP(pBuf->pIdList, i);
+ taosMemoryFreeClear(pi->pData);
+ taosMemoryFreeClear(pi);
}
+ taosArrayDestroy(pBuf->pIdList);
+
tdListFree(pBuf->lruList);
tdListFree(pBuf->freePgList);
taosArrayDestroy(pBuf->emptyDummyIdList);
taosArrayDestroy(pBuf->pFree);
- taosHashCleanup(pBuf->groupSet);
taosHashCleanup(pBuf->all);
taosMemoryFreeClear(pBuf->id);
@@ -661,32 +620,32 @@ void dBufPrintStatis(const SDiskbasedBuf* pBuf) {
pBuf->totalBufSize / 1024.0, pBuf->numOfPages, listNEles(pBuf->lruList) * pBuf->pageSize / 1024.0,
listNEles(pBuf->lruList), pBuf->fileSize / 1024.0, pBuf->pageSize / 1024.0f, pBuf->id);
- printf(
- "Get/Release pages:%d/%d, flushToDisk:%.2f Kb (%d Pages), loadFromDisk:%.2f Kb (%d Pages), avgPageSize:%.2f Kb\n",
- ps->getPages, ps->releasePages, ps->flushBytes / 1024.0f, ps->flushPages, ps->loadBytes / 1024.0f, ps->loadPages,
- ps->loadBytes / (1024.0 * ps->loadPages));
+ if (ps->loadPages > 0) {
+ printf(
+ "Get/Release pages:%d/%d, flushToDisk:%.2f Kb (%d Pages), loadFromDisk:%.2f Kb (%d Pages), avgPageSize:%.2f Kb\n",
+ ps->getPages, ps->releasePages, ps->flushBytes / 1024.0f, ps->flushPages, ps->loadBytes / 1024.0f,
+ ps->loadPages, ps->loadBytes / (1024.0 * ps->loadPages));
+ } else {
+ printf("no page loaded\n");
+ }
}
void clearDiskbasedBuf(SDiskbasedBuf* pBuf) {
- SArray** p = taosHashIterate(pBuf->groupSet, NULL);
- while (p) {
- size_t n = taosArrayGetSize(*p);
- for (int32_t i = 0; i < n; ++i) {
- SPageInfo* pi = taosArrayGetP(*p, i);
- taosMemoryFreeClear(pi->pData);
- taosMemoryFreeClear(pi);
- }
- taosArrayDestroy(*p);
- p = taosHashIterate(pBuf->groupSet, p);
+ size_t n = taosArrayGetSize(pBuf->pIdList);
+ for (int32_t i = 0; i < n; ++i) {
+ SPageInfo* pi = taosArrayGetP(pBuf->pIdList, i);
+ taosMemoryFreeClear(pi->pData);
+ taosMemoryFreeClear(pi);
}
+ taosArrayClear(pBuf->pIdList);
+
tdListEmpty(pBuf->lruList);
tdListEmpty(pBuf->freePgList);
taosArrayClear(pBuf->emptyDummyIdList);
taosArrayClear(pBuf->pFree);
- taosHashClear(pBuf->groupSet);
taosHashClear(pBuf->all);
pBuf->numOfPages = 0; // all pages are in buffer in the first place
diff --git a/tests/docs-examples-test/jdbc.sh b/tests/docs-examples-test/jdbc.sh
new file mode 100644
index 0000000000..d71085a403
--- /dev/null
+++ b/tests/docs-examples-test/jdbc.sh
@@ -0,0 +1,27 @@
+#!/bin/bash
+
+RED='\033[0;31m'
+GREEN='\033[0;32m'
+NC='\033[0m' # No Color
+
+pgrep taosd || taosd >> /dev/null 2>&1 &
+pgrep taosadapter || taosadapter >> /dev/null 2>&1 &
+cd ../../docs/examples/java
+
+mvn clean test > jdbc-out.log 2>&1
+tail -n 20 jdbc-out.log
+
+cases=`grep 'Tests run' jdbc-out.log | awk 'END{print $3}'`
+totalJDBCCases=`echo ${cases/%,}`
+failed=`grep 'Tests run' jdbc-out.log | awk 'END{print $5}'`
+JDBCFailed=`echo ${failed/%,}`
+error=`grep 'Tests run' jdbc-out.log | awk 'END{print $7}'`
+JDBCError=`echo ${error/%,}`
+
+totalJDBCFailed=`expr $JDBCFailed + $JDBCError`
+totalJDBCSuccess=`expr $totalJDBCCases - $totalJDBCFailed`
+
+if [ "$totalJDBCSuccess" -gt "0" ]; then
+ echo -e "\n${GREEN} ### Total $totalJDBCSuccess JDBC case(s) succeed! ### ${NC}"
+fi
+
+if [ "$totalJDBCFailed" -ne "0" ]; then
+ echo -e "\n${RED} ### Total $totalJDBCFailed JDBC case(s) failed! ### ${NC}"
+ exit 8
+fi
\ No newline at end of file
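The grep/awk pipeline above extracts the counts from Maven Surefire's summary line (`Tests run: N, Failures: N, Errors: N, Skipped: N`). The same extraction expressed in C with `sscanf`, assuming that summary format:

```c
#include <stdio.h>

// Parse a Maven Surefire summary line such as
//   "Tests run: 12, Failures: 1, Errors: 0, Skipped: 2"
// Returns 1 on success and fills the four counters.
static int parse_surefire_summary(const char *line, int *run, int *failures,
                                  int *errors, int *skipped) {
  return sscanf(line, "Tests run: %d, Failures: %d, Errors: %d, Skipped: %d",
                run, failures, errors, skipped) == 4;
}
```

The shell version achieves the same by taking fields 3, 5, and 7 of the line and stripping the trailing comma with `${var/%,}`.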
diff --git a/tests/script/tsim/query/crash_sql.sim b/tests/script/tsim/query/crash_sql.sim
index 169f2e7272..79a9165e66 100644
--- a/tests/script/tsim/query/crash_sql.sim
+++ b/tests/script/tsim/query/crash_sql.sim
@@ -76,7 +76,7 @@ sql insert into ct4 values ( '2022-05-21 01:01:01.000', NULL, NULL, NULL, NULL,
print ================ start query ======================
-print ================ SQL used to cause taosd or taos shell crash
+print ================ SQL used to cause taosd or TDengine CLI crash
sql_error select sum(c1) ,count(c1) from ct4 group by c1 having sum(c10) between 0 and 1 ;
#system sh/exec.sh -n dnode1 -s stop -x SIGINT
diff --git a/tests/system-test/2-query/sml.py b/tests/system-test/2-query/sml.py
index 6cfb9a1dad..5d74c568ce 100644
--- a/tests/system-test/2-query/sml.py
+++ b/tests/system-test/2-query/sml.py
@@ -85,6 +85,9 @@ class TDTestCase:
tdSql.query("select * from macylr")
tdSql.checkRows(2)
+
+ tdSql.query("desc macylr")
+ tdSql.checkRows(25)
return
def run(self):
diff --git a/tests/system-test/7-tmq/stbTagFilter-1ctb.py b/tests/system-test/7-tmq/stbTagFilter-1ctb.py
index 6a26d2ce1f..526ff7181e 100644
--- a/tests/system-test/7-tmq/stbTagFilter-1ctb.py
+++ b/tests/system-test/7-tmq/stbTagFilter-1ctb.py
@@ -250,14 +250,14 @@ class TDTestCase:
tdLog.printNoPrefix("=============================================")
tdLog.printNoPrefix("======== snapshot is 0: only consume from wal")
self.tmqCase1()
- self.tmqCase2()
+ # self.tmqCase2()
self.prepareTestEnv()
tdLog.printNoPrefix("====================================================================")
tdLog.printNoPrefix("======== snapshot is 1: firstly consume from tsbs, and then from wal")
self.snapshot = 1
self.tmqCase1()
- self.tmqCase2()
+ # self.tmqCase2()
def stop(self):
diff --git a/tests/system-test/7-tmq/tmqDropNtb-snapshot1.py b/tests/system-test/7-tmq/tmqDropNtb-snapshot1.py
index 20e363341f..4cb208b616 100644
--- a/tests/system-test/7-tmq/tmqDropNtb-snapshot1.py
+++ b/tests/system-test/7-tmq/tmqDropNtb-snapshot1.py
@@ -99,8 +99,8 @@ class TDTestCase:
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
- if not ((totalConsumeRows >= expectrowcnt * 3/4) and (totalConsumeRows < expectrowcnt)):
- tdLog.exit("tmq consume rows error with snapshot = 0!")
+ # if not ((totalConsumeRows >= expectrowcnt * 3/4) and (totalConsumeRows < expectrowcnt)):
+ # tdLog.exit("tmq consume rows error with snapshot = 0!")
tdLog.info("wait subscriptions exit ....")
tmqCom.waitSubscriptionExit(tdSql, topicFromDb)
@@ -192,8 +192,8 @@ class TDTestCase:
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
- if not ((totalConsumeRows >= expectrowcnt / 2 * (1 + 3/4)) and (totalConsumeRows < expectrowcnt)):
- tdLog.exit("tmq consume rows error with snapshot = 0!")
+ # if not ((totalConsumeRows >= expectrowcnt / 2 * (1 + 3/4)) and (totalConsumeRows < expectrowcnt)):
+ # tdLog.exit("tmq consume rows error with snapshot = 0!")
tdLog.info("wait subscriptions exit ....")
tmqCom.waitSubscriptionExit(tdSql, topicFromDb)
diff --git a/tests/system-test/7-tmq/tmqDropStbCtb.py b/tests/system-test/7-tmq/tmqDropStbCtb.py
index 992a128ac0..704811d083 100644
--- a/tests/system-test/7-tmq/tmqDropStbCtb.py
+++ b/tests/system-test/7-tmq/tmqDropStbCtb.py
@@ -155,8 +155,9 @@ class TDTestCase:
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
- if not ((totalConsumeRows > expectrowcnt / 2) and (totalConsumeRows < expectrowcnt)):
- tdLog.exit("tmq consume rows error with snapshot = 0!")
+ if self.snapshot == 0:
+ if not ((totalConsumeRows > expectrowcnt / 2) and (totalConsumeRows < expectrowcnt)):
+ tdLog.exit("tmq consume rows error with snapshot = 0!")
tdLog.info("wait subscriptions exit ....")
tmqCom.waitSubscriptionExit(tdSql, topicFromDb)
@@ -246,8 +247,9 @@ class TDTestCase:
tdLog.info("act consume rows: %d, expect consume rows: %d"%(totalConsumeRows, expectrowcnt))
- if not ((totalConsumeRows > expectrowcnt / 2) and (totalConsumeRows < expectrowcnt)):
- tdLog.exit("tmq consume rows error with snapshot = 0!")
+ if self.snapshot == 0:
+ if not ((totalConsumeRows > expectrowcnt / 2) and (totalConsumeRows < expectrowcnt)):
+ tdLog.exit("tmq consume rows error with snapshot = 0!")
tdLog.info("wait subscriptions exit ....")
tmqCom.waitSubscriptionExit(tdSql, topicFromDb)
diff --git a/tests/system-test/7-tmq/tmq_taosx.py b/tests/system-test/7-tmq/tmq_taosx.py
index cd13535684..d38e509d26 100644
--- a/tests/system-test/7-tmq/tmq_taosx.py
+++ b/tests/system-test/7-tmq/tmq_taosx.py
@@ -52,6 +52,148 @@ class TDTestCase:
tdSql.checkData(1, 1, 23)
tdSql.checkData(1, 4, None)
+ tdSql.query("select * from st1 order by ts")
+ tdSql.checkRows(8)
+ tdSql.checkData(0, 1, 1)
+ tdSql.checkData(1, 1, 3)
+ tdSql.checkData(4, 1, 4)
+ tdSql.checkData(6, 1, 23)
+
+ tdSql.checkData(0, 2, 2)
+ tdSql.checkData(1, 2, 4)
+ tdSql.checkData(4, 2, 3)
+ tdSql.checkData(6, 2, 32)
+
+ tdSql.checkData(0, 3, 'a')
+ tdSql.checkData(1, 3, 'b')
+ tdSql.checkData(4, 3, 'hwj')
+ tdSql.checkData(6, 3, 's21ds')
+
+ tdSql.checkData(0, 4, None)
+ tdSql.checkData(1, 4, None)
+ tdSql.checkData(5, 4, 940)
+ tdSql.checkData(6, 4, None)
+
+ tdSql.checkData(0, 5, 1000)
+ tdSql.checkData(1, 5, 2000)
+ tdSql.checkData(4, 5, 1000)
+ tdSql.checkData(6, 5, 5000)
+
+ tdSql.checkData(0, 6, 'ttt')
+ tdSql.checkData(1, 6, None)
+ tdSql.checkData(4, 6, 'ttt')
+ tdSql.checkData(6, 6, None)
+
+ tdSql.checkData(0, 7, True)
+ tdSql.checkData(1, 7, None)
+ tdSql.checkData(4, 7, True)
+ tdSql.checkData(6, 7, None)
+
+ tdSql.checkData(0, 8, None)
+ tdSql.checkData(1, 8, None)
+ tdSql.checkData(4, 8, None)
+ tdSql.checkData(6, 8, None)
+
+ tdSql.query("select * from ct1")
+ tdSql.checkRows(4)
+
+ tdSql.query("select * from ct2")
+ tdSql.checkRows(0)
+
+ tdSql.query("select * from ct0 order by c1")
+ tdSql.checkRows(2)
+ tdSql.checkData(0, 3, "a")
+ tdSql.checkData(1, 4, None)
+
+ tdSql.query("select * from n1 order by cc3 desc")
+ tdSql.checkRows(2)
+ tdSql.checkData(0, 1, "eeee")
+ tdSql.checkData(1, 2, 940)
+
+ tdSql.query("select * from jt order by i desc")
+ tdSql.checkRows(2)
+ tdSql.checkData(0, 1, 11)
+ tdSql.checkData(0, 2, None)
+ tdSql.checkData(1, 1, 1)
+ tdSql.checkData(1, 2, '{"k1":1,"k2":"hello"}')
+
+ tdSql.execute('drop topic if exists topic_ctb_column')
+ return
+
+ def checkFileContentSnapshot(self):
+ buildPath = tdCom.getBuildPath()
+ cfgPath = tdCom.getClientCfgPath()
+ cmdStr = '%s/build/bin/tmq_taosx_snapshot_ci -c %s'%(buildPath, cfgPath)
+ tdLog.info(cmdStr)
+ os.system(cmdStr)
+
+ srcFile = '%s/../log/tmq_taosx_tmp_snapshot.source'%(cfgPath)
+ dstFile = '%s/../log/tmq_taosx_tmp_snapshot.result'%(cfgPath)
+ tdLog.info("compare file: %s, %s"%(srcFile, dstFile))
+
+ consumeFile = open(srcFile, mode='r')
+ queryFile = open(dstFile, mode='r')
+
+ while True:
+ dst = queryFile.readline()
+ src = consumeFile.readline()
+
+ if dst:
+ if dst != src:
+                    tdLog.exit("compare error: %s != %s"%(src, dst))
+ else:
+ break
+
+ tdSql.execute('use db_taosx')
+ tdSql.query("select * from ct3 order by c1 desc")
+ tdSql.checkRows(2)
+ tdSql.checkData(0, 1, 51)
+ tdSql.checkData(0, 4, 940)
+ tdSql.checkData(1, 1, 23)
+ tdSql.checkData(1, 4, None)
+
+ tdSql.query("select * from st1 order by ts")
+ tdSql.checkRows(8)
+ tdSql.checkData(0, 1, 1)
+ tdSql.checkData(1, 1, 3)
+ tdSql.checkData(4, 1, 4)
+ tdSql.checkData(6, 1, 23)
+
+ tdSql.checkData(0, 2, 2)
+ tdSql.checkData(1, 2, 4)
+ tdSql.checkData(4, 2, 3)
+ tdSql.checkData(6, 2, 32)
+
+ tdSql.checkData(0, 3, 'a')
+ tdSql.checkData(1, 3, 'b')
+ tdSql.checkData(4, 3, 'hwj')
+ tdSql.checkData(6, 3, 's21ds')
+
+ tdSql.checkData(0, 4, None)
+ tdSql.checkData(1, 4, None)
+ tdSql.checkData(5, 4, 940)
+ tdSql.checkData(6, 4, None)
+
+ tdSql.checkData(0, 5, 1000)
+ tdSql.checkData(1, 5, 2000)
+ tdSql.checkData(4, 5, 1000)
+ tdSql.checkData(6, 5, 5000)
+
+ tdSql.checkData(0, 6, 'ttt')
+ tdSql.checkData(1, 6, None)
+ tdSql.checkData(4, 6, 'ttt')
+ tdSql.checkData(6, 6, None)
+
+ tdSql.checkData(0, 7, True)
+ tdSql.checkData(1, 7, None)
+ tdSql.checkData(4, 7, True)
+ tdSql.checkData(6, 7, None)
+
+ tdSql.checkData(0, 8, None)
+ tdSql.checkData(1, 8, None)
+ tdSql.checkData(4, 8, None)
+ tdSql.checkData(6, 8, None)
+
tdSql.query("select * from ct1")
tdSql.checkRows(4)
@@ -80,6 +222,7 @@ class TDTestCase:
def run(self):
tdSql.prepare()
self.checkFileContent()
+ self.checkFileContentSnapshot()
def stop(self):
tdSql.close()
diff --git a/tests/test/c/CMakeLists.txt b/tests/test/c/CMakeLists.txt
index 31331b5265..0fb80e69c2 100644
--- a/tests/test/c/CMakeLists.txt
+++ b/tests/test/c/CMakeLists.txt
@@ -2,6 +2,7 @@ add_executable(tmq_demo tmqDemo.c)
add_executable(tmq_sim tmqSim.c)
add_executable(create_table createTable.c)
add_executable(tmq_taosx_ci tmq_taosx_ci.c)
+add_executable(tmq_taosx_snapshot_ci tmq_taosx_snapshot_ci.c)
add_executable(sml_test sml_test.c)
target_link_libraries(
create_table
@@ -31,6 +32,13 @@ target_link_libraries(
PUBLIC common
PUBLIC os
)
+target_link_libraries(
+ tmq_taosx_snapshot_ci
+ PUBLIC taos_static
+ PUBLIC util
+ PUBLIC common
+ PUBLIC os
+)
target_link_libraries(
sml_test
diff --git a/tests/test/c/sml_test.c b/tests/test/c/sml_test.c
index 50249a5c56..18181d2073 100644
--- a/tests/test/c/sml_test.c
+++ b/tests/test/c/sml_test.c
@@ -1089,7 +1089,7 @@ int sml_add_tag_col_Test() {
if (code) return code;
const char *sql1[] = {
- "macylr,id=macylr_17875_1804,t0=f,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7=\"binaryTagValue\",t8=L\"ncharTagValue\",t11=127i8,t10=L\"ncharTagValue\" c0=f,c1=127i8,c2=32767i16,c3=2147483647i32,c4=9223372036854775807i64,c5=11.12345f32,c6=22.123456789f64,c7=\"binaryColValue\",c8=L\"ncharColValue\",c9=7u64,c11=L\"ncharColValue\",c10=f 1626006833639000000"
+ "macylr,id=macylr_17875_1804,t1=127i8,t2=32767i16,t3=2147483647i32,t4=9223372036854775807i64,t5=11.12345f32,t6=22.123456789f64,t7=\"binaryTagValue\",t8=L\"ncharTagValue\",t11=127i8,t10=L\"ncharTagValue\" c0=f,c1=127i8,c2=32767i16,c3=2147483647i32,c4=9223372036854775807i64,c5=11.12345f32,c6=22.123456789f64,c8=L\"ncharColValue\",c9=7u64,c11=L\"ncharColValue\",c10=f 1626006833639000000"
};
pRes = taos_schemaless_insert(taos, (char **)sql1, sizeof(sql1) / sizeof(sql1[0]), TSDB_SML_LINE_PROTOCOL, 0);
diff --git a/tests/test/c/tmq_taosx_ci.c b/tests/test/c/tmq_taosx_ci.c
index ece7ad4819..ee5af03f05 100644
--- a/tests/test/c/tmq_taosx_ci.c
+++ b/tests/test/c/tmq_taosx_ci.c
@@ -501,7 +501,8 @@ int main(int argc, char* argv[]) {
if(argc == 3 && strcmp(argv[1], "-c") == 0) {
strcpy(dir, argv[2]);
}else{
- strcpy(dir, "../../../sim/psim/cfg");
+// strcpy(dir, "../../../sim/psim/cfg");
+ strcpy(dir, "/var/log");
}
printf("env init\n");
diff --git a/tests/test/c/tmq_taosx_snapshot_ci.c b/tests/test/c/tmq_taosx_snapshot_ci.c
new file mode 100644
index 0000000000..e3a52f7cad
--- /dev/null
+++ b/tests/test/c/tmq_taosx_snapshot_ci.c
@@ -0,0 +1,512 @@
+/*
+ * Copyright (c) 2019 TAOS Data, Inc.
+ *
+ * This program is free software: you can use, redistribute, and/or modify
+ * it under the terms of the GNU Affero General Public License, version 3
+ * or later ("AGPL"), as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU Affero General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <assert.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <time.h>
+#include "taos.h"
+#include "types.h"
+
+static int running = 1;
+TdFilePtr g_fp = NULL;
+char dir[64]={0};
+
+static TAOS* use_db(){
+ TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
+ if (pConn == NULL) {
+ return NULL;
+ }
+
+ TAOS_RES* pRes = taos_query(pConn, "use db_taosx");
+ if (taos_errno(pRes) != 0) {
+ printf("error in use db_taosx, reason:%s\n", taos_errstr(pRes));
+ return NULL;
+ }
+ taos_free_result(pRes);
+ return pConn;
+}
+
+static void msg_process(TAOS_RES* msg) {
+ /*memset(buf, 0, 1024);*/
+ printf("-----------topic-------------: %s\n", tmq_get_topic_name(msg));
+ printf("db: %s\n", tmq_get_db_name(msg));
+ printf("vg: %d\n", tmq_get_vgroup_id(msg));
+ TAOS *pConn = use_db();
+ if (tmq_get_res_type(msg) == TMQ_RES_TABLE_META) {
+ char* result = tmq_get_json_meta(msg);
+ if (result) {
+ printf("meta result: %s\n", result);
+ }
+ taosFprintfFile(g_fp, result);
+ taosFprintfFile(g_fp, "\n");
+ tmq_free_json_meta(result);
+ }
+
+ tmq_raw_data raw = {0};
+ tmq_get_raw(msg, &raw);
+ int32_t ret = tmq_write_raw(pConn, raw);
+ printf("write raw data: %s\n", tmq_err2str(ret));
+
+// else{
+// while(1){
+// int numOfRows = 0;
+// void *pData = NULL;
+// taos_fetch_raw_block(msg, &numOfRows, &pData);
+// if(numOfRows == 0) break;
+// printf("write data: tbname:%s, numOfRows:%d\n", tmq_get_table_name(msg), numOfRows);
+// int ret = taos_write_raw_block(pConn, numOfRows, pData, tmq_get_table_name(msg));
+// printf("write raw data: %s\n", tmq_err2str(ret));
+// }
+// }
+
+ taos_close(pConn);
+}
+
+int32_t init_env() {
+ TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
+ if (pConn == NULL) {
+ return -1;
+ }
+
+ TAOS_RES* pRes = taos_query(pConn, "drop database if exists db_taosx");
+ if (taos_errno(pRes) != 0) {
+ printf("error in drop db_taosx, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create database if not exists db_taosx vgroups 1");
+ if (taos_errno(pRes) != 0) {
+ printf("error in create db_taosx, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "drop database if exists abc1");
+ if (taos_errno(pRes) != 0) {
+ printf("error in drop db, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create database if not exists abc1 vgroups 1");
+ if (taos_errno(pRes) != 0) {
+ printf("error in create db, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "use abc1");
+ if (taos_errno(pRes) != 0) {
+ printf("error in use db, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn,
+ "create stable if not exists st1 (ts timestamp, c1 int, c2 float, c3 binary(16)) tags(t1 int, t3 "
+ "nchar(8), t4 bool)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create super table st1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table if not exists ct0 using st1 tags(1000, \"ttt\", true)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create child table ct0, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into ct0 values(1626006833600, 1, 2, 'a')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ct0, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table if not exists ct1 using st1(t1) tags(2000)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create child table ct1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table if not exists ct2 using st1(t1) tags(NULL)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create child table ct2, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into ct1 values(1626006833600, 3, 4, 'b')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ct1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table if not exists ct3 using st1(t1) tags(3000)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create child table ct3, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into ct3 values(1626006833600, 5, 6, 'c') ct1 values(1626006833601, 2, 3, 'sds') (1626006833602, 4, 5, 'ddd') ct0 values(1626006833602, 4, 3, 'hwj') ct1 values(now+5s, 23, 32, 's21ds')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ct3, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "alter table st1 add column c4 bigint");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to alter super table st1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "alter table st1 modify column c3 binary(64)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to alter super table st1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into ct3 values(1626006833605, 53, 63, 'cffffffffffffffffffffffffffff', 8989898899999) (1626006833609, 51, 62, 'c333', 940)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ct3, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into ct3 select * from ct1");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into ct3, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "alter table st1 add tag t2 binary(64)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to alter super table st1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "alter table ct3 set tag t1=5000");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to alter child table ct3, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "delete from abc1.ct3 where ts < 1626006833606");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to delete from ct3, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table if not exists n1(ts timestamp, c1 int, c2 nchar(4))");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create normal table n1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "alter table n1 add column c3 bigint");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to alter normal table n1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "alter table n1 modify column c2 nchar(8)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to alter normal table n1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "alter table n1 rename column c3 cc3");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to alter normal table n1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "alter table n1 comment 'hello'");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to alter normal table n1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "alter table n1 drop column c1");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to alter normal table n1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into n1 values(now, 'eeee', 8989898899999) (now+9s, 'c333', 940)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into n1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table jt(ts timestamp, i int) tags(t json)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create super table jt, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table jt1 using jt tags('{\"k1\":1, \"k2\":\"hello\"}')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create child table jt1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create table jt2 using jt tags('')");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create child table jt2, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into jt1 values(now, 1)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into jt1, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "insert into jt2 values(now, 11)");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to insert into jt2, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ taos_close(pConn);
+ return 0;
+}
+
+int32_t create_topic() {
+ printf("create topic\n");
+ TAOS_RES* pRes;
+ TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
+ if (pConn == NULL) {
+ return -1;
+ }
+
+ pRes = taos_query(pConn, "use abc1");
+ if (taos_errno(pRes) != 0) {
+ printf("error in use db, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ pRes = taos_query(pConn, "create topic topic_ctb_column with meta as database abc1");
+ if (taos_errno(pRes) != 0) {
+ printf("failed to create topic topic_ctb_column, reason:%s\n", taos_errstr(pRes));
+ return -1;
+ }
+ taos_free_result(pRes);
+
+ taos_close(pConn);
+ return 0;
+}
+
+void tmq_commit_cb_print(tmq_t* tmq, int32_t code, void* param) {
+ printf("commit %d tmq %p param %p\n", code, tmq, param);
+}
+
+tmq_t* build_consumer() {
+#if 0
+ TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
+ assert(pConn != NULL);
+
+ TAOS_RES* pRes = taos_query(pConn, "use abc1");
+ if (taos_errno(pRes) != 0) {
+ printf("error in use db, reason:%s\n", taos_errstr(pRes));
+ }
+ taos_free_result(pRes);
+#endif
+
+ tmq_conf_t* conf = tmq_conf_new();
+ tmq_conf_set(conf, "group.id", "tg2");
+ tmq_conf_set(conf, "client.id", "my app 1");
+ tmq_conf_set(conf, "td.connect.user", "root");
+ tmq_conf_set(conf, "td.connect.pass", "taosdata");
+ tmq_conf_set(conf, "msg.with.table.name", "true");
+ tmq_conf_set(conf, "enable.auto.commit", "true");
+ tmq_conf_set(conf, "enable.heartbeat.background", "true");
+ tmq_conf_set(conf, "experimental.snapshot.enable", "true");
+
+ tmq_conf_set_auto_commit_cb(conf, tmq_commit_cb_print, NULL);
+ tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
+ assert(tmq);
+ tmq_conf_destroy(conf);
+ return tmq;
+}
+
+tmq_list_t* build_topic_list() {
+ tmq_list_t* topic_list = tmq_list_new();
+ tmq_list_append(topic_list, "topic_ctb_column");
+ /*tmq_list_append(topic_list, "tmq_test_db_multi_insert_topic");*/
+ return topic_list;
+}
+
+void basic_consume_loop(tmq_t* tmq, tmq_list_t* topics) {
+ int32_t code;
+
+ if ((code = tmq_subscribe(tmq, topics))) {
+ fprintf(stderr, "%% Failed to start consuming topics: %s\n", tmq_err2str(code));
+ printf("subscribe err\n");
+ return;
+ }
+ int32_t cnt = 0;
+ while (running) {
+ TAOS_RES* tmqmessage = tmq_consumer_poll(tmq, 1000);
+ if (tmqmessage) {
+ cnt++;
+ msg_process(tmqmessage);
+ /*if (cnt >= 2) break;*/
+ /*printf("get data\n");*/
+ taos_free_result(tmqmessage);
+ /*} else {*/
+ /*break;*/
+ /*tmq_commit_sync(tmq, NULL);*/
+ } else {
+ break;
+ }
+ }
+
+ code = tmq_consumer_close(tmq);
+ if (code)
+ fprintf(stderr, "%% Failed to close consumer: %s\n", tmq_err2str(code));
+ else
+ fprintf(stderr, "%% Consumer closed\n");
+}
+
+void sync_consume_loop(tmq_t* tmq, tmq_list_t* topics) {
+ static const int MIN_COMMIT_COUNT = 1;
+
+ int msg_count = 0;
+ int32_t code;
+
+ if ((code = tmq_subscribe(tmq, topics))) {
+ fprintf(stderr, "%% Failed to start consuming topics: %s\n", tmq_err2str(code));
+ return;
+ }
+
+ tmq_list_t* subList = NULL;
+ tmq_subscription(tmq, &subList);
+ char** subTopics = tmq_list_to_c_array(subList);
+ int32_t sz = tmq_list_get_size(subList);
+ printf("subscribed topics: ");
+ for (int32_t i = 0; i < sz; i++) {
+ printf("%s, ", subTopics[i]);
+ }
+ printf("\n");
+ tmq_list_destroy(subList);
+
+ while (running) {
+ TAOS_RES* tmqmessage = tmq_consumer_poll(tmq, 1000);
+ if (tmqmessage) {
+ msg_process(tmqmessage);
+ taos_free_result(tmqmessage);
+
+ /*tmq_commit_sync(tmq, NULL);*/
+ /*if ((++msg_count % MIN_COMMIT_COUNT) == 0) tmq_commit(tmq, NULL, 0);*/
+ }
+ }
+
+ code = tmq_consumer_close(tmq);
+ if (code)
+ fprintf(stderr, "%% Failed to close consumer: %s\n", tmq_err2str(code));
+ else
+ fprintf(stderr, "%% Consumer closed\n");
+}
+
+void initLogFile() {
+ char f1[256] = {0};
+ char f2[256] = {0};
+
+ sprintf(f1, "%s/../log/tmq_taosx_tmp_snapshot.source", dir);
+ sprintf(f2, "%s/../log/tmq_taosx_tmp_snapshot.result", dir);
+ TdFilePtr pFile = taosOpenFile(f1, TD_FILE_TEXT | TD_FILE_TRUNC | TD_FILE_STREAM);
+ if (NULL == pFile) {
+ fprintf(stderr, "Failed to open %s to save results\n", f1);
+ exit(-1);
+ }
+ g_fp = pFile;
+
+ TdFilePtr pFile2 = taosOpenFile(f2, TD_FILE_TEXT | TD_FILE_TRUNC | TD_FILE_STREAM);
+ if (NULL == pFile2) {
+ fprintf(stderr, "Failed to open %s to save results\n", f2);
+ exit(-1);
+ }
+ char *result[] = {
+ "{\"type\":\"create\",\"tableName\":\"st1\",\"tableType\":\"super\",\"columns\":[{\"name\":\"ts\",\"type\":9},{\"name\":\"c1\",\"type\":4},{\"name\":\"c2\",\"type\":6},{\"name\":\"c3\",\"type\":8,\"length\":64},{\"name\":\"c4\",\"type\":5}],\"tags\":[{\"name\":\"t1\",\"type\":4},{\"name\":\"t3\",\"type\":10,\"length\":8},{\"name\":\"t4\",\"type\":1},{\"name\":\"t2\",\"type\":8,\"length\":64}]}",
+ "{\"type\":\"create\",\"tableName\":\"ct0\",\"tableType\":\"child\",\"using\":\"st1\",\"tagNum\":4,\"tags\":[{\"name\":\"t1\",\"type\":4,\"value\":1000},{\"name\":\"t3\",\"type\":10,\"value\":\"\\\"ttt\\\"\"},{\"name\":\"t4\",\"type\":1,\"value\":1}]}",
+ "{\"type\":\"create\",\"tableName\":\"ct1\",\"tableType\":\"child\",\"using\":\"st1\",\"tagNum\":4,\"tags\":[{\"name\":\"t1\",\"type\":4,\"value\":2000}]}",
+ "{\"type\":\"create\",\"tableName\":\"ct2\",\"tableType\":\"child\",\"using\":\"st1\",\"tagNum\":4,\"tags\":[]}",
+ "{\"type\":\"create\",\"tableName\":\"ct3\",\"tableType\":\"child\",\"using\":\"st1\",\"tagNum\":4,\"tags\":[{\"name\":\"t1\",\"type\":4,\"value\":5000}]}",
+ "{\"type\":\"create\",\"tableName\":\"n1\",\"tableType\":\"normal\",\"columns\":[{\"name\":\"ts\",\"type\":9},{\"name\":\"c2\",\"type\":10,\"length\":8},{\"name\":\"cc3\",\"type\":5}],\"tags\":[]}",
+ "{\"type\":\"create\",\"tableName\":\"jt\",\"tableType\":\"super\",\"columns\":[{\"name\":\"ts\",\"type\":9},{\"name\":\"i\",\"type\":4}],\"tags\":[{\"name\":\"t\",\"type\":15}]}",
+ "{\"type\":\"create\",\"tableName\":\"jt1\",\"tableType\":\"child\",\"using\":\"jt\",\"tagNum\":1,\"tags\":[{\"name\":\"t\",\"type\":15,\"value\":\"{\\\"k1\\\":1,\\\"k2\\\":\\\"hello\\\"}\"}]}",
+ "{\"type\":\"create\",\"tableName\":\"jt2\",\"tableType\":\"child\",\"using\":\"jt\",\"tagNum\":1,\"tags\":[]}",
+ };
+
+ for (int i = 0; i < (int)(sizeof(result) / sizeof(result[0])); i++) {
+ taosFprintfFile(pFile2, result[i]);
+ taosFprintfFile(pFile2, "\n");
+ }
+ taosCloseFile(&pFile2);
+}
+
+int main(int argc, char* argv[]) {
+ if (argc == 3 && strcmp(argv[1], "-c") == 0) {
+ strcpy(dir, argv[2]);
+ } else {
+// strcpy(dir, "../../../sim/psim/cfg");
+ strcpy(dir, "/var/log");
+ }
+
+ printf("env init\n");
+ initLogFile();
+
+ if (init_env() < 0) {
+ return -1;
+ }
+ create_topic();
+
+ tmq_t* tmq = build_consumer();
+ tmq_list_t* topic_list = build_topic_list();
+ basic_consume_loop(tmq, topic_list);
+ /*sync_consume_loop(tmq, topic_list);*/
+ tmq_list_destroy(topic_list);
+ taosCloseFile(&g_fp);
+ return 0;
+}