other:merge 3.0

This commit is contained in:
Haojun Liao 2022-09-05 10:24:03 +08:00
commit 2224df24f7
336 changed files with 16063 additions and 12131 deletions

View File

@ -88,4 +88,3 @@ Standard: Auto
TabWidth: 8
UseTab: Never
...

1
.gitattributes vendored Normal file
View File

@ -0,0 +1 @@
*.py linguist-detectable=false

1
.gitignore vendored
View File

@ -1,5 +1,6 @@
build/
compile_commands.json
CMakeSettings.json
.cache
.ycm_extra_conf.py
.tasks

View File

@ -34,7 +34,7 @@ endif(${BUILD_TEST})
add_subdirectory(source)
add_subdirectory(tools)
add_subdirectory(tests)
add_subdirectory(utils)
add_subdirectory(examples/c)
# docs

View File

@ -46,7 +46,7 @@ IF (${CMAKE_SYSTEM_NAME} MATCHES "Linux" OR ${CMAKE_SYSTEM_NAME} MATCHES "Darwin
MESSAGE("Current system processor is ${CMAKE_SYSTEM_PROCESSOR}.")
IF (${CMAKE_SYSTEM_PROCESSOR} MATCHES "arm64" OR ${CMAKE_SYSTEM_PROCESSOR} MATCHES "x86_64")
MESSAGE("Current system arch is arm64")
MESSAGE("Current system arch is 64")
SET(TD_DARWIN_64 TRUE)
ADD_DEFINITIONS("-D_TD_DARWIN_64")
ENDIF ()

View File

@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
GIT_TAG aa45ad4
GIT_TAG a4d9b92
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE

View File

@ -2,7 +2,7 @@
# taosws-rs
ExternalProject_Add(taosws-rs
GIT_REPOSITORY https://github.com/taosdata/taos-connector-rust.git
GIT_TAG 7a54d21
GIT_TAG 6fc47d7
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosws-rs"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE

View File

@ -4,25 +4,24 @@ sidebar_label: Documentation Home
slug: /
---
TDengine is an [open source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud native](https://tdengine.com/tdengine/cloud-native-time-series-database/) time-series database optimized for Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic, as well as novel concepts, in TDengine, and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design and other topics. Its written mainly for architects, developers and system administrators.
TDengine is an [open-source](https://tdengine.com/tdengine/open-source-time-series-database/), [cloud-native](https://tdengine.com/tdengine/cloud-native-time-series-database/) time-series database optimized for the Internet of Things (IoT), Connected Cars, and Industrial IoT. It enables efficient, real-time data ingestion, processing, and monitoring of TB and even PB scale data per day, generated by billions of sensors and data collectors. This document is the TDengine user manual. It introduces the basic as well as novel concepts in TDengine and also talks in detail about installation, features, SQL, APIs, operation, maintenance, kernel design, and other topics. It's written mainly for architects, developers, and system administrators.
To get an overview of TDengine, such as a feature list, benchmarks, and competitive advantages, please browse through the [Introduction](./intro) section.
TDengine greatly improves the efficiency of data ingestion, querying and storage by exploiting the characteristics of time series data, introducing the novel concepts of "one table for one data collection point" and "super table", and designing an innovative storage engine. To understand the new concepts in TDengine and make full use of the features and capabilities of TDengine, please read [Concepts](./concept) thoroughly.
TDengine greatly improves the efficiency of data ingestion, querying, and storage by exploiting the characteristics of time series data, introducing the novel concepts of "one table for one data collection point" and "super table", and designing an innovative storage engine. To understand the new concepts in TDengine and make full use of the features and capabilities of TDengine, please read [Concepts](./concept) thoroughly.
If you are a developer, please read the [Developer Guide](./develop) carefully. This section introduces the database connection, data modeling, data ingestion, query, continuous query, cache, data subscription, user-defined functions, and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can just copy and paste the sample code, make a few changes to accommodate your application, and it will work.
If you are a developer, please read the [Developer Guide](./develop) carefully. This section introduces the database connection, data modeling, data ingestion, query, continuous query, cache, data subscription, user-defined functions, and other functionality in detail. Sample code is provided for a variety of programming languages. In most cases, you can just copy and paste the sample code, make a few changes to accommodate your application, and it will work.
We live in the era of big data, and scale-up is unable to meet the growing needs of business. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but also decided to open source this important feature. To learn how to deploy, manage and maintain a TDengine cluster please refer to ["cluster deployment"](../deployment).
We live in the era of big data, and scale-up is unable to meet the growing needs of the business. Any modern data system must have the ability to scale out, and clustering has become an indispensable feature of big data systems. Not only did the TDengine team develop the cluster feature, but it also decided to open source this important feature. To learn how to deploy, manage, and maintain a TDengine cluster, please refer to [Cluster Deployment](../deployment).
TDengine uses ubiquitious SQL as its query language, which greatly reduces learning costs and migration costs. In addition to the standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll up, interpolation and time weighted average, among many others. The ["SQL Reference"](./taos-sql) chapter describes the SQL syntax in detail, and lists the various supported commands and functions.
TDengine uses ubiquitous SQL as its query language, which greatly reduces learning costs and migration costs. In addition to the standard SQL, TDengine has extensions to better support time series data analysis. These extensions include functions such as roll-up, interpolation, and time-weighted average, among many others. The [SQL Reference](./taos-sql) chapter describes the SQL syntax in detail and lists the various supported commands and functions.
If you are a system administrator who cares about installation, upgrade, fault tolerance, disaster recovery, data import, data export, system configuration, how to monitor whether TDengine is running healthily, and how to improve system performance, please refer to, and thoroughly read the ["Administration"](./operation) section.
If you are a system administrator who cares about installation, upgrade, fault tolerance, disaster recovery, data import, data export, system configuration, how to monitor whether TDengine is running healthily, and how to improve system performance, please refer to and thoroughly read the [Administration](./operation) section.
If you want to know more about TDengine tools, the REST API, and connectors for various programming languages, please see the ["Reference"](./reference) chapter.
If you want to know more about TDengine tools, the REST API, and connectors for various programming languages, please see the [Reference](./reference) chapter.
If you are very interested in the internal design of TDengine, please read the chapter ["Inside TDengine](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study TDengine code or even contribute code, please read this chapter carefully.
If you are very interested in the internal design of TDengine, please read the chapter [Inside TDengine](./tdinternal), which introduces the cluster design, data partitioning, sharding, writing, and reading processes in detail. If you want to study TDengine code or even contribute code, please read this chapter carefully.
TDengine is an open source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation, or see parts where more clarity or elaboration is needed, please click "Edit this page" at the bottom of each page to edit it directly.
TDengine is an open-source database, and we would love for you to be a part of TDengine. If you find any errors in the documentation or see parts where more clarity or elaboration is needed, please click "Edit this page" at the bottom of each page to edit it directly.
Together, we make a difference.
Together, we make a difference!

View File

@ -12,34 +12,34 @@ This section introduces the major features, competitive advantages, typical use-
The major features are listed below:
1. Insert data
* supports [using SQL to insert](../develop/insert-data/sql-writing).
* supports [schemaless writing](../reference/schemaless/) just like NoSQL databases. It also supports standard protocols like [InfluxDB LINE](../develop/insert-data/influxdb-line)[OpenTSDB Telnet](../develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](../develop/insert-data/opentsdb-json) among others.
* supports seamless integration with third-party tools like [Telegraf](../third-party/telegraf/), [Prometheus](../third-party/prometheus/), [collectd](../third-party/collectd/), [StatsD](../third-party/statsd/), [TCollector](../third-party/tcollector/) and [icinga2/](../third-party/icinga2/), they can write data into TDengine with simple configuration and without a single line of code.
- Supports [using SQL to insert](../develop/insert-data/sql-writing).
- Supports [schemaless writing](../reference/schemaless/) just like NoSQL databases. It also supports standard protocols like [InfluxDB Line](../develop/insert-data/influxdb-line), [OpenTSDB Telnet](../develop/insert-data/opentsdb-telnet), and [OpenTSDB JSON](../develop/insert-data/opentsdb-json), among others.
- Supports seamless integration with third-party tools like [Telegraf](../third-party/telegraf/), [Prometheus](../third-party/prometheus/), [collectd](../third-party/collectd/), [StatsD](../third-party/statsd/), [TCollector](../third-party/tcollector/), [EMQX](../third-party/emq-broker), [HiveMQ](../third-party/hive-mq-broker), and [Icinga2](../third-party/icinga2/); these tools can write data into TDengine with simple configuration and without a single line of code.
2. Query data
* supports standard [SQL](../taos-sql/), including nested query.
* supports [time series specific functions](../taos-sql/function/#time-series-extensions) and [time series specific queries](../taos-sql/distinguished), like downsampling, interpolation, cumulated sum, time weighted average, state window, session window and many others.
* supports [user defined functions](../taos-sql/udf).
- Supports standard [SQL](../taos-sql/), including nested query.
- Supports [time series specific functions](../taos-sql/function/#time-series-extensions) and [time series specific queries](../taos-sql/distinguished), like downsampling, interpolation, cumulative sum, time-weighted average, state window, session window, and many others.
- Supports [User Defined Functions (UDF)](../taos-sql/udf).
3. [Caching](../develop/cache/): TDengine always saves the last data point in cache, so Redis is not needed for time-series data processing.
4. [Stream Processing](../develop/stream/): not only is the continuous query is supported, but TDengine also supports even driven stream processing, so Flink or spark is not needed for time-series daata processing.
5. [Data Dubscription](../develop/tmq/): application can subscribe a table or a set of tables. API is the same as Kafka, but you can specify filter conditions.
4. [Stream Processing](../develop/stream/): Not only is continuous query supported, but TDengine also supports event-driven stream processing, so Flink or Spark is not needed for time-series data processing.
5. [Data Subscription](../develop/tmq/): Applications can subscribe to a table or a set of tables. The API is the same as Kafka's, but you can specify filter conditions.
6. Visualization
* supports seamless integration with [Grafana](../third-party/grafana/) for visualization.
* supports seamless integration with Google Data Studio.
- Supports seamless integration with [Grafana](../third-party/grafana/) for visualization.
- Supports seamless integration with Google Data Studio.
7. Cluster
* supports [cluster](../deployment/) with the capability of increasing processing power by adding more nodes.
* supports [deployment on Kubernetes](../deployment/k8s/)
* supports high availability via data replication.
- Supports [cluster](../deployment/) with the capability of increasing processing power by adding more nodes.
- Supports [deployment on Kubernetes](../deployment/k8s/).
- Supports high availability via data replication.
8. Administration
* provides [monitoring](../operation/monitor) on running instances of TDengine.
* provides many ways to [import](../operation/import) and [export](../operation/export) data.
- Provides [monitoring](../operation/monitor) on running instances of TDengine.
- Provides many ways to [import](../operation/import) and [export](../operation/export) data.
9. Tools
* provides an interactive [command-line interface](../reference/taos-shell) for management, maintenance and ad-hoc queries.
* provides a tool [taosBenchmark](../reference/taosbenchmark/) for testing the performance of TDengine.
- Provides an interactive [Command-line Interface (CLI)](../reference/taos-shell) for management, maintenance and ad-hoc queries.
- Provides a tool [taosBenchmark](../reference/taosbenchmark/) for testing the performance of TDengine.
10. Programming
* provides [connectors](../reference/connector/) for [C/C++](../reference/connector/cpp), [Java](../reference/connector/java), [Python](../reference/connector/python), [Go](../reference/connector/go), [Rust](../reference/connector/rust), [Node.js](../reference/connector/node) and other programming languages.
* provides a [REST API](../reference/rest-api/).
- Provides [connectors](../reference/connector/) for [C/C++](../reference/connector/cpp), [Java](../reference/connector/java), [Python](../reference/connector/python), [Go](../reference/connector/go), [Rust](../reference/connector/rust), [Node.js](../reference/connector/node) and other programming languages.
- Provides a [REST API](../reference/rest-api/).
For more details on features, please read through the entire documentation.
For more details on features, please read through the entire documentation.
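To make the list above concrete, here is a minimal, hedged SQL sketch of features 1 and 2 together — schema creation, ingestion, and a time-series query. The `meters` schema is illustrative only; it is not defined on this page.

```sql
-- Create a database and a supertable: "one table for one data collection point".
CREATE DATABASE IF NOT EXISTS power;
CREATE STABLE IF NOT EXISTS power.meters
  (ts TIMESTAMP, voltage FLOAT, phase FLOAT)
  TAGS (location BINARY(64), group_id INT);

-- Insert into a subtable that is created on the fly for one device.
INSERT INTO power.d1001 USING power.meters
  TAGS ('California.SanFrancisco', 2)
  VALUES (NOW, 220.1, 0.31);

-- A time-series query: per-table 1-minute downsampling with a 30-second slide.
SELECT _wstart, COUNT(*), AVG(voltage) FROM power.meters
  PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
```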
## Competitive Advantages
@ -49,23 +49,31 @@ By making full use of [characteristics of time series data](https://tdengine.com
- **[Simplified Solution](https://tdengine.com/tdengine/simplified-time-series-data-solution/)**: Through built-in caching, stream processing and data subscription features, TDengine provides a simplified solution for time-series data processing. It reduces system design complexity and operation costs significantly.
- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for kubernetes deployment and full observability, TDengine is a cloud native Time-Series Database and can be deployed on public, private or hybrid clouds.
- **[Cloud Native](https://tdengine.com/tdengine/cloud-native-time-series-database/)**: Through native distributed design, sharding and partitioning, separation of compute and storage, RAFT, support for Kubernetes deployment and full observability, TDengine is a cloud native Time-series Database and can be deployed on public, private or hybrid clouds.
- **[Ease of Use](https://tdengine.com/tdengine/easy-time-series-data-platform/)**: For administrators, TDengine significantly reduces the effort to deploy and maintain. For developers, it provides a simple interface, simplified solution and seamless integrations for third party tools. For data users, it gives easy data access.
- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
- **[Easy Data Analytics](https://tdengine.com/tdengine/time-series-data-analytics-made-easy/)**: Through super tables, storage and compute separation, data partitioning by time interval, pre-computation and other means, TDengine makes it easy to explore, format, and get access to data in a highly efficient way.
- **[Open Source](https://tdengine.com/tdengine/open-source-time-series-database/)**: TDengine's core modules, including its cluster feature, are all available under open source licenses. It has gathered over 19k stars on GitHub, and there is an active developer community and over 140k running instances worldwide.
With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced. 1: With its superior performance, the computing and storage resources are reduced significantly2: With SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly3: With its simplified solution and nearly zero management, the operation and maintenance costs are reduced significantly.
With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced.
1. With its superior performance, the computing and storage resources are reduced significantly.
2. With SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly.
3. With its simplified solution and nearly zero management, the operation and maintenance costs are reduced significantly.
## Technical Ecosystem
This is how TDengine would be situated in a typical time-series data processing platform:
<figure>
![TDengine Database Technical Ecosystem](eco_system.webp)
<center>Figure 1. TDengine Technical Ecosystem</center>
<center><figcaption>Figure 1. TDengine Technical Ecosystem</figcaption></center>
</figure>
On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
@ -75,42 +83,42 @@ As a high-performance, scalable and SQL supported time-series database, TDengine
### Characteristics and Requirements of Data Sources
| **Data Source Characteristics and Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
| -------------------------------------------------------- | ------------------ | ----------------------- | ------------------- | :----------------------------------------------------------- |
| A massive amount of total data | | | √ | TDengine provides excellent scale-out functions in terms of capacity, and has a storage structure with matching high compression ratio to achieve the best storage efficiency in the industry.|
| Data input velocity is extremely high | | | √ | TDengine's performance is much higher than that of other similar products. It can continuously process larger amounts of input data in the same hardware environment, and provides a performance evaluation tool that can easily run in the user environment. |
| A huge number of data sources | | | √ | TDengine is optimized specifically for a huge number of data sources. It is especially suitable for efficiently ingesting, writing and querying data from billions of data sources. |
| **Data Source Characteristics and Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
| ------------------------------------------------ | ------------------ | ----------------------- | ------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| A massive amount of total data | | | √ | TDengine provides excellent scale-out functions in terms of capacity, and has a storage structure with matching high compression ratio to achieve the best storage efficiency in the industry. |
| Data input velocity is extremely high | | | √ | TDengine's performance is much higher than that of other similar products. It can continuously process larger amounts of input data in the same hardware environment, and provides a performance evaluation tool that can easily run in the user environment. |
| A huge number of data sources | | | √ | TDengine is optimized specifically for a huge number of data sources. It is especially suitable for efficiently ingesting, writing and querying data from billions of data sources. |
### System Architecture Requirements
| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
| **System Architecture Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
| ----------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| A simple and reliable system architecture | | | √ | TDengine's system architecture is very simple and reliable, with its own message queue, cache, stream computing, monitoring and other functions. There is no need to integrate any additional third-party products. |
| Fault-tolerance and high-reliability | | | √ | TDengine has cluster functions to automatically provide high-reliability and high-availability functions such as fault tolerance and disaster recovery. |
| Standardization support | | | √ | TDengine supports standard SQL and provides SQL extensions for time-series data analysis. |
| Fault-tolerance and high-reliability | | | √ | TDengine has cluster functions to automatically provide high-reliability and high-availability functions such as fault tolerance and disaster recovery. |
| Standardization support | | | √ | TDengine supports standard SQL and provides SQL extensions for time-series data analysis. |
### System Function Requirements
| **System Function Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
| Complete data processing algorithms built-in | | √ | | While TDengine implements various general data processing algorithms, industry specific algorithms and special types of processing will need to be implemented at the application level.|
| A large number of crosstab queries | | √ | | This type of processing is better handled by general purpose relational database systems but TDengine can work in concert with relational database systems to provide more complete solutions. |
| **System Function Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
| -------------------------------------------- | ------------------ | ----------------------- | ------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Complete data processing algorithms built-in | | √ | | While TDengine implements various general data processing algorithms, industry specific algorithms and special types of processing will need to be implemented at the application level. |
| A large number of crosstab queries | | √ | | This type of processing is better handled by general purpose relational database systems but TDengine can work in concert with relational database systems to provide more complete solutions. |
### System Performance Requirements
| **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
| Very large total processing capacity | | | √ | TDengines cluster functions can easily improve processing capacity via multi-server coordination. |
| Extremely high-speed data processing | | | √ | TDengines storage and data processing are optimized for IoT, and can process data many times faster than similar products.|
| Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |
| **System Performance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | --------------------------------------------------------------------------------------------------------------------------- |
| Very large total processing capacity              |                    |                         | √                   | TDengine's cluster functions can easily improve processing capacity via multi-server coordination.                          |
| Extremely high-speed data processing              |                    |                         | √                   | TDengine's storage and data processing are optimized for IoT, and can process data many times faster than similar products. |
| Extremely fast processing of high resolution data | | | √ | TDengine has achieved the same or better performance than other relational and NoSQL data processing systems. |
### System Maintenance Requirements
| **System Maintenance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
| ------------------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------ |
| Native high-reliability | | | √ | TDengine has a very robust, reliable and easily configurable system architecture to simplify routine operation. Human errors and accidents are eliminated to the greatest extent, with a streamlined experience for operators. |
| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the TDengine CLI for ad hoc queries makes maintenance simpler, allows reuse and reduces learning costs.|
| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine.|
| **System Maintenance Requirements** | **Not Applicable** | **Might Be Applicable** | **Very Applicable** | **Description** |
| --------------------------------------- | ------------------ | ----------------------- | ------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| Native high-reliability | | | √ | TDengine has a very robust, reliable and easily configurable system architecture to simplify routine operation. Human errors and accidents are eliminated to the greatest extent, with a streamlined experience for operators. |
| Minimize learning and maintenance costs | | | √ | In addition to being easily configurable, standard SQL support and the TDengine CLI for ad hoc queries make maintenance simpler, allow reuse, and reduce learning costs. |
| Abundant talent supply | √ | | | Given the above, and given the extensive training and professional services provided by TDengine, it is easy to migrate from existing solutions or create a new and lasting solution based on TDengine. |
## Comparison with Other Databases

View File

@ -1,6 +1,7 @@
---
title: Connect
description: "This document explains how to establish connections to TDengine and how to install and use TDengine connectors."
sidebar_label: Connect
title: Connect to TDengine
description: "How to establish connections to TDengine and how to install and use TDengine connectors."
---
import Tabs from "@theme/Tabs";

View File

@ -16,7 +16,7 @@ To achieve high performance writing, there are a few aspects to consider. In the
From the perspective of the application program, you need to consider:
1. The data size of each single write, also known as batch size. Generally speaking, higher batch size generates better writing performance. However, once the batch size is over a specific value, you will not get any additional benefit anymore. When using SQL to write into TDengine, it's better to put as much as possible data in single SQL. The maximum SQL length supported by TDengine is 1,048,576 bytes, i.e. 1 MB. It can be configured by parameter `maxSQLLength` on client side, and the default value is 65,480.
1. The data size of each single write, also known as batch size. Generally speaking, a larger batch size yields better write performance. However, once the batch size exceeds a certain value, you no longer gain any additional benefit. When using SQL to write into TDengine, it's better to put as much data as possible in a single SQL statement (see the sketch below). The maximum SQL length supported by TDengine is 1,048,576 bytes, i.e. 1 MB.
2. The number of concurrent connections. Normally, more connections produce better results. However, once the number of connections exceeds the processing ability of the server side, performance may degrade.
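As a hedged sketch of the batch-size point, the statement below carries many rows (across several subtables) in one request; the subtables `d1001`/`d1002` and their `(ts, voltage, phase)` columns are assumptions, not defined in this hunk:

```sql
-- Batching rows into a single SQL statement amortizes parsing and network
-- overhead, up to the 1 MB statement-length limit.
INSERT INTO
  d1001 VALUES (NOW-2s, 219.8, 0.31) (NOW-1s, 220.2, 0.32)
  d1002 VALUES (NOW-2s, 221.0, 0.30) (NOW-1s, 220.7, 0.31);
```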
@ -46,12 +46,9 @@ If the data source is Kafka, then the appication program is a consumer of Kafka,
### Tune TDengine
TDengine is a distributed, high-performance time-series database; there are also some ways to tune TDengine to get better write performance.
On the server side, database configuration parameter `vgroups` needs to be set carefully to maximize the system performance. If it's set too low, the system capability can't be utilized fully; if it's set too big, unnecessary resource competition may be produced. A normal recommendation for `vgroups` parameter is 2 times of the number of CPU cores. However, depending on the actual system resources, it may still need to tuned.
1. Set proper number of `vgroups` according to available CPU cores. Normally, we recommend 2 \* number_of_cores as a starting point. If the verification result shows this is not enough to utilize CPU resources, you can use a higher value.
2. Set proper `minTablesPerVnode`, `tableIncStepPerVnode`, and `maxVgroupsPerDb` according to the number of tables so that tables are distributed even across vgroups. The purpose is to balance the workload among all vnodes so that system resources can be utilized better to get higher performance.
For more performance tuning parameters, please refer to [Configuration Parameters](../../../reference/config).
For more configuration parameters, please refer to [Database Configuration](../../../taos-sql/database) and [Server Configuration](../../../reference/config).
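As a concrete, hypothetical example of point 1 above, on a 16-core host the `vgroups = 2 * cores` starting point could be expressed as follows (database name is illustrative):

```sql
-- Assumed 16-core server: start with vgroups = 2 * 16 and raise the value
-- later if load testing shows CPU resources are still under-utilized.
CREATE DATABASE IF NOT EXISTS sensor_db VGROUPS 32;
```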
## Sample Programs
@ -359,7 +356,7 @@ Writing process tries to read as much as possible data from message queue and wr
<details>
SQLWriter class encapsulates the logic of composing SQL and writing data. Please be noted that the tables have not been created before writing, but are created automatically when catching the exception of table doesn't exist. For other exceptions caught, the SQL which caused the exception are logged for you to debug. This class also checks the SQL length, if the SQL length is closed to `maxSQLLength` the SQL will be executed immediately. To improve writing efficiency, it's better to increase `maxSQLLength` properly.
The SQLWriter class encapsulates the logic of composing SQL and writing data. Note that the tables have not been created before writing; they are created automatically when the "table does not exist" exception is caught. For other exceptions caught, the SQL that caused the exception is logged for you to debug. This class also checks the SQL length, and passes the maximum SQL length via the parameter maxSQLLength according to the actual TDengine limit.
<summary>SQLWriter</summary>

View File

@ -71,9 +71,9 @@ database_option: {
- SINGLE_STABLE: specifies whether the database can contain more than one supertable.
- 0: The database can contain multiple supertables.
- 1: The database can contain only one supertable.
- WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. Enter a time in seconds. The default value is 0. A value of 0 indicates that each WAL file is deleted immediately after its contents are written to disk. -1: WAL files are never deleted.
- WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that each WAL file is deleted immediately after its contents are written to disk. -1: WAL files are never deleted.
- WAL_ROLL_PERIOD: specifies the time after which WAL files are rotated. After this period elapses, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after the previous WAL file was written to disk.
- WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. Enter a time in seconds. For single-replica databases, the default value is 0, indicating that each WAL file is deleted immediately after its contents are written to disk; -1 means WAL files are never deleted. For multi-replica databases, the default value is 4 days.
- WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. Enter a size in KB. For single-replica databases, the default value is 0, indicating that each WAL file is deleted immediately after its contents are written to disk; -1 means WAL files are never deleted. For multi-replica databases, the default value is -1.
- WAL_ROLL_PERIOD: specifies the time after which WAL files are rotated. After this period elapses, a new WAL file is created. For single-replica databases, the default value is 0, indicating that a new WAL file is created only after the previous WAL file was written to disk. For multi-replica databases, the default value is 1 day.
- WAL_SEGMENT_SIZE: specifies the maximum size of a WAL file. After the current WAL file reaches this size, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after the previous WAL file was written to disk.
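As a hedged sketch of how these options combine (the database name and values are illustrative), a database intended for data subscription might be created as:

```sql
-- Retain WAL files for 4 days (345600 s) or up to 1 GB (1048576 KB),
-- rolling to a new WAL file every 24 hours (86400 s).
CREATE DATABASE IF NOT EXISTS tmq_db
  WAL_RETENTION_PERIOD 345600
  WAL_RETENTION_SIZE 1048576
  WAL_ROLL_PERIOD 86400;
```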
### Example Statement

View File

@ -52,11 +52,6 @@ window_clause: {
| STATE_WINDOW(col)
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
changes_option: {
DURATION duration_val
| ROWS rows_val
}
group_by_clause:
GROUP BY expr [, expr] ... HAVING condition
@ -126,7 +121,6 @@ SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
1. Configuration parameter `maxNumOfDistinctRes` in `taos.cfg` is used to control the number of rows to output. The minimum configurable value is 100,000, the maximum configurable value is 100,000,000, and the default value is 1,000,000. If the actual number of rows exceeds the value of this parameter, only the number of rows specified by this parameter will be output.
2. It can't be guaranteed that the results selected by using `DISTINCT` on columns of `FLOAT` or `DOUBLE` are exactly unique because of the precision errors in floating point numbers.
3. `DISTINCT` can't be used in the sub-query of a nested query statement, and can't be used together with aggregate functions, `GROUP BY` or `JOIN` in the same SQL statement.
:::
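A minimal illustration of the notes above (the `meters` table is assumed, not defined here):

```sql
-- DISTINCT on a tag column of a supertable; subject to the FLOAT/DOUBLE
-- precision caveat and the nested-query restriction described above.
SELECT DISTINCT location FROM meters;
```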

View File

@ -613,6 +613,7 @@ SELECT APERCENTILE(field_name, P[, algo_type]) FROM { tb_name | stb_name } [WHER
**Explanations**
- _P_ is in the range [0,100]. When _P_ is 0, the result is the same as using the MIN function; when _P_ is 100, the result is the same as MAX.
- `algo_type` can only be `default` or `t-digest`. Enter `default` to use a histogram-based algorithm, or `t-digest` to use the t-digest algorithm to calculate the approximation of the quantile. `default` is used by default.
- The approximation result of the `t-digest` algorithm is sensitive to the input data order. For example, when querying a supertable, different input data orders may produce minor differences in the calculated results.
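For illustration, a hedged sketch using an assumed `meters` table:

```sql
-- Approximate median with the default histogram-based algorithm, then the
-- 95th percentile with t-digest (order-sensitive, as noted above).
SELECT APERCENTILE(voltage, 50) FROM meters;
SELECT APERCENTILE(voltage, 95, 't-digest') FROM meters;
```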
### AVG
@ -916,7 +917,7 @@ SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause];
**Return value type**: Same as the data type of the column being operated upon
**Applicable data types**: Numeric
**Applicable data types**: Numeric, Timestamp
**Applicable table types**: standard tables and supertables
@ -931,7 +932,7 @@ SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause];
**Return value type**: Same as the data type of the column being operated upon
**Applicable data types**: Numeric
**Applicable data types**: Numeric, Timestamp
**Applicable table types**: standard tables and supertables
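Since Timestamp columns are now applicable, the time span covered by a table can be read directly; a sketch with an assumed `meters` table:

```sql
-- Earliest and latest timestamps stored in the (super)table.
SELECT MIN(ts), MAX(ts) FROM meters;
```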

View File

@ -44,13 +44,13 @@ For example, the following SQL statement creates a stream and automatically crea
```sql
CREATE STREAM avg_vol_s INTO avg_vol AS
SELECT _wstartts, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
```
## Delete a Stream
```sql
DROP STREAM [IF NOT EXISTS] stream_name
DROP STREAM [IF EXISTS] stream_name
```
This statement deletes the stream processing service only. The data generated by the stream is retained.

View File

@ -245,3 +245,35 @@ Provides dnode configuration information.
| 1 | dnode_id | INT | Dnode ID |
| 2 | name | BINARY(32) | Parameter |
| 3 | value | BINARY(64) | Value |
## INS_TOPICS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------------------------ |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database for the topic |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
## INS_SUBSCRIPTIONS
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | ------------------------ |
| 1 | topic_name | BINARY(204) | Subscribed topic |
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
| 4 | consumer_id | BIGINT | Consumer ID |
## INS_STREAMS
| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------ | --------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
| 4 | status | BINARY(20) | Current status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table |
| 8 | watermark | BIGINT | Watermark (see stream processing documentation) |
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation) |
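These system tables are queried like any other table; for example (a hedged sketch):

```sql
-- List topics and the status of running streams via INFORMATION_SCHEMA.
SELECT topic_name, db_name, create_time FROM information_schema.ins_topics;
SELECT stream_name, status, source_db, target_db
  FROM information_schema.ins_streams;
```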

View File

@ -61,15 +61,6 @@ Provides information about SQL queries currently running. Similar to SHOW QUERIE
| 12 | sub_status | BINARY(1000) | Subquery status |
| 13 | sql | BINARY(1024) | SQL statement |
## PERF_TOPICS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------------------------ |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database for the topic |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
## PERF_CONSUMERS
| # | **Column** | **Data Type** | **Description** |
@ -83,15 +74,6 @@ Provides information about SQL queries currently running. Similar to SHOW QUERIE
| 7 | subscribe_time | TIMESTAMP | Time of first subscription |
| 8 | rebalance_time | TIMESTAMP | Time of first rebalance triggering |
## PERF_SUBSCRIPTIONS
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | ------------------------ |
| 1 | topic_name | BINARY(204) | Subscribed topic |
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
| 4 | consumer_id | BIGINT | Consumer ID |
## PERF_TRANS
| # | **Column** | **Data Type** | **Description** |
@ -113,17 +95,3 @@ Provides information about SQL queries currently running. Similar to SHOW QUERIE
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | stable_name | BINARY(192) | Supertable name |
| 4 | vgroup_id | INT | Dedicated vgroup name |
## PERF_STREAMS
| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------ | --------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
| 4 | status | BIANRY(20) | Current status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BIANRY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table |
| 8 | watermark | BIGINT | Watermark (see stream processing documentation) |
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation) |

View File

@ -5,16 +5,6 @@ title: SHOW Statement for Metadata
The `SHOW` command can be used to get brief system information. To get details about metadata, information, and status in the system, please use `SELECT` to query the tables in the `INFORMATION_SCHEMA` database.
## SHOW ACCOUNTS
```sql
SHOW ACCOUNTS;
```
Shows information about tenants on the system.
Note: TDengine Enterprise Edition only.
## SHOW APPS
```sql

View File

@ -7,7 +7,7 @@ title: TDengine Go Connector
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Preparition from "./_preparition.mdx"
import Preparition from "./_preparation.mdx"
import GoInsert from "../../07-develop/03-insert-data/_go_sql.mdx"
import GoInfluxLine from "../../07-develop/03-insert-data/_go_line.mdx"
import GoOpenTSDBTelnet from "../../07-develop/03-insert-data/_go_opts_telnet.mdx"

View File

@ -7,7 +7,7 @@ title: TDengine Rust Connector
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Preparition from "./_preparition.mdx"
import Preparition from "./_preparation.mdx"
import RustInsert from "../../07-develop/03-insert-data/_rust_sql.mdx"
import RustBind from "../../07-develop/03-insert-data/_rust_stmt.mdx"
import RustQuery from "../../07-develop/04-query-data/_rust.mdx"

View File

@ -7,7 +7,7 @@ description: "taospy is the official Python connector for TDengine. taospy provi
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
`taospy is the official Python connector for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. `taospy` wraps both the [native interface](/reference/connector/cpp) and [REST interface](/reference/rest-api) of TDengine, which correspond to the `taos` and `taosrest` modules of the `taospy` package, respectively.
`taospy` is the official Python connector for TDengine. It provides a rich API that makes it easy for Python applications to use TDengine. `taospy` wraps both the [native interface](/reference/connector/cpp) and [REST interface](/reference/rest-api) of TDengine, which correspond to the `taos` and `taosrest` modules of the `taospy` package, respectively.
In addition to wrapping the native and REST interfaces, `taospy` also provides a set of programming interfaces that conforms to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).
The direct connection to the server using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST interface provided by taosAdapter is referred to hereinafter as a "REST connection".

View File

@ -7,7 +7,7 @@ title: TDengine Node.js Connector
import Tabs from "@theme/Tabs";
import TabItem from "@theme/TabItem";
import Preparition from "./_preparition.mdx";
import Preparition from "./_preparation.mdx";
import NodeInsert from "../../07-develop/03-insert-data/_js_sql.mdx";
import NodeInfluxLine from "../../07-develop/03-insert-data/_js_line.mdx";
import NodeOpenTSDBTelnet from "../../07-develop/03-insert-data/_js_opts_telnet.mdx";

View File

@ -7,7 +7,7 @@ title: C# Connector
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import Preparition from "./_preparition.mdx"
import Preparition from "./_preparation.mdx"
import CSInsert from "../../07-develop/03-insert-data/_cs_sql.mdx"
import CSInfluxLine from "../../07-develop/03-insert-data/_cs_line.mdx"
import CSOpenTSDBTelnet from "../../07-develop/03-insert-data/_cs_opts_telnet.mdx"

View File

@ -1,10 +0,0 @@
- A client driver is installed (required for native connections, not required for REST connections)
:::info
Because the TDengine client driver is written in C, using a native connection requires loading the locally installed client driver shared library, which is usually included in the TDengine installation package. The TDengine Linux server package ships with the TDengine client; you can also install the [Linux client](/get-started/) separately. For development on Windows, install the matching [Windows client](https://www.taosdata.com/cn/all-downloads/#TDengine-Windows-Client).
- libtaos.so: After TDengine is successfully installed on a Linux system, the Linux client driver file libtaos.so is automatically copied to /usr/lib/libtaos.so, which is on the Linux automatic scan path and does not need to be specified separately.
- taos.dll: After the client is installed on a Windows system, the Windows client driver file taos.dll is automatically copied to the default system search path C:/Windows/System32, and likewise does not need to be specified separately.
:::

View File

@ -1,7 +1,7 @@
---
sidebar_label: taosKeeper
title: taosKeeper
description: Instructions and tips for using taosKeeper
description: Exports TDengine monitoring metrics.
---
## Introduction
@ -22,26 +22,35 @@ You can compile taosKeeper separately and install it. Please refer to the [taosK
### Configuration and running methods
<!-- taosKeeper needs to be executed on the terminal of the operating system, it supports two configuration methods: [Command-line arguments](#command-line-arguments-in-detail) and [configuration file](#configuration-file-parameters-in-detail). Command-line arguments take precedence over values in the configuration file. -->
taosKeeper needs to be executed on the terminal of the operating system. To run taosKeeper, see [configuration file](#configuration-file-parameters-in-detail).
taosKeeper needs to be executed on the terminal of the operating system. It supports three configuration methods: [command-line arguments](#command-line-arguments-in-detail), [environment variables](#environment-variable-in-detail), and a [configuration file](#configuration-file-parameters-in-detail). The order of precedence is command-line arguments, then environment variables, then the configuration file.
**Make sure that the TDengine cluster is running correctly before running taosKeeper.** Ensure that the monitoring service in TDengine has been started. For more information, see [TDengine Monitoring Configuration](../config/#monitoring).
<!--
### Command-Line Parameters
You can use command-line parameters to run taosBenchmark and control its behavior:
You can use command-line parameters to run taosKeeper and control its behavior:
```shell
taosKeeper
$ taosKeeper
```
-->
### Environment variable
You can use environment variables to run taosKeeper and control its behavior:
```shell
$ export TAOS_KEEPER_TDENGINE_HOST=192.168.64.3
$ taoskeeper
```
You can run `taoskeeper -h` for more details.
### Configuration File
You can quickly launch taosKeeper with the following commands. If you do not specify a configuration file, `/etc/taos/keeper.toml` is used by default. If this file does not specify configurations, the default values are used.
```shell
taoskeeper -c <keeper config file>
$ taoskeeper -c <keeper config file>
```
**Sample configuration files**
@ -110,7 +119,7 @@ Query OK, 1 rows in database (0.036162s)
#### Export Monitoring Metrics
```shell
curl http://127.0.0.1:6043/metrics
$ curl http://127.0.0.1:6043/metrics
```
Sample result set (excerpt):

View File

@ -0,0 +1,36 @@
---
sidebar_label: Google Data Studio
title: Use Google Data Studio to access TDengine
---
Data Studio is a powerful tool for reporting and visualization, offering a wide variety of charts and connectors and making it easy to generate reports based on predefined templates. Its ease of use and robust ecosystem have made it one of the first choices for people working in data analysis.
TDengine is a high-performance, scalable time-series database that supports SQL. Many businesses and developers in fields spanning from IoT and Industry Internet to IT and finance are using TDengine as their time-series database management solution.
The TDengine team immediately saw the benefits of using TDengine to process time-series data and Data Studio to analyze it, and got to work to create a connector for Data Studio.
With the release of the TDengine connector in Data Studio, you can now get even more out of your data. To obtain the connector, first go to the Data Studio Connector Gallery, click Connect to Data, and search for “TDengine”.
![02](gds/gds-02.png.webp)
Select the TDengine connector and click Authorize.
![03](gds/gds-03.png.webp)
Then sign in to your Google Account and click Allow to enable the connection to TDengine.
![04](gds/gds-04.png.webp)
In the Enter URL field, type the hostname and port of the server running the TDengine REST service. In the following fields, type your username, password, database name, table name, and the start and end times of your query range. Then, click Connect.
![05](gds/gds-05.png.webp)
After the connection is established, you can use Data Studio to process your data and create reports.
![06](gds/gds-06.png.webp)
In Data Studio, TDengine timestamps and tags are considered dimensions, and all other items are considered metrics. You can create all kinds of custom charts with your data; some examples are shown below.
![07](gds/gds-07.png.webp)
With the ability to process petabytes of data per day and provide monitoring and alerting in real time, TDengine is a great solution for time-series data management. Now, with the Data Studio connector, we're sure you'll be able to gain new insights and obtain even more value from your data.

BIN
docs/en/20-third-party/gds/gds-01.webp vendored Normal file


View File

@ -4,22 +4,22 @@ sidebar_label: Documentation Home
slug: /
---
TDengine is an [open-source](https://www.taosdata.com/tdengine/open_source_time-series_database), [high-performance](https://www.taosdata.com/fast), [cloud-native](https://www.taosdata.com/tdengine/cloud_native_time-series_database) <a href="https://www.taosdata.com/" data-internallinksmanager029f6b8e52c="2" title="时序数据库" target="_blank" rel="noopener">time-series database</a> (<a href="https://www.taosdata.com/time-series-database" data-internallinksmanager029f6b8e52c="9" title="Time Series DataBase" target="_blank" rel="noopener">Time Series Database</a>, <a href="https://www.taosdata.com/tsdb" data-internallinksmanager029f6b8e52c="8" title="TSDB" target="_blank" rel="noopener">TSDB</a>) optimized for IoT, Industrial Internet, finance, and other scenarios. It also ships with built-in caching, stream processing, data subscription, and other system features, which greatly reduce system design complexity and lower R&D and operating costs, making it a minimalist time-series data processing platform. This document is the TDengine user manual, covering TDengine's basic concepts, installation, usage, features, programming interfaces, operations and maintenance, kernel design, and more. It is written mainly for architects, developers, and system administrators.
TDengine is an [open-source](https://www.taosdata.com/tdengine/open_source_time-series_database), [high-performance](https://www.taosdata.com/fast), [cloud-native](https://www.taosdata.com/tdengine/cloud_native_time-series_database) <a href="https://www.taosdata.com/" data-internallinksmanager029f6b8e52c="2" title="时序数据库" target="_blank" rel="noopener">time-series database</a> (<a href="https://www.taosdata.com/time-series-database" data-internallinksmanager029f6b8e52c="9" title="Time Series DataBase" target="_blank" rel="noopener">Time Series Database</a>, <a href="https://www.taosdata.com/tsdb" data-internallinksmanager029f6b8e52c="8" title="TSDB" target="_blank" rel="noopener">TSDB</a>) optimized for IoT, Connected Vehicles, Industrial Internet, finance, IT operations, and other scenarios. It also ships with built-in caching, stream processing, data subscription, and other system features, which greatly reduce system design complexity and lower R&D and operating costs, making it a minimalist time-series data processing platform. This document is the TDengine user manual, covering TDengine's basic concepts, installation, usage, features, programming interfaces, operations and maintenance, kernel design, and more. It is written mainly for architects, development engineers, and system administrators.
TDengine makes full use of the characteristics of time-series data, proposing the concepts of "one table per data collection point" and "supertable" and designing an innovative storage engine, which greatly improves the efficiency of writing, querying, and storing data. To understand and use TDengine correctly, please read the [Concepts](./concept) chapter carefully.
TDengine makes full use of the characteristics of time-series data, proposing the concepts of "one table per data collection point" and "supertable" and designing an innovative storage engine, which greatly improves the efficiency of writing, querying, and storing data. To understand and use TDengine correctly, please read the [Concepts](./concept) chapter carefully.
If you are a developer, please be sure to read the [Developer Guide](./develop) chapter carefully. It describes database connection, data modeling, data ingestion, queries, stream processing, caching, data subscription, user-defined functions, and other features in detail, with sample code for various programming languages. In most cases, you can just copy and paste the sample code, tweak it slightly for your application, and it will run.
If you are a development engineer, please be sure to read the [Developer Guide](./develop) chapter carefully. It describes database connection, data modeling, data ingestion, queries, stream processing, caching, data subscription, user-defined functions, and other features in detail, with sample code for various programming languages. In most cases, you can just copy and paste the sample code, tweak it slightly for your application, and it will run.
We already live in the era of big data, and scale-up can no longer meet the growing business demand. Any system must be able to scale out, and clustering has become an indispensable feature of big data and database systems. The TDengine team not only implemented the cluster feature but also open-sourced this important core capability. For how to deploy, manage, and maintain a TDengine cluster, please refer to the [Cluster Deployment](./deployment) chapter.
We already live in the era of big data, and scale-up can no longer meet the growing business demand. Any system must be able to scale out, and clustering has become an indispensable feature of big data and database systems. The TDengine team not only implemented the cluster feature but also open-sourced this important core capability. For how to deploy, manage, and maintain a TDengine cluster, please refer carefully to the [Cluster Deployment](./deployment) chapter.
TDengine adopts SQL as its query language, which greatly reduces learning and migration costs, while adding extensions for time-series scenarios to support operations such as interpolation, downsampling, and time-weighted averaging. The [SQL Reference](./taos-sql) chapter describes the SQL syntax in detail and lists all supported commands and functions.
TDengine adopts SQL as its query language, which greatly reduces learning and migration costs, while adding extensions for time-series scenarios to support operations such as interpolation, downsampling, and time-weighted averaging. The [SQL Reference](./taos-sql) chapter describes the SQL syntax in detail and lists all supported commands and functions.
If you are a system administrator who cares about installation, upgrades, fault tolerance, disaster recovery, data import and export, and configuration parameters, and wants to know how to monitor whether TDengine is running healthily and how to improve system performance, please refer carefully to the [Administration Guide](./operation) chapter.
If you are a system administrator who cares about installation, upgrades, fault tolerance, disaster recovery, data import and export, and configuration parameters, and wants to know how to monitor whether TDengine is running healthily and how to improve system performance, please refer carefully to the [Administration Guide](./operation) chapter.
If you want to learn more about TDengine's peripheral tools, the REST API, and connectors for various programming languages, please see the [Reference Guide](./reference) chapter.
If you want to learn more about TDengine's peripheral tools, the REST API, and connectors (Connector) for various programming languages, please see the [Reference Guide](./reference) chapter.
If you are interested in the internal architecture design of TDengine, you are welcome to read the [Inside TDengine](./tdinternal) chapter, which describes in detail the cluster design and the processes of data partitioning, sharding, writing, reading, querying, and aggregate queries. If you want to study the TDengine code or even contribute code, please be sure to read this chapter carefully.
If you are interested in the internal architecture design of TDengine, you are welcome to read the [Inside TDengine](./tdinternal) chapter, which describes in detail the cluster design and the processes of data partitioning, sharding, writing, reading, querying, and aggregate queries. If you want to study the TDengine code or even contribute code, please be sure to read this chapter carefully.
Finally, as an open-source project, we welcome everyone's participation. If you find any errors or unclear descriptions in the documentation, please click "Edit this document" at the bottom of each page to make changes directly.
Finally, as an open-source project, we welcome everyone's participation. If you find any errors or unclear descriptions in the documentation, please click "Edit this document" at the bottom of each page to make changes directly.
Together, we make a difference!

View File

@ -4,53 +4,53 @@ description: A brief introduction to the major features of TDengine
toc_max_heading_level: 2
---
TDengine 是一款开源、高性能、云原生的[时序数据库](https://tdengine.com/tsdb/),且针对物联网、车联网以及工业互联网进行了优化。TDengine 的代码,包括其集群功能,都在 GNU AGPL v3.0 下开源。除核心的时序数据库功能外TDengine 还提供[缓存](../develop/cache/)、[数据订阅](../develop/tmq)、[流式计算](../develop/stream)等其它功能以降低系统复杂度及研发和运维成本。
TDengine 是一款开源、高性能、云原生的[时序数据库](https://tdengine.com/tsdb/),且针对物联网、车联网、工业互联网、金融、IT 运维等场景进行了优化。TDengine 的代码,包括集群功能,都在 GNU AGPL v3.0 下开源。除核心的时序数据库功能外TDengine 还提供[缓存](../develop/cache/)、[数据订阅](../develop/tmq)、[流式计算](../develop/stream)等其它功能以降低系统复杂度及研发和运维成本。
本章节介绍TDengine的主要功能、竞争优势、适用场景、与其他数据库的对比测试等等让大家对TDengine有个整体的了解。
本章节介绍 TDengine 的主要功能、竞争优势、适用场景、与其他数据库的对比测试等等,让大家对 TDengine 有个整体的了解。
## Major Features

The major features of TDengine are listed below:

1. Ingestion, supporting:
   - [SQL writing](../develop/insert-data/sql-writing)
   - [Schemaless writing](../reference/schemaless/), supporting several standard protocols:
     - [InfluxDB Line protocol](../develop/insert-data/influxdb-line)
     - [OpenTSDB Telnet protocol](../develop/insert-data/opentsdb-telnet)
     - [OpenTSDB JSON protocol](../develop/insert-data/opentsdb-json)
   - Seamless integration with many third-party tools, all of which can write data into TDengine with configuration only and no code:
     - [Telegraf](../third-party/telegraf)
     - [Prometheus](../third-party/prometheus)
     - [StatsD](../third-party/statsd)
     - [collectd](../third-party/collectd)
     - [Icinga2](../third-party/icinga2)
     - [TCollector](../third-party/tcollector)
     - [EMQX](../third-party/emq-broker)
     - [HiveMQ](../third-party/hive-mq-broker)
2. Querying, supporting:
   - [Standard SQL](../taos-sql), including nested queries
   - [Time-series specific functions](../taos-sql/function/#time-series-extensions)
   - [Time-series specific queries](../taos-sql/distinguished), such as downsampling, interpolation, cumulative sum, time-weighted average, state windows, and session windows
   - [User-defined functions (UDF)](../taos-sql/udf)
3. [Caching](../develop/cache): the latest record of every table is cached, so time-series data can be served efficiently without Redis
4. [Stream processing](../develop/stream): TDengine supports not only continuous queries but also event-driven stream processing, so no stream computing component such as Flink or Spark is needed for time-series data
5. [Data subscription](../develop/tmq): applications can subscribe to the data of one table or a group of tables, with the same API as Kafka and support for filter conditions
6. Visualization:
   - Seamless integration with [Grafana](../third-party/grafana/)
   - Seamless integration with Google Data Studio
7. Clustering:
   - [Cluster deployment](../deployment/), with horizontal scaling by adding nodes to increase processing capacity
   - [Deploying TDengine on Kubernetes](../deployment/k8s/)
   - High availability through multiple replicas
8. Administration:
   - [Monitoring](../operation/monitor) of running TDengine instances
   - Multiple [data import](../operation/import) methods
   - Multiple [data export](../operation/export) methods
9. Tools:
   - An [interactive command-line interface (CLI)](../reference/taos-shell) for managing clusters, checking system status, and running ad-hoc queries
   - The stress-testing tool [taosBenchmark](../reference/taosbenchmark) for measuring the performance of TDengine
10. Programming:
    - [Connectors](../connector) for various languages, such as [C/C++](../connector/cpp), [Java](../connector/java), [Go](../connector/go), [Node.js](../connector/node), [Rust](../connector/rust), [Python](../connector/python), and [C#](../connector/csharp)
    - A [REST API](../connector/rest-api/)

For more details on these features, please read through the entire documentation.
@ -63,36 +63,36 @@ The major features of TDengine are listed below:

- **[Simplified Time-Series Data Platform](https://www.taosdata.com/tdengine/simplified_solution_for_time-series_data_processing)**: TDengine has built-in caching, stream processing, data subscription, and other features that provide a simplified solution for time-series data processing, greatly reducing the design complexity and operating costs of business systems.

- **[Cloud Native](https://www.taosdata.com/tdengine/cloud_native_time-series_database)**: with native distributed design, data sharding and partitioning, separation of compute and storage, the RAFT protocol, Kubernetes deployment, and full observability, TDengine is a cloud-native time-series database that can be deployed on public, private, and hybrid clouds.

- **[Ease of Use](https://www.taosdata.com/tdengine/ease_of_use)**: for system administrators, TDengine greatly reduces the cost of management and maintenance; for developers, it provides simple interfaces, a simplified solution, and seamless integration with third-party tools; for data analysts, it provides convenient data access.

- **[Easy Data Analytics](https://www.taosdata.com/tdengine/easy_data_analytics)**: through supertables, separation of compute and storage, partitioning and sharding, pre-computation, and other technologies, TDengine can browse, format, and access data efficiently.

- **[Open Source](https://www.taosdata.com/tdengine/open_source_time-series_database)**: TDengine's core code, including the cluster feature, is all published under an open-source license, with more than 140k running instances worldwide, 19k GitHub stars, and an active developer community.

With TDengine, the total cost of ownership of a typical IoT, connected-vehicle, or Industrial IoT big data platform can be greatly reduced, in several respects:

1. Its outstanding performance greatly reduces the compute and storage resources the system needs
2. SQL support enables seamless integration with many third-party tools, greatly lowering learning and migration costs
3. As a simplified time-series data platform, it greatly reduces system complexity and R&D and operating costs
## Technical Ecosystem

In a complete time-series big data platform, TDengine plays the following role:

<figure>

![TDengine Database technical ecosystem](eco_system.webp)

<center><figcaption>Figure 1. TDengine technical ecosystem</figcaption></center>
</figure>

On the left of the figure are data collection agents and message queues, including OPC-UA, MQTT, Telegraf, and Kafka, whose data is written continuously into TDengine. On the right are visualization and BI tools, SCADA software, and applications. Below are the command-line interface (CLI) and the visual management tool provided by TDengine itself.

## Typical Use Cases

As a high-performance, distributed, SQL-supporting time-series database (Database), TDengine's typical use cases include but are not limited to IoT, Industrial IoT, connected vehicles, IT operations, energy, and financial securities. Note that TDengine is a purpose-built database and big data processing tool designed for time-series scenarios; because it takes full advantage of the characteristics of time-series big data, it cannot be used for general-purpose data such as web crawlers, microblogs, instant messaging, e-commerce, ERP, or CRM. This section analyzes the applicable scenarios in more detail.
### Characteristics and Requirements of Data Sources

@ -114,18 +114,18 @@ The major features of TDengine are listed below:

### System Function Requirements

| System Function Requirement | Not Applicable | Might Be Applicable | Very Applicable | Description |
| --------------------------- | -------------- | ------------------- | --------------- | ----------- |
| A complete set of built-in data processing algorithms is required | | √ | | TDengine implements general-purpose data processing algorithms, but cannot yet properly cover all the requirements of every industry, so special types of processing still need to be handled at the application layer. |
| A large volume of cross-table join queries is required | | √ | | This type of processing is better handled by a relational database, or by using TDengine together with a relational database. |

### System Performance Requirements

| System Performance Requirement | Not Applicable | Might Be Applicable | Very Applicable | Description |
| ------------------------------ | -------------- | ------------------- | --------------- | ----------- |
| Very large overall processing capacity is required | | | √ | TDengine's cluster feature makes it easy for multiple servers to cooperate and scale up processing capacity. |
| High-speed data processing is required | | | √ | TDengine's storage and data processing are designed and optimized for IoT, and generally deliver several times the processing speed of comparable products. |
| Fast processing of fine-grained data is required | | | √ | In this respect, TDengine's performance fully matches relational and NoSQL data processing systems. |

### System Maintenance Requirements

View File

@ -11,7 +11,7 @@ import TabItem from "@theme/TabItem";
From the perspective of a client program, writing data efficiently needs to take the following factors into account:

1. The amount of data per write. Generally, the larger each batch, the more efficient the write (though the advantage disappears beyond a certain threshold). When writing to TDengine with SQL, try to pack as much data as possible into a single SQL statement. Currently, the maximum length of one SQL statement supported by TDengine is 1,048,576 (1M) characters.
2. The number of concurrent connections. Generally, the more concurrent connections writing at the same time, the more efficient the write (though efficiency drops beyond a certain threshold, depending on server-side processing capacity).
3. The distribution of data across different tables (or subtables), i.e. the adjacency of the data being written. Writing each batch to a single table (or subtable) is generally more efficient than writing to multiple tables (or subtables);
4. The write method. Generally:
@ -38,13 +38,9 @@ import TabItem from "@theme/TabItem";
### From the Perspective of Server Configuration {#setting-view}

From the perspective of server-side configuration, an appropriate number of vgroups should be set when creating a database, based on the number of disks in the system, disk I/O capability, and processor capacity, in order to fully exploit system performance. With too few vgroups, system performance cannot be fully realized; with too many, unnecessary resource contention occurs. As a rule of thumb, twice the number of CPU cores is recommended for the vgroup count, but it should still be tuned against the actual system resource configuration.

If the total number of tables is small (much smaller than the number of cores times 1000), and the CPU usage of the taosd process stays low no matter how the client program is tuned, the tables are probably distributed unevenly across vgroups. For example: if a database has 1000 tables in total and minTablesPerVnode is also set to 1000, all the tables end up in 1 vgroup. In that case, setting both minTablesPerVnode and tableIncStepPerVnode to 100 spreads the tables across 10 vgroups (assuming maxVgroupsPerDb is at least 10).

If the total number of tables is large (for example, more than 5 million), increasing maxVgroupsPerDb appropriately can also significantly speed up table creation. maxVgroupsPerDb defaults to 0, which automatically configures it to the number of CPU cores. If the number of tables is huge, tuning maxTablesPerVnode is also recommended, to avoid exceeding the table-creation limit of a single vnode.

For more tuning parameters, see [Database Management](../../../taos-sql/database) and [Server-Side Configuration](../../../reference/config).
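For illustration, a minimal sketch of pinning the vgroup count at database creation time through the TDengine C client; the database name and the count of 16 (for an 8-core host, per the rule of thumb above) are assumptions, not from the original text:

```c
// Create a database with an explicit vgroup count.
#include <taos.h>

int create_db_with_vgroups(TAOS *conn) {
  TAOS_RES *res = taos_query(conn, "CREATE DATABASE IF NOT EXISTS test VGROUPS 16");
  int code = taos_errno(res);
  taos_free_result(res);
  return code;  // 0 on success
}
```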
## Examples of Efficient Writing {#sample-code}

@ -352,7 +348,7 @@ The main function accepts 5 startup parameters, in order:

<details>

The SQLWriter class encapsulates the logic of composing SQL statements and writing data. None of the tables are created in advance; instead, when a table-not-found error occurs, the tables are created in batches using the supertable as a template, and the INSERT statement is then retried. For other errors, the SQL being executed at the time is recorded for troubleshooting and failure recovery. The class also checks whether the SQL is about to exceed the maximum length limit; per the TDengine 3.0 limit, the supported maximum SQL length of 1,048,576 is passed in through the input parameter maxSQLLength.

<summary>SQLWriter</summary>

View File

@ -1,7 +1,5 @@
```java
{{#include docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java}}
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}
{{#include docs/examples/java/src/main/java/com/taos/example/Meters.java}}
```
```java
{{#include docs/examples/java/src/main/java/com/taos/example/MetersDeserializer.java}}

View File

@ -169,9 +169,9 @@ namespace TDengineExample
### Third-Party Driver

[`IoTSharp.Data.Taos`](https://github.com/IoTSharp/EntityFrameworkCore.Taos) is an ADO.NET connector for TDengine that includes the Entity Framework Core provider IoTSharp.EntityFrameworkCore.Taos and the health-check component IoTSharp.HealthChecks.Taos, and supports Linux and Windows platforms. The connector is provided by community contributor `麦壳饼@@maikebing`. For details:

* Download: <https://github.com/IoTSharp/EntityFrameworkCore.Taos>
* Usage notes: <https://www.taosdata.com/blog/2020/11/02/1901.html>

## FAQ

View File

@ -1,6 +1,6 @@
---
sidebar_label: Data Types
title: Data Types
description: "Data types supported by TDengine: timestamp, floating-point, JSON, and more"
---

View File

@ -1,6 +1,6 @@
---
sidebar_label: Database
title: Database
description: "Create and drop databases; view and modify database parameters"
---
@ -71,9 +71,9 @@ database_option: {
- SINGLE_STABLE: whether only one supertable can be created in this database, intended for cases where the supertable has a very large number of columns.
  - 0: multiple supertables may be created.
  - 1: only one supertable may be created.
- WAL_RETENTION_PERIOD: extra retention policy for WAL files, used for data subscription. The retention period of WAL, in seconds. For single-replica databases, the default is 0, i.e. WAL is deleted as soon as data is persisted; -1 means never delete. For multi-replica databases, the default is 4 days.
- WAL_RETENTION_SIZE: extra retention policy for WAL files, used for data subscription. The upper limit on the retained WAL size, in KB. For single-replica databases, the default is 0, i.e. WAL is deleted as soon as data is persisted. For multi-replica databases, the default is -1, meaning never delete.
- WAL_ROLL_PERIOD: the period after which the WAL file is switched, in seconds. After a WAL file is created and written to, a new WAL file is created automatically once this period elapses. For single-replica databases, the default is 0, i.e. a new file is created only upon persistence. For multi-replica databases, the default is 1 day.
- WAL_SEGMENT_SIZE: the size of a single WAL file, in KB. Once the current WAL file exceeds this limit, a new WAL file is created automatically. The default is 0, i.e. a new file is created only upon persistence.

### Database Creation Examples
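As one hedged example of the WAL options above, a minimal sketch using the TDengine C client follows; the connection parameters and the database name `power` are placeholder assumptions:

```c
// Create a database that keeps WAL files for one extra hour or up to
// 100 MB beyond persistence, so subscribers can replay recent data.
#include <stdio.h>
#include <taos.h>

int main() {
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 6030);
  if (conn == NULL) return 1;
  TAOS_RES *res = taos_query(conn,
      "CREATE DATABASE IF NOT EXISTS power "
      "WAL_RETENTION_PERIOD 3600 WAL_RETENTION_SIZE 102400");
  if (taos_errno(res) != 0) {
    fprintf(stderr, "create failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
  taos_close(conn);
  return 0;
}
```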

View File

@ -1,5 +1,5 @@
---
title: Table
sidebar_label: Table
description: Various management operations on tables
---

View File

@ -1,6 +1,6 @@
---
sidebar_label: Supertable
title: Supertable
description: Various management operations on supertables
---

View File

@ -53,11 +53,6 @@ window_clause: {
| STATE_WINDOW(col)
| INTERVAL(interval_val [, interval_offset]) [SLIDING (sliding_val)] [WATERMARK(watermark_val)] [FILL(fill_mod_and_val)]
changes_option: {
DURATION duration_val
| ROWS rows_val
}
group_by_clause:
GROUP BY expr [, expr] ... HAVING condition
@ -127,7 +122,6 @@ SELECT DISTINCT col_name [, col_name ...] FROM tb_name;
1. The configuration parameter maxNumOfDistinctRes in the cfg file limits the number of rows DISTINCT can output. Its minimum is 100000, maximum 100000000, and default 10000000. If the actual result exceeds this limit, only the portion within this range is output.
2. Due to the inherent precision mechanism of floating-point numbers, using DISTINCT on FLOAT and DOUBLE columns does not guarantee fully unique output values in certain cases.
3. In the current version, DISTINCT cannot be used in the subquery of a nested query, nor can it be mixed with aggregate functions, GROUP BY, or JOIN in the same statement.
:::

View File

@ -614,6 +614,7 @@ SELECT APERCENTILE(field_name, P[, algo_type]) FROM { tb_name | stb_name } [WHER
**Description**:

- The valid range of P is [0,100]; P=0 is equivalent to MIN, and P=100 to MAX.
- algo_type can be "default" or "t-digest". With "default", the function uses a histogram-based algorithm; with "t-digest", it uses the t-digest algorithm to compute an approximate percentile. If algo_type is not specified, the "default" algorithm is used.
- The approximate result of the "t-digest" algorithm is sensitive to the order of the input data, so queries against a supertable may show small differences for different input orderings.
### AVG
@ -917,7 +918,7 @@ SELECT MAX(field_name) FROM { tb_name | stb_name } [WHERE clause];
**Return type**: same as the field it is applied to.

**Applicable data types**: numeric types and timestamp.

**Applies to**: tables and supertables.
@ -932,7 +933,7 @@ SELECT MIN(field_name) FROM {tb_name | stb_name} [WHERE clause];
**Return type**: same as the field it is applied to.

**Applicable data types**: numeric types and timestamp.

**Applies to**: tables and supertables.

View File

@ -1,6 +1,6 @@
---
sidebar_label: Featured Queries
title: Featured Queries
description: Query features specific to time-series data provided by TDengine
---

View File

@ -44,7 +44,7 @@ window_clause: {
```sql
CREATE STREAM avg_vol_s INTO avg_vol AS
SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVAL(1m) SLIDING(30s);
```
## Partitioning in Stream Processing

@ -58,7 +58,7 @@ SELECT _wstartts, count(*), avg(voltage) FROM meters PARTITION BY tbname INTERVA

## Dropping a Stream

```sql
DROP STREAM [IF EXISTS] stream_name;
```

This only drops the stream processing task; data written by the stream is not deleted.

View File

@ -1,6 +1,6 @@
---
sidebar_label: JSON Type
title: JSON Type
description: A detailed description of how to use the JSON type
---

View File

@ -1,5 +1,5 @@
---
title: Escape Characters
sidebar_label: Escape Characters
description: Detailed rules for using escape characters in TDengine
---

View File

@ -1,6 +1,6 @@
---
sidebar_label: Naming and Limits
title: Naming and Limits
description: Legal character set and naming restrictions
---

View File

@ -1,6 +1,6 @@
---
sidebar_label: Reserved Keywords
title: Reserved Keywords
description: The detailed list of TDengine reserved keywords
---

View File

@ -1,6 +1,6 @@
---
sidebar_label: Metadata
title: Metadata
description: The Information_Schema database stores all metadata in the system
---
@ -246,3 +246,35 @@ Note: since SHOW statements are already familiar to and widely used by developers, they

| 1 | dnode_id | INT | dnode ID |
| 2 | name | BINARY(32) | name of the configuration item |
| 3 | value | BINARY(64) | value of the configuration item |

## INS_TOPICS

| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --------------- |
| 1 | topic_name | BINARY(192) | topic name |
| 2 | db_name | BINARY(64) | database associated with the topic |
| 3 | create_time | TIMESTAMP | creation time of the topic |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |

## INS_SUBSCRIPTIONS

| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------- | --------------- |
| 1 | topic_name | BINARY(204) | subscribed topic |
| 2 | consumer_group | BINARY(193) | consumer group of the subscriber |
| 3 | vgroup_id | INT | vgroup ID assigned to the consumer |
| 4 | consumer_id | BIGINT | unique ID of the consumer |

## INS_STREAMS

| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------- | --------------- |
| 1 | stream_name | BINARY(64) | stream name |
| 2 | create_time | TIMESTAMP | creation time |
| 3 | sql | BINARY(1024) | SQL statement provided when the stream was created |
| 4 | status | BINARY(20) | current status of the stream |
| 5 | source_db | BINARY(64) | source database |
| 6 | target_db | BINARY(64) | target database |
| 7 | target_table | BINARY(192) | target table the stream writes to |
| 8 | watermark | BIGINT | watermark; see the stream processing section of the SQL manual |
| 9 | trigger | INT | mode for pushing computed results; see the stream processing section of the SQL manual |
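For example, a hedged C-client sketch for listing streams from the new INS_STREAMS table (`conn` is assumed to be an open connection):

```c
// Print each stream's name, status, and target table. BINARY values are
// not NUL-terminated, so the column lengths are used for printing.
#include <stdio.h>
#include <taos.h>

void list_streams(TAOS *conn) {
  TAOS_RES *res = taos_query(conn,
      "SELECT stream_name, status, target_table FROM information_schema.ins_streams");
  if (taos_errno(res) == 0) {
    TAOS_ROW row;
    while ((row = taos_fetch_row(res)) != NULL) {
      int32_t *lens = taos_fetch_lengths(res);
      printf("%.*s  %.*s  %.*s\n", lens[0], (char *)row[0],
             lens[1], (char *)row[1], lens[2], (char *)row[2]);
    }
  }
  taos_free_result(res);
}
```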

View File

@ -1,6 +1,6 @@
---
sidebar_label: Statistics
title: Statistics
description: The Performance_Schema database stores various statistics about the system
---
@ -62,15 +62,6 @@ Since TDengine 3.0, a built-in database `performance_schema` is provided, which

| 12 | sub_status | BINARY(1000) | subquery status |
| 13 | sql | BINARY(1024) | SQL statement |

## PERF_TOPICS

| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --------------- |
| 1 | topic_name | BINARY(192) | topic name |
| 2 | db_name | BINARY(64) | database associated with the topic |
| 3 | create_time | TIMESTAMP | creation time of the topic |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |

## PERF_CONSUMERS

| # | **Column** | **Data Type** | **Description** |

@ -84,15 +75,6 @@ Since TDengine 3.0, a built-in database `performance_schema` is provided, which

| 7 | subscribe_time | TIMESTAMP | time of the most recent subscription |
| 8 | rebalance_time | TIMESTAMP | time the most recent rebalance was triggered |

## PERF_SUBSCRIPTIONS

| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------- | --------------- |
| 1 | topic_name | BINARY(204) | subscribed topic |
| 2 | consumer_group | BINARY(193) | consumer group of the subscriber |
| 3 | vgroup_id | INT | vgroup ID assigned to the consumer |
| 4 | consumer_id | BIGINT | unique ID of the consumer |

## PERF_TRANS

| # | **Column** | **Data Type** | **Description** |

@ -114,17 +96,3 @@ Since TDengine 3.0, a built-in database `performance_schema` is provided, which

| 2 | create_time | TIMESTAMP | sma creation time |
| 3 | stable_name | BINARY(192) | name of the supertable the sma belongs to |
| 4 | vgroup_id | INT | name of the dedicated vgroup of the sma |

## PERF_STREAMS

| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------- | --------------- |
| 1 | stream_name | BINARY(64) | stream name |
| 2 | create_time | TIMESTAMP | creation time |
| 3 | sql | BINARY(1024) | SQL statement provided when the stream was created |
| 4 | status | BINARY(20) | current status of the stream |
| 5 | source_db | BINARY(64) | source database |
| 6 | target_db | BINARY(64) | target database |
| 7 | target_table | BINARY(192) | target table the stream writes to |
| 8 | watermark | BIGINT | watermark; see the stream processing section of the SQL manual |
| 9 | trigger | INT | mode for pushing computed results; see the stream processing section of the SQL manual |

View File

@ -1,21 +1,11 @@
---
sidebar_label: SHOW Statements
title: SHOW Statements
description: The complete list of SHOW statements
---

SHOW statements can be used to obtain brief system information. To obtain detailed metadata, system information, and status, use SELECT statements to query the tables in the INFORMATION_SCHEMA database.
## SHOW ACCOUNTS
```sql
SHOW ACCOUNTS;
```
Shows information about all tenants in the current system.

Note: Enterprise Edition only.
## SHOW APPS
```sql

View File

@ -1,6 +1,6 @@
---
sidebar_label: User-Defined Functions
title: User-Defined Functions
description: A detailed guide to using UDFs
---

View File

@ -1,6 +1,6 @@
---
sidebar_label: Indexing
title: Indexing
description: Usage details of the indexing feature
---

View File

@ -1,6 +1,6 @@
---
sidebar_label: Syntax Changes
title: Syntax Changes
description: "Description of syntax changes in TDengine 3.0"
---

View File

@ -405,37 +405,7 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
The configuration parameters for subscribing to subtables or normal tables are set in `specified_table_query`.

- **threads**: number of threads executing the SQL, default 1.
- **interval**: interval between subscription runs, in seconds, default 0.
- **restart**: "yes" starts a new subscription, "no" continues the previous subscription; default "no".
- **keepProgress**: "yes" retains the subscription progress, "no" does not; default "no".
- **resubAfterConsume**: "yes" cancels the previous subscription and subscribes again, "no" continues the previous subscription; default "no".
- **threads/concurrent**: number of threads executing the SQL, default 1.
- **sqls**:
  - **sql**: the SQL command to execute; required.
  - **result**: file to save the query results; if unspecified, results are not saved.

#### Configuration parameters for subscribing to supertables

The configuration parameters for subscribing to supertables are set in `super_table_query`.

- **stblname**: name of the supertable to subscribe to; required.
- **threads**: number of threads executing the SQL, default 1.
- **interval**: interval between subscription runs, in seconds, default 0.
- **restart**: "yes" starts a new subscription, "no" continues the previous subscription; default "no".
- **keepProgress**: "yes" retains the subscription progress, "no" does not; default "no".
- **resubAfterConsume**: "yes" cancels the previous subscription and subscribes again, "no" continues the previous subscription; default "no".
- **sqls**:
  - **sql**: the SQL command to execute; required. For supertable queries, keep "xxxx" in the SQL command; the program automatically replaces it with all the subtable names of the supertable.
  - **result**: file to save the query results; if unspecified, results are not saved.

View File

@ -119,7 +119,7 @@ taos -h tdengine -P 6030
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y wget
ENV TDENGINE_VERSION=3.0.0.0
RUN wget -c https://www.taosdata.com/assets-download/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
RUN wget -c https://www.tdengine.com/assets-download/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& cd TDengine-client-${TDENGINE_VERSION} \
&& ./install_client.sh \
@ -234,7 +234,7 @@ go mod tidy
```dockerfile
FROM golang:1.19.0-buster as builder
ENV TDENGINE_VERSION=3.0.0.0
RUN wget -c https://www.taosdata.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
RUN wget -c https://www.tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& cd TDengine-client-${TDENGINE_VERSION} \
&& ./install_client.sh \
@ -250,7 +250,7 @@ RUN go build
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y wget
ENV TDENGINE_VERSION=3.0.0.0
RUN wget -c https://www.taosdata.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
RUN wget -c https://www.tdengine.com/assets-download/3.0/TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& tar xvf TDengine-client-${TDENGINE_VERSION}-Linux-x64.tar.gz \
&& cd TDengine-client-${TDENGINE_VERSION} \
&& ./install_client.sh \

View File

@ -1,7 +1,7 @@
---
sidebar_label: taosKeeper
title: taosKeeper
description: An exporter of monitoring metrics for TDengine 3.0
---
## Introduction

@ -22,26 +22,36 @@ taosKeeper installation:

### Configuration and Running

<!-- taosKeeper needs to be run from an OS terminal. The tool supports two configuration methods: [command-line parameters](#命令行参数启动) and a [configuration file](#配置文件启动). Command-line parameters take precedence over configuration-file parameters.-->

taosKeeper needs to be run from an OS terminal. The tool supports three configuration methods: [command-line parameters](#命令行参数启动), [environment variables](#环境变量启动), and a [configuration file](#配置文件启动). The order of precedence is: command-line parameters, environment variables, then configuration-file parameters.

**Before running taosKeeper, make sure the TDengine cluster and taosAdapter are already running correctly**, and that monitoring is enabled in TDengine; for details see [TDengine monitoring configuration](../config/#监控相关).

<!--

### Starting with command-line parameters

Run taosKeeper with command-line parameters to control its behavior.
```shell
$ taosKeeper
```
-->
### Starting with environment variables

Control the startup parameters by setting environment variables; this is typically used when running in a container.
```shell
$ export TAOS_KEEPER_TDENGINE_HOST=192.168.64.3
$ taoskeeper
```
See the output of `taoskeeper -h` for the full list of parameters.

### Starting with a configuration file

Run the following command for a quick taosKeeper experience. When no taosKeeper configuration file is specified, the configuration in `/etc/taos/keeper.toml` is preferred; otherwise the default configuration is used.
```shell
$ taoskeeper -c <keeper config file>
```
**Below is an example of the configuration file:**
@ -110,7 +120,7 @@ Query OK, 1 rows in database (0.036162s)
#### Exporting monitoring metrics
```shell
$ curl http://127.0.0.1:6043/metrics
```
Partial result set:

View File

@ -0,0 +1,39 @@
---
sidebar_label: Google Data Studio
title: TDengine Google Data Studio Connector
description: A detailed guide to accessing TDengine data with Google Data Studio
---
Google Data Studio is a powerful reporting and visualization tool that provides a rich set of charts and data connections and can conveniently generate reports from predefined templates. Its ease of use and rich ecosystem have won it the favor of many data scientists in the data analysis field.

Data Studio supports a variety of data sources. Besides Google's own services such as Google Analytics, Google AdWords, Search Console, and BigQuery, users can also upload offline files directly to Google Cloud Storage, or connect other data sources through connectors.

![01](gds/gds-01.webp)

The TDengine connector has now been published to the Google Data Studio marketplace; you can search for TDengine directly on the "Connect to Data" page and select it as a data source.

![02](gds/gds-02.png.webp)

Next, click the AUTHORIZE button.

![03](gds/gds-03.png.webp)

Allow your account to connect to the external service.

![04](gds/gds-04.png.webp)

On the next page, enter the URL of the running TDengine REST service, then the username, password, database name, table name, and query time range, and click the CONNECT button in the upper right corner.

![05](gds/gds-05.png.webp)

Once the connection succeeds, you can use GDS to process the data and create reports.

![06](gds/gds-06.png.webp)

The current dimension and metric rules are: fields of timestamp type and tag fields are defined as dimensions by the connector, while fields of other types are metrics. Users can also create different tables according to their own needs.
![07](gds/gds-07.png.webp)
![08](gds/gds-08.png.webp)
![09](gds/gds-09.png.webp)
![10](gds/gds-10.png.webp)
![11](gds/gds-11.png.webp)

BIN docs/zh/20-third-party/gds/gds-01.webp vendored Normal file (plus ten further gds-*.webp screenshots added under the same directory; binary files not shown, 4.6 KiB–17 KiB each)

View File

@ -254,6 +254,7 @@ enum tmq_res_t {
TMQ_RES_INVALID = -1,
TMQ_RES_DATA = 1,
TMQ_RES_TABLE_META = 2,
TMQ_RES_TAOSX = 3,
};
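A brief sketch of how a consumer might dispatch on this enum, assuming the `tmq_get_res_type` accessor declared elsewhere in this header:

```c
// Handle one polled TMQ message, including the new TMQ_RES_TAOSX variant
// (data interleaved with metadata) added in this merge.
void handle_poll_result(TAOS_RES *res) {
  if (res == NULL) return;  // poll timed out
  switch (tmq_get_res_type(res)) {
    case TMQ_RES_DATA:        // plain rows, readable via taos_fetch_row()
      break;
    case TMQ_RES_TABLE_META:  // schema/DDL event
      break;
    case TMQ_RES_TAOSX:       // data plus create-table metadata
      break;
    default:                  // TMQ_RES_INVALID
      break;
  }
  taos_free_result(res);
}
```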
typedef struct tmq_raw_data {

View File

@ -43,17 +43,17 @@ extern "C" {
#define TSDB_INS_TABLE_VNODES "ins_vnodes"
#define TSDB_INS_TABLE_CONFIGS "ins_configs"
#define TSDB_INS_TABLE_DNODE_VARIABLES "ins_dnode_variables"
#define TSDB_INS_TABLE_SUBSCRIPTIONS "ins_subscriptions"
#define TSDB_INS_TABLE_TOPICS "ins_topics"
#define TSDB_INS_TABLE_STREAMS "ins_streams"
#define TSDB_PERFORMANCE_SCHEMA_DB "performance_schema"
#define TSDB_PERFS_TABLE_SMAS "perf_smas"
#define TSDB_PERFS_TABLE_CONNECTIONS "perf_connections"
#define TSDB_PERFS_TABLE_QUERIES "perf_queries"
#define TSDB_PERFS_TABLE_TOPICS "perf_topics"
#define TSDB_PERFS_TABLE_CONSUMERS "perf_consumers"
#define TSDB_PERFS_TABLE_SUBSCRIPTIONS "perf_subscriptions"
#define TSDB_PERFS_TABLE_OFFSETS "perf_offsets"
#define TSDB_PERFS_TABLE_TRANS "perf_trans"
#define TSDB_PERFS_TABLE_STREAMS "perf_streams"
#define TSDB_PERFS_TABLE_APPS "perf_apps"
typedef struct SSysDbTableSchema {

View File

@ -65,13 +65,6 @@ typedef enum {
TSDB_STATIS_NONE = 1, // statis part not exist
} ETsdbStatisStatus;
typedef enum {
TSDB_SMA_STAT_UNKNOWN = -1, // unknown
TSDB_SMA_STAT_OK = 0, // ready to provide service
TSDB_SMA_STAT_EXPIRED = 1, // not ready or expired
TSDB_SMA_STAT_DROPPED = 2, // sma dropped
} ETsdbSmaStat; // bit operation
typedef enum {
TSDB_SMA_TYPE_BLOCK = 0, // Block-wise SMA
TSDB_SMA_TYPE_TIME_RANGE = 1, // Time-range-wise SMA

View File

@ -73,6 +73,7 @@ enum {
TMQ_MSG_TYPE__POLL_RSP,
TMQ_MSG_TYPE__POLL_META_RSP,
TMQ_MSG_TYPE__EP_RSP,
TMQ_MSG_TYPE__TAOSX_RSP,
TMQ_MSG_TYPE__END_RSP,
};
@ -129,7 +130,6 @@ typedef struct SDataBlockInfo {
uint32_t capacity;
// TODO: optimize and remove following
int64_t version; // used for stream, and need serialization
int64_t ts; // used for stream, and need serialization
int32_t childId; // used for stream, do not serialize
EStreamType type; // used for stream, do not serialize
STimeWindow calWin; // used for stream, do not serialize
@ -184,7 +184,6 @@ typedef struct SQueryTableDataCond {
STimeWindow twindows;
int64_t startVersion;
int64_t endVersion;
int64_t schemaVersion;
} SQueryTableDataCond;
int32_t tEncodeDataBlock(void** buf, const SSDataBlock* pBlock);

View File

@ -184,7 +184,8 @@ static FORCE_INLINE void colDataAppendDouble(SColumnInfoData* pColumnInfoData, u
int32_t getJsonValueLen(const char* data);
int32_t colDataAppend(SColumnInfoData* pColumnInfoData, uint32_t currentRow, const char* pData, bool isNull);
int32_t colDataAppendNItems(SColumnInfoData* pColumnInfoData, uint32_t currentRow, const char* pData, uint32_t numOfRows);
int32_t colDataAppendNItems(SColumnInfoData* pColumnInfoData, uint32_t currentRow, const char* pData,
uint32_t numOfRows);
int32_t colDataMergeCol(SColumnInfoData* pColumnInfoData, int32_t numOfRow1, int32_t* capacity,
const SColumnInfoData* pSource, int32_t numOfRow2);
int32_t colDataAssign(SColumnInfoData* pColumnInfoData, const SColumnInfoData* pSource, int32_t numOfRows,
@ -225,15 +226,16 @@ size_t blockDataGetCapacityInRow(const SSDataBlock* pBlock, size_t pageSize);
int32_t blockDataTrimFirstNRows(SSDataBlock* pBlock, size_t n);
int32_t blockDataKeepFirstNRows(SSDataBlock* pBlock, size_t n);
int32_t assignOneDataBlock(SSDataBlock* dst, const SSDataBlock* src);
int32_t copyDataBlock(SSDataBlock* dst, const SSDataBlock* src);
int32_t assignOneDataBlock(SSDataBlock* dst, const SSDataBlock* src);
int32_t copyDataBlock(SSDataBlock* dst, const SSDataBlock* src);
SSDataBlock* createDataBlock();
void* blockDataDestroy(SSDataBlock* pBlock);
void blockDataFreeRes(SSDataBlock* pBlock);
SSDataBlock* createOneDataBlock(const SSDataBlock* pDataBlock, bool copyData);
SSDataBlock* createSpecialDataBlock(EStreamType type);
int32_t blockDataAppendColInfo(SSDataBlock* pBlock, SColumnInfoData* pColInfoData);
int32_t blockDataAppendColInfo(SSDataBlock* pBlock, SColumnInfoData* pColInfoData);
SColumnInfoData createColumnInfoData(int16_t type, int32_t bytes, int16_t colId);
SColumnInfoData* bdGetColumnInfoData(const SSDataBlock* pBlock, int32_t index);
@ -249,7 +251,6 @@ char* dumpBlockData(SSDataBlock* pDataBlock, const char* flag, char** dumpBuf);
int32_t buildSubmitReqFromDataBlock(SSubmitReq** pReq, const SSDataBlock* pDataBlocks, STSchema* pTSchema, int32_t vgId,
tb_uid_t suid);
char* buildCtbNameByGroupId(const char* stbName, uint64_t groupId);
static FORCE_INLINE int32_t blockGetEncodeSize(const SSDataBlock* pBlock) {

View File

@ -89,11 +89,12 @@ extern uint16_t tsTelemPort;
// query buffer management
extern int32_t tsQueryBufferSize; // maximum allowed usage buffer size in MB for each data node during query processing
extern int64_t tsQueryBufferSizeBytes; // maximum allowed usage buffer size in byte for each data node
extern int64_t tsQueryBufferSizeBytes; // maximum allowed usage buffer size in byte for each data node
// query client
extern int32_t tsQueryPolicy;
extern int32_t tsQuerySmaOptimize;
extern bool tsQueryPlannerTrace;
// client
extern int32_t tsMinSlidingTime;
@ -144,10 +145,10 @@ void taosCfgDynamicOptions(const char *option, const char *value);
struct SConfig *taosGetCfg();
void taosSetAllDebugFlag(int32_t flag);
void taosSetDebugFlag(int32_t *pFlagPtr, const char *flagName, int32_t flagVal);
void taosSetAllDebugFlag(int32_t flag, bool rewrite);
void taosSetDebugFlag(int32_t *pFlagPtr, const char *flagName, int32_t flagVal, bool rewrite);
int32_t taosSetCfg(SConfig *pCfg, char *name);
void taosLocalCfgForbiddenToChange(char* name, bool* forbidden);
void taosLocalCfgForbiddenToChange(char *name, bool *forbidden);
#ifdef __cplusplus
}

View File

@ -276,7 +276,6 @@ struct SSchema {
char name[TSDB_COL_NAME_LEN];
};
typedef struct {
char tbName[TSDB_TABLE_NAME_LEN];
char stbName[TSDB_TABLE_NAME_LEN];
@ -295,17 +294,15 @@ typedef struct {
SSchema* pSchemas;
} STableMetaRsp;
typedef struct {
int32_t code;
int8_t hashMeta;
int64_t uid;
char* tblFName;
int32_t numOfRows;
int32_t affectedRows;
int64_t sver;
STableMetaRsp* pMeta;
int32_t code;
int8_t hashMeta;
int64_t uid;
char* tblFName;
int32_t numOfRows;
int32_t affectedRows;
int64_t sver;
STableMetaRsp* pMeta;
} SSubmitBlkRsp;
typedef struct {
@ -320,7 +317,7 @@ typedef struct {
int32_t tEncodeSSubmitRsp(SEncoder* pEncoder, const SSubmitRsp* pRsp);
int32_t tDecodeSSubmitRsp(SDecoder* pDecoder, SSubmitRsp* pRsp);
void tFreeSSubmitBlkRsp(void* param);
void tFreeSSubmitBlkRsp(void* param);
void tFreeSSubmitRsp(SSubmitRsp* pRsp);
#define COL_SMA_ON ((int8_t)0x1)
@ -787,6 +784,9 @@ typedef struct {
int64_t walRetentionSize;
int32_t walRollPeriod;
int64_t walSegmentSize;
int32_t sstTrigger;
int16_t hashPrefix;
int16_t hashSuffix;
} SCreateDbReq;
int32_t tSerializeSCreateDbReq(void* buf, int32_t bufLen, SCreateDbReq* pReq);
@ -808,6 +808,7 @@ typedef struct {
int8_t strict;
int8_t cacheLast;
int8_t replications;
int32_t sstTrigger;
} SAlterDbReq;
int32_t tSerializeSAlterDbReq(void* buf, int32_t bufLen, SAlterDbReq* pReq);
@ -1069,6 +1070,7 @@ typedef struct {
typedef struct {
int32_t vgId;
int32_t syncState;
int64_t cacheUsage;
int64_t numOfTables;
int64_t numOfTimeSeries;
int64_t totalStorage;
@ -1193,6 +1195,9 @@ typedef struct {
int64_t walRetentionSize;
int32_t walRollPeriod;
int64_t walSegmentSize;
int16_t sstTrigger;
int16_t hashPrefix;
int16_t hashSuffix;
} SCreateVnodeReq;
int32_t tSerializeSCreateVnodeReq(void* buf, int32_t bufLen, SCreateVnodeReq* pReq);
@ -2049,8 +2054,8 @@ typedef struct {
STableMetaRsp* pMeta;
} SVCreateTbRsp, SVUpdateTbRsp;
int tEncodeSVCreateTbRsp(SEncoder* pCoder, const SVCreateTbRsp* pRsp);
int tDecodeSVCreateTbRsp(SDecoder* pCoder, SVCreateTbRsp* pRsp);
int tEncodeSVCreateTbRsp(SEncoder* pCoder, const SVCreateTbRsp* pRsp);
int tDecodeSVCreateTbRsp(SDecoder* pCoder, SVCreateTbRsp* pRsp);
void tFreeSVCreateTbRsp(void* param);
int32_t tSerializeSVCreateTbReq(void** buf, SVCreateTbReq* pReq);
@ -2072,8 +2077,9 @@ int32_t tDeserializeSVCreateTbBatchRsp(void* buf, int32_t bufLen, SVCreateTbBatc
// TDMT_VND_DROP_TABLE =================
typedef struct {
char* name;
int8_t igNotExists;
char* name;
uint64_t suid; // for tmq in wal format
int8_t igNotExists;
} SVDropTbReq;
typedef struct {
@ -2629,6 +2635,22 @@ typedef struct {
};
} STqOffsetVal;
static FORCE_INLINE void tqOffsetResetToData(STqOffsetVal* pOffsetVal, int64_t uid, int64_t ts) {
pOffsetVal->type = TMQ_OFFSET__SNAPSHOT_DATA;
pOffsetVal->uid = uid;
pOffsetVal->ts = ts;
}
static FORCE_INLINE void tqOffsetResetToMeta(STqOffsetVal* pOffsetVal, int64_t uid) {
pOffsetVal->type = TMQ_OFFSET__SNAPSHOT_META;
pOffsetVal->uid = uid;
}
static FORCE_INLINE void tqOffsetResetToLog(STqOffsetVal* pOffsetVal, int64_t ver) {
pOffsetVal->type = TMQ_OFFSET__LOG;
pOffsetVal->version = ver;
}
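An illustrative use of these new inline helpers (inside the TDengine source tree; the uid, ts, and version values are arbitrary):

```c
// Initialize one offset of each kind; tFormatOffset (declared just below)
// renders an offset for logging.
static void offset_examples(void) {
  STqOffsetVal snapshotData, snapshotMeta, walLog;
  tqOffsetResetToData(&snapshotData, 1001 /*uid*/, 1660000000000 /*ts*/);
  tqOffsetResetToMeta(&snapshotMeta, 1001 /*uid*/);
  tqOffsetResetToLog(&walLog, 42 /*ver*/);

  char buf[64];
  tFormatOffset(buf, sizeof(buf), &walLog);
}
```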
int32_t tEncodeSTqOffsetVal(SEncoder* pEncoder, const STqOffsetVal* pOffsetVal);
int32_t tDecodeSTqOffsetVal(SDecoder* pDecoder, STqOffsetVal* pOffsetVal);
int32_t tFormatOffset(char* buf, int32_t maxLen, const STqOffsetVal* pVal);
@ -2960,6 +2982,27 @@ typedef struct {
int32_t tEncodeSMqDataRsp(SEncoder* pEncoder, const SMqDataRsp* pRsp);
int32_t tDecodeSMqDataRsp(SDecoder* pDecoder, SMqDataRsp* pRsp);
void tDeleteSMqDataRsp(SMqDataRsp* pRsp);
typedef struct {
SMqRspHead head;
STqOffsetVal reqOffset;
STqOffsetVal rspOffset;
int32_t blockNum;
int8_t withTbName;
int8_t withSchema;
SArray* blockDataLen;
SArray* blockData;
SArray* blockTbName;
SArray* blockSchema;
int32_t createTableNum;
SArray* createTableLen;
SArray* createTableReq;
} STaosxRsp;
int32_t tEncodeSTaosxRsp(SEncoder* pEncoder, const STaosxRsp* pRsp);
int32_t tDecodeSTaosxRsp(SDecoder* pDecoder, STaosxRsp* pRsp);
void tDeleteSTaosxRsp(STaosxRsp* pRsp);
typedef struct {
SMqRspHead head;

View File

@ -102,224 +102,227 @@
#define TK_WAL_RETENTION_SIZE 84
#define TK_WAL_ROLL_PERIOD 85
#define TK_WAL_SEGMENT_SIZE 86
#define TK_NK_COLON 87
#define TK_TABLE 88
#define TK_NK_LP 89
#define TK_NK_RP 90
#define TK_STABLE 91
#define TK_ADD 92
#define TK_COLUMN 93
#define TK_MODIFY 94
#define TK_RENAME 95
#define TK_TAG 96
#define TK_SET 97
#define TK_NK_EQ 98
#define TK_USING 99
#define TK_TAGS 100
#define TK_COMMENT 101
#define TK_BOOL 102
#define TK_TINYINT 103
#define TK_SMALLINT 104
#define TK_INT 105
#define TK_INTEGER 106
#define TK_BIGINT 107
#define TK_FLOAT 108
#define TK_DOUBLE 109
#define TK_BINARY 110
#define TK_TIMESTAMP 111
#define TK_NCHAR 112
#define TK_UNSIGNED 113
#define TK_JSON 114
#define TK_VARCHAR 115
#define TK_MEDIUMBLOB 116
#define TK_BLOB 117
#define TK_VARBINARY 118
#define TK_DECIMAL 119
#define TK_MAX_DELAY 120
#define TK_WATERMARK 121
#define TK_ROLLUP 122
#define TK_TTL 123
#define TK_SMA 124
#define TK_FIRST 125
#define TK_LAST 126
#define TK_SHOW 127
#define TK_DATABASES 128
#define TK_TABLES 129
#define TK_STABLES 130
#define TK_MNODES 131
#define TK_MODULES 132
#define TK_QNODES 133
#define TK_FUNCTIONS 134
#define TK_INDEXES 135
#define TK_ACCOUNTS 136
#define TK_APPS 137
#define TK_CONNECTIONS 138
#define TK_LICENCES 139
#define TK_GRANTS 140
#define TK_QUERIES 141
#define TK_SCORES 142
#define TK_TOPICS 143
#define TK_VARIABLES 144
#define TK_BNODES 145
#define TK_SNODES 146
#define TK_CLUSTER 147
#define TK_TRANSACTIONS 148
#define TK_DISTRIBUTED 149
#define TK_CONSUMERS 150
#define TK_SUBSCRIPTIONS 151
#define TK_LIKE 152
#define TK_INDEX 153
#define TK_FUNCTION 154
#define TK_INTERVAL 155
#define TK_TOPIC 156
#define TK_AS 157
#define TK_WITH 158
#define TK_META 159
#define TK_CONSUMER 160
#define TK_GROUP 161
#define TK_DESC 162
#define TK_DESCRIBE 163
#define TK_RESET 164
#define TK_QUERY 165
#define TK_CACHE 166
#define TK_EXPLAIN 167
#define TK_ANALYZE 168
#define TK_VERBOSE 169
#define TK_NK_BOOL 170
#define TK_RATIO 171
#define TK_NK_FLOAT 172
#define TK_OUTPUTTYPE 173
#define TK_AGGREGATE 174
#define TK_BUFSIZE 175
#define TK_STREAM 176
#define TK_INTO 177
#define TK_TRIGGER 178
#define TK_AT_ONCE 179
#define TK_WINDOW_CLOSE 180
#define TK_IGNORE 181
#define TK_EXPIRED 182
#define TK_KILL 183
#define TK_CONNECTION 184
#define TK_TRANSACTION 185
#define TK_BALANCE 186
#define TK_VGROUP 187
#define TK_MERGE 188
#define TK_REDISTRIBUTE 189
#define TK_SPLIT 190
#define TK_DELETE 191
#define TK_INSERT 192
#define TK_NULL 193
#define TK_NK_QUESTION 194
#define TK_NK_ARROW 195
#define TK_ROWTS 196
#define TK_TBNAME 197
#define TK_QSTART 198
#define TK_QEND 199
#define TK_QDURATION 200
#define TK_WSTART 201
#define TK_WEND 202
#define TK_WDURATION 203
#define TK_CAST 204
#define TK_NOW 205
#define TK_TODAY 206
#define TK_TIMEZONE 207
#define TK_CLIENT_VERSION 208
#define TK_SERVER_VERSION 209
#define TK_SERVER_STATUS 210
#define TK_CURRENT_USER 211
#define TK_COUNT 212
#define TK_LAST_ROW 213
#define TK_BETWEEN 214
#define TK_IS 215
#define TK_NK_LT 216
#define TK_NK_GT 217
#define TK_NK_LE 218
#define TK_NK_GE 219
#define TK_NK_NE 220
#define TK_MATCH 221
#define TK_NMATCH 222
#define TK_CONTAINS 223
#define TK_IN 224
#define TK_JOIN 225
#define TK_INNER 226
#define TK_SELECT 227
#define TK_DISTINCT 228
#define TK_WHERE 229
#define TK_PARTITION 230
#define TK_BY 231
#define TK_SESSION 232
#define TK_STATE_WINDOW 233
#define TK_SLIDING 234
#define TK_FILL 235
#define TK_VALUE 236
#define TK_NONE 237
#define TK_PREV 238
#define TK_LINEAR 239
#define TK_NEXT 240
#define TK_HAVING 241
#define TK_RANGE 242
#define TK_EVERY 243
#define TK_ORDER 244
#define TK_SLIMIT 245
#define TK_SOFFSET 246
#define TK_LIMIT 247
#define TK_OFFSET 248
#define TK_ASC 249
#define TK_NULLS 250
#define TK_ABORT 251
#define TK_AFTER 252
#define TK_ATTACH 253
#define TK_BEFORE 254
#define TK_BEGIN 255
#define TK_BITAND 256
#define TK_BITNOT 257
#define TK_BITOR 258
#define TK_BLOCKS 259
#define TK_CHANGE 260
#define TK_COMMA 261
#define TK_COMPACT 262
#define TK_CONCAT 263
#define TK_CONFLICT 264
#define TK_COPY 265
#define TK_DEFERRED 266
#define TK_DELIMITERS 267
#define TK_DETACH 268
#define TK_DIVIDE 269
#define TK_DOT 270
#define TK_EACH 271
#define TK_END 272
#define TK_FAIL 273
#define TK_FILE 274
#define TK_FOR 275
#define TK_GLOB 276
#define TK_ID 277
#define TK_IMMEDIATE 278
#define TK_IMPORT 279
#define TK_INITIALLY 280
#define TK_INSTEAD 281
#define TK_ISNULL 282
#define TK_KEY 283
#define TK_NK_BITNOT 284
#define TK_NK_SEMI 285
#define TK_NOTNULL 286
#define TK_OF 287
#define TK_PLUS 288
#define TK_PRIVILEGE 289
#define TK_RAISE 290
#define TK_REPLACE 291
#define TK_RESTRICT 292
#define TK_ROW 293
#define TK_SEMI 294
#define TK_STAR 295
#define TK_STATEMENT 296
#define TK_STRING 297
#define TK_TIMES 298
#define TK_UPDATE 299
#define TK_VALUES 300
#define TK_VARIABLE 301
#define TK_VIEW 302
#define TK_VNODES 303
#define TK_WAL 304
#define TK_SST_TRIGGER 87
#define TK_TABLE_PREFIX 88
#define TK_TABLE_SUFFIX 89
#define TK_NK_COLON 90
#define TK_TABLE 91
#define TK_NK_LP 92
#define TK_NK_RP 93
#define TK_STABLE 94
#define TK_ADD 95
#define TK_COLUMN 96
#define TK_MODIFY 97
#define TK_RENAME 98
#define TK_TAG 99
#define TK_SET 100
#define TK_NK_EQ 101
#define TK_USING 102
#define TK_TAGS 103
#define TK_COMMENT 104
#define TK_BOOL 105
#define TK_TINYINT 106
#define TK_SMALLINT 107
#define TK_INT 108
#define TK_INTEGER 109
#define TK_BIGINT 110
#define TK_FLOAT 111
#define TK_DOUBLE 112
#define TK_BINARY 113
#define TK_TIMESTAMP 114
#define TK_NCHAR 115
#define TK_UNSIGNED 116
#define TK_JSON 117
#define TK_VARCHAR 118
#define TK_MEDIUMBLOB 119
#define TK_BLOB 120
#define TK_VARBINARY 121
#define TK_DECIMAL 122
#define TK_MAX_DELAY 123
#define TK_WATERMARK 124
#define TK_ROLLUP 125
#define TK_TTL 126
#define TK_SMA 127
#define TK_FIRST 128
#define TK_LAST 129
#define TK_SHOW 130
#define TK_DATABASES 131
#define TK_TABLES 132
#define TK_STABLES 133
#define TK_MNODES 134
#define TK_MODULES 135
#define TK_QNODES 136
#define TK_FUNCTIONS 137
#define TK_INDEXES 138
#define TK_ACCOUNTS 139
#define TK_APPS 140
#define TK_CONNECTIONS 141
#define TK_LICENCES 142
#define TK_GRANTS 143
#define TK_QUERIES 144
#define TK_SCORES 145
#define TK_TOPICS 146
#define TK_VARIABLES 147
#define TK_BNODES 148
#define TK_SNODES 149
#define TK_CLUSTER 150
#define TK_TRANSACTIONS 151
#define TK_DISTRIBUTED 152
#define TK_CONSUMERS 153
#define TK_SUBSCRIPTIONS 154
#define TK_VNODES 155
#define TK_LIKE 156
#define TK_INDEX 157
#define TK_FUNCTION 158
#define TK_INTERVAL 159
#define TK_TOPIC 160
#define TK_AS 161
#define TK_WITH 162
#define TK_META 163
#define TK_CONSUMER 164
#define TK_GROUP 165
#define TK_DESC 166
#define TK_DESCRIBE 167
#define TK_RESET 168
#define TK_QUERY 169
#define TK_CACHE 170
#define TK_EXPLAIN 171
#define TK_ANALYZE 172
#define TK_VERBOSE 173
#define TK_NK_BOOL 174
#define TK_RATIO 175
#define TK_NK_FLOAT 176
#define TK_OUTPUTTYPE 177
#define TK_AGGREGATE 178
#define TK_BUFSIZE 179
#define TK_STREAM 180
#define TK_INTO 181
#define TK_TRIGGER 182
#define TK_AT_ONCE 183
#define TK_WINDOW_CLOSE 184
#define TK_IGNORE 185
#define TK_EXPIRED 186
#define TK_KILL 187
#define TK_CONNECTION 188
#define TK_TRANSACTION 189
#define TK_BALANCE 190
#define TK_VGROUP 191
#define TK_MERGE 192
#define TK_REDISTRIBUTE 193
#define TK_SPLIT 194
#define TK_DELETE 195
#define TK_INSERT 196
#define TK_NULL 197
#define TK_NK_QUESTION 198
#define TK_NK_ARROW 199
#define TK_ROWTS 200
#define TK_TBNAME 201
#define TK_QSTART 202
#define TK_QEND 203
#define TK_QDURATION 204
#define TK_WSTART 205
#define TK_WEND 206
#define TK_WDURATION 207
#define TK_CAST 208
#define TK_NOW 209
#define TK_TODAY 210
#define TK_TIMEZONE 211
#define TK_CLIENT_VERSION 212
#define TK_SERVER_VERSION 213
#define TK_SERVER_STATUS 214
#define TK_CURRENT_USER 215
#define TK_COUNT 216
#define TK_LAST_ROW 217
#define TK_BETWEEN 218
#define TK_IS 219
#define TK_NK_LT 220
#define TK_NK_GT 221
#define TK_NK_LE 222
#define TK_NK_GE 223
#define TK_NK_NE 224
#define TK_MATCH 225
#define TK_NMATCH 226
#define TK_CONTAINS 227
#define TK_IN 228
#define TK_JOIN 229
#define TK_INNER 230
#define TK_SELECT 231
#define TK_DISTINCT 232
#define TK_WHERE 233
#define TK_PARTITION 234
#define TK_BY 235
#define TK_SESSION 236
#define TK_STATE_WINDOW 237
#define TK_SLIDING 238
#define TK_FILL 239
#define TK_VALUE 240
#define TK_NONE 241
#define TK_PREV 242
#define TK_LINEAR 243
#define TK_NEXT 244
#define TK_HAVING 245
#define TK_RANGE 246
#define TK_EVERY 247
#define TK_ORDER 248
#define TK_SLIMIT 249
#define TK_SOFFSET 250
#define TK_LIMIT 251
#define TK_OFFSET 252
#define TK_ASC 253
#define TK_NULLS 254
#define TK_ABORT 255
#define TK_AFTER 256
#define TK_ATTACH 257
#define TK_BEFORE 258
#define TK_BEGIN 259
#define TK_BITAND 260
#define TK_BITNOT 261
#define TK_BITOR 262
#define TK_BLOCKS 263
#define TK_CHANGE 264
#define TK_COMMA 265
#define TK_COMPACT 266
#define TK_CONCAT 267
#define TK_CONFLICT 268
#define TK_COPY 269
#define TK_DEFERRED 270
#define TK_DELIMITERS 271
#define TK_DETACH 272
#define TK_DIVIDE 273
#define TK_DOT 274
#define TK_EACH 275
#define TK_END 276
#define TK_FAIL 277
#define TK_FILE 278
#define TK_FOR 279
#define TK_GLOB 280
#define TK_ID 281
#define TK_IMMEDIATE 282
#define TK_IMPORT 283
#define TK_INITIALLY 284
#define TK_INSTEAD 285
#define TK_ISNULL 286
#define TK_KEY 287
#define TK_NK_BITNOT 288
#define TK_NK_SEMI 289
#define TK_NOTNULL 290
#define TK_OF 291
#define TK_PLUS 292
#define TK_PRIVILEGE 293
#define TK_RAISE 294
#define TK_REPLACE 295
#define TK_RESTRICT 296
#define TK_ROW 297
#define TK_SEMI 298
#define TK_STAR 299
#define TK_STATEMENT 300
#define TK_STRING 301
#define TK_TIMES 302
#define TK_UPDATE 303
#define TK_VALUES 304
#define TK_VARIABLE 305
#define TK_VIEW 306
#define TK_WAL 307
#define TK_NK_SPACE 300
#define TK_NK_COMMENT 301

View File

@ -49,9 +49,6 @@ typedef struct {
#define varDataCopy(dst, v) memcpy((dst), (void *)(v), varDataTLen(v))
#define varDataLenByData(v) (*(VarDataLenT *)(((char *)(v)) - VARSTR_HEADER_SIZE))
#define varDataSetLen(v, _len) (((VarDataLenT *)(v))[0] = (VarDataLenT)(_len))
#define IS_VAR_DATA_TYPE(t) \
(((t) == TSDB_DATA_TYPE_VARCHAR) || ((t) == TSDB_DATA_TYPE_NCHAR) || ((t) == TSDB_DATA_TYPE_JSON))
#define IS_STR_DATA_TYPE(t) (((t) == TSDB_DATA_TYPE_VARCHAR) || ((t) == TSDB_DATA_TYPE_NCHAR))
#define varDataNetLen(v) (htons(((VarDataLenT *)(v))[0]))
#define varDataNetTLen(v) (sizeof(VarDataLenT) + varDataNetLen(v))
@ -268,11 +265,16 @@ typedef struct {
#define IS_UNSIGNED_NUMERIC_TYPE(_t) ((_t) >= TSDB_DATA_TYPE_UTINYINT && (_t) <= TSDB_DATA_TYPE_UBIGINT)
#define IS_FLOAT_TYPE(_t) ((_t) == TSDB_DATA_TYPE_FLOAT || (_t) == TSDB_DATA_TYPE_DOUBLE)
#define IS_INTEGER_TYPE(_t) ((IS_SIGNED_NUMERIC_TYPE(_t)) || (IS_UNSIGNED_NUMERIC_TYPE(_t)))
#define IS_TIMESTAMP_TYPE(_t) ((_t) == TSDB_DATA_TYPE_TIMESTAMP)
#define IS_NUMERIC_TYPE(_t) ((IS_SIGNED_NUMERIC_TYPE(_t)) || (IS_UNSIGNED_NUMERIC_TYPE(_t)) || (IS_FLOAT_TYPE(_t)))
#define IS_MATHABLE_TYPE(_t) \
(IS_NUMERIC_TYPE(_t) || (_t) == (TSDB_DATA_TYPE_BOOL) || (_t) == (TSDB_DATA_TYPE_TIMESTAMP))
#define IS_VAR_DATA_TYPE(t) \
(((t) == TSDB_DATA_TYPE_VARCHAR) || ((t) == TSDB_DATA_TYPE_NCHAR) || ((t) == TSDB_DATA_TYPE_JSON))
#define IS_STR_DATA_TYPE(t) (((t) == TSDB_DATA_TYPE_VARCHAR) || ((t) == TSDB_DATA_TYPE_NCHAR))
#define IS_VALID_TINYINT(_t) ((_t) >= INT8_MIN && (_t) <= INT8_MAX)
#define IS_VALID_SMALLINT(_t) ((_t) >= INT16_MIN && (_t) <= INT16_MAX)
#define IS_VALID_INT(_t) ((_t) >= INT32_MIN && (_t) <= INT32_MAX)
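A few hedged sanity checks for the reorganized macros (within the source tree; `ttypes.h` is the assumed home of these definitions):

```c
#include <assert.h>
#include "ttypes.h"  // assumed header providing TSDB_DATA_TYPE_* and the macros

static void type_macro_examples(void) {
  assert(IS_TIMESTAMP_TYPE(TSDB_DATA_TYPE_TIMESTAMP));  // new macro
  assert(IS_MATHABLE_TYPE(TSDB_DATA_TYPE_TIMESTAMP));   // timestamps join numeric/bool
  assert(IS_VAR_DATA_TYPE(TSDB_DATA_TYPE_JSON));        // JSON is variable-length...
  assert(!IS_STR_DATA_TYPE(TSDB_DATA_TYPE_JSON));       // ...but not a string type
  assert(IS_VALID_TINYINT(-128) && !IS_VALID_TINYINT(200));
}
```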

View File

@ -176,7 +176,8 @@ int32_t fmGetFuncInfo(SFunctionNode* pFunc, char* pMsg, int32_t msgLen);
EFuncReturnRows fmGetFuncReturnRows(SFunctionNode* pFunc);
bool fmIsBuiltinFunc(const char* pFunc);
bool fmIsBuiltinFunc(const char* pFunc);
EFunctionType fmGetFuncType(const char* pFunc);
bool fmIsAggFunc(int32_t funcId);
bool fmIsScalarFunc(int32_t funcId);

View File

@ -78,6 +78,12 @@ typedef struct SDatabaseOptions {
int32_t walRetentionSize;
int32_t walRollPeriod;
int32_t walSegmentSize;
bool walRetentionPeriodIsSet;
bool walRetentionSizeIsSet;
bool walRollPeriodIsSet;
int32_t sstTrigger;
int32_t tablePrefix;
int32_t tableSuffix;
} SDatabaseOptions;
typedef struct SCreateDatabaseStmt {
@ -268,6 +274,12 @@ typedef struct SShowDnodeVariablesStmt {
SNode* pDnodeId;
} SShowDnodeVariablesStmt;
typedef struct SShowVnodesStmt {
ENodeType type;
SNode* pDnodeId;
SNode* pDnodeEndpoint;
} SShowVnodesStmt;
typedef enum EIndexType { INDEX_TYPE_SMA = 1, INDEX_TYPE_FULLTEXT } EIndexType;
typedef struct SIndexOptions {

View File

@ -183,12 +183,12 @@ typedef enum ENodeType {
QUERY_NODE_SHOW_DNODE_VARIABLES_STMT,
QUERY_NODE_SHOW_TRANSACTIONS_STMT,
QUERY_NODE_SHOW_SUBSCRIPTIONS_STMT,
QUERY_NODE_SHOW_VNODES_STMT,
QUERY_NODE_SHOW_CREATE_DATABASE_STMT,
QUERY_NODE_SHOW_CREATE_TABLE_STMT,
QUERY_NODE_SHOW_CREATE_STABLE_STMT,
QUERY_NODE_SHOW_TABLE_DISTRIBUTED_STMT,
QUERY_NODE_SHOW_LOCAL_VARIABLES_STMT,
QUERY_NODE_SHOW_VNODES_STMT,
QUERY_NODE_SHOW_SCORES_STMT,
QUERY_NODE_KILL_CONNECTION_STMT,
QUERY_NODE_KILL_QUERY_STMT,
@ -244,6 +244,7 @@ typedef enum ENodeType {
QUERY_NODE_PHYSICAL_PLAN_MERGE_STATE,
QUERY_NODE_PHYSICAL_PLAN_STREAM_STATE,
QUERY_NODE_PHYSICAL_PLAN_PARTITION,
QUERY_NODE_PHYSICAL_PLAN_STREAM_PARTITION,
QUERY_NODE_PHYSICAL_PLAN_INDEF_ROWS_FUNC,
QUERY_NODE_PHYSICAL_PLAN_INTERP_FUNC,
QUERY_NODE_PHYSICAL_PLAN_DISPATCH,

View File

@ -488,6 +488,8 @@ typedef struct SPartitionPhysiNode {
SNodeList* pTargets;
} SPartitionPhysiNode;
typedef SPartitionPhysiNode SStreamPartitionPhysiNode;
typedef struct SDataSinkNode {
ENodeType type;
SDataBlockDescNode* pInputDataBlockDesc;

View File

@ -56,6 +56,7 @@ void taosRemoveDir(const char *dirname);
bool taosDirExist(const char *dirname);
int32_t taosMkDir(const char *dirname);
int32_t taosMulMkDir(const char *dirname);
int32_t taosMulModeMkDir(const char *dirname, int mode);
void taosRemoveOldFiles(const char *dirname, int32_t keepDays);
int32_t taosExpandDir(const char *dirname, char *outname, int32_t maxlen);
int32_t taosRealPath(char *dirname, char *realPath, int32_t maxlen);
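The new `taosMulModeMkDir` pairs `taosMulMkDir`'s recursive behavior with an explicit permission mode; a small usage sketch (semantics inferred from the signature, path is illustrative):

```c
#include "osDir.h"  // assumed header for these declarations

// Recursively create a data directory with 0755 permissions.
static int32_t ensure_backup_dir(void) {
  return taosMulModeMkDir("/var/lib/taos/backup", 0755);  // 0 on success
}
```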

View File

@ -616,6 +616,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_RSMA_FETCH_MSG_MSSED_UP TAOS_DEF_ERROR_CODE(0, 0x3155)
#define TSDB_CODE_RSMA_EMPTY_INFO TAOS_DEF_ERROR_CODE(0, 0x3156)
#define TSDB_CODE_RSMA_INVALID_SCHEMA TAOS_DEF_ERROR_CODE(0, 0x3157)
#define TSDB_CODE_RSMA_REGEX_MATCH TAOS_DEF_ERROR_CODE(0, 0x3158)
//index
#define TSDB_CODE_INDEX_REBUILDING TAOS_DEF_ERROR_CODE(0, 0x3200)

View File

@ -359,15 +359,27 @@ typedef enum ELogicConditionType {
#define TSDB_DB_SCHEMALESS_ON 1
#define TSDB_DB_SCHEMALESS_OFF 0
#define TSDB_DEFAULT_DB_SCHEMALESS TSDB_DB_SCHEMALESS_OFF
#define TSDB_MIN_SST_TRIGGER 1
#define TSDB_MAX_SST_TRIGGER 128
#define TSDB_DEFAULT_SST_TRIGGER 8
#define TSDB_MIN_HASH_PREFIX 0
#define TSDB_MAX_HASH_PREFIX 128
#define TSDB_DEFAULT_HASH_PREFIX 0
#define TSDB_MIN_HASH_SUFFIX 0
#define TSDB_MAX_HASH_SUFFIX 128
#define TSDB_DEFAULT_HASH_SUFFIX 0
#define TSDB_DB_MIN_WAL_RETENTION_PERIOD -1
#define TSDB_DEFAULT_DB_WAL_RETENTION_PERIOD (24 * 60 * 60 * 4)
#define TSDB_DB_MIN_WAL_RETENTION_SIZE -1
#define TSDB_DEFAULT_DB_WAL_RETENTION_SIZE -1
#define TSDB_DB_MIN_WAL_ROLL_PERIOD 0
#define TSDB_DEFAULT_DB_WAL_ROLL_PERIOD (24 * 60 * 60 * 1)
#define TSDB_DB_MIN_WAL_SEGMENT_SIZE 0
#define TSDB_DEFAULT_DB_WAL_SEGMENT_SIZE 0
#define TSDB_DB_MIN_WAL_RETENTION_PERIOD -1
#define TSDB_REP_DEF_DB_WAL_RET_PERIOD 0
#define TSDB_REPS_DEF_DB_WAL_RET_PERIOD (24 * 60 * 60 * 4)
#define TSDB_DB_MIN_WAL_RETENTION_SIZE -1
#define TSDB_REP_DEF_DB_WAL_RET_SIZE 0
#define TSDB_REPS_DEF_DB_WAL_RET_SIZE -1
#define TSDB_DB_MIN_WAL_ROLL_PERIOD 0
#define TSDB_REP_DEF_DB_WAL_ROLL_PERIOD 0
#define TSDB_REPS_DEF_DB_WAL_ROLL_PERIOD (24 * 60 * 60 * 1)
#define TSDB_DB_MIN_WAL_SEGMENT_SIZE 0
#define TSDB_DEFAULT_DB_WAL_SEGMENT_SIZE 0
#define TSDB_MIN_ROLLUP_MAX_DELAY 1 // unit millisecond
#define TSDB_MAX_ROLLUP_MAX_DELAY (15 * 60 * 1000)
@ -386,7 +398,7 @@ typedef enum ELogicConditionType {
#define TSDB_DEFAULT_EXPLAIN_VERBOSE false
#define TSDB_EXPLAIN_RESULT_ROW_SIZE (16*1024)
#define TSDB_EXPLAIN_RESULT_ROW_SIZE (16 * 1024)
#define TSDB_EXPLAIN_RESULT_COLUMN_NAME "QUERY_PLAN"
#define TSDB_MAX_FIELD_LEN 16384

View File

@ -264,12 +264,14 @@ static FORCE_INLINE int32_t tEncodeDouble(SEncoder* pCoder, double val) {
static FORCE_INLINE int32_t tEncodeBinary(SEncoder* pCoder, const uint8_t* val, uint32_t len) {
if (tEncodeU32v(pCoder, len) < 0) return -1;
if (pCoder->data) {
if (TD_CODER_CHECK_CAPACITY_FAILED(pCoder, len)) return -1;
memcpy(TD_CODER_CURRENT(pCoder), val, len);
}
if (len) {
if (pCoder->data) {
if (TD_CODER_CHECK_CAPACITY_FAILED(pCoder, len)) return -1;
memcpy(TD_CODER_CURRENT(pCoder), val, len);
}
TD_CODER_MOVE_POS(pCoder, len);
TD_CODER_MOVE_POS(pCoder, len);
}
return 0;
}
@ -414,14 +416,18 @@ static int32_t tDecodeCStrTo(SDecoder* pCoder, char* val) {
static FORCE_INLINE int32_t tDecodeBinaryAlloc(SDecoder* pCoder, void** val, uint64_t* len) {
uint64_t length = 0;
if (tDecodeU64v(pCoder, &length) < 0) return -1;
if (len) *len = length;
if (length) {
if (len) *len = length;
if (TD_CODER_CHECK_CAPACITY_FAILED(pCoder, length)) return -1;
*val = taosMemoryMalloc(length);
if (*val == NULL) return -1;
memcpy(*val, TD_CODER_CURRENT(pCoder), length);
if (TD_CODER_CHECK_CAPACITY_FAILED(pCoder, length)) return -1;
*val = taosMemoryMalloc(length);
if (*val == NULL) return -1;
memcpy(*val, TD_CODER_CURRENT(pCoder), length);
TD_CODER_MOVE_POS(pCoder, length);
TD_CODER_MOVE_POS(pCoder, length);
} else {
*val = NULL;
}
return 0;
}
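A round-trip sketch for the zero-length-aware coders above (encoder/decoder setup abbreviated; after this change an empty value encodes only its length prefix and decodes to NULL):

```c
static int32_t binary_roundtrip(SEncoder *pEnc, SDecoder *pDec) {
  const uint8_t payload[] = {1, 2, 3};
  if (tEncodeBinary(pEnc, payload, 0) < 0) return -1;               // length prefix only
  if (tEncodeBinary(pEnc, payload, sizeof(payload)) < 0) return -1;

  void    *val = NULL;
  uint64_t len = 0;
  if (tDecodeBinaryAlloc(pDec, &val, &len) < 0) return -1;
  // val == NULL and len == 0 for the empty value (new behavior)
  if (tDecodeBinaryAlloc(pDec, &val, &len) < 0) return -1;
  taosMemoryFree(val);  // second decode allocates a 3-byte buffer the caller owns
  return 0;
}
```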

78
include/util/trbtree.h Normal file
View File

@ -0,0 +1,78 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef _TD_UTIL_RBTREE_H_
#define _TD_UTIL_RBTREE_H_
#include "os.h"
#ifdef __cplusplus
extern "C" {
#endif
typedef struct SRBTree SRBTree;
typedef struct SRBTreeNode SRBTreeNode;
typedef struct SRBTreeIter SRBTreeIter;
typedef int32_t (*tRBTreeCmprFn)(const void *, const void *);
// SRBTree =============================================
#define tRBTreeMin(T) ((T)->min == ((T)->NIL) ? NULL : (T)->min)
#define tRBTreeMax(T) ((T)->max == ((T)->NIL) ? NULL : (T)->max)
void tRBTreeCreate(SRBTree *pTree, tRBTreeCmprFn cmprFn);
SRBTreeNode *tRBTreePut(SRBTree *pTree, SRBTreeNode *z);
void tRBTreeDrop(SRBTree *pTree, SRBTreeNode *z);
SRBTreeNode *tRBTreeDropByKey(SRBTree *pTree, void *pKey);
SRBTreeNode *tRBTreeGet(SRBTree *pTree, void *pKey);
// SRBTreeIter =============================================
#define tRBTreeIterCreate(tree, ascend) \
(SRBTreeIter) { .asc = (ascend), .pTree = (tree), .pNode = (ascend) ? (tree)->min : (tree)->max }
SRBTreeNode *tRBTreeIterNext(SRBTreeIter *pIter);
// STRUCT =============================================
typedef enum { RED, BLACK } ECOLOR;
struct SRBTreeNode {
ECOLOR color;
SRBTreeNode *parent;
SRBTreeNode *left;
SRBTreeNode *right;
};
#define RBTREE_NODE_PAYLOAD(N) ((const void *)&(N)[1])
struct SRBTree {
tRBTreeCmprFn cmprFn;
int64_t n;
SRBTreeNode *root;
SRBTreeNode *min;
SRBTreeNode *max;
SRBTreeNode *NIL;
SRBTreeNode NILNODE;
};
struct SRBTreeIter {
int8_t asc;
SRBTree *pTree;
SRBTreeNode *pNode;
};
#ifdef __cplusplus
}
#endif
#endif /*_TD_UTIL_RBTREE_H_*/
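A usage sketch for this new intrusive red-black tree. It assumes, per the `RBTREE_NODE_PAYLOAD` macro, that the comparator receives pointers to the payload stored immediately after each `SRBTreeNode`:

```c
#include <stdio.h>
#include "trbtree.h"

typedef struct {
  SRBTreeNode node;  // node first; the key payload follows it in memory
  int64_t     key;   // RBTREE_NODE_PAYLOAD(&node) points here
} SIntNode;

static int32_t intCmprFn(const void *p1, const void *p2) {
  int64_t k1 = *(const int64_t *)p1, k2 = *(const int64_t *)p2;
  return (k1 < k2) ? -1 : (k1 > k2) ? 1 : 0;
}

static void rbtree_example(void) {
  SRBTree tree;
  tRBTreeCreate(&tree, intCmprFn);

  SIntNode nodes[3] = {{.key = 3}, {.key = 1}, {.key = 2}};
  for (int i = 0; i < 3; i++) tRBTreePut(&tree, &nodes[i].node);

  SRBTreeIter  iter = tRBTreeIterCreate(&tree, 1 /* ascending */);
  SRBTreeNode *p;
  while ((p = tRBTreeIterNext(&iter)) != NULL) {
    printf("%lld\n", (long long)((SIntNode *)p)->key);  // 1, 2, 3
  }
}
```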

View File

@ -20,6 +20,7 @@
#include "tcrc32c.h"
#include "tdef.h"
#include "tmd5.h"
#include "thash.h"
#ifdef __cplusplus
extern "C" {
@ -68,6 +69,8 @@ static FORCE_INLINE void taosEncryptPass_c(uint8_t *inBuf, size_t len, char *tar
memcpy(target, buf, TSDB_PASSWORD_LEN);
}
#define taosGetTbHashVal(tbname, tblen, method, prefix, suffix) MurmurHash3_32((tbname), (tblen))
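A short sketch of the new routing-hash macro; note that the newly included `thash.h` supplies `MurmurHash3_32`, and that the method/prefix/suffix parameters are accepted but not yet used by the macro body:

```c
#include <stdint.h>
#include <string.h>
#include "tutil.h"  // assumed header containing taosGetTbHashVal

static uint32_t table_hash_example(void) {
  const char *tbname = "d1001";  // hypothetical child table name
  // Extra parameters are placeholders for future prefix/suffix-aware hashing.
  return taosGetTbHashVal(tbname, (uint32_t)strlen(tbname), 1, 0, 0);
}
```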
#ifdef __cplusplus
}
#endif

View File

@ -64,6 +64,11 @@ pipeline {
defaultValue:'2.1.2',
description: 'This number of baseVersion is generally not modified. Now it is 3.0.0.1'
)
string (
name:'nasPassword',
defaultValue:'password',
description: 'the password of the NAS server which has installPackage-192.168.1.131'
)
}
environment{
WORK_DIR = '/var/lib/jenkins/workspace'
@ -111,17 +116,17 @@ pipeline {
sync_source("${BRANCH_NAME}")
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_DEB} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
bash testpackage.sh ${TD_SERVER_DEB} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
}
@ -134,17 +139,22 @@ pipeline {
sync_source("${BRANCH_NAME}")
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_DEB} ${version} ${BASE_TD_SERVER_DEB} ${baseVersion} server
bash testpackage.sh ${TD_SERVER_DEB} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_CLIENT_TAR} ${version} ${BASE_TD_CLIENT_TAR} ${baseVersion} client ${nasPassword}
python3 checkPackageRuning.py
'''
}
@ -157,17 +167,17 @@ pipeline {
sync_source("${BRANCH_NAME}")
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_RPM} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
bash testpackage.sh ${TD_SERVER_RPM} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
}
@ -179,19 +189,42 @@ pipeline {
timeout(time: 30, unit: 'MINUTES'){
sync_source("${BRANCH_NAME}")
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
python3 checkPackageRuning.py
rmtaos
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_TAR} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_RPM} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server
bash testpackage.sh ${TD_SERVER_LITE_TAR} ${version} ${BASE_TD_SERVER_LITE_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_RPM} ${version} ${BASE_TD_SERVER_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_CLIENT_LITE_TAR} ${version} ${BASE_TD_CLIENT_LITE_TAR} ${baseVersion} client ${nasPassword}
python3 checkPackageRuning.py
'''
}
}
}
stage('arm64') {
agent{label 'linux_arm64'}
steps {
timeout(time: 30, unit: 'MINUTES'){
sync_source("${BRANCH_NAME}")
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_SERVER_ARM_TAR} ${version} ${BASE_TD_SERVER_ARM_TAR} ${baseVersion} server ${nasPassword}
python3 checkPackageRuning.py
'''
sh '''
cd ${TDENGINE_ROOT_DIR}/packaging
bash testpackage.sh ${TD_CLIENT_ARM_TAR} ${version} ${BASE_TD_CLIENT_ARM_TAR} ${baseVersion} client ${nasPassword}
python3 checkPackageRuning.py
'''
}

View File

@ -45,6 +45,7 @@ mkdir -p ${pkg_dir}${install_home_path}/include
mkdir -p ${pkg_dir}${install_home_path}/script
cp ${compile_dir}/../packaging/cfg/taos.cfg ${pkg_dir}${install_home_path}/cfg
cp ${compile_dir}/../packaging/cfg/taosd.service ${pkg_dir}${install_home_path}/cfg
if [ -f "${compile_dir}/test/cfg/taosadapter.toml" ]; then
cp ${compile_dir}/test/cfg/taosadapter.toml ${pkg_dir}${install_home_path}/cfg || :
fi

13
packaging/debRpmAutoInstall.sh Executable file
View File

@ -0,0 +1,13 @@
#!/usr/bin/expect
set packgeName [lindex $argv 0]
set packageSuffix [lindex $argv 1]
set timeout 3
if { ${packageSuffix} == "deb" } {
spawn dpkg -i ${packgeName}
} elseif { ${packageSuffix} == "rpm"} {
spawn rpm -ivh ${packgeName}
}
expect "*one:"
send "\r"
expect "*skip:"
send "\r"

View File

@ -1,12 +1,24 @@
#!/bin/sh
#parameter
scriptDir=$(dirname $(readlink -f $0))
packgeName=$1
version=$2
originPackageName=$3
originversion=$4
testFile=$5
subFile="taos.tar.gz"
password=$6
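# A usage sketch of the positional parameters above; the Jenkins pipeline calls
# this script with the NAS password as the sixth argument (the package file
# names below are illustrative):
#   bash testpackage.sh TDengine-server-3.0.0.1-Linux-x64.tar.gz 3.0.0.1 \
#        TDengine-server-3.0.0.0-Linux-x64.tar.gz 3.0.0.0 server ${nasPassword}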
# Color setting
RED='\033[41;30m'
GREEN='\033[1;32m'
YELLOW='\033[1;33m'
BLUE='\033[1;34m'
GREEN_DARK='\033[0;32m'
YELLOW_DARK='\033[0;33m'
BLUE_DARK='\033[0;34m'
GREEN_UNDERLINE='\033[4;32m'
NC='\033[0m'
if [ ${testFile} = "server" ];then
tdPath="TDengine-server-${version}"
@ -23,111 +35,211 @@ elif [ ${testFile} = "tools" ];then
fi
function cmdInstall {
comd=$1
if command -v ${comd} ;then
echo "${comd} is already installed"
command=$1
if command -v ${command} ;then
echoColor YD "${command} is already installed"
else
if command -v apt ;then
apt-get install ${comd} -y
apt-get install ${command} -y
elif command -v yum ;then
yum -y install ${comd}
echo "you should install ${comd} manually"
yum -y install ${command}
echoColor YD "you should install ${command} manually"
fi
fi
}
function echoColor {
color=$1
command=$2
echo "Uninstall all components of TDeingne"
if command -v rmtaos ;then
echo "uninstall all components of TDeingne:rmtaos"
rmtaos
else
echo "os doesn't include TDengine "
if [ ${color} = 'Y' ];then
echo -e "${YELLOW}${command}${NC}"
elif [ ${color} = 'YD' ];then
echo -e "${YELLOW_DARK}${command}${NC}"
elif [ ${color} = 'R' ];then
echo -e "${RED}${command}${NC}"
elif [ ${color} = 'G' ];then
echo -e "${GREEN}${command}${NC}\r\n"
elif [ ${color} = 'B' ];then
echo -e "${BLUE}${command}${NC}"
elif [ ${color} = 'BD' ];then
echo -e "${BLUE_DARK}${command}${NC}"
fi
}
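# Usage sketch for echoColor (the first argument picks the color code; G also
# emits a trailing blank line because of the "\r\n" in its echo):
#   echoColor G  "===== section banner ====="
#   echoColor BD "mkdir -p /usr/local/src/packageTest"   # show a command before running it
#   echoColor R  "package layout changed"                # error message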
if command -v rmtaostools ;then
echo "uninstall all components of TDeingne:rmtaostools"
rmtaostools
else
echo "os doesn't include rmtaostools "
fi
echoColor G "===== install basesoft ====="
cmdInstall tree
cmdInstall wget
cmdInstall sshpass
echo "new workroom path"
echoColor G "===== Uninstall all components of TDeingne ====="
if command -v rmtaos ;then
echoColor YD "uninstall all components of TDeingne:rmtaos"
rmtaos
else
echoColor YD "os doesn't include TDengine"
fi
if command -v rmtaostools ;then
echoColor YD "uninstall all components of TDeingne:rmtaostools"
rmtaostools
else
echoColor YD "os doesn't include rmtaostools "
fi
echoColor G "===== new workroom path ====="
installPath="/usr/local/src/packageTest"
oriInstallPath="/usr/local/src/packageTest/3.1"
if [ ! -d ${installPath} ] ;then
echoColor BD "mkdir -p ${installPath}"
mkdir -p ${installPath}
else
echo "${installPath} already exists"
echoColor YD "${installPath} already exists"
fi
if [ -d ${installPath}/${tdPath} ] ;then
echoColor BD "rm -rf ${installPath}/${tdPath}/*"
rm -rf ${installPath}/${tdPath}/*
fi
if [ ! -d ${oriInstallPath} ] ;then
echoColor BD "mkdir -p ${oriInstallPath}"
mkdir -p ${oriInstallPath}
else
echo "${oriInstallPath} already exists"
echoColor YD "${oriInstallPath} already exists"
fi
echo "decompress installPackage"
if [ -d ${oriInstallPath}/${originTdpPath} ] ;then
echoColor BD "rm -rf ${oriInstallPath}/${originTdpPath}/*"
rm -rf ${oriInstallPath}/${originTdpPath}/*
fi
echoColor G "===== download installPackage ====="
# cd ${installPath}
# wget https://www.taosdata.com/assets-download/3.0/${packgeName}
# cd ${oriInstallPath}
# wget https://www.taosdata.com/assets-download/3.0/${originPackageName}
cd ${installPath}
wget https://www.taosdata.com/assets-download/3.0/${packgeName}
cd ${oriInstallPath}
wget https://www.taosdata.com/assets-download/3.0/${originPackageName}
cp -r ${scriptDir}/debRpmAutoInstall.sh .
if [ ! -f ${packgeName} ];then
echoColor BD "sshpass -p ${password} scp -oStrictHostKeyChecking=no 192.168.1.131:/nas/TDengine3/v${version}/community/${packgeName} ."
sshpass -p ${password} scp -oStrictHostKeyChecking=no 192.168.1.131:/nas/TDengine3/v${version}/community/${packgeName} .
fi
packageSuffix=$(echo ${packgeName} | awk -F '.' '{print $NF}')
if [ ! -f debRpmAutoInstall.sh ];then
echo '#!/usr/bin/expect ' > debRpmAutoInstall.sh
echo 'set packgeName [lindex $argv 0]' >> debRpmAutoInstall.sh
echo 'set packageSuffix [lindex $argv 1]' >> debRpmAutoInstall.sh
echo 'set timeout 3 ' >> debRpmAutoInstall.sh
echo 'if { ${packageSuffix} == "deb" } {' >> debRpmAutoInstall.sh
echo ' spawn dpkg -i ${packgeName} ' >> debRpmAutoInstall.sh
echo '} elseif { ${packageSuffix} == "rpm"} {' >> debRpmAutoInstall.sh
echo ' spawn rpm -ivh ${packgeName}' >> debRpmAutoInstall.sh
echo '}' >> debRpmAutoInstall.sh
echo 'expect "*one:"' >> debRpmAutoInstall.sh
echo 'send "\r"' >> debRpmAutoInstall.sh
echo 'expect "*skip:"' >> debRpmAutoInstall.sh
echo 'send "\r" ' >> debRpmAutoInstall.sh
fi
echoColor G "===== instal Package ====="
if [[ ${packgeName} =~ "deb" ]];then
cd ${installPath}
echo "dpkg ${packgeName}" && dpkg -i ${packgeName}
dpkg -r taostools
dpkg -r tdengine
if [[ ${packgeName} =~ "TDengine" ]];then
echoColor BD "./debRpmAutoInstall.sh ${packgeName} ${packageSuffix}" && chmod 755 debRpmAutoInstall.sh && ./debRpmAutoInstall.sh ${packgeName} ${packageSuffix}
else
echoColor BD "dpkg -i ${packgeName}" && dpkg -i ${packgeName}
fi
elif [[ ${packgeName} =~ "rpm" ]];then
cd ${installPath}
echo "rpm ${packgeName}" && rpm -ivh ${packgeName}
sudo rpm -e tdengine
sudo rpm -e taostools
if [[ ${packgeName} =~ "TDengine" ]];then
echoColor BD "./debRpmAutoInstall.sh ${packgeName} ${packageSuffix}" && chmod 755 debRpmAutoInstall.sh && ./debRpmAutoInstall.sh ${packgeName} ${packageSuffix}
else
echoColor BD "rpm -ivh ${packgeName}" && rpm -ivh ${packgeName}
fi
elif [[ ${packgeName} =~ "tar" ]];then
echo "tar ${packgeName}" && tar -xvf ${packgeName}
cd ${oriInstallPath}
echo "tar -xvf ${originPackageName}" && tar -xvf ${originPackageName}
echoColor G "===== check installPackage File of tar ====="
cd ${oriInstallPath}
if [ ! -f ${originPackageName} ];then
echoColor YD "download base installPackage"
echoColor BD "sshpass -p ${password} scp -oStrictHostKeyChecking=no 192.168.1.131:/nas/TDengine3/v${originversion}/community/${originPackageName} ."
sshpass -p ${password} scp -oStrictHostKeyChecking=no 192.168.1.131:/nas/TDengine3/v${originversion}/community/${originPackageName} .
fi
echoColor YD "unzip the base installation package"
echoColor BD "tar -xf ${originPackageName}" && tar -xf ${originPackageName}
cd ${installPath}
echo "tar -xvf ${packgeName}" && tar -xvf ${packgeName}
echoColor YD "unzip the new installation package"
echoColor BD "tar -xf ${packgeName}" && tar -xf ${packgeName}
if [ ${testFile} != "tools" ] ;then
cd ${installPath}/${tdPath} && tar vxf ${subFile}
cd ${oriInstallPath}/${originTdpPath} && tar vxf ${subFile}
cd ${installPath}/${tdPath} && tar xf ${subFile}
cd ${oriInstallPath}/${originTdpPath} && tar xf ${subFile}
fi
echo "check installPackage File"
cd ${oriInstallPath}/${originTdpPath} && tree > ${installPath}/base_${originversion}_checkfile
cd ${installPath}/${tdPath} && tree > ${installPath}/now_${version}_checkfile
cd ${installPath}
tree ${oriInstallPath}/${originTdpPath} > ${originPackageName}_checkfile
tree ${installPath}/${tdPath} > ${packgeName}_checkfile
diff ${packgeName}_checkfile ${originPackageName}_checkfile > ${installPath}/diffFile.log
diff ${installPath}/base_${originversion}_checkfile ${installPath}/now_${version}_checkfile > ${installPath}/diffFile.log
diffNumbers=`cat ${installPath}/diffFile.log |wc -l `
if [ ${diffNumbers} != 0 ];then
echo "The number and names of files have changed from the previous installation package"
echo `cat ${installPath}/diffFile.log`
exit -1
fi
if [ ${diffNumbers} != 0 ];then
echoColor R "The number and names of files is different from the previous installation package"
echoColor Y `cat ${installPath}/diffFile.log`
exit -1
else
echoColor G "The number and names of files are the same as previous installation packages"
fi
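# The manifest check above boils down to: snapshot both unpacked trees with
# `tree`, diff the snapshots, and fail when any file was added, removed, or
# renamed. A minimal equivalent sketch:
#   tree baseDir > base.manifest && tree newDir > new.manifest
#   diff new.manifest base.manifest || { echo "package layout changed"; exit 1; }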
echoColor YD "===== install Package of tar ====="
cd ${installPath}/${tdPath}
if [ ${testFile} = "server" ];then
echoColor BD "bash ${installCmd} -e no "
bash ${installCmd} -e no
else
echoColor BD "bash ${installCmd} "
bash ${installCmd}
fi
if [[ ${packgeName} =~ "Lite" ]];then
cd ${installPath}
wget https://www.taosdata.com/assets-download/3.0/taosTools-2.1.2-Linux-x64.tar.gz
tar xvf taosTools-2.1.2-Linux-x64.tar.gz
cd taosTools-2.1.2 && bash install-taostools.sh
fi
fi
cd ${installPath}
if ([[ ${packgeName} =~ "Lite" ]] && [[ ${packgeName} =~ "tar" ]]) || [[ ${packgeName} =~ "client" ]] ;then
echoColor G "===== install taos-tools when package is lite or client ====="
cd ${installPath}
sshpass -p ${password} scp -oStrictHostKeyChecking=no 192.168.1.131:/nas/TDengine3/v${version}/community/taosTools-2.1.2-Linux-x64.tar.gz .
# wget https://www.taosdata.com/assets-download/3.0/taosTools-2.1.2-Linux-x64.tar.gz
tar xf taosTools-2.1.2-Linux-x64.tar.gz
cd taosTools-2.1.2 && bash install-taostools.sh
elif [[ ${packgeName} =~ "Lite" ]] && [[ ${packgeName} =~ "deb" ]] ;then
echoColor G "===== install taos-tools when package is lite or client ====="
cd ${installPath}
sshpass -p ${password} scp -oStrictHostKeyChecking=no 192.168.1.131:/nas/TDengine3/v${version}/community/taosTools-2.1.2-Linux-x64.tar.gz .
tar xf taosTools-2.1.2-Linux-x64.tar.gz
cd taosTools-2.1.2 && bash install-taostools.sh
elif [[ ${packgeName} =~ "Lite" ]] && [[ ${packgeName} =~ "rpm" ]] ;then
echoColor G "===== install taos-tools when package is lite or client ====="
cd ${installPath}
sshpass -p ${password} scp -oStrictHostKeyChecking=no 192.168.1.131:/nas/TDengine3/v${version}/community/taosTools-2.1.2-Linux-x64.tar.gz .
tar xf taosTools-2.1.2-Linux-x64.tar.gz
cd taosTools-2.1.2 && bash install-taostools.sh
fi

View File

@ -33,6 +33,7 @@ adapterName="taosadapter"
benchmarkName="taosBenchmark"
dumpName="taosdump"
demoName="taosdemo"
xname="taosx"
data_dir=${dataDir}
log_dir=${logDir}
@ -199,6 +200,7 @@ function install_bin() {
${csudo}rm -f ${bin_link_dir}/${demoName} || :
${csudo}rm -f ${bin_link_dir}/${benchmarkName} || :
${csudo}rm -f ${bin_link_dir}/${dumpName} || :
${csudo}rm -f ${bin_link_dir}/${xname} || :
${csudo}rm -f ${bin_link_dir}/set_core || :
${csudo}rm -f ${bin_link_dir}/TDinsight.sh || :
@ -212,6 +214,7 @@ function install_bin() {
[ -x ${install_main_dir}/bin/${benchmarkName} ] && ${csudo}ln -s ${install_main_dir}/bin/${benchmarkName} ${bin_link_dir}/${demoName} || :
[ -x ${install_main_dir}/bin/${benchmarkName} ] && ${csudo}ln -s ${install_main_dir}/bin/${benchmarkName} ${bin_link_dir}/${benchmarkName} || :
[ -x ${install_main_dir}/bin/${dumpName} ] && ${csudo}ln -s ${install_main_dir}/bin/${dumpName} ${bin_link_dir}/${dumpName} || :
[ -x ${install_main_dir}/bin/${xname} ] && ${csudo}ln -s ${install_main_dir}/bin/${xname} ${bin_link_dir}/${xname} || :
[ -x ${install_main_dir}/bin/TDinsight.sh ] && ${csudo}ln -s ${install_main_dir}/bin/TDinsight.sh ${bin_link_dir}/TDinsight.sh || :
[ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript} || :
[ -x ${install_main_dir}/bin/set_core.sh ] && ${csudo}ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
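# Every link line above follows the same guard pattern; reduced to its core:
#   [ -x ${install_main_dir}/bin/taosx ] && ${csudo}ln -s ${install_main_dir}/bin/taosx ${bin_link_dir}/taosx || :
# The leading test skips binaries that were not shipped, and the trailing
# `|| :` turns a failed link into a no-op so the installer does not abort.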

View File

@ -172,6 +172,7 @@ function install_bin() {
${csudo}rm -f ${bin_link_dir}/udfd || :
${csudo}rm -f ${bin_link_dir}/taosdemo || :
${csudo}rm -f ${bin_link_dir}/taosdump || :
${csudo}rm -f ${bin_link_dir}/taosx || :
if [ "$osType" != "Darwin" ]; then
${csudo}rm -f ${bin_link_dir}/perfMonitor || :
@ -184,6 +185,7 @@ function install_bin() {
[ -f ${binary_dir}/build/bin/taosdump ] && ${csudo}cp -r ${binary_dir}/build/bin/taosdump ${install_main_dir}/bin || :
[ -f ${binary_dir}/build/bin/taosadapter ] && ${csudo}cp -r ${binary_dir}/build/bin/taosadapter ${install_main_dir}/bin || :
[ -f ${binary_dir}/build/bin/udfd ] && ${csudo}cp -r ${binary_dir}/build/bin/udfd ${install_main_dir}/bin || :
[ -f ${binary_dir}/build/bin/taosx ] && ${csudo}cp -r ${binary_dir}/build/bin/taosx ${install_main_dir}/bin || :
${csudo}cp -r ${binary_dir}/build/bin/${serverName} ${install_main_dir}/bin || :
${csudo}cp -r ${script_dir}/taosd-dump-cfg.gdb ${install_main_dir}/bin || :
@ -199,6 +201,7 @@ function install_bin() {
[ -x ${install_main_dir}/bin/udfd ] && ${csudo}ln -s ${install_main_dir}/bin/udfd ${bin_link_dir}/udfd || :
[ -x ${install_main_dir}/bin/taosdump ] && ${csudo}ln -s ${install_main_dir}/bin/taosdump ${bin_link_dir}/taosdump || :
[ -x ${install_main_dir}/bin/taosdemo ] && ${csudo}ln -s ${install_main_dir}/bin/taosdemo ${bin_link_dir}/taosdemo || :
[ -x ${install_main_dir}/bin/taosx ] && ${csudo}ln -s ${install_main_dir}/bin/taosx ${bin_link_dir}/taosx || :
[ -x ${install_main_dir}/bin/perfMonitor ] && ${csudo}ln -s ${install_main_dir}/bin/perfMonitor ${bin_link_dir}/perfMonitor || :
[ -x ${install_main_dir}/set_core.sh ] && ${csudo}ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
[ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript} || :
@ -215,6 +218,7 @@ function install_bin() {
[ -x ${install_main_dir}/bin/udfd ] || [ -x ${install_main_2_dir}/bin/udfd ] && ${csudo}ln -s ${install_main_dir}/bin/udfd ${bin_link_dir}/udfd || ${csudo}ln -s ${install_main_2_dir}/bin/udfd || :
[ -x ${install_main_dir}/bin/taosdump ] || [ -x ${install_main_2_dir}/bin/taosdump ] && ${csudo}ln -s ${install_main_dir}/bin/taosdump ${bin_link_dir}/taosdump || ln -s ${install_main_2_dir}/bin/taosdump ${bin_link_dir}/taosdump || :
[ -x ${install_main_dir}/bin/taosdemo ] || [ -x ${install_main_2_dir}/bin/taosdemo ] && ${csudo}ln -s ${install_main_dir}/bin/taosdemo ${bin_link_dir}/taosdemo || ln -s ${install_main_2_dir}/bin/taosdemo ${bin_link_dir}/taosdemo || :
[ -x ${install_main_dir}/bin/taosx ] || [ -x ${install_main_2_dir}/bin/taosx ] && ${csudo}ln -s ${install_main_dir}/bin/taosx ${bin_link_dir}/taosx || ln -s ${install_main_2_dir}/bin/taosx ${bin_link_dir}/taosx || :
fi
}

Some files were not shown because too many files have changed in this diff.