Merge remote-tracking branch 'origin/3.0' into feat/TD-30268

@@ -416,7 +416,7 @@ pipeline {
                     echo "${WKDIR}/restore.sh -p ${BRANCH_NAME} -n ${BUILD_ID} -c {container name}"
                 }
                 catchError(buildResult: 'FAILURE', stageResult: 'FAILURE') {
-                    timeout(time: 150, unit: 'MINUTES'){
+                    timeout(time: 200, unit: 'MINUTES'){
                         pre_test()
                         script {
                             sh '''
@@ -454,7 +454,7 @@ pipeline {
                             cd ${WKC}/tests/parallel_test
                             export DEFAULT_RETRY_TIME=2
                             date
-                            ''' + timeout_cmd + ''' time ./run.sh -e -m /home/m.json -t cases.task -b ${BRANCH_NAME}_${BUILD_ID} -l ${WKDIR}/log -o 900 ''' + extra_param + '''
+                            ''' + timeout_cmd + ''' time ./run.sh -e -m /home/m.json -t cases.task -b ${BRANCH_NAME}_${BUILD_ID} -l ${WKDIR}/log -o 1200 ''' + extra_param + '''
                             '''
                         }
                     }

@@ -2,7 +2,7 @@
 IF (DEFINED VERNUMBER)
   SET(TD_VER_NUMBER ${VERNUMBER})
 ELSE ()
-  SET(TD_VER_NUMBER "3.3.2.0.alpha")
+  SET(TD_VER_NUMBER "3.3.3.0.alpha")
 ENDIF ()

 IF (DEFINED VERCOMPATIBLE)
@@ -27,7 +27,7 @@ docker pull tdengine/tdengine:3.0.1.4
 And then run the following command:

 ```shell
-docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
+docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6060:6043-6060 -p 6043-6060:6043-6060/udp tdengine/tdengine
 ```

 Note that TDengine Server 3.0 uses TCP port 6030. Port 6041 is used by taosAdapter for the REST API service. Ports 6043 through 6049 are used by taosAdapter for other connections. You can open these ports as needed.

@@ -36,7 +36,7 @@ If you need to persist data to a specific directory on your local machine, please
 ```shell
 docker run -d -v ~/data/taos/dnode/data:/var/lib/taos \
   -v ~/data/taos/dnode/log:/var/log/taos \
-  -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
+  -p 6030:6030 -p 6041:6041 -p 6043-6060:6043-6060 -p 6043-6060:6043-6060/udp tdengine/tdengine
 ```

 :::note

@@ -62,7 +62,7 @@ You can now access TDengine or run other Linux commands.

 Note: For information about installing docker, see the [official documentation](https://docs.docker.com/get-docker/).

-## Open the TDengine CLI
+## TDengine Command Line Interface

 On the container, run the following command to open the TDengine CLI:

@@ -73,6 +73,12 @@ taos>

 ```

+## TDengine Graphical User Interface
+
+Starting with TDengine 3.3.0.0, a new component called `taos-explorer` is included in the TDengine Docker image. You can use it to manage the databases, supertables, child tables, and data in your TDengine system. Some features are available only in TDengine Enterprise Edition; please contact the TDengine sales team if you need them.
+
+To use taos-explorer in the container, access the host port mapped from container port 6060 (taos-explorer listens on port 6060 by default inside the container). Assuming the host name is abc.com and the host port is 6060, you would visit `http://abc.com:6060`. On first use, register with your enterprise email; you can then log on with your username and password to the TDengine database management system.
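
As a quick sanity check that taos-explorer is reachable (the port range published in the `docker run` examples above already includes 6060), you can probe the mapped host port. This is only a sketch and assumes the container from the earlier examples is running locally:

```shell
# Probe the taos-explorer HTTP endpoint on the mapped host port (6060 by default)
curl -I http://localhost:6060
```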

 ## Test data insert performance

 After your TDengine Server is running normally, you can run the taosBenchmark utility to test its performance:

@@ -125,6 +131,7 @@ SELECT FIRST(ts), AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(

+In the query above, you are selecting the first timestamp (ts) in the interval; alternatively, you can select `\_wstart`, which returns the start of the time window. For more information about windowed queries, see [Time-Series Extensions](../../taos-sql/distinguished/).
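
As an illustrative sketch of the `\_wstart` variant, the query could be run non-interactively with the TDengine CLI's `-s` option; the `test` database and the `INTERVAL(10s)` window are assumed from the taosBenchmark example above:

```shell
taos -s "SELECT _wstart, AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(10s);"
```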

 ## Additional Information

 For more information about deploying TDengine in a Docker environment, see [Deploying TDengine with Docker](../../deployment/docker).

@@ -35,6 +35,10 @@ gcc version - 9.3.1 or above;

 ## Installation

+**Note**
+
+Since TDengine 3.0.6.0, we no longer provide a standalone taosTools package for download. However, all the tools previously included in the taosTools package can be found in the TDengine Server package.
+
 <Tabs>
 <TabItem label=".deb" value="debinst">

@@ -119,11 +123,18 @@ This installation method is supported only for Debian and Ubuntu.
 </TabItem>
 <TabItem label="Windows" value="windows">

-Note: TDengine only supports Windows Server 2016/2019 and Windows 10/11 on the Windows platform.
+**Note**
+
+- TDengine only supports Windows Server 2016/2019 and Windows 10/11 on the Windows platform.
+- Since TDengine 3.1.0.0, we only provide the client package for Windows. If you need to run the TDengine server on Windows, please contact the TDengine sales team to upgrade to TDengine Enterprise.
+- To run on Windows, the Microsoft Visual C++ Runtime library is required. If the Microsoft Visual C++ Runtime Library is missing on your platform, you can download and install it from [VC Runtime Library](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).

 Follow the steps below:

 1. Download the Windows installation package.
    <PkgListV3 type={3}/>
 2. Run the downloaded package to install TDengine.
-   Note: From version 3.0.1.7, only TDengine client pacakge can be downloaded for Windows platform. If you want to run TDengine servers on Windows, please contact our sales team to upgrade to TDengine Enterprise.

 </TabItem>
 <TabItem label="macOS" value="macos">

@@ -153,38 +164,26 @@ After the installation is complete, run the following command to start the TDengine service:

 ```bash
 systemctl start taosd
+systemctl start taosadapter
+systemctl start taoskeeper
+systemctl start taos-explorer
 ```

-Run the following command to confirm that TDengine is running normally:
-
-Output similar to the following indicates that TDengine is running normally:
-
-```
-Active: active (running)
-```
-
-Output similar to the following indicates that TDengine has not started successfully:
-
-```
-Active: inactive (dead)
-```
-
-After confirming that TDengine is running, run the `taos` command to access the TDengine CLI.
-
-The following `systemctl` commands can help you manage TDengine service:
-
-- Start TDengine Server: `systemctl start taosd`
-
-- Stop TDengine Server: `systemctl stop taosd`
-
-- Restart TDengine Server: `systemctl restart taosd`
-
-- Check TDengine Server status: `systemctl status taosd`
+Or you can run a script to start all the above services together:
+
+```bash
+start-all.sh
+```
+
+systemctl can also be used to stop or restart a specific service, or to check its status, as below using `taosd` as an example:
+
+```bash
+systemctl start taosd
+systemctl stop taosd
+systemctl restart taosd
+systemctl status taosd
+```

 :::info

 - The `systemctl` command requires _root_ privileges. If you are not logged in as the _root_ user, use the `sudo` command.
@@ -193,35 +192,38 @@ The following `systemctl` commands can help you manage TDengine service:

 :::

-## Command Line Interface (CLI)
-
-You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, you can execute `taos` in terminal.
-
 </TabItem>

 <TabItem label="Windows" value="windows">

 After the installation is complete, please run `sc start taosd` or run `C:\TDengine\taosd.exe` with administrator privileges to start TDengine Server. Please run `sc start taosadapter` or run `C:\TDengine\taosadapter.exe` with administrator privileges to start taosAdapter and provide the http/REST service.

-## Command Line Interface (CLI)
-
-You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, you can run `taos.exe` in the `C:\TDengine` directory of the Windows terminal to start the TDengine command line.
-
 </TabItem>

 <TabItem label="macOS" value="macos">

-After the installation is complete, double-click the /applications/TDengine to start the program, or run `launchctl start com.tdengine.taosd` to start TDengine Server.
-
-The following `launchctl` commands can help you manage TDengine service:
-
-- Start TDengine Server: `sudo launchctl start com.tdengine.taosd`
-
-- Stop TDengine Server: `sudo launchctl stop com.tdengine.taosd`
-
-- Check TDengine Server status: `sudo launchctl list | grep taosd`
-
-- Check TDengine Server status details: `launchctl print system/com.tdengine.taosd`
+After the installation is complete, double-click /applications/TDengine to start the program, or run the `sudo launchctl start` commands below to start the TDengine services.
+
+```bash
+sudo launchctl start com.tdengine.taosd
+sudo launchctl start com.tdengine.taosadapter
+sudo launchctl start com.tdengine.taoskeeper
+sudo launchctl start com.tdengine.taos-explorer
+```
+
+Or you can run a script to start all the above services together:
+
+```bash
+start-all.sh
+```
+
+The following `launchctl` commands can help you manage TDengine services, using the `taosd` service as an example:
+
+```bash
+sudo launchctl start com.tdengine.taosd
+sudo launchctl stop com.tdengine.taosd
+sudo launchctl list | grep taosd
+sudo launchctl print system/com.tdengine.taosd
+```

 :::info
 - Please use `sudo` to run `launchctl` to manage _com.tdengine.taosd_ with administrator privileges.

@@ -232,24 +234,20 @@ The following `launchctl` commands can help you manage TDengine service:

 :::

-## Command Line Interface (CLI)
-
-You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, you can execute `taos` in terminal.
-
 </TabItem>
 </Tabs>

-```bash
-taos
-```
-
-The TDengine CLI displays a welcome message and version information to indicate that its connection to the TDengine service was successful. If an error message is displayed, see the [FAQ](../../train-faq/faq) for troubleshooting information. At the following prompt, you can execute SQL commands.
+## TDengine Command Line Interface
+
+You can use the TDengine CLI to monitor your TDengine deployment and execute ad hoc queries. To open the CLI, execute `taos` (Linux/Mac) or `taos.exe` (Windows) in a terminal. The TDengine CLI prompt looks like this:
+
+```cmd
+taos>
+```

-For example, you can create and delete databases and tables and run all types of queries. Each SQL command must be end with a semicolon (;). For example:
+Using the TDengine CLI, you can create and delete databases and tables and run all types of queries. Each SQL command must end with a semicolon (;). For example:

 ```sql
 CREATE DATABASE demo;
@@ -269,6 +267,12 @@ Query OK, 2 row(s) in set (0.003128s)

 You can also monitor the deployment status, add and remove user accounts, and manage running instances. You can run the TDengine CLI on either machine. For more information, see [TDengine CLI](../../reference/taos-shell/).

+## TDengine Graphical User Interface
+
+Starting with TDengine 3.3.0.0, a new component called `taos-explorer` is included in the TDengine Docker image. You can use it to manage the databases, supertables, child tables, and data in your TDengine system. Some features are available only in TDengine Enterprise Edition; please contact the TDengine sales team if you need them.
+
+To use taos-explorer in the container, access the host port mapped from container port 6060 (taos-explorer listens on port 6060 by default). Assuming the host name is abc.com and the host port is 6060, you would visit `http://abc.com:6060`. On first use, register with your enterprise email; you can then log on with your username and password to the TDengine database management system.
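
Before opening the browser, you can verify that the service is up. This is only a sketch; the service name follows the `systemctl` examples earlier on this page:

```bash
# Check that the taos-explorer service is active
systemctl status taos-explorer
```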

 ## Test data insert performance

 After your TDengine Server is running normally, you can run the taosBenchmark utility to test its performance:
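
A minimal sketch of such a run is shown below; the `-y` flag (assumed to be available in your taosBenchmark version) skips the interactive confirmation prompt:

```bash
# Insert the default demo dataset and report insert throughput
taosBenchmark -y
```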

@@ -1,8 +1,8 @@
-```rust title="Native Connection/REST Connection"
+```rust title="Native Connection"
 {{#include docs/examples/rust/nativeexample/examples/connect.rs}}
 ```

 :::note
-For Rust client library, the connection depends on the feature being used. If "rest" feature is enabled, then only the implementation for "rest" is compiled and packaged.
+For the Rust client library, the connection depends on the enabled feature. If the "ws" feature is enabled, only the WebSocket implementation is compiled and packaged.

 :::

(binary image file added — 14 KiB)

@@ -26,17 +26,24 @@ Any application running on any platform can access TDengine through the REST API

 ## Establish Connection

-There are two ways for a client library to establish connections to TDengine:
+There are three ways for a client library to establish connections to TDengine:

-1. REST connection through the REST API provided by the taosAdapter component.
-2. Native connection through the TDengine client driver (taosc).
+1. Native connection through the TDengine client driver (taosc).
+2. REST connection through the REST API provided by the taosAdapter component.
+3. WebSocket connection provided by the taosAdapter component.

-For REST and native connections, client libraries provide similar APIs for performing operations and running SQL statements on your databases. The main difference is the method of establishing the connection, which is not visible to users.
+![TDengine connection types](connection-type-en.webp)
+
+For all of these connection types, client libraries provide similar APIs for performing operations and running SQL statements on your databases. The main difference is the method of establishing the connection, which is not visible to users.

 Key differences:

-3. The REST connection is more accessible with cross-platform support, however it results in a 30% performance downgrade.
-1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](../../client-libraries/cpp#parameter-binding-api), [Subscription](../../client-libraries/cpp#subscription-api), etc.
+1. For a native connection, the client driver taosc and the server TDengine version must be compatible.
+2. For a REST connection, users do not need to install the client driver taosc, providing the advantage of cross-platform ease of use. However, functions such as data subscription and binary data types are not available. Additionally, compared to native and WebSocket connections, a REST connection has the worst performance.
+3. For a WebSocket connection, users also do not need to install the client driver taosc.
+4. To connect to a cloud service instance, you need to use the REST connection or the WebSocket connection.
+
+Normally we recommend using the **WebSocket connection**.
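
As a quick illustration of the REST connection's accessibility, any HTTP client can run SQL against taosAdapter. This sketch assumes the default credentials (`root:taosdata`) and taosAdapter listening on its default port 6041:

```bash
# Run a SQL statement over the REST API provided by taosAdapter
curl -u root:taosdata -d "SHOW DATABASES" http://localhost:6041/rest/sql
```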

 ## Install Client Driver taosc

@@ -83,7 +90,7 @@ If `maven` is used to manage the projects, what needs to be done is only adding
 <dependency>
   <groupId>com.taosdata.jdbc</groupId>
   <artifactId>taos-jdbcdriver</artifactId>
-  <version>3.2.4</version>
+  <version>3.3.0</version>
 </dependency>
 ```

@@ -123,18 +130,18 @@ require github.com/taosdata/driver-go/v3 latest
 </TabItem>
 <TabItem label="Rust" value="rust">

-Just need to add `libtaos` dependency in `Cargo.toml`.
+Just add the `taos` dependency in `Cargo.toml`.

 ```toml title=Cargo.toml
 [dependencies]
-libtaos = { version = "0.4.2"}
+taos = { version = "*"}
 ```

 :::info
-Rust client library uses different features to distinguish the way to establish connection. To establish REST connection, please enable `rest` feature.
+The Rust client library uses different features to distinguish the way a connection is established. To establish a WebSocket connection, please enable the `ws` feature:

 ```toml
-libtaos = { version = "*", features = ["rest"] }
+taos = { version = "*", default-features = false, features = ["ws"] }
 ```

 :::


@@ -13,14 +13,23 @@ import GoInfluxLine from "../07-develop/03-insert-data/_go_line.mdx"
 import GoOpenTSDBTelnet from "../07-develop/03-insert-data/_go_opts_telnet.mdx"
 import GoOpenTSDBJson from "../07-develop/03-insert-data/_go_opts_json.mdx"
 import GoQuery from "../07-develop/04-query-data/_go.mdx"
+import RequestId from "./_request_id.mdx";

 `driver-go` is the official Go language client library for TDengine. It implements the [database/sql](https://golang.org/pkg/database/sql/) package, the generic Go language interface to SQL databases. Go developers can use it to develop applications that access TDengine cluster data.

-`driver-go` provides two ways to establish connections. One is **native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and bind interface. The other is the **REST connection**, which connects to TDengine instances via the REST interface provided by taosAdapter. The set of features implemented by the REST connection differs slightly from those implemented by the native connection.
-
-This article describes how to install `driver-go` and connect to TDengine clusters and perform basic operations such as data query and data writing through `driver-go`.
-
-The source code of `driver-go` is hosted on [GitHub](https://github.com/taosdata/driver-go).
+## Connection types
+
+`driver-go` provides three connection types:
+
+* **Native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and the bind interface.
+* **REST connection**, which is implemented through taosAdapter. Some features, such as schemaless writing and subscriptions, are not supported.
+* **WebSocket connection**, which is implemented through taosAdapter. The set of features implemented by the WebSocket connection differs slightly from that of the native connection.
+
+For a detailed introduction to the connection types, please refer to [Establish Connection](../../develop/connect/#establish-connection).

 ## Compatibility

 The minimum supported Go version is 1.14; it is recommended to use the latest Go version.

 ## Supported platforms

@@ -233,45 +242,27 @@ The Go client library does not support this feature
 ### Create database and tables

 ```go
-var taosDSN = "root:taosdata@tcp(localhost:6030)/"
-taos, err := sql.Open("taosSql", taosDSN)
-if err != nil {
-    log.Fatalln("failed to connect TDengine, err:", err)
-}
-defer taos.Close()
-_, err := taos.Exec("CREATE DATABASE power")
-if err != nil {
-    log.Fatalln("failed to create database, err:", err)
-}
-_, err = taos.Exec("CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")
-if err != nil {
-    log.Fatalln("failed to create stable, err:", err)
-}
+{{#include docs/examples/go/demo/query/main.go:create_db_and_table}}
 ```

 ### Insert data

-<GoInsert />
+```go
+{{#include docs/examples/go/demo/query/main.go:insert_data}}
+```

 ### Querying data

-<GoQuery />
+```go
+{{#include docs/examples/go/demo/query/main.go:query_data}}
+```

 ### Execute SQL with reqId

 This reqId can be used for request link tracing.
+<RequestId />

 ```go
-db, err := sql.Open("taosSql", "root:taosdata@tcp(localhost:6030)/")
-if err != nil {
-    panic(err)
-}
-defer db.Close()
-ctx := context.WithValue(context.Background(), common.ReqIDKey, common.GetReqID())
-_, err = db.ExecContext(ctx, "create database if not exists example_taos_sql")
-if err != nil {
-    panic(err)
-}
+{{#include docs/examples/go/demo/query/main.go:with_reqid}}
 ```

 ### Writing data via parameter binding
@ -280,375 +271,14 @@ if err != nil {
|
|||
<TabItem value="native" label="native connection">
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"time"
|
||||
|
||||
"github.com/taosdata/driver-go/v3/af"
|
||||
"github.com/taosdata/driver-go/v3/common"
|
||||
"github.com/taosdata/driver-go/v3/common/param"
|
||||
)
|
||||
|
||||
func main() {
|
||||
db, err := af.Open("", "root", "taosdata", "", 0)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer db.Close()
|
||||
_, err = db.Exec("create database if not exists example_stmt")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
_, err = db.Exec("create table if not exists example_stmt.tb1(ts timestamp," +
|
||||
"c1 bool," +
|
||||
"c2 tinyint," +
|
||||
"c3 smallint," +
|
||||
"c4 int," +
|
||||
"c5 bigint," +
|
||||
"c6 tinyint unsigned," +
|
||||
"c7 smallint unsigned," +
|
||||
"c8 int unsigned," +
|
||||
"c9 bigint unsigned," +
|
||||
"c10 float," +
|
||||
"c11 double," +
|
||||
"c12 binary(20)," +
|
||||
"c13 nchar(20)" +
|
||||
")")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
stmt := db.InsertStmt()
|
||||
err = stmt.Prepare("insert into example_stmt.tb1 values(?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
now := time.Now()
|
||||
params := make([]*param.Param, 14)
|
||||
params[0] = param.NewParam(2).
|
||||
AddTimestamp(now, common.PrecisionMilliSecond).
|
||||
AddTimestamp(now.Add(time.Second), common.PrecisionMilliSecond)
|
||||
params[1] = param.NewParam(2).AddBool(true).AddNull()
|
||||
params[2] = param.NewParam(2).AddTinyint(2).AddNull()
|
||||
params[3] = param.NewParam(2).AddSmallint(3).AddNull()
|
||||
params[4] = param.NewParam(2).AddInt(4).AddNull()
|
||||
params[5] = param.NewParam(2).AddBigint(5).AddNull()
|
||||
params[6] = param.NewParam(2).AddUTinyint(6).AddNull()
|
||||
params[7] = param.NewParam(2).AddUSmallint(7).AddNull()
|
||||
params[8] = param.NewParam(2).AddUInt(8).AddNull()
|
||||
params[9] = param.NewParam(2).AddUBigint(9).AddNull()
|
||||
params[10] = param.NewParam(2).AddFloat(10).AddNull()
|
||||
params[11] = param.NewParam(2).AddDouble(11).AddNull()
|
||||
params[12] = param.NewParam(2).AddBinary([]byte("binary")).AddNull()
|
||||
params[13] = param.NewParam(2).AddNchar("nchar").AddNull()
|
||||
|
||||
paramTypes := param.NewColumnType(14).
|
||||
AddTimestamp().
|
||||
AddBool().
|
||||
AddTinyint().
|
||||
AddSmallint().
|
||||
AddInt().
|
||||
AddBigint().
|
||||
AddUTinyint().
|
||||
AddUSmallint().
|
||||
AddUInt().
|
||||
AddUBigint().
|
||||
AddFloat().
|
||||
AddDouble().
|
||||
AddBinary(6).
|
||||
AddNchar(5)
|
||||
err = stmt.BindParam(params, paramTypes)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.AddBatch()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.Execute()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.Close()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// select * from example_stmt.tb1
|
||||
}
|
||||
{{#include docs/examples/go/demo/stmt/main.go}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="WebSocket" label="WebSocket connection">
|
||||
|
||||
```go
|
||||
package main
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"time"
|
||||
|
||||
"github.com/taosdata/driver-go/v3/common"
|
||||
"github.com/taosdata/driver-go/v3/common/param"
|
||||
_ "github.com/taosdata/driver-go/v3/taosRestful"
|
||||
"github.com/taosdata/driver-go/v3/ws/stmt"
|
||||
)
|
||||
|
||||
func main() {
|
||||
db, err := sql.Open("taosRestful", "root:taosdata@http(localhost:6041)/")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer db.Close()
|
||||
prepareEnv(db)
|
||||
|
||||
config := stmt.NewConfig("ws://127.0.0.1:6041/rest/stmt", 0)
|
||||
config.SetConnectUser("root")
|
||||
config.SetConnectPass("taosdata")
|
||||
config.SetConnectDB("example_ws_stmt")
|
||||
config.SetMessageTimeout(common.DefaultMessageTimeout)
|
||||
config.SetWriteWait(common.DefaultWriteWait)
|
||||
config.SetErrorHandler(func(connector *stmt.Connector, err error) {
|
||||
panic(err)
|
||||
})
|
||||
config.SetCloseHandler(func() {
|
||||
fmt.Println("stmt connector closed")
|
||||
})
|
||||
|
||||
connector, err := stmt.NewConnector(config)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
now := time.Now()
|
||||
{
|
||||
stmt, err := connector.Init()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.Prepare("insert into ? using all_json tags(?) values(?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.SetTableName("tb1")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.SetTags(param.NewParam(1).AddJson([]byte(`{"tb":1}`)), param.NewColumnType(1).AddJson(0))
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
params := []*param.Param{
|
||||
param.NewParam(3).AddTimestamp(now, 0).AddTimestamp(now.Add(time.Second), 0).AddTimestamp(now.Add(time.Second*2), 0),
|
||||
param.NewParam(3).AddBool(true).AddNull().AddBool(true),
|
||||
param.NewParam(3).AddTinyint(1).AddNull().AddTinyint(1),
|
||||
param.NewParam(3).AddSmallint(1).AddNull().AddSmallint(1),
|
||||
param.NewParam(3).AddInt(1).AddNull().AddInt(1),
|
||||
param.NewParam(3).AddBigint(1).AddNull().AddBigint(1),
|
||||
param.NewParam(3).AddUTinyint(1).AddNull().AddUTinyint(1),
|
||||
param.NewParam(3).AddUSmallint(1).AddNull().AddUSmallint(1),
|
||||
param.NewParam(3).AddUInt(1).AddNull().AddUInt(1),
|
||||
param.NewParam(3).AddUBigint(1).AddNull().AddUBigint(1),
|
||||
param.NewParam(3).AddFloat(1).AddNull().AddFloat(1),
|
||||
param.NewParam(3).AddDouble(1).AddNull().AddDouble(1),
|
||||
param.NewParam(3).AddBinary([]byte("test_binary")).AddNull().AddBinary([]byte("test_binary")),
|
||||
param.NewParam(3).AddNchar("test_nchar").AddNull().AddNchar("test_nchar"),
|
||||
}
|
||||
paramTypes := param.NewColumnType(14).
|
||||
AddTimestamp().
|
||||
AddBool().
|
||||
AddTinyint().
|
||||
AddSmallint().
|
||||
AddInt().
|
||||
AddBigint().
|
||||
AddUTinyint().
|
||||
AddUSmallint().
|
||||
AddUInt().
|
||||
AddUBigint().
|
||||
AddFloat().
|
||||
AddDouble().
|
||||
AddBinary(0).
|
||||
AddNchar(0)
|
||||
err = stmt.BindParam(params, paramTypes)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.AddBatch()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.Exec()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
affected := stmt.GetAffectedRows()
|
||||
fmt.Println("all_json affected rows:", affected)
|
||||
err = stmt.Close()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
{
|
||||
stmt, err := connector.Init()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.Prepare("insert into ? using all_all tags(?,?,?,?,?,?,?,?,?,?,?,?,?,?) values(?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
|
||||
err = stmt.SetTableName("tb1")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
err = stmt.SetTableName("tb2")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.SetTags(
|
||||
param.NewParam(14).
|
||||
AddTimestamp(now, 0).
|
||||
AddBool(true).
|
||||
AddTinyint(2).
|
||||
AddSmallint(2).
|
||||
AddInt(2).
|
||||
AddBigint(2).
|
||||
AddUTinyint(2).
|
||||
AddUSmallint(2).
|
||||
AddUInt(2).
|
||||
AddUBigint(2).
|
||||
AddFloat(2).
|
||||
AddDouble(2).
|
||||
AddBinary([]byte("tb2")).
|
||||
AddNchar("tb2"),
|
||||
param.NewColumnType(14).
|
||||
AddTimestamp().
|
||||
AddBool().
|
||||
AddTinyint().
|
||||
AddSmallint().
|
||||
AddInt().
|
||||
AddBigint().
|
||||
AddUTinyint().
|
||||
AddUSmallint().
|
||||
AddUInt().
|
||||
AddUBigint().
|
||||
AddFloat().
|
||||
AddDouble().
|
||||
AddBinary(0).
|
||||
AddNchar(0),
|
||||
)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
params := []*param.Param{
|
||||
param.NewParam(3).AddTimestamp(now, 0).AddTimestamp(now.Add(time.Second), 0).AddTimestamp(now.Add(time.Second*2), 0),
|
||||
param.NewParam(3).AddBool(true).AddNull().AddBool(true),
|
||||
param.NewParam(3).AddTinyint(1).AddNull().AddTinyint(1),
|
||||
param.NewParam(3).AddSmallint(1).AddNull().AddSmallint(1),
|
||||
param.NewParam(3).AddInt(1).AddNull().AddInt(1),
|
||||
param.NewParam(3).AddBigint(1).AddNull().AddBigint(1),
|
||||
param.NewParam(3).AddUTinyint(1).AddNull().AddUTinyint(1),
|
||||
param.NewParam(3).AddUSmallint(1).AddNull().AddUSmallint(1),
|
||||
param.NewParam(3).AddUInt(1).AddNull().AddUInt(1),
|
||||
param.NewParam(3).AddUBigint(1).AddNull().AddUBigint(1),
|
||||
param.NewParam(3).AddFloat(1).AddNull().AddFloat(1),
|
||||
param.NewParam(3).AddDouble(1).AddNull().AddDouble(1),
|
||||
param.NewParam(3).AddBinary([]byte("test_binary")).AddNull().AddBinary([]byte("test_binary")),
|
||||
param.NewParam(3).AddNchar("test_nchar").AddNull().AddNchar("test_nchar"),
|
||||
}
|
||||
paramTypes := param.NewColumnType(14).
|
||||
AddTimestamp().
|
||||
AddBool().
|
||||
AddTinyint().
|
||||
AddSmallint().
|
||||
AddInt().
|
||||
AddBigint().
|
||||
AddUTinyint().
|
||||
AddUSmallint().
|
||||
AddUInt().
|
||||
AddUBigint().
|
||||
AddFloat().
|
||||
AddDouble().
|
||||
AddBinary(0).
|
||||
AddNchar(0)
|
||||
err = stmt.BindParam(params, paramTypes)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.AddBatch()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.Exec()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
affected := stmt.GetAffectedRows()
|
||||
fmt.Println("all_all affected rows:", affected)
|
||||
err = stmt.Close()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
func prepareEnv(db *sql.DB) {
|
||||
steps := []string{
|
||||
"create database example_ws_stmt",
|
||||
"create table example_ws_stmt.all_json(ts timestamp," +
|
||||
"c1 bool," +
|
||||
"c2 tinyint," +
|
||||
"c3 smallint," +
|
||||
"c4 int," +
|
||||
"c5 bigint," +
|
||||
"c6 tinyint unsigned," +
|
||||
"c7 smallint unsigned," +
|
||||
"c8 int unsigned," +
|
||||
"c9 bigint unsigned," +
|
||||
"c10 float," +
|
||||
"c11 double," +
|
||||
"c12 binary(20)," +
|
||||
"c13 nchar(20)" +
|
||||
")" +
|
||||
"tags(t json)",
|
||||
"create table example_ws_stmt.all_all(" +
|
||||
"ts timestamp," +
|
||||
"c1 bool," +
|
||||
"c2 tinyint," +
|
||||
"c3 smallint," +
|
||||
"c4 int," +
|
||||
"c5 bigint," +
|
||||
"c6 tinyint unsigned," +
|
||||
"c7 smallint unsigned," +
|
||||
"c8 int unsigned," +
|
||||
"c9 bigint unsigned," +
|
||||
"c10 float," +
|
||||
"c11 double," +
|
||||
"c12 binary(20)," +
|
||||
"c13 nchar(20)" +
|
||||
")" +
|
||||
"tags(" +
|
||||
"tts timestamp," +
|
||||
"tc1 bool," +
|
||||
"tc2 tinyint," +
|
||||
"tc3 smallint," +
|
||||
"tc4 int," +
|
||||
"tc5 bigint," +
|
||||
"tc6 tinyint unsigned," +
|
||||
"tc7 smallint unsigned," +
|
||||
"tc8 int unsigned," +
|
||||
"tc9 bigint unsigned," +
|
||||
"tc10 float," +
|
||||
"tc11 double," +
|
||||
"tc12 binary(20)," +
|
||||
"tc13 nchar(20))",
|
||||
}
|
||||
for _, step := range steps {
|
||||
_, err := db.Exec(step)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
{{#include docs/examples/go/demo/stmtws/main.go}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@@ -661,98 +291,14 @@ func prepareEnv(db *sql.DB) {
 <TabItem value="native" label="native connection">

 ```go
-import (
-    "fmt"
-
-    "github.com/taosdata/driver-go/v3/af"
-)
-
-func main() {
-    conn, err := af.Open("localhost", "root", "taosdata", "", 6030)
-    if err != nil {
-        fmt.Println("fail to connect, err:", err)
-    }
-    defer conn.Close()
-    _, err = conn.Exec("create database if not exists example")
-    if err != nil {
-        panic(err)
-    }
-    _, err = conn.Exec("use example")
-    if err != nil {
-        panic(err)
-    }
-    influxdbData := "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000"
-    err = conn.InfluxDBInsertLines([]string{influxdbData}, "ns")
-    if err != nil {
-        panic(err)
-    }
-    telnetData := "stb0_0 1626006833 4 host=host0 interface=eth0"
-    err = conn.OpenTSDBInsertTelnetLines([]string{telnetData})
-    if err != nil {
-        panic(err)
-    }
-    jsonData := "{\"metric\": \"meter_current\",\"timestamp\": 1626846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}"
-    err = conn.OpenTSDBInsertJsonPayload(jsonData)
-    if err != nil {
-        panic(err)
-    }
-}
+{{#include docs/examples/go/demo/sml/main.go}}
 ```

 </TabItem>
 <TabItem value="WebSocket" label="WebSocket connection">

 ```go
-import (
-    "database/sql"
-    "log"
-    "time"
-
-    "github.com/taosdata/driver-go/v3/common"
-    _ "github.com/taosdata/driver-go/v3/taosWS"
-    "github.com/taosdata/driver-go/v3/ws/schemaless"
-)
-
-func main() {
-    db, err := sql.Open("taosWS", "root:taosdata@ws(localhost:6041)/")
-    if err != nil {
-        log.Fatal(err)
-    }
-    defer db.Close()
-    _, err = db.Exec("create database if not exists schemaless_ws")
-    if err != nil {
-        log.Fatal(err)
-    }
-    s, err := schemaless.NewSchemaless(schemaless.NewConfig("ws://localhost:6041/rest/schemaless", 1,
-        schemaless.SetDb("schemaless_ws"),
-        schemaless.SetReadTimeout(10*time.Second),
-        schemaless.SetWriteTimeout(10*time.Second),
-        schemaless.SetUser("root"),
-        schemaless.SetPassword("taosdata"),
-        schemaless.SetErrorHandler(func(err error) {
-            log.Fatal(err)
-        }),
-    ))
-    if err != nil {
-        panic(err)
-    }
-    influxdbData := "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000"
-    telnetData := "stb0_0 1626006833 4 host=host0 interface=eth0"
-    jsonData := "{\"metric\": \"meter_current\",\"timestamp\": 1626846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}"
-
-    err = s.Insert(influxdbData, schemaless.InfluxDBLineProtocol, "ns", 0, common.GetReqID())
-    if err != nil {
-        panic(err)
-    }
-    err = s.Insert(telnetData, schemaless.OpenTSDBTelnetLineProtocol, "ms", 0, common.GetReqID())
-    if err != nil {
-        panic(err)
-    }
-    err = s.Insert(jsonData, schemaless.OpenTSDBJsonFormatProtocol, "ms", 0, common.GetReqID())
-    if err != nil {
-        panic(err)
-    }
-}
+{{#include docs/examples/go/demo/smlws/main.go}}
 ```

 </TabItem>
@@ -774,89 +320,31 @@ The TDengine Go client library supports subscription functionality with the following
 #### Create a Topic

 ```go
-db, err := af.Open("", "root", "taosdata", "", 0)
-if err != nil {
-    panic(err)
-}
-defer db.Close()
-_, err = db.Exec("create database if not exists example_tmq WAL_RETENTION_PERIOD 86400")
-if err != nil {
-    panic(err)
-}
-_, err = db.Exec("create topic if not exists example_tmq_topic as DATABASE example_tmq")
-if err != nil {
-    panic(err)
-}
+{{#include docs/examples/go/demo/consumer/main.go:create_topic}}
 ```

 #### Create a Consumer

 ```go
-consumer, err := tmq.NewConsumer(&tmqcommon.ConfigMap{
-    "group.id": "test",
-    "auto.offset.reset": "latest",
-    "td.connect.ip": "127.0.0.1",
-    "td.connect.user": "root",
-    "td.connect.pass": "taosdata",
-    "td.connect.port": "6030",
-    "client.id": "test_tmq_client",
-    "enable.auto.commit": "false",
-    "msg.with.table.name": "true",
-})
-if err != nil {
-    panic(err)
-}
+{{#include docs/examples/go/demo/consumer/main.go:create_consumer}}
 ```

 #### Subscribe to consume data

 ```go
-err = consumer.Subscribe("example_tmq_topic", nil)
-if err != nil {
-    panic(err)
-}
-for i := 0; i < 5; i++ {
-    ev := consumer.Poll(500)
-    if ev != nil {
-        switch e := ev.(type) {
-        case *tmqcommon.DataMessage:
-            fmt.Printf("get message:%v\n", e)
-        case tmqcommon.Error:
-            fmt.Fprintf(os.Stderr, "%% Error: %v: %v\n", e.Code(), e)
-            panic(e)
-        }
-        consumer.Commit()
-    }
-}
+{{#include docs/examples/go/demo/consumer/main.go:poll_data}}
 ```

 #### Assignment subscription Offset

 ```go
-partitions, err := consumer.Assignment()
-if err != nil {
-    panic(err)
-}
-for i := 0; i < len(partitions); i++ {
-    fmt.Println(partitions[i])
-    err = consumer.Seek(tmqcommon.TopicPartition{
-        Topic: partitions[i].Topic,
-        Partition: partitions[i].Partition,
-        Offset: 0,
-    }, 0)
-    if err != nil {
-        panic(err)
-    }
-}
+{{#include docs/examples/go/demo/consumer/main.go:consumer_seek}}
 ```

 #### Close subscriptions

 ```go
-err = consumer.Close()
-if err != nil {
-    panic(err)
-}
+{{#include docs/examples/go/demo/consumer/main.go:consumer_close}}
 ```

 #### Full Sample Code
@@ -865,232 +353,14 @@ The TDengine Go client library supports subscription functionality with the following
 <TabItem value="native" label="native connection">

 ```go
-package main
-
-import (
-    "fmt"
-    "os"
-    "time"
-
-    "github.com/taosdata/driver-go/v3/af"
-    "github.com/taosdata/driver-go/v3/af/tmq"
-    tmqcommon "github.com/taosdata/driver-go/v3/common/tmq"
-)
-
-func main() {
-    db, err := af.Open("", "root", "taosdata", "", 0)
-    if err != nil {
-        panic(err)
-    }
-    defer db.Close()
-    _, err = db.Exec("create database if not exists example_tmq WAL_RETENTION_PERIOD 86400")
-    if err != nil {
-        panic(err)
-    }
-    _, err = db.Exec("create topic if not exists example_tmq_topic as DATABASE example_tmq")
-    if err != nil {
-        panic(err)
-    }
-    consumer, err := tmq.NewConsumer(&tmqcommon.ConfigMap{
-        "group.id": "test",
-        "auto.offset.reset": "latest",
-        "td.connect.ip": "127.0.0.1",
-        "td.connect.user": "root",
-        "td.connect.pass": "taosdata",
-        "td.connect.port": "6030",
-        "client.id": "test_tmq_client",
-        "enable.auto.commit": "false",
-        "msg.with.table.name": "true",
-    })
-    if err != nil {
-        panic(err)
-    }
-    err = consumer.Subscribe("example_tmq_topic", nil)
-    if err != nil {
-        panic(err)
-    }
-    _, err = db.Exec("create table example_tmq.t1 (ts timestamp,v int)")
-    if err != nil {
-        panic(err)
-    }
-    go func() {
-        for {
-            _, err = db.Exec("insert into example_tmq.t1 values(now,1)")
-            if err != nil {
-                panic(err)
-            }
-            time.Sleep(time.Millisecond * 100)
-        }
-    }()
-
-    for i := 0; i < 5; i++ {
-        ev := consumer.Poll(500)
-        if ev != nil {
-            switch e := ev.(type) {
-            case *tmqcommon.DataMessage:
-                fmt.Printf("get message:%v\n", e)
-            case tmqcommon.Error:
-                fmt.Fprintf(os.Stderr, "%% Error: %v: %v\n", e.Code(), e)
-                panic(e)
-            }
-            consumer.Commit()
-        }
-    }
-    partitions, err := consumer.Assignment()
-    if err != nil {
-        panic(err)
-    }
-    for i := 0; i < len(partitions); i++ {
-        fmt.Println(partitions[i])
-        err = consumer.Seek(tmqcommon.TopicPartition{
-            Topic: partitions[i].Topic,
-            Partition: partitions[i].Partition,
-            Offset: 0,
-        }, 0)
-        if err != nil {
-            panic(err)
-        }
-    }
-
-    partitions, err = consumer.Assignment()
-    if err != nil {
-        panic(err)
-    }
-    for i := 0; i < len(partitions); i++ {
-        fmt.Println(partitions[i])
-    }
-
-    err = consumer.Close()
-    if err != nil {
-        panic(err)
-    }
-}
+{{#include docs/examples/go/demo/consumer/main.go}}
 ```

 </TabItem>
 <TabItem value="WebSocket" label="WebSocket connection">

 ```go
-package main
-
-import (
-    "database/sql"
-    "fmt"
-    "time"
-
-    "github.com/taosdata/driver-go/v3/common"
-    tmqcommon "github.com/taosdata/driver-go/v3/common/tmq"
-    _ "github.com/taosdata/driver-go/v3/taosRestful"
-    "github.com/taosdata/driver-go/v3/ws/tmq"
-)
-
-func main() {
-    db, err := sql.Open("taosRestful", "root:taosdata@http(localhost:6041)/")
-    if err != nil {
-        panic(err)
-    }
-    defer db.Close()
-    prepareEnv(db)
-    consumer, err := tmq.NewConsumer(&tmqcommon.ConfigMap{
-        "ws.url": "ws://127.0.0.1:6041/rest/tmq",
-        "ws.message.channelLen": uint(0),
-        "ws.message.timeout": common.DefaultMessageTimeout,
-        "ws.message.writeWait": common.DefaultWriteWait,
-        "td.connect.user": "root",
-        "td.connect.pass": "taosdata",
-        "group.id": "example",
-        "client.id": "example_consumer",
-        "auto.offset.reset": "latest",
-    })
-    if err != nil {
-        panic(err)
-    }
-    err = consumer.Subscribe("example_ws_tmq_topic", nil)
-    if err != nil {
-        panic(err)
-    }
-
-    _, err = db.Exec("create table example_ws_tmq.t_all(ts timestamp," +
-        "c1 bool," +
-        "c2 tinyint," +
-        "c3 smallint," +
-        "c4 int," +
-        "c5 bigint," +
-        "c6 tinyint unsigned," +
-        "c7 smallint unsigned," +
-        "c8 int unsigned," +
-        "c9 bigint unsigned," +
-        "c10 float," +
-        "c11 double," +
-        "c12 binary(20)," +
-        "c13 nchar(20)" +
-        ")")
-    if err != nil {
-        panic(err)
-    }
-    go func() {
-        for {
-            _, err = db.Exec("insert into example_ws_tmq.t_all values(now,true,2,3,4,5,6,7,8,9,10.123,11.123,'binary','nchar')")
-            if err != nil {
-                panic(err)
-            }
-            time.Sleep(time.Millisecond * 100)
-        }
-
-    }()
-    for i := 0; i < 5; i++ {
-        ev := consumer.Poll(500)
-        if ev != nil {
-            switch e := ev.(type) {
-            case *tmqcommon.DataMessage:
-                fmt.Printf("get message:%v\n", e)
-            case tmqcommon.Error:
-                fmt.Printf("%% Error: %v: %v\n", e.Code(), e)
-                panic(e)
-            }
-            consumer.Commit()
-        }
-    }
-    partitions, err := consumer.Assignment()
-    if err != nil {
-        panic(err)
-    }
-    for i := 0; i < len(partitions); i++ {
-        fmt.Println(partitions[i])
-        err = consumer.Seek(tmqcommon.TopicPartition{
-            Topic: partitions[i].Topic,
-            Partition: partitions[i].Partition,
-            Offset: 0,
-        }, 0)
-        if err != nil {
-            panic(err)
-        }
-    }
-
-    partitions, err = consumer.Assignment()
-    if err != nil {
-        panic(err)
-    }
-    for i := 0; i < len(partitions); i++ {
-        fmt.Println(partitions[i])
-    }
-
-    err = consumer.Close()
-    if err != nil {
-        panic(err)
-    }
-}
-
-func prepareEnv(db *sql.DB) {
-    _, err := db.Exec("create database example_ws_tmq WAL_RETENTION_PERIOD 86400")
-    if err != nil {
-        panic(err)
-    }
-    _, err = db.Exec("create topic example_ws_tmq_topic as database example_ws_tmq")
-    if err != nil {
-        panic(err)
-    }
-}
+{{#include docs/examples/go/demo/consumerws/main.go}}
 ```

 </TabItem>

@@ -13,15 +13,24 @@ import RustInsert from "../07-develop/03-insert-data/_rust_sql.mdx"
 import RustBind from "../07-develop/03-insert-data/_rust_stmt.mdx"
 import RustSml from "../07-develop/03-insert-data/_rust_schemaless.mdx"
 import RustQuery from "../07-develop/04-query-data/_rust.mdx"
+import RequestId from "./_request_id.mdx";

 [](https://crates.io/crates/taos)  [](https://docs.rs/taos)

 `taos` is the official Rust client library for TDengine. Rust developers can develop applications to access the TDengine instance data.

-`taos` provides two ways to establish connections. One is the **Native Connection**, which connects to TDengine instances via the TDengine client driver (taosc). The other is the **WebSocket connection**, which connects to TDengine instances via the WebSocket interface provided by taosAdapter. You can specify a connection type with Cargo features. By default, both types are supported. The Websocket connection can be used on any platform. The native connection can be used on any platform that the TDengine Client supports.
-
 The source code for the Rust client library is located on [GitHub](https://github.com/taosdata/taos-connector-rust).

+## Connection types
+
+`taos` provides two ways to establish connections, of which we recommend using the **WebSocket connection**:
+
+- **Native connection**, which connects to TDengine instances via the TDengine client driver (taosc).
+- **WebSocket connection**, which connects to TDengine instances via the WebSocket interface provided by taosAdapter.
+
+You can specify a connection type with Cargo features. By default, both types are supported.
+
+For a detailed introduction to the connection types, please refer to [Establish Connection](../../develop/connect/#establish-connection).
+
 ## Supported platforms

 Native connections are supported on the same platforms as the TDengine client driver.

@@ -31,7 +40,10 @@ Websocket connections are supported on all platforms that can run Go.

 | connector-rust version | TDengine version | major features |
 | :----------------: | :--------------: | :--------------------------------------------------: |
-| v0.9.2 | 3.0.7.0 or later | STMT: Get tag_fields and col_fields under ws. |
+| v0.12.0 | 3.2.3.0 or later | WS supports compression |
+| v0.11.0 | 3.2.0.0 | TMQ feature optimization |
+| v0.10.0 | 3.1.0.0 | WS endpoint changes |
+| v0.9.2 | 3.0.7.0 | STMT: Get tag_fields and col_fields under ws. |
 | v0.8.12 | 3.0.5.0 | TMQ: Get consuming progress and seek offset to consume. |
 | v0.8.0 | 3.0.4.0 | Support schemaless insert. |
 | v0.7.6 | 3.0.3.0 | Support req_id in query. |
@@ -269,52 +281,28 @@ There are two ways to query data: Using built-in types or the [serde](https://serde.rs) deserialization framework.
 ### Create database and tables

 ```rust
-use taos::*;
-
-#[tokio::main]
-async fn main() -> anyhow::Result<()> {
-    let dsn = "taos://localhost:6030";
-    let builder = TaosBuilder::from_dsn(dsn)?;
-
-    let taos = builder.build()?;
-
-    let db = "query";
-
-    // create database
-    taos.exec_many([
-        format!("DROP DATABASE IF EXISTS `{db}`"),
-        format!("CREATE DATABASE `{db}`"),
-        format!("USE `{db}`"),
-    ])
-    .await?;
-
-    // create table
-    taos.exec_many([
-        // create super table
-        "CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) \
-         TAGS (`groupid` INT, `location` BINARY(16))",
-        // create child table
-        "CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')",
-    ]).await?;
-}
+{{#include docs/examples/rust/nativeexample/examples/query.rs:create_db_and_table}}
 ```

 > The query is consistent with operating a relational database. When using subscripts to get the contents of the returned fields, you have to start from 1. However, we recommend using the field names to get the values of the fields in the result set.

 ### Insert data

-<RustInsert />
+```rust
+{{#include docs/examples/rust/nativeexample/examples/query.rs:insert_data}}
+```

 ### Query data

-<RustQuery />
+```rust
+{{#include docs/examples/rust/nativeexample/examples/query.rs:query_data}}
+```

 ### Execute SQL with req_id

 This req_id can be used for request link tracing.
+<RequestId />

 ```rust
-let rs = taos.query_with_req_id("select * from stable where tag1 is null", 1)?;
+{{#include docs/examples/rust/nativeexample/examples/query.rs:query_with_req_id}}
 ```

 ### Writing data via parameter binding
@@ -323,13 +311,17 @@ TDengine has significantly improved the bind APIs to support data writing (INSERT) scenarios.

 For parameter binding details, see the [API Reference](#bind-interface).

 <RustBind />
+
+```rust
+{{#include docs/examples/rust/nativeexample/examples/stmt.rs}}
+```

 ### Schemaless Writing

 TDengine supports schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. For more information, see [Schemaless Writing](../../reference/schemaless/).

 <RustSml />
+
+```rust
+{{#include docs/examples/rust/nativeexample/examples/schemaless.rs}}
+```

 ### Schemaless with req_id

@@ -352,25 +344,15 @@ TDengine starts subscriptions through [TMQ](../../taos-sql/tmq/).
 #### Create a Topic

 ```rust
-taos.exec_many([
-    // create topic for subscription
-    format!("CREATE TOPIC tmq_meters with META AS DATABASE {db}")
-])
-.await?;
+{{#include docs/examples/rust/nativeexample/examples/tmq.rs:create_topic}}
 ```

 #### Create a Consumer

 You create a TMQ connection by using a DSN.

-```rust
-let tmq = TmqBuilder::from_dsn("taos://localhost:6030/?group.id=test")?;
-```
-
-Create a consumer:
-
 ```rust
-let mut consumer = tmq.build()?;
+{{#include docs/examples/rust/nativeexample/examples/tmq.rs:create_consumer}}
 ```

 #### Subscribe to consume data

@@ -378,40 +360,13 @@ let mut consumer = tmq.build()?;
 A single consumer can subscribe to one or more topics.

 ```rust
-consumer.subscribe(["tmq_meters"]).await?;
+{{#include docs/examples/rust/nativeexample/examples/tmq.rs:subscribe}}
 ```

 The TMQ is of [futures::Stream](https://docs.rs/futures/latest/futures/stream/index.html) type. You can use the corresponding API to consume each message in the queue and then use `.commit` to mark them as consumed.

 ```rust
-{
-    let mut stream = consumer.stream();
-
-    while let Some((offset, message)) = stream.try_next().await? {
-        // get information from offset
-
-        // the topic
-        let topic = offset.topic();
-        // the vgroup id, like partition id in kafka.
-        let vgroup_id = offset.vgroup_id();
-        println!("* in vgroup id {vgroup_id} of topic {topic}\n");
-
-        if let Some(data) = message.into_data() {
-            while let Some(block) = data.fetch_raw_block().await? {
-                // one block for one table, get table name if needed
-                let name = block.table_name();
-                let records: Vec<Record> = block.deserialize().try_collect()?;
-                println!(
-                    "** table: {}, got {} records: {:#?}\n",
-                    name.unwrap(),
-                    records.len(),
-                    records
-                );
-            }
-        }
-        consumer.commit(offset).await?;
-    }
-}
+{{#include docs/examples/rust/nativeexample/examples/tmq.rs:consume}}
 ```

 Get assignments:

@@ -419,7 +374,7 @@ Get assignments:
 Version requirements: connector-rust >= v0.8.8, TDengine >= 3.0.5.0

 ```rust
-let assignments = consumer.assignments().await.unwrap();
+{{#include docs/examples/rust/nativeexample/examples/tmq.rs:assignments}}
 ```

 #### Assignment subscription Offset

@@ -429,13 +384,13 @@ Seek offset:
 Version requirements: connector-rust >= v0.8.8, TDengine >= 3.0.5.0

 ```rust
-consumer.offset_seek(topic, vgroup_id, offset).await;
+{{#include docs/examples/rust/nativeexample/examples/tmq.rs:seek_offset}}
 ```

 #### Close subscriptions

 ```rust
-consumer.unsubscribe().await;
+{{#include docs/examples/rust/nativeexample/examples/tmq.rs:unsubscribe}}
 ```

 The following parameters can be configured for the TMQ DSN. Only `group.id` is mandatory.

@@ -6,15 +6,25 @@ description: This document describes taospy, the TDengine Python client library.

 import Tabs from "@theme/Tabs";
 import TabItem from "@theme/TabItem";
+import RequestId from "./_request_id.mdx";

-`taospy` is the official Python client library for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine. `taospy` wraps both the [native interface](../cpp) and [REST interface](../../reference/rest-api) of TDengine, which correspond to the `taos` and `taosrest` modules of the `taospy` package, respectively.
+`taospy` is the official Python client library for TDengine. taospy provides a rich API that makes it easy for Python applications to use TDengine.
+
+The source code for the Python client library is hosted on [GitHub](https://github.com/taosdata/taos-connector-python).
+
+## Connection types
+
+`taospy` mainly provides three connection types, of which we recommend using the **WebSocket connection**:
+
+- **Native connection**, which corresponds to the `taos` module of the `taospy` package. It connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and the bind interface.
+- **REST connection**, which corresponds to the `taosrest` module of the `taospy` package and is implemented through taosAdapter. Some features, such as schemaless writing and subscriptions, are not supported.
+- **WebSocket connection**, enabled by the optional `taos-ws-py` package and implemented through taosAdapter. The set of features implemented by the WebSocket connection differs slightly from that of the native connection.
+
+For a detailed introduction to the connection types, please refer to [Establish Connection](../../develop/connect/#establish-connection).
+
 In addition to wrapping the native and REST interfaces, `taospy` also provides a set of programming interfaces that conforms to the [Python Data Access Specification (PEP 249)](https://peps.python.org/pep-0249/). It is easy to integrate `taospy` with many third-party tools, such as [SQLAlchemy](https://www.sqlalchemy.org/) and [pandas](https://pandas.pydata.org/).

-`taos-ws-py` is an optional package to enable using WebSocket to connect TDengine.
-
-The direct connection to the server using the native interface provided by the client driver is referred to hereinafter as a "native connection"; the connection to the server using the REST or WebSocket interface provided by taosAdapter is referred to hereinafter as a "REST connection" or "WebSocket connection".
-
-The source code for the Python client library is hosted on [GitHub](https://github.com/taosdata/taos-connector-python).
-
 ## Supported platforms

 - The [supported platforms](../#supported-platforms) for the native connection are the same as the ones supported by the TDengine client.
@ -348,13 +358,7 @@ If the configuration parameters are duplicated in the parameters or client confi
|
|||
<TabItem value="native" label="native connection">
|
||||
|
||||
```python
|
||||
conn = taos.connect()
|
||||
# Execute an SQL statement, ignoring the result set and getting only the affected rows. Useful for DDL and DML statements.
|
||||
conn.execute("DROP DATABASE IF EXISTS test")
|
||||
conn.execute("CREATE DATABASE test")
|
||||
# change database. same as execute "USE db"
|
||||
conn.select_db("test")
|
||||
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
|
||||
{{#include docs/examples/python/create_db_native.py}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -362,12 +366,7 @@ conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (locat
|
|||
<TabItem value="rest" label="REST connection">
|
||||
|
||||
```python
|
||||
conn = taosrest.connect(url="http://localhost:6041")
|
||||
# Execute an SQL statement, ignoring the result set and getting only the affected rows. Useful for DDL and DML statements.
|
||||
conn.execute("DROP DATABASE IF EXISTS test")
|
||||
conn.execute("CREATE DATABASE test")
|
||||
conn.execute("USE test")
|
||||
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
|
||||
{{#include docs/examples/python/create_db_rest.py}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -375,12 +374,7 @@ conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (locat
|
|||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
```python
|
||||
conn = taosws.connect("taosws://localhost:6041")
|
||||
# Execute an SQL statement, ignoring the result set and getting only the affected rows. Useful for DDL and DML statements.
|
||||
conn.execute("DROP DATABASE IF EXISTS test")
|
||||
conn.execute("CREATE DATABASE test")
|
||||
conn.execute("USE test")
|
||||
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
|
||||
{{#include docs/examples/python/create_db_ws.py}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -388,100 +382,35 @@ conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (locat
|
|||
|
||||
### Insert data
|
||||
|
||||
```python
|
||||
conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m, 24.4)")
|
||||
```
|
||||
|
||||
:::note
|
||||
`now` is an internal function; its default value is the current time of the client's computer. `now + 1s` represents the client's current time plus 1 second; a number may be followed by a unit of time: a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years).
|
||||
:::
|
||||
|
||||
|
||||
### Basic Usage
|
||||
|
||||
<Tabs defaultValue="rest">
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
##### TaosConnection class
|
||||
|
||||
The `TaosConnection` class contains both an implementation of the PEP249 Connection interface (e.g., the `cursor()` method and the `close()` method) and many extensions (e.g., the `execute()`, `query()`, `schemaless_insert()`, and `subscribe()` methods).
|
||||
|
||||
```python title="execute method"
|
||||
{{#include docs/examples/python/connection_usage_native_reference.py:insert}}
|
||||
```python
|
||||
{{#include docs/examples/python/insert_native.py:insert}}
|
||||
```
|
||||
|
||||
```python title="query method"
|
||||
{{#include docs/examples/python/connection_usage_native_reference.py:query}}
|
||||
```
|
||||
|
||||
:::tip
|
||||
The queried results can only be fetched once. For example, only one of `fetch_all()` and `fetch_all_into_dict()` can be used in the example above. Repeated fetches will result in an empty list.
|
||||
:::
|
||||
|
||||
##### Use of TaosResult class
|
||||
|
||||
In the above example of using the `TaosConnection` class, we have shown two ways to get the result of a query: `fetch_all()` and `fetch_all_into_dict()`. In addition, `TaosResult` also provides methods to iterate through the result set by rows (`rows_iter`) or by data blocks (`blocks_iter`). Using these two methods will be more efficient in scenarios where the query has a large amount of data.
|
||||
|
||||
```python title="blocks_iter method"
|
||||
{{#include docs/examples/python/result_set_examples.py}}
|
||||
```
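For example, a minimal sketch of row-wise iteration, assuming the `conn` and the `test.weather` table created above:

```python
result = conn.query("SELECT ts, temperature FROM test.weather")
# Iterate the result set row by row; each row is a tuple of column values.
for row in result.rows_iter():
    print(row)
```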
|
||||
##### Use of the TaosCursor class
|
||||
|
||||
The `TaosConnection` class and the `TaosResult` class already implement all the functionality of the native interface. If you are familiar with the interfaces in the PEP249 specification, you can also use the methods provided by the `TaosCursor` class.
|
||||
|
||||
```python title="Use of TaosCursor"
|
||||
{{#include docs/examples/python/cursor_usage_native_reference.py}}
|
||||
```
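A minimal PEP 249 style sketch, assuming the native `conn` created above:

```python
cursor = conn.cursor()
cursor.execute("SELECT COUNT(*) FROM test.weather")
# fetchall() returns the full result set as a list of tuples.
data = cursor.fetchall()
print(data)
cursor.close()
```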
|
||||
|
||||
:::note
|
||||
The TaosCursor class uses native connections for write and query operations. In a client-side multi-threaded scenario, a cursor instance must remain exclusive to a single thread and cannot be shared across threads; otherwise, the returned results may be incorrect.
|
||||
|
||||
The best practice for TaosCursor is to create a cursor at the beginning of a query and close it immediately after use. Please avoid reusing the same cursor for multiple executions.
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem value="rest" label="REST connection">
|
||||
|
||||
##### Use of the RestClient class
|
||||
|
||||
The `RestClient` class is a direct wrapper for the [REST API](../../reference/rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
|
||||
|
||||
```python title="Use of RestClient"
|
||||
{{#include docs/examples/python/rest_client_example.py}}
|
||||
```python
|
||||
{{#include docs/examples/python/insert_rest.py:insert}}
|
||||
```
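A minimal sketch, assuming taosAdapter is reachable at localhost:6041 with default credentials:

```python
from taosrest import RestClient

client = RestClient("http://localhost:6041", user="root", password="taosdata")
# sql() returns a dict containing column metadata and row data.
res = client.sql("SELECT SERVER_VERSION()")
print(res)
```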
|
||||
|
||||
For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
|
||||
|
||||
##### Use of TaosRestCursor class
|
||||
|
||||
The `TaosRestCursor` class is an implementation of the PEP249 Cursor interface.
|
||||
|
||||
```python title="Use of TaosRestCursor"
|
||||
{{#include docs/examples/python/connect_rest_examples.py:basic}}
|
||||
```
|
||||
- `cursor.execute`: Used to execute arbitrary SQL statements.
|
||||
- `cursor.rowcount`: For write operations, returns the number of rows successfully written. For query operations, returns the number of rows in the result set.
|
||||
- `cursor.description`: Returns the description of the fields. Please refer to [TaosRestCursor](https://docs.taosdata.com/api/taospy/taosrest/cursor.html) for the specific format of the description information.
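A minimal sketch of these cursor members over a REST connection, assuming taosAdapter at localhost:6041:

```python
import taosrest

conn = taosrest.connect(url="http://localhost:6041")
cursor = conn.cursor()
cursor.execute("SHOW DATABASES")
print(cursor.rowcount)     # number of rows in the result set
print(cursor.description)  # field descriptions
print(cursor.fetchall())
cursor.close()
```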
|
||||
|
||||
:::note
|
||||
The best practice for TaosRestCursor is to create a cursor at the beginning of a query and close it immediately after use. Please avoid reusing the same cursor for multiple executions.
|
||||
:::
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
The `Connection` class contains both an implementation of the PEP249 Connection interface (e.g., the `cursor()` method and the `close()` method) and many extensions (e.g., the `execute()`, `query()`, `schemaless_insert()`, and `subscribe()` methods).
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/connect_websocket_examples.py:basic}}
|
||||
{{#include docs/examples/python/insert_ws.py:insert}}
|
||||
```
|
||||
|
||||
- `conn.execute`: can be used to execute arbitrary SQL statements and returns the number of rows affected.
|
||||
- `conn.query`: can be used to execute SQL queries and returns the query results.
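A minimal sketch over a WebSocket connection, assuming taosAdapter at localhost:6041:

```python
import taosws

conn = taosws.connect("taosws://localhost:6041")
affected = conn.execute("CREATE DATABASE IF NOT EXISTS test")  # affected rows
result = conn.query("SELECT SERVER_VERSION()")
for row in result:
    print(row)
```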
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
> NOW is an internal function. The default is the current time of the client's computer.
|
||||
> `NOW + 1s` represents the current time of the client plus 1 second, followed by the number representing the unit of time: a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years).
|
||||
|
||||
### Querying Data
|
||||
|
||||
<Tabs defaultValue="rest">
|
||||
|
@ -490,7 +419,7 @@ The `Connection` class contains both an implementation of the PEP249 Connection
|
|||
The `query` method of the `TaosConnection` class can be used to query data and return the result data of type `TaosResult`.
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/connection_usage_native_reference.py:query}}
|
||||
{{#include docs/examples/python/insert_native.py:query}}
|
||||
```
|
||||
|
||||
:::tip
|
||||
|
@ -504,7 +433,7 @@ The queried results can only be fetched once. For example, only one of `fetch_al
|
|||
The `RestClient` class is a direct wrapper for the [REST API](../../reference/rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/rest_client_example.py}}
|
||||
{{#include docs/examples/python/insert_rest.py:query}}
|
||||
```
|
||||
|
||||
For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
|
||||
|
@ -516,7 +445,7 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
|
|||
The `query` method of the `TaosConnection` class can be used to query data and return the result data of type `TaosResult`.
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/connect_websocket_examples.py:basic}}
|
||||
{{#include docs/examples/python/insert_ws.py:query}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -524,61 +453,25 @@ The `query` method of the `TaosConnection` class can be used to query data and r
|
|||
|
||||
### Execute SQL with reqId
|
||||
|
||||
By using the optional req_id parameter, you can specify a request ID that can be used for tracing.
|
||||
<RequestId />
|
||||
|
||||
<Tabs defaultValue="rest">
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
##### TaosConnection class
|
||||
|
||||
Same as the methods introduced above, but with an additional `req_id` argument.
|
||||
|
||||
```python title="execute method"
|
||||
{{#include docs/examples/python/connection_usage_native_reference_with_req_id.py:insert}}
|
||||
```
|
||||
|
||||
```python title="query method"
|
||||
{{#include docs/examples/python/connection_usage_native_reference_with_req_id.py:query}}
|
||||
```
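For example, a minimal sketch passing `req_id` as a keyword argument, assuming a native `conn` as above; the ID values are arbitrary and only need to be unique per request:

```python
conn.execute("CREATE DATABASE IF NOT EXISTS test", req_id=1)
result = conn.query("SHOW DATABASES", req_id=2)
```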
|
||||
|
||||
##### Use of TaosResult class
|
||||
|
||||
Same as the data-fetching methods introduced above, but with an additional `req_id` argument.
|
||||
|
||||
```python title="blocks_iter method"
|
||||
{{#include docs/examples/python/result_set_with_req_id_examples.py}}
|
||||
```
|
||||
##### Use of the TaosCursor class
|
||||
|
||||
The `TaosConnection` class and the `TaosResult` class already implement all the functionality of the native interface. If you are familiar with the interfaces in the PEP249 specification, you can also use the methods provided by the `TaosCursor` class.
|
||||
|
||||
```python title="Use of TaosCursor"
|
||||
{{#include docs/examples/python/cursor_usage_native_reference_with_req_id.py}}
|
||||
```python
|
||||
{{#include docs/examples/python/insert_native.py:req_id}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="rest" label="REST connection">
|
||||
|
||||
##### Use of the RestClient class
|
||||
|
||||
The `RestClient` class is a direct wrapper for the [REST API](../../reference/rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
|
||||
|
||||
```python title="Use of RestClient"
|
||||
{{#include docs/examples/python/rest_client_with_req_id_example.py}}
|
||||
```
|
||||
|
||||
For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
|
||||
|
||||
##### Use of TaosRestCursor class
|
||||
|
||||
Same as the methods introduced above, but with an additional `req_id` argument.
|
||||
|
||||
```python title="Use of TaosRestCursor"
|
||||
{{#include docs/examples/python/connect_rest_with_req_id_examples.py:basic}}
|
||||
```python
|
||||
{{#include docs/examples/python/insert_rest.py:req_id}}
|
||||
```
|
||||
- `cursor.execute`: Used to execute arbitrary SQL statements.
|
||||
- `cursor.rowcount`: For write operations, returns the number of rows successfully written. For query operations, returns the number of rows in the result set.
|
||||
- `cursor.description`: Returns the description of the fields. Please refer to [TaosRestCursor](https://docs.taosdata.com/api/taospy/taosrest/cursor.html) for the specific format of the description information.
|
||||
|
||||
</TabItem>
|
||||
|
||||
|
@ -587,36 +480,7 @@ As the way to connect introduced above but add `req_id` argument.
|
|||
Same as the methods introduced above, but with an additional `req_id` argument.
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/connect_websocket_with_req_id_examples.py:basic}}
|
||||
```
|
||||
|
||||
- `conn.execute`: can be used to execute arbitrary SQL statements and returns the number of rows affected.
|
||||
- `conn.query`: can be used to execute SQL queries and returns the query results.
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Used with pandas
|
||||
|
||||
<Tabs defaultValue="rest">
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/conn_native_pandas.py}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="rest" label="REST connection">
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/conn_rest_pandas.py}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/conn_websocket_pandas.py}}
|
||||
{{#include docs/examples/python/insert_ws.py:req_id}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -629,126 +493,15 @@ The Python client library provides a parameter binding api for inserting data. S
|
|||
<Tabs>
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
##### Create Stmt
|
||||
|
||||
Call the `statement` method in `Connection` to create the `stmt` for parameter binding.
|
||||
|
||||
```python
|
||||
import taos
|
||||
|
||||
conn = taos.connect()
|
||||
stmt = conn.statement("insert into log values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)")
|
||||
```
|
||||
|
||||
##### Parameter binding
|
||||
|
||||
Call the `new_multi_binds` function to create the parameter list for parameter bindings.
|
||||
|
||||
```python
|
||||
params = new_multi_binds(16)
|
||||
params[0].timestamp((1626861392589, 1626861392590, 1626861392591))
|
||||
params[1].bool((True, None, False))
|
||||
params[2].tinyint([-128, -128, None]) # -128 is tinyint null
|
||||
params[3].tinyint([0, 127, None])
|
||||
params[4].smallint([3, None, 2])
|
||||
params[5].int([3, 4, None])
|
||||
params[6].bigint([3, 4, None])
|
||||
params[7].tinyint_unsigned([3, 4, None])
|
||||
params[8].smallint_unsigned([3, 4, None])
|
||||
params[9].int_unsigned([3, 4, None])
|
||||
params[10].bigint_unsigned([3, 4, None])
|
||||
params[11].float([3, None, 1])
|
||||
params[12].double([3, None, 1.2])
|
||||
params[13].binary(["abc", "dddafadfadfadfadfa", None])
|
||||
params[14].nchar(["涛思数据", None, "a long string with 中文字符"])
|
||||
params[15].timestamp([None, None, 1626861392591])
|
||||
```
|
||||
|
||||
Call the `bind_param` (for a single row) method or the `bind_param_batch` (for multiple rows) method to set the values.
|
||||
|
||||
```python
|
||||
stmt.bind_param_batch(params)
|
||||
```
|
||||
|
||||
##### Execute SQL
|
||||
|
||||
Call the `execute` method to execute the SQL statement.
|
||||
|
||||
```python
|
||||
stmt.execute()
|
||||
```
|
||||
|
||||
##### Close Stmt
|
||||
|
||||
```python
|
||||
stmt.close()
|
||||
```
|
||||
|
||||
##### Example
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/stmt_example.py}}
|
||||
{{#include docs/examples/python/stmt_native.py:stmt}}
|
||||
```
|
||||
</TabItem>
|
||||
|
||||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
##### Create Stmt
|
||||
|
||||
Call the `statement` method in `Connection` to create the `stmt` for parameter binding.
|
||||
|
||||
```python
|
||||
import taosws
|
||||
|
||||
conn = taosws.connect('taosws://localhost:6041/test')
|
||||
stmt = conn.statement()
|
||||
```
|
||||
|
||||
##### Prepare SQL
|
||||
|
||||
Call the `prepare` method of `stmt` to prepare the SQL statement.
|
||||
|
||||
```python
|
||||
stmt.prepare("insert into t1 values (?, ?, ?, ?)")
|
||||
```
|
||||
|
||||
##### Parameter binding
|
||||
|
||||
Call the `bind_param` method to bind parameters.
|
||||
|
||||
```python
|
||||
stmt.bind_param([
|
||||
taosws.millis_timestamps_to_column([1686844800000, 1686844801000, 1686844802000, 1686844803000]),
|
||||
taosws.ints_to_column([1, 2, 3, 4]),
|
||||
taosws.floats_to_column([1.1, 2.2, 3.3, 4.4]),
|
||||
taosws.varchar_to_column(['a', 'b', 'c', 'd']),
|
||||
])
|
||||
```
|
||||
|
||||
Call the `add_batch` method to add parameters to the batch.
|
||||
|
||||
```python
|
||||
stmt.add_batch()
|
||||
```
|
||||
|
||||
##### Execute SQL
|
||||
|
||||
Call the `execute` method to execute the SQL statement.
|
||||
|
||||
```python
|
||||
stmt.execute()
|
||||
```
|
||||
|
||||
##### Close Stmt
|
||||
|
||||
```python
|
||||
stmt.close()
|
||||
```
|
||||
|
||||
##### Example
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/stmt_websocket_example.py}}
|
||||
{{#include docs/examples/python/stmt_ws.py:stmt}}
|
||||
```
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
@ -758,46 +511,18 @@ stmt.close()
|
|||
The client library supports schemaless insert.
|
||||
|
||||
<Tabs defaultValue="list">
|
||||
<TabItem value="list" label="List Insert">
|
||||
|
||||
##### Simple insert
|
||||
<TabItem value="list" label="native connection">
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/schemaless_insert.py}}
|
||||
```
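A minimal sketch of schemaless writing with the InfluxDB line protocol over a native connection; the database name and sample line are assumptions for the example:

```python
import taos

conn = taos.connect()
conn.execute("CREATE DATABASE IF NOT EXISTS test")
conn.select_db("test")
lines = [
    "meters,location=California.SanFrancisco,groupid=2 "
    "current=10.3,voltage=219,phase=0.31 1626006833639000000",
]
conn.schemaless_insert(lines, taos.SmlProtocol.LINE_PROTOCOL, taos.SmlPrecision.NANO_SECONDS)
```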
|
||||
|
||||
##### Insert with ttl argument
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/schemaless_insert_ttl.py}}
|
||||
```
|
||||
|
||||
##### Insert with req_id argument
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/schemaless_insert_req_id.py}}
|
||||
{{#include docs/examples/python/schemaless_native.py}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem value="raw" label="Raw Insert">
|
||||
|
||||
##### Simple insert
|
||||
<TabItem value="raw" label="WebSocket connection">
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/schemaless_insert_raw.py}}
|
||||
```
|
||||
|
||||
##### Insert with ttl argument
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/schemaless_insert_raw_ttl.py}}
|
||||
```
|
||||
|
||||
##### Insert with req_id argument
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/schemaless_insert_raw_req_id.py}}
|
||||
{{#include docs/examples/python/schemaless_ws.py}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
|
@ -808,11 +533,12 @@ Client library support schemaless insert.
|
|||
There is an optional parameter called `req_id` in the `schemaless_insert` and `schemaless_insert_raw` methods. This `req_id` can be used for request link tracing.
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/schemaless_insert_req_id.py}}
|
||||
```
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/schemaless_insert_raw_req_id.py}}
|
||||
conn.schemaless_insert(
|
||||
lines=lineDemo,
|
||||
protocol=taos.SmlProtocol.LINE_PROTOCOL,
|
||||
precision=taos.SmlPrecision.NANO_SECONDS,
|
||||
req_id=1,
|
||||
)
|
||||
```
|
||||
|
||||
### Data Subscription
|
||||
|
@ -821,194 +547,56 @@ Client library support data subscription. For more information about subscroptio
|
|||
|
||||
#### Create a Topic
|
||||
|
||||
To create a topic, please refer to [Data Subscription](../../develop/tmq/#create-a-topic).
|
||||
```python
|
||||
{{#include docs/examples/python/tmq_native.py:create_topic}}
|
||||
```
|
||||
|
||||
#### Create a Consumer
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
The consumer in the client library contains the subscription API. The syntax for creating a consumer is `consumer = Consumer(configs)`. For more subscription API parameters, please refer to [Data Subscription](../../develop/tmq/#create-a-consumer).
|
||||
|
||||
```python
|
||||
from taos.tmq import Consumer
|
||||
|
||||
consumer = Consumer({"group.id": "local", "td.connect.ip": "127.0.0.1"})
|
||||
{{#include docs/examples/python/tmq_native.py:create_consumer}}
|
||||
```
|
||||
</TabItem>
|
||||
|
||||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
In addition to native connections, the client library also supports subscriptions over WebSocket connections.
|
||||
|
||||
The syntax for creating a consumer is `consumer = Consumer(conf=configs)`. You need to set the `td.connect.websocket.scheme` parameter to "ws" in the configuration. For more subscription API parameters, please refer to [Data Subscription](../../develop/tmq/#create-a-consumer).
|
||||
|
||||
```python
|
||||
import taosws
|
||||
|
||||
consumer = taosws.Consumer(conf={"group.id": "local", "td.connect.websocket.scheme": "ws"})
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
#### Subscribe to a Topic
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
The `subscribe` function is used to subscribe to a list of topics.
|
||||
|
||||
```python
|
||||
consumer.subscribe(['topic1', 'topic2'])
|
||||
{{#include docs/examples/python/tmq_native.py:subscribe}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
The `subscribe` function is used to subscribe to a list of topics.
|
||||
|
||||
```python
|
||||
consumer.subscribe(['topic1', 'topic2'])
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
#### Consume messages
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
The `poll` function is used to consume data in TMQ. The parameter of the `poll` function is a float representing the timeout in seconds. It returns a `Message` if one arrives before the timeout, or `None` if the call times out. You have to handle error messages in the response data.
|
||||
|
||||
```python
|
||||
while True:
|
||||
message = consumer.poll(1)
|
||||
if not message:
|
||||
continue
|
||||
err = message.error()
|
||||
if err is not None:
|
||||
raise err
|
||||
val = message.value()
|
||||
|
||||
for block in val:
|
||||
print(block.fetchall())
|
||||
{{#include docs/examples/python/tmq_native.py:consume}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
The `poll` function is used to consume data in TMQ. The parameter of the `poll` function is a float representing the timeout in seconds. It returns a `Message` if one arrives before the timeout, or `None` if the call times out.
|
||||
|
||||
```python
|
||||
while True:
|
||||
message = consumer.poll(1)
|
||||
if not message:
|
||||
continue
|
||||
|
||||
for block in message:
|
||||
for row in block:
|
||||
print(row)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
#### Assignment subscription Offset
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
The `assignment` function is used to get the assignment of the topic.
|
||||
|
||||
```python
|
||||
assignments = consumer.assignment()
|
||||
{{#include docs/examples/python/tmq_native.py:assignment}}
|
||||
```
|
||||
|
||||
The `seek` function is used to reset the consumption offset of a topic partition.
|
||||
|
||||
```python
|
||||
tp = TopicPartition(topic='topic1', partition=0, offset=0)
|
||||
consumer.seek(tp)
|
||||
{{#include docs/examples/python/tmq_native.py:consume}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
The `assignment` function is used to get the assignment of the topic.
|
||||
|
||||
```python
|
||||
assignments = consumer.assignment()
|
||||
```
|
||||
|
||||
The `seek` function is used to reset the consumption offset of a topic partition.
|
||||
|
||||
```python
|
||||
consumer.seek(topic='topic1', partition=0, offset=0)
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
#### Close subscriptions
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
You should unsubscribe from the topics and close the consumer after consuming.
|
||||
|
||||
```python
|
||||
consumer.unsubscribe()
|
||||
consumer.close()
|
||||
{{#include docs/examples/python/tmq_native.py:unsubscribe}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
You should unsubscribe from the topics and close the consumer after consuming.
|
||||
|
||||
```python
|
||||
consumer.unsubscribe()
|
||||
consumer.close()
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
#### Full Sample Code
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/tmq_example.py}}
|
||||
{{#include docs/examples/python/tmq_native.py}}
|
||||
```
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/tmq_assignment_example.py:taos_get_assignment_and_seek_demo}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/tmq_websocket_example.py}}
|
||||
```
|
||||
|
||||
```python
|
||||
{{#include docs/examples/python/tmq_websocket_assgnment_example.py:taosws_get_assignment_and_seek_demo}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Other sample programs
|
||||
|
||||
| Example program links | Example program content |
|
||||
|
|
|
@ -7,35 +7,30 @@ toc_max_heading_level: 4
|
|||
|
||||
import Tabs from "@theme/Tabs";
|
||||
import TabItem from "@theme/TabItem";
|
||||
import RequestId from "./_request_id.mdx";
|
||||
|
||||
import Preparition from "./_preparation.mdx";
|
||||
import NodeInsert from "../07-develop/03-insert-data/_js_sql.mdx";
|
||||
import NodeInfluxLine from "../07-develop/03-insert-data/_js_line.mdx";
|
||||
import NodeOpenTSDBTelnet from "../07-develop/03-insert-data/_js_opts_telnet.mdx";
|
||||
import NodeOpenTSDBJson from "../07-develop/03-insert-data/_js_opts_json.mdx";
|
||||
import NodeQuery from "../07-develop/04-query-data/_js.mdx";
|
||||
`@tdengine/websocket` is the official Node.js client library for TDengine. Node.js developers can use it to develop applications that access TDengine instance data.
|
||||
|
||||
`@tdengine/client` and `@tdengine/rest` are the official Node.js client libraries. Node.js developers can develop applications to access TDengine instance data. Note: The client libraries for TDengine 3.0 are different from those for TDengine 2.x. The new client libraries do not support TDengine 2.x.
|
||||
The source code for the Node.js client library is hosted on [GitHub](https://github.com/taosdata/taos-connector-node/tree/main).
|
||||
|
||||
`@tdengine/client` is **native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and bind interface. `@tdengine/rest` is the **REST connection**, which connects to TDengine instances via the REST interface provided by taosAdapter. The REST client library can run on any platform, but performance is slightly degraded, and the interface implements a somewhat different set of functional features than the native interface.
|
||||
## Connection types
|
||||
|
||||
The source code for the Node.js client libraries is located on [GitHub](https://github.com/taosdata/taos-connector-node/tree/3.0).
|
||||
The Node.js client library supports only WebSocket connections, which are implemented through taosAdapter.
|
||||
|
||||
For a detailed introduction of the connection types, please refer to: [Establish Connection](../../develop/connect/#establish-connection)
|
||||
|
||||
## Supported platforms
|
||||
|
||||
The platforms supported by the native client library are the same as those supported by the TDengine client driver.
|
||||
The REST client library supports all platforms that can run Node.js.
|
||||
The Node.js client library requires Node.js 14 or later.
|
||||
|
||||
## Version support
|
||||
## Recent update logs
|
||||
|
||||
Please refer to [version support list](../#version-support)
|
||||
| Node.js connector version | major changes | TDengine version |
|
||||
| :-----------------------: | :------------------: | :----------------:|
|
||||
| 3.1.0 | new version, supports WebSocket | 3.2.0.0 or later |
|
||||
|
||||
## Supported features
|
||||
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
<TabItem value="native" label="Native connection">
|
||||
|
||||
1. Connection Management
|
||||
2. General Query
|
||||
3. Continuous Query
|
||||
|
@ -43,294 +38,300 @@ Please refer to [version support list](../#version-support)
|
|||
5. Subscription
|
||||
6. Schemaless
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="rest" label="REST connection">
|
||||
## Handling exceptions
|
||||
|
||||
1. Connection Management
|
||||
2. General Query
|
||||
3. Continuous Query
|
||||
After an error is reported, the error message and error code can be obtained through try/catch, as shown in the sketch after the table below. Node.js client library error codes are between 100 and 110, while all other error codes come from TDengine function modules.
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
Please refer to the table below for error codes, descriptions, and suggested actions.
|
||||
|
||||
| Error Code | Description | Suggested Actions |
|
||||
| ---------- | -------------------------------------------------------------| -------------------------------------------------------------------------------------------------- |
|
||||
| 100 | invalid variables | The parameter is invalid. Check the interface specification and adjust the parameter type and size.|
|
||||
| 101 | invalid url | URL error, please check if the url is correct. |
|
||||
| 102 | received server data but did not find a callback for processing | Client waiting timeout, please check network and taosAdapter status. |
|
||||
| 103 | invalid message type | Please check if the client version and server version match. |
|
||||
| 104 | connection creation failed | Connection creation failed. Please check the network and taosAdapter status. |
|
||||
| 105 | websocket request timeout | Increase the execution time by adding the messageWaitTimeout parameter, or check the connection to taosAdapter.|
|
||||
| 106 | authentication fail | Authentication failed, please check if the username and password are correct. |
|
||||
| 107 | unknown sql type in tdengine | Check the data type supported by TDengine. |
|
||||
| 108 | connection has been closed | The connection has been closed, check the connection status, or recreate the connection to execute the relevant instructions. |
|
||||
| 109 | fetch block data parse fail | Please check if the client version and server version match. |
|
||||
| 110 | websocket connection has reached its maximum limit | Please check if the connection has been closed after use |
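A minimal sketch of catching connector errors, assuming a local taosAdapter and a deliberately invalid statement; the `code` and `message` fields of the error object correspond to the table above:

```javascript
const taos = require("@tdengine/websocket");

async function main() {
    let conn = null;
    try {
        let conf = new taos.WSConfig("ws://localhost:6041");
        conf.setUser("root");
        conf.setPwd("taosdata");
        conn = await taos.sqlConnect(conf);
        await conn.exec("INVALID SQL");  // deliberately triggers an error
    } catch (err) {
        console.error(`ErrCode: ${err.code}, ErrMessage: ${err.message}`);
    } finally {
        if (conn) {
            await conn.close();
        }
        taos.destroy();
    }
}

main();
```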
|
||||
|
||||
## Data Type Mapping
|
||||
|
||||
The table below describes the mapping between TDengine data types and Node.js data types.
|
||||
|
||||
| TDengine Data Type | Node.js Data Type|
|
||||
|-------------------|-------------|
|
||||
| TIMESTAMP | bigint |
|
||||
| TINYINT | number |
|
||||
| SMALLINT | number |
|
||||
| INT | number |
|
||||
| BIGINT | bigint |
|
||||
| TINYINT UNSIGNED | number |
|
||||
| SMALLINT UNSIGNED | number |
|
||||
| INT UNSIGNED | number |
|
||||
| BIGINT UNSIGNED | bigint |
|
||||
| FLOAT | number |
|
||||
| DOUBLE | number |
|
||||
| BOOL | boolean |
|
||||
| BINARY | string |
|
||||
| NCHAR | string |
|
||||
| JSON | string |
|
||||
| VARBINARY | ArrayBuffer |
|
||||
| GEOMETRY | ArrayBuffer |
|
||||
|
||||
**Note**: JSON types are supported only in tags.
|
||||
|
||||
## Installation Steps
|
||||
|
||||
### Pre-installation preparation
|
||||
|
||||
- Install the Node.js development environment
|
||||
- If you are using the REST client library, skip this step. However, if you use the native client library, please install the TDengine client driver. Please refer to [Install Client Driver](../#install-client-driver) for more details. We use [node-gyp](https://github.com/nodejs/node-gyp) to interact with TDengine instances and also need to install some dependencies mentioned below depending on the specific OS.
|
||||
- Install the Node.js development environment, using version 14 or above. Download link: https://nodejs.org/en/download/
|
||||
|
||||
<Tabs defaultValue="Linux">
|
||||
<TabItem value="Linux" label="Linux system installation dependencies">
|
||||
|
||||
- `python` (`v2.7` recommended; `v3.x.x` is currently not supported)
|
||||
- `@tdengine/client` 3.0.0 supports Node.js LTS v10.9.0 or later and Node.js LTS v12.8.0 or later. Older versions may be incompatible.
|
||||
- `make`
|
||||
- C compiler, [GCC](https://gcc.gnu.org) v4.8.5 or later.
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem value="macOS" label="macOS installation dependencies">
|
||||
|
||||
- `python` (`v2.7` recommended; `v3.x.x` is currently not supported)
|
||||
- `@tdengine/client` 3.0.0 currently supports Node.js from v12.22.12, but only later versions of v12. Other versions may be incompatible.
|
||||
- `make`
|
||||
- C compiler, [GCC](https://gcc.gnu.org) v4.8.5 or later.
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem value="Windows" label="Windows system installation dependencies">
|
||||
|
||||
- Installation method 1
|
||||
|
||||
Use Microsoft's [windows-build-tools](https://github.com/felixrieseberg/windows-build-tools): execute `npm install --global --production windows-build-tools` from the `cmd` command-line interface to install all the necessary tools.
|
||||
|
||||
- Installation method 2
|
||||
|
||||
Manually install the following tools.
|
||||
|
||||
- Install Visual Studio related: [Visual Studio Build Tools](https://visualstudio.microsoft.com/thank-you-downloading-visual-studio/?sku=BuildTools) or [Visual Studio 2017 Community](https://visualstudio.microsoft.com/pl/thank-you-downloading-visual-studio/?sku=Community)
|
||||
- Install [Python](https://www.python.org/downloads/) 2.7 (`v3.x.x` is not supported) and execute `npm config set python python2.7`.
|
||||
- Go to the `cmd` command-line interface, `npm config set msvs_version 2017`
|
||||
|
||||
Refer to Microsoft's Node.js User Manual [Microsoft's Node.js Guidelines for Windows](https://github.com/Microsoft/nodejs-guidelines/blob/master/windows-environment.md#compiling-native-addon-modules).
|
||||
|
||||
If using ARM64 Node.js on Windows 10 ARM, you must add "Visual C++ compilers and libraries for ARM64" and "Visual C++ ATL for ARM64".
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Install via npm
|
||||
|
||||
<Tabs defaultValue="install_rest">
|
||||
<TabItem value="install_native" label="Install native clieny library">
|
||||
### Install Node.js client library via npm
|
||||
|
||||
```bash
|
||||
npm install @tdengine/client
|
||||
npm install @tdengine/websocket
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="install_rest" label="Install REST client library">
|
||||
|
||||
```bash
|
||||
npm install @tdengine/rest
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
### Verify
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
<TabItem value="native" label="Native client library">
|
||||
|
||||
After installing the TDengine client, use the `nodejsChecker.js` program to verify that the current environment supports Node.js access to TDengine.
|
||||
|
||||
Verification in details:
|
||||
|
||||
- Create an installation test folder such as `~/tdengine-test`. Download the [nodejsChecker.js source code](https://github.com/taosdata/taos-connector-node/blob/3.0/nodejs/examples/nodejsChecker.js) to your local machine.
|
||||
- Create an installation test folder such as `~/tdengine-test`. Download the [nodejsChecker.js](https://github.com/taosdata/TDengine/tree/main/docs/examples/node/websocketexample/nodejsChecker.js) to your local machine.
|
||||
|
||||
- Execute the following command from the command-line.
|
||||
|
||||
```bash
|
||||
npm init -y
|
||||
npm install @tdengine/client
|
||||
npm install @tdengine/websocket
|
||||
node nodejsChecker.js host=localhost
|
||||
```
|
||||
|
||||
- After executing the above steps, the command-line will output the result of `nodejsChecker.js` connecting to the TDengine instance and performing a simple insert and query.
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="rest" label="REST client library">
|
||||
|
||||
After installing the TDengine client, use the `restChecker.js` program to verify that the current environment supports Node.js access to TDengine.
|
||||
|
||||
Detailed verification steps:
|
||||
|
||||
- Create an installation test folder such as `~/tdengine-test`. Download the [restChecker.js source code](https://github.com/taosdata/TDengine/tree/3.0/docs/examples/node/restexample/restChecker.js) to your local machine.
|
||||
|
||||
- Execute the following command from the command-line.
|
||||
|
||||
```bash
|
||||
npm init -y
|
||||
npm install @tdengine/rest
|
||||
node restChecker.js
|
||||
```
|
||||
|
||||
- After executing the above steps, the command-line will output the result of `restChecker.js` connecting to the TDengine instance and performing a simple insert and query.
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
## Establishing a connection
|
||||
|
||||
Please choose to use one of the client libraries.
|
||||
Install and import the `@tdengine/websocket` package.
|
||||
|
||||
<Tabs defaultValue="rest">
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
Install and import the `@tdengine/client` package.
|
||||
**Note**: After using the Node.js client library, you must call `taos.destroy()` to release connector resources.
|
||||
|
||||
```javascript
|
||||
//A cursor also needs to be initialized in order to interact with TDengine from Node.js.
|
||||
const taos = require("@tdengine/client");
|
||||
var conn = taos.connect({
|
||||
host: "127.0.0.1",
|
||||
user: "root",
|
||||
password: "taosdata",
|
||||
config: "/etc/taos",
|
||||
port: 0,
|
||||
});
|
||||
var cursor = conn.cursor(); // Initializing a new cursor
|
||||
const taos = require("@tdengine/websocket");
|
||||
|
||||
//Close a connection
|
||||
conn.close();
|
||||
//database operations......
|
||||
|
||||
taos.destroy();
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="rest" label="REST connection">
|
||||
|
||||
Install and import the `@tdengine/rest` package.
|
||||
|
||||
```javascript
|
||||
//A cursor also needs to be initialized in order to interact with TDengine from Node.js.
|
||||
import { options, connect } from "@tdengine/rest";
|
||||
options.path = "/rest/sql";
|
||||
// set host
|
||||
options.host = "localhost";
|
||||
// set other options like user/passwd
|
||||
|
||||
let conn = connect(options);
|
||||
let cursor = conn.cursor();
|
||||
WSConfig configures the WebSocket parameters as follows:
|
||||
getToken(): string | undefined | null;
|
||||
setToken(token: string): void;
|
||||
getUser(): string | undefined | null;
|
||||
setUser(user: string): void;
|
||||
getPwd(): string | undefined | null;
|
||||
setPwd(pws: string): void;
|
||||
getDb(): string | undefined | null;
|
||||
setDb(db: string): void;
|
||||
getUrl(): string;
|
||||
setUrl(url: string): void;
|
||||
setTimeOut(ms: number): void;
|
||||
getTimeOut(): number | undefined | null;
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/sql_example.js:createConnect}}
|
||||
```
|
||||
|
||||
## Usage examples
|
||||
|
||||
### Write data
|
||||
### Create database and tables
|
||||
|
||||
#### SQL Write
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
<NodeInsert />
|
||||
|
||||
</TabItem>
|
||||
<TabItem value="rest" label="REST connection">
|
||||
|
||||
```js
|
||||
{{#include docs/examples/node/restexample/insert_example.js}}
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/sql_example.js:create_db_and_table}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
**Note**: If you do not use `USE power` to specify the database, all subsequent operations on the table need to add the database name as a prefix, such as `power.meters`.
|
||||
|
||||
#### InfluxDB line protocol write
|
||||
### Insert data
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
<TabItem value="native" label="native connection">
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/sql_example.js:insertData}}
|
||||
```
|
||||
|
||||
<NodeInfluxLine />
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
#### OpenTSDB Telnet line protocol write
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
<NodeOpenTSDBTelnet />
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
|
||||
#### OpenTSDB JSON line protocol write
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
<NodeOpenTSDBJson />
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
> NOW is an internal function. The default is the current time of the client's computer.
|
||||
> `NOW + 1s` represents the current time of the client plus 1 second, followed by the number representing the unit of time: a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years).
|
||||
|
||||
### Querying data
|
||||
|
||||
<Tabs defaultValue="native">
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
<NodeQuery />
|
||||
|
||||
</TabItem>
|
||||
|
||||
<TabItem value="rest" label="REST connection">
|
||||
|
||||
```js
|
||||
{{#include docs/examples/node/restexample/query_example.js}}
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/sql_example.js:queryData}}
|
||||
```
|
||||
|
||||
</TabItem>
|
||||
</Tabs>
|
||||
> The returned data has the following structure:
|
||||
|
||||
```javascript
|
||||
wsRow:meta:=> [
|
||||
{ name: 'ts', type: 'TIMESTAMP', length: 8 },
|
||||
{ name: 'current', type: 'FLOAT', length: 4 },
|
||||
{ name: 'voltage', type: 'INT', length: 4 },
|
||||
{ name: 'phase', type: 'FLOAT', length: 4 },
|
||||
{ name: 'location', type: 'VARCHAR', length: 64},
|
||||
{ name: 'groupid', type: 'INT', length: 4 }
|
||||
]
|
||||
wsRow:data:=> [
|
||||
[ 1714013737536n, 12.3, 221, 0.31, 'California.SanFrancisco', 3 ]
|
||||
]
|
||||
```
|
||||
|
||||
### Execute SQL with reqId
|
||||
|
||||
<RequestId />
|
||||
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/sql_example.js:sqlWithReqid}}
|
||||
```
|
||||
|
||||
### Writing data via parameter binding
|
||||
|
||||
The Node.js client library provides a parameter binding API for inserting data. Similar to most databases, TDengine currently only supports the question mark `?` to indicate the parameters to be bound.
|
||||
|
||||
**Note**: Do not use `db.?` in `prepareStatement` when specifying the database together with the table name; use `?` directly and specify the database in `setTableName`, for example: `prepareStatement.setTableName("db.t1")`.
|
||||
|
||||
Sample Code:
|
||||
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/stmt_example.js}}
|
||||
```
|
||||
|
||||
The methods to set TAGS values or VALUES columns:
|
||||
|
||||
```javascript
|
||||
setBoolean(params: any[]): void;
|
||||
setTinyInt(params: any[]): void;
|
||||
setUTinyInt(params: any[]): void;
|
||||
setSmallInt(params: any[]): void;
|
||||
setUSmallInt(params: any[]): void;
|
||||
setInt(params: any[]): void;
|
||||
setUInt(params: any[]): void;
|
||||
setBigint(params: any[]): void;
|
||||
setUBigint(params: any[]): void;
|
||||
setFloat(params: any[]): void;
|
||||
setDouble(params: any[]): void;
|
||||
setVarchar(params: any[]): void;
|
||||
setBinary(params: any[]): void;
|
||||
setNchar(params: any[]): void;
|
||||
setJson(params: any[]): void;
|
||||
setVarBinary(params: any[]): void;
|
||||
setGeometry(params: any[]): void;
|
||||
setTimestamp(params: any[]): void;
|
||||
```
|
||||
|
||||
**Note**: JSON types are supported only in tags.
|
||||
|
||||
### Schemaless Writing
|
||||
|
||||
TDengine supports schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. For more information, see [Schemaless Writing](../../reference/schemaless/).
|
||||
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/line_example.js}}
|
||||
```
|
||||
|
||||
### Schemaless with reqId
|
||||
|
||||
This `reqId` can be used for request link tracing.
|
||||
|
||||
```javascript
|
||||
await wsSchemaless.schemalessInsert([influxdbData], SchemalessProto.InfluxDBLineProtocol, Precision.NANO_SECONDS, ttl, reqId);
|
||||
await wsSchemaless.schemalessInsert([telnetData], SchemalessProto.OpenTSDBTelnetLineProtocol, Precision.NANO_SECONDS, ttl, reqId);
|
||||
await wsSchemaless.schemalessInsert([jsonData], SchemalessProto.OpenTSDBJsonFormatProtocol, Precision.NANO_SECONDS, ttl, reqId);
|
||||
```
|
||||
|
||||
### Data Subscription
|
||||
|
||||
The TDengine Node.js client library supports subscription functionality with the following application API.
|
||||
|
||||
#### Create a Topic
|
||||
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/tmq_example.js:create_topic}}
|
||||
```
|
||||
|
||||
#### Create a Consumer
|
||||
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/tmq_example.js:create_consumer}}
|
||||
```
|
||||
|
||||
**Parameter Description**
|
||||
|
||||
- taos.TMQConstants.CONNECT_USER: username.
|
||||
- taos.TMQConstants.CONNECT_PASS: password.
|
||||
- taos.TMQConstants.GROUP_ID: Specifies the group that the consumer is in.
|
||||
- taos.TMQConstants.CLIENT_ID: client id.
|
||||
- taos.TMQConstants.WS_URL: The URL address of taosAdapter.
|
||||
- taos.TMQConstants.AUTO_OFFSET_RESET: Where to start consuming when the offset does not exist; the optional values are earliest and latest, and the default is latest.
|
||||
- taos.TMQConstants.ENABLE_AUTO_COMMIT: Specifies whether to commit automatically.
|
||||
- taos.TMQConstants.AUTO_COMMIT_INTERVAL_MS: Automatic commit interval; the default value is 5000 ms.
|
||||
- taos.TMQConstants.CONNECT_MESSAGE_TIMEOUT: socket timeout in milliseconds, the default value is 10000 ms. It only takes effect when using WebSocket type.
|
||||
|
||||
For more information, see [Consumer Parameters](../../develop/tmq). Note that the default value of auto.offset.reset in data subscription on the TDengine server has changed since version 3.2.0.0.
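Putting these parameters together, a minimal sketch of a consumer configuration; the values shown are illustrative:

```javascript
// Inside an async function, with `taos` required from "@tdengine/websocket".
let configMap = new Map([
    [taos.TMQConstants.GROUP_ID, "group1"],
    [taos.TMQConstants.CLIENT_ID, "client1"],
    [taos.TMQConstants.CONNECT_USER, "root"],
    [taos.TMQConstants.CONNECT_PASS, "taosdata"],
    [taos.TMQConstants.AUTO_OFFSET_RESET, "earliest"],
    [taos.TMQConstants.WS_URL, "ws://localhost:6041"],
    [taos.TMQConstants.ENABLE_AUTO_COMMIT, "true"],
    [taos.TMQConstants.AUTO_COMMIT_INTERVAL_MS, "1000"],
]);
let consumer = await taos.tmqConnect(configMap);
```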
|
||||
|
||||
#### Subscribe to consume data
|
||||
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/tmq_example.js:subscribe}}
|
||||
```
|
||||
|
||||
#### Assignment subscription Offset
|
||||
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/tmq_example.js:assignment}}
|
||||
```
|
||||
|
||||
#### Close subscriptions
|
||||
|
||||
```javascript
|
||||
// Unsubscribe
|
||||
consumer.unsubscribe();
|
||||
// Close consumer
|
||||
consumer.close()
|
||||
// free connector resource
|
||||
taos.destroy();
|
||||
```
|
||||
|
||||
For more information, see [Data Subscription](../../develop/tmq).
|
||||
|
||||
#### Full Sample Code
|
||||
|
||||
```javascript
|
||||
{{#include docs/examples/node/websocketexample/tmq_example.js}}
|
||||
```
|
||||
|
||||
## More sample programs
|
||||
|
||||
| Sample Programs | Sample Program Description |
|
||||
| --------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------- |
|
||||
| [basicUse](https://github.com/taosdata/taos-connector-node/blob/3.0/nodejs/examples/queryExample.js) | Basic operations such as establishing connections and running SQL commands. |
|
||||
| [stmtBindBatch](https://github.com/taosdata/taos-connector-node/blob/3.0/nodejs/examples/bindParamBatch.js) | Binding multi-line parameter insertion. |
|
||||
| [stmtBindSingleParamBatch](https://github.com/taosdata/taos-connector-node/blob/3.0/nodejs/examples/bindSingleParamBatch.js) | Columnar binding parameter insertion |
|
||||
| [stmtQuery](https://github.com/taosdata/taos-connector-node/blob/3.0/nodejs/examples/stmtQuery.js) | Binding parameter query |
|
||||
| [schemless insert](https://github.com/taosdata/taos-connector-node/blob/3.0/nodejs/examples/schemaless.js) | Schemaless insert |
|
||||
| [TMQ](https://github.com/taosdata/taos-connector-node/blob/3.0/nodejs/examples/tmq.js) | Using data subscription |
|
||||
| [asyncQuery](https://github.com/taosdata/taos-connector-node/blob/3.0/nodejs/examples/asyncQueryExample.js) | Using asynchronous queries |
|
||||
| [REST](https://github.com/taosdata/taos-connector-node/blob/3.0/typescript-rest/example/example.ts) | Using TypeScript with the REST client library |
|
||||
| ---------------------------------------------------------------------------------------------------------------------------------| -------------------------------------- |
|
||||
| [sql_example](https://github.com/taosdata/TDengine/tree/main/docs/examples/node/websocketexample/sql_example.js) | Basic operations such as establishing connections and running SQL commands. |
|
||||
| [stmt_example](https://github.com/taosdata/TDengine/tree/main/docs/examples/node/websocketexample/stmt_example.js) | Binding multi-line parameter insertion. |
|
||||
| [line_example](https://github.com/taosdata/TDengine/tree/main/docs/examples/node/websocketexample/line_example.js) | Schemaless insert |
|
||||
| [telnet_line_example](https://github.com/taosdata/TDengine/tree/main/docs/examples/node/websocketexample/telnet_line_example.js) | OpenTSDB Telnet insert |
|
||||
| [json_line_example](https://github.com/taosdata/TDengine/tree/main/docs/examples/node/websocketexample/json_line_example.js) | OpenTSDB Json insert |
|
||||
| [tmq_example](https://github.com/taosdata/TDengine/tree/main/docs/examples/node/websocketexample/tmq_example.js) | Using data subscription |
|
||||
|
||||
## Usage limitations
|
||||
|
||||
`@tdengine/client` 3.0.0 supports Node.js LTS v12.8.0 to 12.9.1 and 10.9.0 to 10.20.0.
|
||||
|
||||
|
||||
|
||||
|
||||
- Node.js client library (`@tdengine/websocket`) supports Node.js 14 or higher.
|
||||
- It supports only WebSocket connections, so taosAdapter must be started in advance.
|
||||
- After using the connection, you need to call `taos.destroy()` to release connector resources.
|
||||
|
||||
## Frequently Asked Questions
|
||||
|
||||
1. Using REST connections requires starting taosAdapter.
|
||||
|
||||
```bash
|
||||
sudo systemctl start taosadapter
|
||||
```
|
||||
|
||||
2. Node.js versions
|
||||
|
||||
`@tdengine/client` supports Node.js v10.9.0 to 10.20.0 and 12.8.0 to 12.9.1.
|
||||
|
||||
3. "Unable to establish connection", "Unable to resolve FQDN"
|
||||
|
||||
Usually, the root cause is an incorrect FQDN configuration. You can refer to this section in the [FAQ](https://docs.tdengine.com/2.4/train-faq/faq/#2-how-to-handle-unable-to-establish-connection) to troubleshoot.
|
||||
|
||||
## Important update records
|
||||
|
||||
### Native client library
|
||||
|
||||
| package name | version | TDengine version | Description |
|
||||
|------------------|---------|---------------------|------------------------------------------------------------------|
|
||||
| @tdengine/client | 3.0.0 | 3.0.0 | Supports TDengine 3.0. Not compatible with TDengine 2.x. |
|
||||
| td2.0-connector | 2.0.12 | 2.4.x; 2.5.x; 2.6.x | Fixed cursor.close() bug. |
|
||||
| td2.0-connector | 2.0.11 | 2.4.x; 2.5.x; 2.6.x | Supports parameter binding, JSON tags and schemaless interface |
|
||||
| td2.0-connector | 2.0.10 | 2.4.x; 2.5.x; 2.6.x | Supports connection management, standard queries, connection queries, system information, and data subscription |
|
||||
### REST client library
|
||||
|
||||
| package name | version | TDengine version | Description |
|
||||
|----------------------|---------|---------------------|---------------------------------------------------------------------------|
|
||||
| @tdengine/rest | 3.0.0 | 3.0.0 | Supports TDengine 3.0. Not compatible with TDengine 2.x. |
|
||||
| td2.0-rest-connector | 1.0.7 | 2.4.x; 2.5.x; 2.6.x | Removed default port 6041 |
|
||||
| td2.0-rest-connector | 1.0.6 | 2.4.x; 2.5.x; 2.6.x | Fixed affectRows bug with create, insert, update, and alter. |
|
||||
| td2.0-rest-connector | 1.0.5 | 2.4.x; 2.5.x; 2.6.x | Support cloud token |
|
||||
| td2.0-rest-connector | 1.0.3 | 2.4.x; 2.5.x; 2.6.x | Supports connection management, standard queries, system information, error information, and continuous queries |
|
||||
1. "Unable to establish connection" or "Unable to resolve FQDN"
|
||||
|
||||
**Solution**: Usually, the root cause is an incorrect FQDN configuration. You can refer to this section in the [FAQ](../../train-faq/faq/#2-how-can-i-resolve-the-unable-to-establish-connection-error) to troubleshoot.
|
||||
|
|
|
@ -7,19 +7,23 @@ toc_max_heading_level: 4
|
|||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
import RequestId from "./_request_id.mdx";
|
||||
|
||||
`TDengine.Connector` is the C# language connector provided by TDengine. C# developers can use it to develop C# application software that accesses TDengine cluster data.
|
||||
|
||||
`TDengine.Connector` supports establishing connections to a running TDengine instance through the TDengine client driver (taosc), providing functions such as data writing, querying, data subscription, schemaless writing, and parameter binding. Since v3.0.1, `TDengine.Connector` also supports WebSocket connections, providing functions such as data writing, querying, and parameter binding.
|
||||
## Connection types
|
||||
|
||||
This article introduces how to install `TDengine.Connector` in a Linux or Windows environment, and connect to the TDengine cluster through `TDengine.Connector` to perform basic operations such as data writing and querying.
|
||||
`TDengine.Connector` provides two connection types.
|
||||
|
||||
* **Native connection**, which connects to TDengine instances natively through the TDengine client driver (taosc), supporting data writing, querying, subscriptions, schemaless writing, and bind interface.
|
||||
* **WebSocket connection** (since v3.0.1), which is implemented through taosAdapter. The set of features implemented by the WebSocket connection differs slightly from those implemented by the native connection.
|
||||
|
||||
For a detailed introduction of the connection types, please refer to: [Establish Connection](../../develop/connect/#establish-connection)
|
||||
|
||||
## Compatibility
|
||||
|
||||
:::warning
|
||||
* `TDengine.Connector` version 3.1.0 has been completely refactored and is no longer compatible with 3.0.2 and previous versions. For 3.0.2 documents, please refer to [nuget](https://www.nuget.org/packages/TDengine.Connector/3.0.2)
|
||||
* `TDengine.Connector` 3.x is not compatible with TDengine 2.x. If you need to use the C# connector in an environment running TDengine 2.x version, please use the 1.x version of TDengine.Connector.
|
||||
:::
|
||||
|
||||
The source code of `TDengine.Connector` is hosted on [GitHub](https://github.com/taosdata/taos-connector-dotnet/tree/3.0).
|
||||
|
||||
## Supported platforms
|
||||
|
||||
|
@ -31,9 +35,12 @@ TDengine no longer supports 32-bit Windows platforms.
|
|||
|
||||
## Version support
|
||||
|
||||
| **Connector version** | **TDengine version** |
|
||||
|-----------------------|----------------------|
|
||||
| 3.1.0 | 3.2.1.0/3.1.1.18 |
|
||||
| **Connector version** | **TDengine version** | **Major features** |
|
||||
|-----------------------|----------------------|--------------------------------------|
|
||||
| 3.1.3 | 3.2.1.0/3.1.1.18 | support WebSocket reconnect |
|
||||
| 3.1.2 | 3.2.1.0/3.1.1.18 | fix schemaless result release |
|
||||
| 3.1.1 | 3.2.1.0/3.1.1.18 | support varbinary and geometry |
|
||||
| 3.1.0 | 3.2.1.0/3.1.1.18 | WebSocket uses native implementation |
|
||||
|
||||
## Handling exceptions
|
||||
|
||||
|
@ -58,6 +65,8 @@ TDengine no longer supports 32-bit Windows platforms.
|
|||
| BINARY | byte[] |
|
||||
| NCHAR | string (utf-8 encoding) |
|
||||
| JSON | byte[] |
|
||||
| VARBINARY | byte[] |
|
||||
| GEOMETRY | byte[] |
|
||||
|
||||
**Note**: The JSON type is only supported for tags.
|
||||
|
||||
|
@ -67,7 +76,7 @@ TDengine no longer supports 32-bit Windows platforms.
|
|||
|
||||
* Install [.NET SDK](https://dotnet.microsoft.com/download)
|
||||
* [Nuget Client](https://docs.microsoft.com/en-us/nuget/install-nuget-client-tools) (optional installation)
|
||||
* Install the TDengine client driver. For specific steps, please refer to [Installing the client driver](../#install-client-driver)
|
||||
* For native connections only, you need to install the TDengine client driver. For specific steps, please refer to [Installing the client driver](../#install-client-driver)
|
||||
|
||||
### Install the connectors
|
||||
|
||||
|
@ -127,6 +136,12 @@ The parameters supported by `ConnectionStringBuilder` are as follows:
|
|||
* connTimeout: WebSocket connection timeout, only valid when the protocol is WebSocket, the default is 1 minute, use the `TimeSpan.Parse` method to parse the string into a `TimeSpan` object.
|
||||
* readTimeout: WebSocket read timeout, only valid when the protocol is WebSocket, the default is 5 minutes, use the `TimeSpan.Parse` method to parse the string into a `TimeSpan` object.
|
||||
* writeTimeout: WebSocket write timeout, only valid when the protocol is WebSocket, the default is 10 seconds, use the `TimeSpan.Parse` method to parse the string into a `TimeSpan` object.
|
||||
* enableCompression: Whether to enable WebSocket compression (effective for dotnet version 6 and above, connector version 3.1.1 and above). The default is false.
|
||||
* autoReconnect: Whether to enable WebSocket reconnect (connector version 3.1.3 and above). The default is false.
|
||||
> **Note**: Enabling automatic reconnection is only effective for simple SQL statement execution, schemaless writing, and data subscription; it is not effective for parameter binding. Automatic reconnection only applies to the database specified when the connection is established; it does not apply to databases switched to later with the `use db` statement.
|
||||
|
||||
* reconnectRetryCount: The number of reconnection retries (connector version 3.1.3 and above). The default is 3.
|
||||
* reconnectIntervalMs: The interval between reconnection retries (connector version 3.1.3 and above). The default is 2000.
|
||||
|
||||
### Specify the URL and Properties to get the connection
|
||||
|
||||
|
@ -407,6 +422,8 @@ namespace WSQuery
|
|||
|
||||
### Execute SQL with reqId
|
||||
|
||||
<RequestId />
|
||||
|
||||
<Tabs defaultValue="native" groupId="connect">
|
||||
<TabItem value="native" label="native connection">
|
||||
|
||||
|
@ -800,6 +817,10 @@ The configuration parameters supported by consumer are as follows:
|
|||
* auto.commit.interval.ms: The interval for automatically submitting offsets, the default is 5000 milliseconds
|
||||
* auto.offset.reset: When offset does not exist, where to start consumption, the optional value is earliest or latest, the default is latest
|
||||
* msg.with.table.name: Whether the message contains the table name
|
||||
* ws.message.enableCompression: Whether to enable WebSocket compression (effective for dotnet version 6 and above, connector version 3.1.1 and above). The default is false.
|
||||
* ws.autoReconnect: Whether to enable WebSocket reconnect (connector version 3.1.3 and above). The default is false.
|
||||
* ws.reconnect.retry.count: The number of reconnection retries (connector version 3.1.3 and above). The default is 3.
|
||||
* ws.reconnect.interval.ms: The interval between reconnection retries (connector version 3.1.3 and above). The default is 2000.
|
||||
|
||||
Supports subscribing to the result set `Dictionary<string, object>` where the key is the column name and the value is the column value.
|
||||
|
||||
|
|
|
@ -6,32 +6,35 @@ title: TDengine ODBC
|
|||
|
||||
## Introduction
|
||||
|
||||
TDengine ODBC driver is a driver specifically designed for TDengine based on the ODBC standard. It can be used by ODBC based applications on Windows to access a local or remote TDengine cluster or TDengine cloud service, like [PowerBI](https://powerbi.microsoft.com).
|
||||
The TDengine ODBC driver is a driver specifically designed for TDengine based on the ODBC standard. It can be used by ODBC-based applications on Windows, like [PowerBI](https://powerbi.microsoft.com), to access a local or remote TDengine cluster or an instance in the TDengine Cloud service.
|
||||
|
||||
TDengine ODBC provides two kinds of connections, native connection and WebSocket connection. You can choose to use either one for your convenience, WebSocket is recommded choice and you must use WebSocket if you are trying to access TDengine cloud service.
|
||||
TDengine ODBC provides two kinds of connections, native connection and WebSocket connection. You can choose to use either one for your convenience. WebSocket is the recommended choice and you must use WebSocket if you are trying to access an instance in the TDengine Cloud service.
|
||||
|
||||
Note: TDengine ODBC driver can only be run on 64-bit system, and can only be invoked by 64-bit applications.
|
||||
Note: TDengine ODBC driver can only be run on 64-bit systems, and can only be invoked by 64-bit applications.
|
||||
|
||||
## Compatibility with ODBC Versions
|
||||
|
||||
- The TDengine ODBC driver is compatible with ODBC 3.8 and all earlier versions.
|
||||
|
||||
## Install
|
||||
|
||||
1. TDengine ODBC driver supports only Windows platform. To run on Windows, VisualStudio C Runtime library is required. If VisualStudio C Runtime Library is missing on your platform, you can download and install it from [VC Runtime Library](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
|
||||
1. The TDengine ODBC driver only supports the Windows platform. To run on Windows, the Microsoft Visual C++ Runtime library is required. If the Microsoft Visual C++ Runtime Library is missing on your platform, you can download and install it from [VC Runtime Library](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
|
||||
|
||||
2. Install TDengine client package for Windows, the version should be above 3.2.1.0, the client package includes both TDengine ODBC driver and some other necessary libraries that will be used in either native connection or WebSocket connection.
|
||||
2. Install TDengine Client package for Windows. The TDengine Client version should be above 3.2.1.0. The client package includes both the TDengine ODBC driver and some other necessary libraries that will be used in either native connection or WebSocket connection.
|
||||
|
||||
## Configure Data Source
|
||||
|
||||
### Connection Types
|
||||
|
||||
TDengine ODBC driver supports two kinds of connections to TDengine cluster, native connection and WebSocket connection, here is the major differences between them.
|
||||
TDengine ODBC driver supports two kinds of connections to TDengine cluster: native connection and WebSocket connection. The major differences between them are listed below.
|
||||
|
||||
1. Only WebSocket can connect to TDengine cloud service.
|
||||
1. Only a WebSocket connection can be used to connect to TDengine Cloud service.
|
||||
|
||||
2. Websocket connection is more compatible with different TDengine server versions, normally you don't need to uupgrade client package with the server side.
|
||||
2. A WebSocket connection is more compatible with different TDengine server versions. Usually, you don't need to upgrade the TDengine Client package along with the server side.
|
||||
|
||||
3. Native connection normally has better performance, but you need to keep the version aligned with the server side.
|
||||
3. Native connections usually have better performance, but the TDengine Client version must be compatible with the TDengine server version.
|
||||
|
||||
4. For most users, it's recommended to use **WebSocket** connection, which has much better compatibility and almost same performance as native connection.
|
||||
4. For most users, it's recommended to use **WebSocket** connection, which has much better compatibility and almost the same performance as native connection.
|
||||
|
||||
### WebSocket Connection
|
||||
|
||||
|
@ -57,9 +60,9 @@ TDengine ODBC driver supports two kinds of connections to TDengine cluster, nati
|
|||
|
||||
4.6 [Password]: optional field, only used for connection testing in step 5;
|
||||
|
||||
5. Click "Test Connecting" to test whether the data source can be connectted; if successful, it will prompt "connecting success"
|
||||
5. Click "Test Connection" to test whether the connection to the data source is successful; if successful, it will prompt "Successfully connected to URL"
|
||||
|
||||
6. Click "OK" to sae the configuration and exit.
|
||||
6. Click "OK" to set the configuration and exit.
|
||||
|
||||
7. You can also select an already configured data source name in step 2 to change an existing configuration.
|
||||
|
||||
|
@ -72,4 +75,4 @@ The steps are exactly same as "WebSocket" connection, except for you choose "Nat
|
|||
|
||||
## PowerBI
|
||||
|
||||
As an example, you can use PowerBI, which inovkes TDengine ODBC driver, to access TDengine, please refer to[Power BI](../../third-party/powerbi) for more details.
|
||||
As an example, you can use PowerBI, which invokes TDengine ODBC driver, to access TDengine, please refer to [Power BI](../../third-party/powerbi) for more details.
|
||||
|
|
|
@ -0,0 +1,7 @@
|
|||
The reqId is very similar to TraceID in distributed tracing systems. In a distributed system, a request may need to pass through multiple services or modules to be completed. The reqId is used to identify and associate all related operations of this request, allowing us to track and understand the complete execution path of the request.
|
||||
Here are some primary uses of reqId:
|
||||
- **Request Tracing**: By associating the same reqId with all related operations of a request, we can trace the complete path of the request within the system.
|
||||
- **Performance Analysis**: By analyzing a request's reqId, we can understand the processing time of the request across various services or modules, thereby identifying performance bottlenecks.
|
||||
- **Fault Diagnosis**: When a request fails, we can identify the location of the issue by examining the reqId associated with that request.
|
||||
|
||||
If the user does not set a reqId, the client library generates one randomly internally, but it is still recommended that users set it explicitly, as it can be better associated with their requests.
|
Binary image changed (before: 21 KiB, after: 18 KiB).
|
@ -15,9 +15,9 @@ Currently, TDengine's native interface client libraries can support platforms su
|
|||
| -------------- | --------- | -------- | ---------- | ------ | ----------- | ------ | -------- | ----- |
|
||||
| **X86 64bit** | **Linux** | ● | ● | ● | ● | ● | ● | ● |
|
||||
| **X86 64bit** | **Win64** | ● | ● | ● | ● | ● | ● | ● |
|
||||
| **X86 64bit** | **macOS** | ○ | ● | ● | ○ | ○ | ● | ● |
|
||||
| **X86 64bit** | **macOS** | ● | ● | ● | ○ | ○ | ● | ● |
|
||||
| **ARM64** | **Linux** | ● | ● | ● | ● | ○ | ○ | ● |
|
||||
| **ARM64** | **macOS** | ○ | ● | ● | ○ | ○ | ● | ● |
|
||||
| **ARM64** | **macOS** | ● | ● | ● | ○ | ○ | ● | ● |
|
||||
|
||||
Here, ● means that official test verification passed, ○ means that unofficial test verification passed, and -- means no guarantee.
|
||||
|
||||
|
@ -59,9 +59,9 @@ The different database framework specifications for various programming language
|
|||
| -------------------------------------- | ------------- | --------------- | ------------- | ------------- | ------------- | ------------- |
|
||||
| **Connection Management** | Support | Support | Support | Support | Support | Support |
|
||||
| **Regular Query** | Support | Support | Support | Support | Support | Support |
|
||||
| **Parameter Binding** | Supported | Supported | Support | Support | Not Supported | Support |
|
||||
| **Subscription (TMQ) ** | Supported | Support | Support | Not Supported | Not Supported | Support |
|
||||
| **Schemaless** | Supported | Supported | Supported | Not Supported | Not Supported | Not Supported |
|
||||
| **Parameter Binding** | Support | Support | Support | Support | Not Supported | Support |
|
||||
| **Subscription (TMQ)** | Support | Support | Support | Support | Not Supported | Support |
|
||||
| **Schemaless** | Support | Support | Support | Support | Not Supported | Not Supported |
|
||||
| **Bulk Pulling (based on WebSocket)** | Support | Support | Support | Support | Support | Support |
|
||||
|
||||
:::warning
|
||||
|
|
|
@ -173,12 +173,6 @@ Query OK, 8 row(s) in set (0.001154s)
|
|||
|
||||
Before running the TDengine CLI, ensure that the taosd process has been stopped on the dnode that you want to delete.
|
||||
|
||||
```sql
|
||||
DROP DNODE "fqdn:port";
|
||||
```
|
||||
|
||||
or
|
||||
|
||||
```sql
|
||||
DROP DNODE dnodeId;
|
||||
```
|
||||
|
|
|
@ -60,7 +60,7 @@ database_option: {
|
|||
- PAGES: specifies the number of pages in the metadata storage engine cache on each vnode. Enter a value greater than or equal to 64. The default value is 256. The space occupied by metadata storage on each vnode is equal to the product of the values of the PAGESIZE and PAGES parameters. The space occupied by default is 1 MB.
|
||||
- PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. The default value is 4. Enter a value between 1 and 16384.
|
||||
- PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms.
|
||||
- REPLICA: specifies the number of replicas that are made of the database. Enter 1 or 3. The default value is 1. The value of the REPLICA parameter cannot exceed the number of dnodes in the cluster.
|
||||
- REPLICA: specifies the number of replicas of the database. Enter 1, 2, or 3. The default value is 1. The value 2 is only available in TDengine Enterprise since version 3.3.0.0. The value of the REPLICA parameter cannot exceed the number of dnodes in the cluster.
|
||||
- WAL_LEVEL: specifies whether fsync is enabled. The default value is 1.
|
||||
- 1: WAL is enabled but fsync is disabled.
|
||||
- 2: WAL and fsync are both enabled.
|
||||
|
|
|
@ -49,6 +49,7 @@ table_option: {
|
|||
7. The escape character "\`" can be used to avoid conflicts between table names and reserved keywords. The above rules are bypassed when the escape character is used in a table name, but the upper limit on name length still applies. Table names specified with the escape character are case sensitive.
|
||||
For example, \`aBc\` and \`abc\` are different table names, but `abc` and `aBc` are the same table name because both are converted to `abc` internally.
|
||||
Only visible ASCII characters can be used with the escape character; see the sketch after this list.
|
||||
8. For the details of using `ENCODE` and `COMPRESS`, please refer to [Encode and Compress for Column](../compress).
|
||||
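A minimal sketch of the case-sensitivity rule in item 7 above; the table names used here are hypothetical:

```sql
-- The escaped name `aBc` is kept case sensitive, while the unquoted
-- names abc and aBc both normalize to abc.
CREATE TABLE `aBc` (ts TIMESTAMP, v INT);
CREATE TABLE abc (ts TIMESTAMP, v INT);   -- a different table from `aBc`
```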
|
||||
**Parameter description**
|
||||
|
||||
|
@ -207,6 +208,8 @@ The following SQL statement deletes one or more tables.
|
|||
DROP TABLE [IF EXISTS] [db_name.]tb_name [, [IF EXISTS] [db_name.]tb_name] ...
|
||||
```
|
||||
|
||||
**Note**: Dropping a table doesn't release the disk space occupied by the table; instead, all rows in the table are marked as deleted, so the data will no longer appear in query results. The disk space is released when the system automatically performs a `compact` operation or the user runs `compact` manually.
|
||||
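As a hedged illustration of the note above, assuming the `COMPACT` statement is available on your TDengine version and `db_name` stands in for your database:

```sql
-- Manually reclaim the space of rows marked as deleted.
COMPACT DATABASE db_name;
```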
|
||||
## View Tables
|
||||
|
||||
### View All Tables
|
||||
|
|
|
@ -13,17 +13,29 @@ create_definition:
|
|||
col_name column_definition
|
||||
|
||||
column_definition:
|
||||
type_name
|
||||
type_name [comment 'string_value'] [PRIMARY KEY] [ENCODE 'encode_type'] [COMPRESS 'compress_type'] [LEVEL 'level_type']
|
||||
|
||||
table_options:
|
||||
table_option ...
|
||||
|
||||
table_option: {
|
||||
COMMENT 'string_value'
|
||||
| SMA(col_name [, col_name] ...)
|
||||
| TTL value
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
**More explanations**
|
||||
- Each supertable can have a maximum of 4096 columns, including tags. The minimum number of columns is 3: a timestamp column used as the key, one tag column, and one data column.
|
||||
- The TAGS keyword defines the tag columns for the supertable. The following restrictions apply to tag columns:
|
||||
1. Each supertable can have a maximum of 4096 columns, including tags. The minimum number of columns is 3: a timestamp column used as the key, one tag column, and one data column.
|
||||
2. Since version 3.3.0.0, besides the timestamp, you can designate another column as a primary key using the `PRIMARY KEY` keyword; the column specified as the primary key must be of integer or varchar type (see the sketch after this list).
|
||||
3. The TAGS keyword defines the tag columns for the supertable. The following restrictions apply to tag columns:
|
||||
- A tag column can use the TIMESTAMP data type, but the values in the column must be fixed numbers. Timestamps including formulae, such as "now + 10s", cannot be stored in a tag column.
|
||||
- The name of a tag column cannot be the same as the name of any other column.
|
||||
- The name of a tag column cannot be a reserved keyword.
|
||||
- Each supertable must contain between 1 and 128 tags. The total length of the TAGS keyword cannot exceed 16 KB.
|
||||
- For more information about table parameters, see Create a Table.
|
||||
4. Regarding how to use `ENCODE` and `COMPRESS`, please refer to [Encode and Compress for Column](../compress).
|
||||
5. For more information about table parameters, see [Create a Table](../table).
|
||||
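A minimal sketch of rule 2 above, assuming TDengine 3.3.0.0 or later; all names are hypothetical:

```sql
-- The second column is declared as an additional primary key;
-- it must be of integer or varchar type.
CREATE STABLE meters_seq (
    ts TIMESTAMP,
    seq INT PRIMARY KEY,
    current FLOAT
) TAGS (location VARCHAR(64));
```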
|
||||
## View a Supertable
|
||||
|
||||
|
@ -111,6 +123,8 @@ DROP STABLE [IF EXISTS] [db_name.]stb_name
|
|||
|
||||
Note: Deleting a supertable will delete all subtables created from the supertable, including all data within those subtables.
|
||||
|
||||
**Note**: Dropping a supertable doesn't release the disk space occupied by the table; instead, all rows in the table are marked as deleted, so the data will no longer appear in query results. The disk space is released when the system automatically performs a `compact` operation or the user runs `compact` manually.
|
||||
|
||||
## Modify a Supertable
|
||||
|
||||
```sql
|
||||
|
|
|
@ -148,6 +148,11 @@ You can query tag columns in supertables and subtables and receive results in th
|
|||
SELECT location, groupid, current FROM d1001 LIMIT 2;
|
||||
```
|
||||
|
||||
### Alias Name
|
||||
|
||||
The naming rules for aliases are the same as those for columns, and aliases in UTF-8 encoding, such as Chinese aliases, can be specified directly.
|
||||
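A minimal sketch of a UTF-8 alias, reusing the `d1001` table from the preceding example:

```sql
-- The alias 电流 ("current") is specified directly in UTF-8.
SELECT current AS 电流 FROM d1001 LIMIT 2;
```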
|
||||
|
||||
### Distinct Values
|
||||
|
||||
The DISTINCT keyword returns only values that are different over one or more columns. You can use the DISTINCT keyword with tag columns and data columns.
|
||||
|
@ -278,7 +283,7 @@ The GROUP BY clause does not guarantee that the results are ordered. If you want
|
|||
|
||||
The PARTITION BY clause is a TDengine-specific extension to standard SQL introduced in TDengine 3.0. This clause partitions data based on the part_list and performs computations per partition.
|
||||
|
||||
PARTITION BY and GROUP BY have similar meanings. They both group data according to a specified list and then perform calculations. The difference is that PARTITION BY does not have various restrictions on the SELECT list of the GROUP BY clause. Any operation can be performed within the group (constants, aggregations, scalars, expressions, etc.). Therefore, PARTITION BY is fully compatible with GROUP BY in terms of usage. All places that use the GROUP BY clause can be replaced with PARTITION BY.
|
||||
PARTITION BY and GROUP BY have similar meanings. They both group data according to a specified list and then perform calculations. The difference is that PARTITION BY does not impose the various restrictions that GROUP BY places on the SELECT list: any operation can be performed within the group (constants, aggregations, scalars, expressions, etc.). Therefore, PARTITION BY is fully compatible with GROUP BY in terms of usage, and all places that use the GROUP BY clause can be replaced with PARTITION BY. Note that when the query contains no aggregate function, the results of the two may differ, as illustrated below.
|
||||
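A hedged illustration of that difference, assuming the `meters` supertable used elsewhere in this document:

```sql
-- With no aggregate function, PARTITION BY returns every row of each
-- partition, whereas GROUP BY collapses each group, so results can differ.
SELECT location, current FROM meters PARTITION BY location;
```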
|
||||
Because PARTITION BY does not require returning one row of aggregated data per group, it can also support various window operations after partitioning. All window operations that require grouping can only use the PARTITION BY clause.
|
||||
|
||||
|
@ -454,6 +459,7 @@ SELECT ... FROM (SELECT ... FROM ...) ...;
|
|||
:::info
|
||||
|
||||
- The result of a nested query is returned as a virtual table used by the outer query. It's recommended to give an alias to this table for the convenience of using it in the outer query (see the sketch after this list).
|
||||
- Outer queries support directly referencing columns or pseudo-columns of inner queries in the form of column names or \`column names\`.
|
||||
- JOIN operation is allowed between tables/STables inside both inner and outer queries. Join operation can be performed on the result set of the inner query.
|
||||
- The features that can be used in the inner query are the same as those that can be used in a non-nested query.
|
||||
- `ORDER BY` inside the inner query is unnecessary and will slow down the query performance significantly. It is best to avoid the use of `ORDER BY` inside the inner query.
|
||||
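A minimal sketch of a nested query with an aliased inner virtual table, assuming the `meters` supertable used elsewhere in this document:

```sql
-- The inner query builds a virtual table t; the outer query aggregates it.
SELECT AVG(v) FROM (SELECT LAST(current) AS v FROM meters PARTITION BY tbname) t;
```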
|
|
|
@ -6,6 +6,8 @@ description: This document describes how to delete data from TDengine.
|
|||
|
||||
TDengine provides the functionality of deleting data from a table or STable according to a specified time range; it can be used to clean up abnormal data generated by device failures.
|
||||
|
||||
**Note**: Deleting data doesn't release the disk space occupied by the table; instead, the rows are marked as deleted, so the data will no longer appear in query results. The disk space is released when the system automatically performs a `compact` operation or the user runs `compact` manually.
|
||||
|
||||
**Syntax:**
|
||||
|
||||
```sql
|
||||
|
|
|
@ -1167,7 +1167,7 @@ TDengine includes extensions to standard SQL that are intended specifically for
|
|||
CSUM(expr)
|
||||
```
|
||||
|
||||
**Description**: The cumulative sum of each row for a specific column. The number of output rows is same as that of the input rows.
|
||||
**Description**: The cumulative sum of each row for a specific column; NULL values are discarded.
|
||||
|
||||
**Return value type**: Long integer for integers; Double for floating points. uint64_t for unsigned integers
|
||||
|
||||
|
@ -1209,27 +1209,40 @@ ignore_negative: {
|
|||
### DIFF
|
||||
|
||||
```sql
|
||||
DIFF(expr [, ignore_negative])
|
||||
DIFF(expr [, ignore_option])
|
||||
|
||||
ignore_negative: {
|
||||
ignore_option: {
|
||||
0
|
||||
| 1
|
||||
| 2
|
||||
| 3
|
||||
}
|
||||
```
|
||||
|
||||
**Description**: The different of each row with its previous row for a specific column. `ignore_negative` can be specified as 0 or 1, the default value is 1 if it's not specified. `1` means negative values are ignored. For tables with composite primary key, the data with the smallest primary key value is used to calculate the difference.
|
||||
**Description**: The difference between each row and its previous row for a specific column. `ignore_option` can be 0, 1, 2, or 3; the default value is 0 if it's not specified.
|
||||
- `0` means that negative values (diff results) are not ignored and null values are not ignored
|
||||
- `1` means that negative values (diff results) are treated as null values
|
||||
- `2` means that negative values (diff results) are not ignored but null values are ignored
|
||||
- `3` means that negative values (diff results) are ignored and null values are ignored
|
||||
- For tables with composite primary key, the data with the smallest primary key value is used to calculate the difference.
|
||||
|
||||
**Return value type**:Same as the data type of the column being operated upon
|
||||
**Return value type**: `bool`, `timestamp`, and integer types all return `int64`; floating-point types return `double`; if the diff result overflows, the overflowed value is returned.
|
||||
|
||||
**Applicable data types**: Numeric
|
||||
**Applicable data types**: Numeric types, timestamp, and bool.
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**More explanation**:
|
||||
|
||||
- The number of result rows is the number of rows subtracted by one, no output for the first row
|
||||
- It can be used together with a selected column. For example: select \_rowts, DIFF() from.
|
||||
|
||||
- diff calculates the difference between a specific column in the current row and the **first valid data before the row**. The **first valid data before the row** refers to the nearest non-null value of the same column with a smaller timestamp.
|
||||
- The diff result of a numeric type is the corresponding arithmetic difference; a timestamp diff is calculated based on the timestamp precision of the database; when calculating diff, `true` is treated as 1 and `false` as 0.
|
||||
- If the data of the current row is NULL, or the **first valid data before the current row** can't be found, the diff result is NULL.
|
||||
- When ignoring negative values (ignore_option is set to 1 or 3), if the diff result is negative, the result is set to null, and then filtered according to the null value filtering rule
|
||||
- When the diff result overflows, whether it is ignored as a negative value depends on whether the result of the corresponding logical operation is positive or negative. For example, 9223372036854775800 - (-9223372036854775806) exceeds the range of BIGINT, so the diff result displays the overflow value -10, but it is not ignored as a negative value.
|
||||
- One or more diffs can be used in a single statement, and each diff can specify the same or a different `ignore_option`. When a single statement contains multiple diffs, a row is removed from the result set if and only if all the diff results for that row are NULL and every diff's `ignore_option` is set to ignore NULLs.
|
||||
- Can be used with the selected associated columns. For example: `select _rowts, DIFF()`.
|
||||
- When there is no composite primary key, if identical timestamps occur across different subtables, it will prompt "Duplicate timestamps not allowed"
|
||||
- When used with a composite primary key, identical combinations of timestamp and primary key may occur across subtables; which row is used depends on which row is found first, so the result of running diff() multiple times may differ in such a case (see the sketch after this list)
|
||||
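A hedged sketch of `ignore_option`, assuming a hypothetical table `t` whose integer column `c` holds the rows 10, NULL, 5, 20:

```sql
-- DIFF(c, 0) keeps negative results and NULLs: NULL, -5, 15 (no output
-- for the first row). DIFF(c, 3) ignores both, leaving only 15.
-- Different ignore_option values may be mixed in one statement.
SELECT _rowts, DIFF(c, 0), DIFF(c, 3) FROM t;
```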
|
||||
### IRATE
|
||||
|
||||
|
|
|
@ -31,14 +31,14 @@ A PARTITION BY clause is processed as follows:
|
|||
select _wstart, location, max(current) from meters partition by location interval(10m)
|
||||
```
|
||||
|
||||
The most common usage of PARTITION BY is partitioning the data in subtables by tags then perform computation when querying data in a supertable. More specifically, `PARTITION BY TBNAME` partitions the data of each subtable into a single timeline, and this method facilitates the statistical analysis in many use cases of processing timeseries data. For example, calculate the average voltage of each meter every 10 minutes£º
|
||||
The most common usage of PARTITION BY is partitioning the data in subtables by tags then perform computation when querying data in a supertable. More specifically, `PARTITION BY TBNAME` partitions the data of each subtable into a single timeline, and this method facilitates the statistical analysis in many use cases of processing timeseries data. For example, calculate the average voltage of each meter every 10 minutes:
|
||||
```sql
|
||||
select _wstart, tbname, avg(voltage) from meters partition by tbname interval(10m)
|
||||
```
|
||||
|
||||
## Windowed Queries
|
||||
|
||||
Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every seconds, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are four kinds of windows: time window, status window, session window, and event window. There are two kinds of time windows: sliding window and flip time/tumbling window. The syntax of window clause is as follows:
|
||||
Aggregation by time window is supported in TDengine. For example, in the case where temperature sensors report the temperature every seconds, the average temperature for every 10 minutes can be retrieved by performing a query with a time window. Window related clauses are used to divide the data set to be queried into subsets and then aggregation is performed across the subsets. There are five kinds of windows: time window, status window, session window, event window, and count window. There are two kinds of time windows: sliding window and flip time/tumbling window. The syntax of window clause is as follows:
|
||||
|
||||
```sql
|
||||
window_clause: {
|
||||
|
@ -80,7 +80,7 @@ These pseudocolumns occur after the aggregation clause.
|
|||
`FILL` clause is used to specify how to fill when there is data missing in any window, including:
|
||||
|
||||
1. NONE: No fill (the default fill mode)
|
||||
2. VALUE: Fill with a fixed value, which should be specified together, for example `FILL(VALUE, 1.23)` Note: The value filled depends on the data type. For example, if you run FILL(VALUE 1.23) on an integer column, the value 1 is filled.
|
||||
2. VALUE: Fill with a fixed value, which must be specified together, for example `FILL(VALUE, 1.23)`. Note: The value filled depends on the data type. For example, if you run FILL(VALUE, 1.23) on an integer column, the value 1 is filled. If multiple columns in the select list need to be filled, there must be a fill value for each of these columns in the fill clause, for example, `SELECT _wstart, min(c1), max(c1) FROM ... FILL(VALUE, 0, 0)` (see the sketch after this list).
|
||||
3. PREV: Fill with the previous non-NULL value, `FILL(PREV)`
|
||||
4. NULL: Fill with NULL, `FILL(NULL)`
|
||||
5. LINEAR: Fill with the closest non-NULL value, `FILL(LINEAR)`
|
||||
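A hedged sketch of multi-column VALUE filling, assuming the `meters` supertable used elsewhere in this document:

```sql
-- One fill value is required for each filled column in the select list.
SELECT _wstart, MIN(current), MAX(current)
FROM meters
WHERE ts >= '2022-01-01 00:00:00' AND ts < '2022-01-01 06:00:00'
INTERVAL(10m)
FILL(VALUE, 0, 0);
```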
|
|
|
@ -30,7 +30,7 @@ subquery: SELECT [DISTINCT] select_list
|
|||
from_clause
|
||||
[WHERE condition]
|
||||
[PARTITION BY tag_list]
|
||||
[window_clause]
|
||||
window_clause
|
||||
```
|
||||
|
||||
Session windows, state windows, and sliding windows are supported. When you configure a session or state window for a supertable, you must use PARTITION BY TBNAME. If the source table has a composite primary key, state windows, event windows, and count windows are not supported.
|
||||
|
@ -78,6 +78,10 @@ If a stream is created with PARTITION BY clause and SUBTABLE clause, the name of
|
|||
|
||||
```sql
|
||||
CREATE STREAM avg_vol_s INTO avg_vol SUBTABLE(CONCAT('new-', tname)) AS SELECT _wstart, count(*), avg(voltage) FROM meters PARTITION BY tbname tname INTERVAL(1m);
|
||||
|
||||
CREATE STREAM streams0 INTO streamt0 AS SELECT _wstart, count(*), avg(voltage) from meters PARTITION BY tbname EVENT_WINDOW START WITH voltage < 0 END WITH voltage > 9;
|
||||
|
||||
CREATE STREAM streams1 IGNORE EXPIRED 1 WATERMARK 100s INTO streamt1 AS SELECT _wstart, count(*), avg(voltage) from meters PARTITION BY tbname COUNT_WINDOW(10);
|
||||
```
|
||||
|
||||
In the PARTITION clause, 'tbname', representing the name of each subtable of the source supertable, is given the alias 'tname', and 'tname' is then used in the SUBTABLE clause. In the SUBTABLE clause, each auto-created subtable is named by concatenating 'new-' with the source subtable name. (Starting from 3.2.3.0, to avoid cases where the expression in SUBTABLE cannot distinguish between different subtables, '_stableName_groupId' is appended to the end of the subtable name.)
|
||||
|
@ -189,11 +193,32 @@ All [scalar functions](../function/#scalar-functions) are available in stream pr
|
|||
- [unique](../function/#unique)
|
||||
- [mode](../function/#mode)
|
||||
|
||||
## Pause\Resume stream
|
||||
## Pause and Resume a Stream
|
||||
1. Pause a stream
|
||||
```sql
|
||||
PAUSE STREAM [IF EXISTS] stream_name;
|
||||
```
|
||||
If "IF EXISTS" is not specified and the stream does not exist, an error will be reported; If "IF EXISTS" is specified and the stream does not exist, success is returned; If the stream exists, paused all stream tasks.
|
||||
|
||||
2. Resume a stream
|
||||
```sql
|
||||
RESUME STREAM [IF EXISTS] [IGNORE UNTREATED] stream_name;
|
||||
```
|
||||
If "IF EXISTS" is not specified and the stream does not exist, an error will be reported. If "IF EXISTS" is specified and the stream does not exist, success is returned; If the stream exists, all of the stream tasks will be resumed. If "IGNORE UntREATED" is specified, data written during the pause period of stream is ignored when resuming stream.
|
||||
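A hedged usage sketch, reusing the `avg_vol_s` stream created in the earlier example:

```sql
-- Pause the stream, then resume it, discarding data written while paused.
PAUSE STREAM IF EXISTS avg_vol_s;
RESUME STREAM IF EXISTS IGNORE UNTREATED avg_vol_s;
```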
|
||||
## Stream State Backup
|
||||
The intermediate processing results of a stream, a.k.a. the stream state, need to be properly persisted to disk during stream processing. The stream state, consisting of multiple files on disk, may be transferred between different computing nodes during stream processing, for example as a result of a leader/follower switch or a physical computing node going offline. Since version 3.3.2.1, you need to deploy rsync on each physical node to enable the backup and restore processing to work. To ensure it works correctly, please follow the instructions below:
|
||||
1. add the option "snodeAddress" in the configure file
|
||||
2. add the option "checkpointBackupDir" in the configure file to set the backup data directory.
|
||||
3. Create a _snode_ before creating a stream to ensure the backup service is activated; otherwise, checkpoints may not be generated during stream processing.
|
||||
|
||||
>snodeAddress 127.0.0.1:873
|
||||
>
|
||||
>checkpointBackupDir /home/user/stream/backup/checkpoint/
|
||||
|
||||
## Create a Snode
|
||||
The snode (stream node for short), on which aggregate tasks can be deployed, is a stateful computing node dedicated to stream processing. An important feature is backing up and restoring the stream state files. A snode must be created before creating stream tasks. Use the following SQL statement to create a snode in a TDengine cluster; only one snode is currently allowed per cluster.
|
||||
```sql
|
||||
CREATE SNODE ON DNODE id
|
||||
```
|
||||
`id` is the ordinal number of a dnode, which can be obtained by using the ```show dnodes``` statement.
|
||||
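A hedged example; the dnode id 1 is hypothetical and should be taken from your own cluster:

```sql
-- Look up the dnode ids, then create the single snode on one of them.
SHOW DNODES;
CREATE SNODE ON DNODE 1;
```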
|
|
|
@ -18,7 +18,7 @@ description: This document describes the usage of escape characters in TDengine.
|
|||
|
||||
## Restrictions
|
||||
|
||||
1. If there are escape characters in identifiers (database name, table name, column name)
|
||||
1. If there are escape characters in identifiers (database name, table name, column name, alias name)
|
||||
- Identifier without ``: An error will be returned, because an identifier must consist of digits, ASCII characters, or underscores, and cannot start with a digit
|
||||
- Identifier quoted with ``: Original content is kept, no escaping
|
||||
2. If there are escape characters in values
|
||||
|
|
|
@ -27,10 +27,10 @@ The preceding SQL command shows all dnodes in the cluster with the ID, endpoint,
|
|||
## Delete a DNODE
|
||||
|
||||
```sql
|
||||
DROP DNODE {dnode_id | dnode_endpoint}
|
||||
DROP DNODE dnode_id
|
||||
```
|
||||
|
||||
You can delete a dnode by its ID or by its endpoint. Note that deleting a dnode does not stop its process. You must stop the process after the dnode is deleted.
|
||||
Note that deleting a dnode does not stop its process. You must stop the process after the dnode is deleted.
|
||||
|
||||
## Modify Dnode Configuration
|
||||
|
||||
|
|
|
@ -210,9 +210,13 @@ Provides information about TDengine users. Users whose SYSINFO attribute is 0 ca
|
|||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | ---------------- |
|
||||
| 1 | user_name | VARCHAR(23) | User name |
|
||||
| 2 | privilege | VARCHAR(256) | User permissions |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 1 | name | VARCHAR(24) | User name |
|
||||
| 2 | super | TINYINT | Whether the user is a superuser. 1 means yes; 0 means no. |
|
||||
| 3 | enable | TINYINT | Whether the user is enabled. 1 means yes; 0 means no. |
|
||||
| 4 | sysinfo | TINYINT | Whether the user can query system info. 1 means yes; 0 means no. |
|
||||
| 5 | create_time | TIMESTAMP | Creation time |
|
||||
| 6 | allowed_host | VARCHAR(49152)| IP whitelist |
|
||||
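A minimal sketch of querying the rebuilt schema above:

```sql
-- List users with their attributes from information_schema.
SELECT name, super, enable, sysinfo, create_time
FROM information_schema.ins_users;
```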
|
||||
|
||||
## INS_GRANTS
|
||||
|
||||
|
|
|
@ -91,53 +91,4 @@ Query OK, 0 of 0 rows affected (0.001160s)
|
|||
|
||||
## Grant Permissions
|
||||
|
||||
```sql
|
||||
GRANT privileges ON priv_level TO user_name
|
||||
|
||||
privileges : {
|
||||
ALL
|
||||
| priv_type [, priv_type] ...
|
||||
}
|
||||
|
||||
priv_type : {
|
||||
READ
|
||||
| WRITE
|
||||
}
|
||||
|
||||
priv_level : {
|
||||
dbname.*
|
||||
| *.*
|
||||
}
|
||||
```
|
||||
|
||||
Grant permissions to a user, this feature is only available in enterprise edition.
|
||||
|
||||
Permissions are granted on the database level. You can grant read or write permissions.
|
||||
|
||||
TDengine has superusers and standard users. The default superuser name is root. This account has all permissions. You can use the superuser account to create standard users. With no permissions, standard users can create databases and have permissions on the databases that they create. These include deleting, modifying, querying, and writing to their own databases. Superusers can grant users permission to read and write other databases. However, standard users cannot delete or modify databases created by other users.
|
||||
|
||||
For non-database objects such as users, dnodes, and user-defined functions, standard users have read permissions only, generally by means of the SHOW statement. Standard users cannot create or modify these objects.
|
||||
|
||||
## Revoke Permissions
|
||||
|
||||
```sql
|
||||
REVOKE privileges ON priv_level FROM user_name
|
||||
|
||||
privileges : {
|
||||
ALL
|
||||
| priv_type [, priv_type] ...
|
||||
}
|
||||
|
||||
priv_type : {
|
||||
READ
|
||||
| WRITE
|
||||
}
|
||||
|
||||
priv_level : {
|
||||
dbname.*
|
||||
| *.*
|
||||
}
|
||||
|
||||
```
|
||||
|
||||
Revoke permissions from a user, this feature is only available in enterprise edition.
|
||||
Permission control is only available in TDengine Enterprise; please contact the TDengine sales team.
|
|
@ -53,7 +53,7 @@ CREATE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [
|
|||
CREATE AGGREGATE FUNCTION l2norm AS "/home/taos/udf_example/libl2norm.so" OUTPUTTYPE DOUBLE bufsize 64;
|
||||
```
|
||||
|
||||
For more information about user-defined functions, see [User-Defined Functions](../../develop/udf).
|
||||
For more information about user-defined functions, see [User-Defined Functions](https://docs.tdengine.com/develop/udf/).
|
||||
|
||||
## Manage UDF
|
||||
|
||||
|
|
|
@ -43,201 +43,204 @@ Launch `TDinsight.sh` with the command above and restart Grafana, then open Dash
|
|||
|
||||
The data of the TDinsight dashboard is stored in the `log` database (by default; you can change it in taosKeeper's config file; for more information, please refer to the [taosKeeper document](../../reference/taosKeeper)). taosKeeper creates the `log` database on startup.
|
||||
|
||||
### cluster\_info table
|
||||
### taosd\_cluster\_basic table
|
||||
|
||||
`cluster_info` table contains cluster information records.
|
||||
`taosd_cluster_basic` table contains cluster basic information.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|first\_ep|VARCHAR||first ep of cluster|
|
||||
|first\_ep\_dnode\_id|INT||dnode id or first\_ep|
|
||||
|version|VARCHAR||tdengine version. such as: 3.0.4.0|
|
||||
|master\_uptime|FLOAT||days of master's uptime|
|
||||
|monitor\_interval|INT||monitor interval in second|
|
||||
|dbs\_total|INT||total number of databases in cluster|
|
||||
|tbs\_total|BIGINT||total number of tables in cluster|
|
||||
|stbs\_total|INT||total number of stables in cluster|
|
||||
|dnodes\_total|INT||total number of dnodes in cluster|
|
||||
|dnodes\_alive|INT||total number of dnodes in ready state|
|
||||
|mnodes\_total|INT||total number of mnodes in cluster|
|
||||
|mnodes\_alive|INT||total number of mnodes in ready state|
|
||||
|vgroups\_total|INT||total number of vgroups in cluster|
|
||||
|vgroups\_alive|INT||total number of vgroups in ready state|
|
||||
|vnodes\_total|INT||total number of vnode in cluster|
|
||||
|vnodes\_alive|INT||total number of vnode in ready state|
|
||||
|connections\_total|INT||total number of connections to cluster|
|
||||
|topics\_total|INT||total number of topics in cluster|
|
||||
|streams\_total|INT||total number of streams in cluster|
|
||||
|protocol|INT||protocol version|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|cluster_version|VARCHAR||tdengine version. such as: 3.0.4.0|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### d\_info table
|
||||
### taosd\_cluster\_info table
|
||||
|
||||
`d_info` table contains dnodes information records.
|
||||
`taosd_cluster_info` table contains cluster information records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|status|VARCHAR||dnode status|
|
||||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|cluster\_uptime|DOUBLE||seconds of master's uptime|
|
||||
|dbs\_total|DOUBLE||total number of databases in cluster|
|
||||
|tbs\_total|DOUBLE||total number of tables in cluster|
|
||||
|stbs\_total|DOUBLE||total number of stables in cluster|
|
||||
|dnodes\_total|DOUBLE||total number of dnodes in cluster|
|
||||
|dnodes\_alive|DOUBLE||total number of dnodes in ready state|
|
||||
|mnodes\_total|DOUBLE||total number of mnodes in cluster|
|
||||
|mnodes\_alive|DOUBLE||total number of mnodes in ready state|
|
||||
|vgroups\_total|DOUBLE||total number of vgroups in cluster|
|
||||
|vgroups\_alive|DOUBLE||total number of vgroups in ready state|
|
||||
|vnodes\_total|DOUBLE||total number of vnode in cluster|
|
||||
|vnodes\_alive|DOUBLE||total number of vnode in ready state|
|
||||
|connections\_total|DOUBLE||total number of connections to cluster|
|
||||
|topics\_total|DOUBLE||total number of topics in cluster|
|
||||
|streams\_total|DOUBLE||total number of streams in cluster|
|
||||
|grants_expire\_time|DOUBLE||time until grants expire in seconds|
|
||||
|grants_timeseries\_used|DOUBLE||timeseries used|
|
||||
|grants_timeseries\_total|DOUBLE||total timeseries|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### m\_info table
|
||||
### taosd\_vgroups\_info table
|
||||
|
||||
`m_info` table contains mnode information records.
|
||||
`taosd_vgroups_info` table contains vgroups information records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|role|VARCHAR||the role of mnode. leader or follower|
|
||||
|mnode\_id|INT|TAG|master node id|
|
||||
|mnode\_ep|NCHAR|TAG|master node endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|tables\_num|DOUBLE||number of tables per vgroup|
|
||||
|status|DOUBLE||status, value range:unsynced = 0, ready = 1|
|
||||
|vgroup\_id|VARCHAR|TAG|vgroup id|
|
||||
|database\_name|VARCHAR|TAG|database for the vgroup|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### dnodes\_info table
|
||||
### taosd\_dnodes\_info table
|
||||
|
||||
`dnodes_info` table contains dnodes information records.
|
||||
`taosd_dnodes_info` table contains dnodes information records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|uptime|FLOAT||dnode uptime in `days`|
|
||||
|cpu\_engine|FLOAT||cpu usage of tdengine. read from `/proc/<taosd_pid>/stat`|
|
||||
|cpu\_system|FLOAT||cpu usage of server. read from `/proc/stat`|
|
||||
|cpu\_cores|FLOAT||cpu cores of server|
|
||||
|mem\_engine|INT||memory usage of tdengine. read from `/proc/<taosd_pid>/status`|
|
||||
|mem\_system|INT||available memory on the server in `KB`|
|
||||
|mem\_total|INT||total memory of server in `KB`|
|
||||
|disk\_engine|INT|||
|
||||
|disk\_used|BIGINT||usage of data dir in `bytes`|
|
||||
|disk\_total|BIGINT||the capacity of data dir in `bytes`|
|
||||
|net\_in|FLOAT||network throughput rate in byte/s. read from `/proc/net/dev`|
|
||||
|net\_out|FLOAT||network throughput rate in byte/s. read from `/proc/net/dev`|
|
||||
|io\_read|FLOAT||io throughput rate in byte/s. read from `/proc/<taosd_pid>/io`|
|
||||
|io\_write|FLOAT||io throughput rate in byte/s. read from `/proc/<taosd_pid>/io`|
|
||||
|io\_read\_disk|FLOAT||io throughput rate of disk in byte/s. read from `/proc/<taosd_pid>/io`|
|
||||
|io\_write\_disk|FLOAT||io throughput rate of disk in byte/s. read from `/proc/<taosd_pid>/io`|
|
||||
|req\_select|INT||number of select queries received per dnode|
|
||||
|req\_select\_rate|FLOAT||number of select queries received per dnode divided by monitor interval.|
|
||||
|req\_insert|INT||number of insert queries received per dnode|
|
||||
|req\_insert\_success|INT||number of successfully insert queries received per dnode|
|
||||
|req\_insert\_rate|FLOAT||number of insert queries received per dnode divided by monitor interval|
|
||||
|req\_insert\_batch|INT||number of batch insertions|
|
||||
|req\_insert\_batch\_success|INT||number of successful batch insertions|
|
||||
|req\_insert\_batch\_rate|FLOAT||number of batch insertions divided by monitor interval|
|
||||
|errors|INT||dnode errors|
|
||||
|vnodes\_num|INT||number of vnodes per dnode|
|
||||
|masters|INT||number of master vnodes|
|
||||
|has\_mnode|INT||if the dnode has mnode|
|
||||
|has\_qnode|INT||if the dnode has qnode|
|
||||
|has\_snode|INT||if the dnode has snode|
|
||||
|has\_bnode|INT||if the dnode has bnode|
|
||||
|dnode\_id|INT|TAG|dnode id|
|
||||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|uptime|DOUBLE||dnode uptime in `seconds`|
|
||||
|cpu\_engine|DOUBLE||cpu usage of tdengine. read from `/proc/<taosd_pid>/stat`|
|
||||
|cpu\_system|DOUBLE||cpu usage of server. read from `/proc/stat`|
|
||||
|cpu\_cores|DOUBLE||cpu cores of server|
|
||||
|mem\_engine|DOUBLE||memory usage of tdengine. read from `/proc/<taosd_pid>/status`|
|
||||
|mem\_free|DOUBLE||available memory on the server in `KB`|
|
||||
|mem\_total|DOUBLE||total memory of server in `KB`|
|
||||
|disk\_used|DOUBLE||usage of data dir in `bytes`|
|
||||
|disk\_total|DOUBLE||the capacity of data dir in `bytes`|
|
||||
|system\_net\_in|DOUBLE||network throughput rate in byte/s. read from `/proc/net/dev`|
|
||||
|system\_net\_out|DOUBLE||network throughput rate in byte/s. read from `/proc/net/dev`|
|
||||
|io\_read|DOUBLE||io throughput rate in byte/s. read from `/proc/<taosd_pid>/io`|
|
||||
|io\_write|DOUBLE||io throughput rate in byte/s. read from `/proc/<taosd_pid>/io`|
|
||||
|io\_read\_disk|DOUBLE||io throughput rate of disk in byte/s. read from `/proc/<taosd_pid>/io`|
|
||||
|io\_write\_disk|DOUBLE||io throughput rate of disk in byte/s. read from `/proc/<taosd_pid>/io`|
|
||||
|vnodes\_num|DOUBLE||number of vnodes per dnode|
|
||||
|masters|DOUBLE||number of master vnodes|
|
||||
|has\_mnode|DOUBLE||if the dnode has mnode, value range:include=1, not_include=0|
|
||||
|has\_qnode|DOUBLE||if the dnode has qnode, value range:include=1, not_include=0|
|
||||
|has\_snode|DOUBLE||if the dnode has snode, value range:include=1, not_include=0|
|
||||
|has\_bnode|DOUBLE||if the dnode has bnode, value range:include=1, not_include=0|
|
||||
|error\_log\_count|DOUBLE||error count|
|
||||
|info\_log\_count|DOUBLE||info count|
|
||||
|debug\_log\_count|DOUBLE||debug count|
|
||||
|trace\_log\_count|DOUBLE||trace count|
|
||||
|dnode\_id|VARCHAR|TAG|dnode id|
|
||||
|dnode\_ep|VARCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### data\_dir table
|
||||
### taosd\_dnodes\_status table
|
||||
|
||||
`data_dir` table contains data directory information records.
|
||||
`taosd_dnodes_status` table contains dnodes information records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|name|NCHAR||data directory. default is `/var/lib/taos`|
|
||||
|level|INT||level for multi-level storage|
|
||||
|avail|BIGINT||available space for data directory in `bytes`|
|
||||
|used|BIGINT||used space for data directory in `bytes`|
|
||||
|total|BIGINT||total space for data directory in `bytes`|
|
||||
|dnode\_id|INT|TAG|dnode id|
|
||||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|status|DOUBLE||dnode status, value range:ready=1,offline =0|
|
||||
|dnode\_id|VARCHAR|TAG|dnode id|
|
||||
|dnode\_ep|VARCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### log\_dir table
|
||||
### taosd\_dnodes\_log\_dir table
|
||||
|
||||
`log_dir` table contains log directory information records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|name|NCHAR||log directory. default is `/var/log/taos/`|
|
||||
|avail|BIGINT||available space for log directory in `bytes`|
|
||||
|used|BIGINT||used space for data directory in `bytes`|
|
||||
|total|BIGINT||total space for data directory in `bytes`|
|
||||
|dnode\_id|INT|TAG|dnode id|
|
||||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|avail|DOUBLE||available space for log directory in `bytes`|
|
||||
|used|DOUBLE||used space for data directory in `bytes`|
|
||||
|total|DOUBLE||total space for data directory in `bytes`|
|
||||
|name|VARCHAR|TAG|log directory. default is `/var/log/taos/`|
|
||||
|dnode\_id|VARCHAR|TAG|dnode id|
|
||||
|dnode\_ep|VARCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### temp\_dir table
|
||||
### taosd\_dnodes\_data\_dir table
|
||||
|
||||
`temp_dir` table contains temp dir information records.
|
||||
`taosd_dnodes_data_dir` table contains data directory information records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|name|NCHAR||temp directory. default is `/tmp/`|
|
||||
|avail|BIGINT||available space for temp directory in `bytes`|
|
||||
|used|BIGINT||used space for temp directory in `bytes`|
|
||||
|total|BIGINT||total space for temp directory in `bytes`|
|
||||
|dnode\_id|INT|TAG|dnode id|
|
||||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|avail|DOUBLE||available space for data directory in `bytes`|
|
||||
|used|DOUBLE||used space for data directory in `bytes`|
|
||||
|total|DOUBLE||total space for data directory in `bytes`|
|
||||
|level|VARCHAR|TAG|level for multi-level storage|
|
||||
|name|VARCHAR|TAG|data directory. default is `/var/lib/taos`|
|
||||
|dnode\_id|VARCHAR|TAG|dnode id|
|
||||
|dnode\_ep|VARCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### vgroups\_info table
|
||||
### taosd\_mnodes\_info table
|
||||
|
||||
`vgroups_info` table contains vgroups information records.
|
||||
`taosd_mnodes_info` table contains mnode information records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|vgroup\_id|INT||vgroup id|
|
||||
|database\_name|VARCHAR||database for the vgroup|
|
||||
|tables\_num|BIGINT||number of tables per vgroup|
|
||||
|status|VARCHAR||status|
|
||||
|dnode\_id|INT|TAG|dnode id|
|
||||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|role|DOUBLE||the role of mnode. value range:offline = 0,follower = 100,candidate = 101,leader = 102,error = 103,learner = 104|
|
||||
|mnode\_id|VARCHAR|TAG|master node id|
|
||||
|mnode\_ep|VARCHAR|TAG|master node endpoint|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### vnodes\_role table
|
||||
### taosd\_vnodes\_role table
|
||||
|
||||
`vnodes_role` table contains vnode role information records.
|
||||
`taosd_vnodes_role` table contains vnode role information records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|vnode\_role|VARCHAR||role. leader or follower|
|
||||
|dnode\_id|INT|TAG|dnode id|
|
||||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|role|DOUBLE||role. value range:offline = 0,follower = 100,candidate = 101,leader = 102,error = 103,learner = 104|
|
||||
|vgroup\_id|VARCHAR|TAG|vgroup id|
|
||||
|database\_name|VARCHAR|TAG|database for the vgroup|
|
||||
|dnode\_id|VARCHAR|TAG|dnode id|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### log\_summary table
|
||||
### taosd\_sql\_req table
|
||||
|
||||
`log_summary` table contains log summary information records.
|
||||
`taosd_sql_req` table contains taosd SQL records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|error|INT||error count|
|
||||
|info|INT||info count|
|
||||
|debug|INT||debug count|
|
||||
|trace|INT||trace count|
|
||||
|dnode\_id|INT|TAG|dnode id|
|
||||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|count|DOUBLE||sql count|
|
||||
|result|VARCHAR|TAG|sql execution result,value range: Success, Failed|
|
||||
|username|VARCHAR|TAG|user name who executed the sql|
|
||||
|sql\_type|VARCHAR|TAG|sql type,value range:inserted_rows|
|
||||
|dnode\_id|VARCHAR|TAG|dnode id|
|
||||
|dnode\_ep|VARCHAR|TAG|dnode endpoint|
|
||||
|vgroup\_id|VARCHAR|TAG|vgroup id|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### grants\_info table
|
||||
### taos\_sql\_req table
|
||||
|
||||
`grants_info` table contains grants information records.
|
||||
`taos_sql_req` table contains taos SQL records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|expire\_time|BIGINT||time until grants expire in seconds|
|
||||
|timeseries\_used|BIGINT||timeseries used|
|
||||
|timeseries\_total|BIGINT||total timeseries|
|
||||
|dnode\_id|INT|TAG|dnode id|
|
||||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|count|DOUBLE||sql count|
|
||||
|result|VARCHAR|TAG|sql execution result,value range: Success, Failed|
|
||||
|username|VARCHAR|TAG|user name who executed the sql|
|
||||
|sql\_type|VARCHAR|TAG|sql type,value range:select, insert,delete|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
|
||||
### taos\_slow\_sql table
|
||||
|
||||
`taos_slow_sql` table contains taos slow SQL records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|count|DOUBLE||sql count|
|
||||
|result|VARCHAR|TAG|sql execution result,value range: Success, Failed|
|
||||
|username|VARCHAR|TAG|user name who executed the sql|
|
||||
|duration|VARCHAR|TAG|sql execution duration,value range:3-10s,10-100s,100-1000s,1000s-|
|
||||
|cluster\_id|VARCHAR|TAG|cluster id|
|
||||
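A hedged sketch of querying this metrics table, assuming the default `log` database and the schema above:

```sql
-- Total slow-query count per duration bucket (duration is a tag;
-- `count` is backquoted because it collides with a keyword).
SELECT duration, SUM(`count`) FROM log.taos_slow_sql GROUP BY duration;
```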
|
||||
|
||||
### keeper\_monitor table
|
||||
|
||||
|
|
|
@ -68,7 +68,7 @@ The following return value results indicate that the verification passed.

## HTTP request URL format

```text
http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone[&req_id=req_id]]
http://<fqdn>:<port>/rest/sql/[db_name][?tz=timezone[&req_id=req_id][&row_with_meta=true]]
```

Parameter Description:

@ -78,6 +78,7 @@ Parameter Description:

- db_name: Optional parameter that specifies the default database name for the executed SQL command.
- tz: Optional parameter that specifies the timezone of the returned time, following the IANA Time Zone rules, e.g. `America/New_York`.
- req_id: Optional parameter that specifies the request id for tracing.
- row_with_meta: Optional parameter that specifies whether each row of data carries the column name. The default value is `false`. (Supported starting from version 3.3.2.0.) An example request follows this list.

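As a quick sanity check of these parameters, the request below combines `tz`, `req_id`, and `row_with_meta`. It is a sketch assuming a local taosAdapter on port 6041, the default `root:taosdata` credentials (Base64-encoded in the `Authorization` header), and a hypothetical `power` database:

```shell
# Ask for key-value rows in New York time; the Basic token is base64("root:taosdata")
curl -L -H "Authorization: Basic cm9vdDp0YW9zZGF0YQ==" \
  -d "select ts, current from meters limit 1" \
  "http://localhost:6041/rest/sql/power?tz=America/New_York&req_id=3&row_with_meta=true"
```
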
:::note

@ -336,6 +337,82 @@ Description:

- code: (`int`) Error code.
- desc: (`string`) Error code description.

#### Return key-value pair

When the parameter `row_with_meta=true` is specified, the data returned in the `data` field changes from array format to object format, where the key of the object is the column name and the value is the data.

insert response example:

```json
{
  "code": 0,
  "column_meta": [
    [
      "affected_rows",
      "INT",
      4
    ]
  ],
  "data": [
    {
      "affected_rows": 1
    }
  ],
  "rows": 1
}
```

query response example:

```json
{
  "code": 0,
  "column_meta": [
    [
      "ts",
      "TIMESTAMP",
      8
    ],
    [
      "current",
      "FLOAT",
      4
    ],
    [
      "voltage",
      "INT",
      4
    ],
    [
      "phase",
      "FLOAT",
      4
    ],
    [
      "groupid",
      "INT",
      4
    ],
    [
      "location",
      "VARCHAR",
      24
    ]
  ],
  "data": [
    {
      "ts": "2017-07-14T02:40:00.000Z",
      "current": -2.498076,
      "voltage": 0,
      "phase": -0.846025,
      "groupid": 8,
      "location": "California.Sunnyvale"
    }
  ],
  "rows": 1
}
```

## Custom Authorization Code

HTTP requests require an authorization code `<TOKEN>` for identification purposes. The administrator usually provides the authorization code, and it can be obtained simply by sending an `HTTP GET` request as follows:

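For instance, a sketch of such a request against a local taosAdapter with the default credentials; the `/rest/login/<user>/<password>` endpoint returns the token in the `desc` field of the response:

```shell
# Fetch the authorization token for user root with password taosdata
curl http://localhost:6041/rest/login/root/taosdata
```
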
@ -58,6 +58,7 @@ And many more parameters.

- -a AUTHSTR: Authorization information to connect to the server.
- -A: Obtain authorization information from username and password.
- -B: Set BI mode; if set, all outputs follow the format required by BI tools.
- -c CONFIGDIR: Specify the directory where the configuration file is located. The default is `/etc/taos`, and the default name of the configuration file in this directory is `taos.cfg`.
- -C: Print the configuration parameters of `taos.cfg` in the default directory or in the one specified by -c.
- -d DATABASE: Specify the database to use when connecting to the server.

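Combining a few of these flags, a typical invocation might look like the sketch below (the `power` database is a hypothetical example):

```shell
# Use the default config directory, pre-select database `power`, and enable BI mode
taos -c /etc/taos -d power -B
```
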
@ -729,6 +729,57 @@ The charset that takes effect is UTF-8.

| Value Range | -1: no message is compressed; 0: all messages are compressed; N (N>0): messages exceeding N bytes are compressed |
| Default | -1 |

### fPrecision

| Attribute | Description |
| -------- | -------------------------------- |
| Applicable | Server Only |
| Meaning | Compression precision for float data type |
| Value Range | 0.1 ~ 0.00000001 |
| Default | 0.00000001 |
| Note | Float values smaller than this precision are truncated |

### dPrecision

| Attribute | Description |
| -------- | -------------------------------- |
| Applicable | Server Only |
| Meaning | Compression precision for double data type |
| Value Range | 0.1 ~ 0.0000000000000001 |
| Default | 0.0000000000000001 |
| Note | Double values smaller than this precision are truncated |

### lossyColumn

| Attribute | Description |
| -------- | -------------------------------- |
| Applicable | Server Only |
| Meaning | Enable TSZ lossy compression for float and/or double |
| Value Range | float, double |
| Default | none: disable TSZ lossy compression |

**Additional Notes**

1. It is only available since version 3.2.0.0; once you upgrade to 3.2.0.0 and enable this parameter, you cannot downgrade to a previous version.
2. The TSZ compression algorithm compresses data based on a data prediction technique, so it is more suitable for data with a specific pattern.
3. The TSZ compression algorithm may take longer, but it has a better compression ratio, so it is suitable when you have enough CPU resources and are more sensitive to disk occupation.
4. Example: enable TSZ for both float and double
```shell
lossyColumns float|double
```
5. After configuring, the taosd service needs to be restarted. After restarting, if you see the following output in the taosd logfile, it means the function has been enabled:
```text
02/22 10:49:27.607990 00002933 UTL lossyColumns float|double
```

### ifAdtFse

| Attribute | Description |
| -------- | -------------------------------- |
| Applicable | Server Only |
| Meaning | Replace HUFFMAN with FSE in TSZ; FSE is faster when compressing but slower when decompressing |
| Value Range | 0: Use HUFFMAN, 1: Use FSE |
| Default | 0: Use HUFFMAN |

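Putting these four parameters together, a hypothetical `taos.cfg` excerpt might look like this; the precision values are illustrative, and should be chosen based on how much trailing precision your float/double columns actually need:

```shell
# taos.cfg sketch: truncate float/double below 1e-4, enable TSZ for both types, use FSE
fPrecision   0.0001
dPrecision   0.0001
lossyColumns float|double
ifAdtFse     1
```
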
## Other Parameters

@ -137,17 +137,10 @@ port = 6041

username = "root"
password = "taosdata"

# set taosAdapter to monitor
[taosAdapter]
address = ["127.0.0.1:6041","192.168.1.95:6041"]

[metrics]
# monitoring metric prefix
prefix = "taos"

# cluster data identifier
cluster = "production"

# database to store monitoring data
database = "log"

@ -157,6 +150,19 @@ tables = ["normal_table"]

# database options for db storing metrics data
[metrics.databaseoptions]
cachemodel = "none"

[environment]
# Whether running in cgroup.
incgroup = false

[log]
# rotation file num
rotationCount = 5
# rotation on time
rotationTime = "24h"
# rotation on file size (bytes)
rotationSize = 100000000

```

### Obtain Monitoring Metrics

@ -169,16 +175,16 @@ taosKeeper records monitoring metrics generated by TDengine in a specified datab

$ taos
# the log database is used in this example
> use log;
> select * from cluster_info limit 1;
> select * from taosd_cluster_info limit 1;
```

Example result set:

```shell
ts | first_ep | first_ep_dnode_id | version | master_uptime | monitor_interval | dbs_total | tbs_total | stbs_total | dnodes_total | dnodes_alive | mnodes_total | mnodes_alive | vgroups_total | vgroups_alive | vnodes_total | vnodes_alive | connections_total | protocol | cluster_id |
===============================================================================================================================================================================================================================================================================================================================================================================
2022-08-16 17:37:01.629 | hlb:6030 | 1 | 3.0.0.0 | 0.27250 | 15 | 2 | 27 | 38 | 1 | 1 | 1 | 1 | 4 | 4 | 4 | 4 | 14 | 1 | 5981392874047724755 |
Query OK, 1 rows in database (0.036162s)
_ts | cluster_uptime | dbs_total | tbs_total | stbs_total | vgroups_total | vgroups_alive | vnodes_total | vnodes_alive | mnodes_total | mnodes_alive | connections_total | topics_total | streams_total | dnodes_total | dnodes_alive | grants_expire_time | grants_timeseries_used | grants_timeseries_total | cluster_id |
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
2024-06-04 03:03:34.341 | 0.000000000000000 | 2.000000000000000 | 1.000000000000000 | 4.000000000000000 | 4.000000000000000 | 4.000000000000000 | 4.000000000000000 | 4.000000000000000 | 1.000000000000000 | 1.000000000000000 | 2.000000000000000 | 0.000000000000000 | 0.000000000000000 | 1.000000000000000 | 1.000000000000000 | 0.000000000000000 | 3.000000000000000 | 0.000000000000000 | 554014120921134497 |
Query OK, 1 row(s) in set (0.001652s)
```

#### Export Monitoring Metrics

@ -192,19 +198,22 @@ Sample result set (excerpt):

```shell
# HELP taos_cluster_info_connections_total
# TYPE taos_cluster_info_connections_total counter
taos_cluster_info_connections_total{cluster_id="5981392874047724755"} 16
taos_cluster_info_connections_total{cluster_id="554014120921134497"} 8
# HELP taos_cluster_info_dbs_total
# TYPE taos_cluster_info_dbs_total counter
taos_cluster_info_dbs_total{cluster_id="5981392874047724755"} 2
taos_cluster_info_dbs_total{cluster_id="554014120921134497"} 2
# HELP taos_cluster_info_dnodes_alive
# TYPE taos_cluster_info_dnodes_alive counter
taos_cluster_info_dnodes_alive{cluster_id="5981392874047724755"} 1
taos_cluster_info_dnodes_alive{cluster_id="554014120921134497"} 1
# HELP taos_cluster_info_dnodes_total
# TYPE taos_cluster_info_dnodes_total counter
taos_cluster_info_dnodes_total{cluster_id="5981392874047724755"} 1
taos_cluster_info_dnodes_total{cluster_id="554014120921134497"} 1
# HELP taos_cluster_info_first_ep
# TYPE taos_cluster_info_first_ep gauge
taos_cluster_info_first_ep{cluster_id="5981392874047724755",value="hlb:6030"} 1
taos_cluster_info_first_ep{cluster_id="554014120921134497",value="tdengine:6030"} 1
# HELP taos_cluster_info_first_ep_dnode_id
# TYPE taos_cluster_info_first_ep_dnode_id counter
taos_cluster_info_first_ep_dnode_id{cluster_id="554014120921134497"} 1
```

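If you just want to see this exposition-format output without a Prometheus server, a direct scrape works too; a sketch assuming taosKeeper is listening on its default port 6043:

```shell
# Fetch the Prometheus-format metrics excerpted above
curl http://127.0.0.1:6043/metrics
```
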
### check\_health

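This endpoint is a simple liveness probe; a sketch of calling it, again assuming the default taosKeeper port 6043:

```shell
# Returns a small JSON payload (including a version field) when taosKeeper is healthy
curl -i http://127.0.0.1:6043/check_health
```
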
@ -0,0 +1,53 @@

---
title: Graphical User Interface
sidebar_label: Taos-Explorer
description: User guide about taosExplorer
---

taos-explorer is a web service that provides a GUI-based interactive database management tool.

## Install

taos-explorer has been delivered in the TDengine server package since version 3.3.0.0. After installing the TDengine server, you get the `taos-explorer` service.

## Configure

The configuration file of the `taos-explorer` service is `/etc/taos/explorer.toml` on the Linux platform. The key items in the configuration are as below:

```toml
port = 6060
cluster = "http://localhost:6041"
```

The description of these two parameters:

- port: the port that the taos-explorer service listens on
- cluster: the endpoint of the TDengine cluster for taos-explorer to manage. Only the WebSocket connection is supported, so this address is actually the endpoint of the `taosAdapter` service in the TDengine cluster.

## Start & Stop

Before starting the service, please first make sure the configuration is correct, and that the TDengine cluster (mainly the `taosd` and `taosAdapter` services) is already alive and working well.

### Linux

On Linux systems you can use `systemctl` to manage the service as below:

- Start the service: `systemctl start taos-explorer`
- Stop the service: `systemctl stop taos-explorer`
- Restart the service: `systemctl restart taos-explorer`
- Check service status: `systemctl status taos-explorer`

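Once started, a quick reachability check from the shell looks like this, assuming the default port 6060 configured above:

```shell
# A 200-series response means the web UI is up and serving
curl -I http://localhost:6060
```
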
## Register & Logon

### Register

After installing, configuring and starting, you can use your browser to access taos-explorer at the address `http://ip:6060`. If you have not registered before, the registration page shows up first. Enter your valid enterprise email, receive the activation code, and input the code. Congratulations, you have registered successfully.

### Logon

After registering, you can log on using your user name and the corresponding password in the database system. The default username is `root`, but you can change it to another one. After logging into the system, you can view or manage databases, create super tables, create child tables, or view the data in the database.

Some functionalities are only available to enterprise users; you can view and experience them but cannot actually use them.

@ -13,36 +13,30 @@ TDengine can be quickly integrated with the open-source data visualization syste

In order for Grafana to add the TDengine data source successfully, the following preparations are required:

1. The TDengine cluster is deployed and functioning properly
2. taosAdapter is installed and running properly. Please refer to the taosAdapter manual for details.
1. Grafana server is installed and running properly. TDengine currently supports Grafana versions 7.5 and above. Users can go to the Grafana official website to download the installation package and execute the installation according to the current operating system. The download address is as follows: [https://grafana.com/grafana/download](https://grafana.com/grafana/download).
2. The TDengine cluster is deployed and functioning properly
3. taosAdapter is installed and running properly. Please refer to the taosAdapter manual for details.

Record these values:

- TDengine REST API url: `http://tdengine.local:6041`.
- TDengine cluster authorization, with user + password.

## Installing Grafana

TDengine currently supports Grafana versions 7.5 and above. Users can go to the Grafana official website to download the installation package and execute the installation according to the current operating system. The download address is as follows: [https://grafana.com/grafana/download](https://grafana.com/grafana/download).

## Configuring Grafana

### Install Grafana Plugin and Configure Data Source
## Install Grafana Plugin and Configure Data Source

<Tabs defaultValue="script">
<TabItem value="gui" label="With GUI">

Under Grafana 8, the plugin catalog allows you to [browse and manage plugins within Grafana](https://grafana.com/docs/grafana/next/administration/plugin-management/#plugin-catalog) (but for Grafana 7.x, use **With Script** or **Install & Configure Manually**). Find the page at **Configurations > Plugins**, search for **TDengine** and click it to install.



Installation may take some minutes, then you can **Create a TDengine data source**:



Installation may take some minutes; you can **Create a TDengine data source** when the installation has finished.
Then you can add a TDengine data source by filling in the configuration options.



- Host: IP address of the server where the components of the TDengine cluster provide the REST service, plus the port number of the TDengine REST service (6041); by default use `http://localhost:6041`.
- User: TDengine user name.
- Password: TDengine user password.

Click `Save & Test` to test. You should see a success message if the test worked.

You can create dashboards with TDengine now.

@ -77,7 +71,7 @@ sudo -u grafana grafana-cli plugins install tdengine-datasource

You can also download zip files from [GitHub](https://github.com/taosdata/grafanaplugin/releases/latest) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and install manually. The commands are as follows:

```bash
GF_VERSION=3.3.1
GF_VERSION=3.5.2
# from GitHub
wget https://github.com/taosdata/grafanaplugin/releases/download/v$GF_VERSION/tdengine-datasource-$GF_VERSION.zip
# from Grafana

@ -96,26 +90,17 @@ If Grafana is running in a Docker environment, the TDengine plugin can be automa

GF_INSTALL_PLUGINS=tdengine-datasource
```

Now users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure.



Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it, as shown in the following figure.



Now users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side.

Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it.
Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.



- Host: IP address of the server where the components of the TDengine cluster provide REST service (offered by taosd before 2.4 and by taosAdapter since 2.4) and the port number of the TDengine REST service (6041), by default use `http://localhost:6041`.
- Host: IP address of the server where the components of the TDengine cluster provide REST service and the port number of the TDengine REST service (6041), by default use `http://localhost:6041`.
- User: TDengine user name.
- Password: TDengine user password.

Click `Save & Test` to test. You should see a success message if the test worked.



</TabItem>
<TabItem value="container" label="Container">

@ -156,7 +141,7 @@ You can setup a zero-configuration stack for TDengine + Grafana by [docker-compo

services:
  tdengine:
    image: tdengine/tdengine:3.0.2.4
    image: tdengine/tdengine:3.3.0.0
    environment:
      TAOS_FQDN: tdengine
    volumes:

@ -186,43 +171,118 @@ Open Grafana (http://localhost:3000), and you can add dashboard with TDengine no

</TabItem>
</Tabs>

### Create Dashboard
:::info

Go back to the main interface to create a dashboard and click Add Query to enter the panel query page:



As shown above, select the `TDengine` data source in the `Query` and enter the corresponding SQL in the query box below for the query.

- INPUT SQL: Enter the desired query (the results being two columns and multiple rows), such as `select _wstart, avg(mem_system) from log.dnodes_info where ts >= $from and ts < $to interval($interval)`. In this statement, $from, $to, and $interval are variables that Grafana replaces with the query time range and interval. In addition to the built-in variables, custom template variables are also supported.
- ALIAS BY: This allows you to set the current query alias.
- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final executed statement.
- Group by column name(s): `group by` or `partition by` column names split by comma. By setting `Group by column name(s)`, it can show multi-dimensional data if the SQL is a `group by` or `partition by` query. For example, it can show data by `dnode_ep` if the SQL is `select _wstart as ts, avg(mem_system), dnode_ep from log.dnodes_info where ts>=$from and ts<=$to partition by dnode_ep interval($interval)` and `Group by column name(s)` is `dnode_ep`.
- Format to: format legend for `group by` or `partition by`. For example, it can display series data by `dnode_ep` if the SQL is `select _wstart as ts, avg(mem_system), dnode_ep from log.dnodes_info where ts>=$from and ts<=$to partition by dnode_ep interval($interval)`, `Group by column name(s)` is `dnode_ep`, and `Format to` is `mem_system_{{dnode_ep}}`.

:::note

Since the REST connection is stateless, the Grafana plugin can use `<db_name>.<table_name>` in the SQL command to specify the database name.
In the following introduction, we take Grafana v11.0.0 as an example. Other versions may have different features; please refer to [Grafana's official website](https://grafana.com/docs/grafana/latest/).

:::

Follow the default prompt to query the average system memory usage for the specified interval on the server where the current TDengine deployment is located as follows.
## Built-in Variables and Custom Variables
The Variable feature in Grafana is very powerful. It can be used in queries, panel titles, labels, etc., to create more dynamic and interactive Dashboards, improving user experience and efficiency.



The main functions and characteristics of variables include:

The example to query the average system memory usage for the specified interval on each server is as follows.
- Dynamic data query: Variables can be used in query statements, allowing users to dynamically change query conditions by selecting different variable values, thus viewing different data views. This is very useful for scenarios that need to dynamically display data based on user input.



- Improved reusability: By defining variables, the same configuration or query logic can be reused in multiple places without the need to rewrite the same code. This makes the maintenance and updating of Dashboards simpler and more efficient.

- Flexible configuration options: Variables offer a variety of configuration options, such as predefined static value lists, dynamic value querying from data sources, regular expression filtering, etc., making the application of variables more flexible and powerful.

Grafana provides both built-in variables and custom variables, which can be referenced in SQL writing. We can use `$variableName` to reference a variable, where `variableName` is the name of the variable. For detailed reference, please refer to [Variable reference](https://grafana.com/docs/grafana/latest/dashboards/variables/variable-syntax/).

### Built-in Variables
Grafana has built-in variables such as `from`, `to`, and `interval`, all derived from Grafana plugin panels. Their meanings are as follows:
- `from` is the start time of the query range
- `to` is the end time of the query range
- `interval` represents the time span

It is recommended to set the start and end times of the query range for each query, which can effectively reduce the amount of data scanned by the TDengine server during query execution. `interval` is the size of the window split, which in Grafana version 11 is calculated based on the time range and the number of return points.
In addition to the above three common variables, Grafana also provides variables such as `__timezone`, `__org`, `__user`, etc. For details, please refer to [Built-in Variables](https://grafana.com/docs/grafana/latest/dashboards/variables/add-template-variables/#global-variables).

### Custom Variables
We can add custom variables in the Dashboard. The usage of custom variables is no different from that of built-in variables; they are referenced in SQL with `$variableName`.
Custom variables support multiple types, such as `Query`, `Constant`, `Interval`, `Data source`, etc.
Custom variables can reference other custom variables; for example, one variable represents a region, and another variable can reference the value of the region to query devices in that region.

#### Adding Query Type Variables
In the Dashboard configuration, select `Variables`, then click `New variable`:
1. In the `Name` field, enter your variable name; here we set the variable name as `selected_groups`.
2. In the `Select variable type` dropdown menu, select `Query`.
Depending on the selected variable type, configure the corresponding options. For example, if you choose `Query`, you need to specify the data source and the query statement for obtaining variable values. Here, taking smart meters as an example, we set the query type, select the data source, and configure the SQL as `select distinct(groupid) from power.meters where groupid < 3 and ts > $from and ts < $to;`
3. After clicking `Run Query` at the bottom, you can see the variable values generated based on your configuration in the `Preview of values` section.
4. Other configurations are not detailed here. After completing the configuration, click the `Apply` button at the bottom of the page, then click `Save dashboard` in the top right corner to save.

After completing the above steps, we have successfully added a new custom variable `$selected_groups` to the Dashboard. We can later reference this variable in the Dashboard's queries with `$selected_groups`.

We can also add another custom variable to reference this `selected_groups` variable; for example, we add a query variable named `tbname_max_current`, with the SQL as `select tbname from power.meters where groupid = $selected_groups and ts > $from and ts < $to;`

#### Adding Interval Type Variables
We can customize the time window interval to better fit business needs.
1. In the `Name` field, enter the variable name as `interval`.
2. In the `Select variable type` dropdown menu, select `Interval`.
3. In the `Interval options` field, enter `1s,2s,5s,10s,15s,30s,1m`.
4. Other configurations are not detailed here. After completing the configuration, click the `Apply` button at the bottom of the page, then click `Save dashboard` in the top right corner to save.

After completing the above steps, we have successfully added a new custom variable `$interval` to the Dashboard. We can later reference this variable in the Dashboard's queries with `$interval`.

## TDengine Time Series Query Support
On top of supporting standard SQL, TDengine also provides a series of special query syntaxes that meet the needs of time series business scenarios, bringing great convenience to the development of applications in time series scenarios. An end-to-end sketch follows this list.
- `partition by`: this clause can split data by certain dimensions and then perform a series of calculations within the split data space; it can replace `group by` in most cases.
- `interval`: this clause is used to generate time windows of the same time interval.
- `fill`: this clause is used to specify how to fill when there is data missing in any window.
- `Window Pseudocolumns`: If you need to output the time window information corresponding to the aggregation result, you need to use window pseudocolumns in the SELECT clause: the start time of the time window (_wstart), the end time of the time window (_wend), etc.

For a detailed introduction to these features, please refer to [Time-Series Extensions](../../taos-sql/distinguished/).

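The sketch below exercises all of these clauses at once from the TDengine CLI (the `power.meters` supertable and the one-hour lookback are illustrative assumptions):

```shell
# 10-minute windows over the last hour, split by groupid, gaps filled with NULL,
# with the window start exposed via the _wstart pseudocolumn
taos -s "select _wstart, avg(current) from power.meters where ts >= now - 1h partition by groupid interval(10m) fill(null);"
```
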
## Create Dashboard

Return to the main interface to create a Dashboard, and click Add Query to enter the panel query page:



As shown above, select the `TDengine` data source in the `Query` and enter the corresponding SQL in the query box below for the query. We will continue to use power meters as an example. In order to demonstrate the beautiful curves, **virtual data** is used here.

## Time Series Data Display
Suppose we want to query the average current over a period of time, with the time window divided by `$interval`, filling with null if data is missing in any time window interval.
- INPUT SQL: Enter the statement to be queried (the result set of this SQL statement should be two columns and multiple rows), here enter: `select _wstart as ts, avg(current) as current from power.meters where groupid in ($selected_groups) and ts > $from and ts < $to interval($interval) fill(null)`, where from, to, and interval are built-in variables of Grafana, and selected_groups is a custom variable.
- ALIAS BY: You can set the current query alias.
- GENERATE SQL: Clicking this button will automatically replace the corresponding variables and generate the final execution statement.

In the custom variables at the top, if the value of `selected_groups` is selected as 1, then the query for the change in the average value of all device currents in the `meters` supertable where `groupid` is 1 is as shown in the following figure:



:::note

Since the REST interface is stateless, it is not possible to use the `use db` statement to switch databases. In the SQL statement in the Grafana plugin, you can use \<db_name>.\<table_name> to specify the database.

:::

## Time Series Data Group Display
Suppose we want to query the average current value over a period of time, displayed grouped by `groupid`; we can modify the previous SQL to `select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)`

- Group by column(s): **Half-width** comma-separated `group by` or `partition by` column names. If it is a `group by` or `partition by` query statement, setting the `Group by` column can display multidimensional data. Here, set the Group by column name to `groupid`, which can display data grouped by `groupid`.
- Group By Format: Legend formatting format for multidimensional data in Group by or Partition by scenarios. For example, with the above INPUT SQL, setting `Group By Format` to `groupid-{{groupid}}`, the displayed legend name is the formatted group name.

After completing the settings, the data is displayed grouped by `groupid` as shown in the following figure:



> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).

### Importing the Dashboard
## Performance Suggestions
- **Include the time range in all queries.** In time series databases, if the time range is not included in the query, it will lead to table scanning and poor performance. A common SQL writing example is `select column_name from db.table where ts > $from and ts < $to;`
- For queries of the latest-status type, we generally recommend **enabling cache when creating the database** (`CACHEMODEL` set to last_row or both); a common SQL writing example is `select last(column_name) from db.table where ts > $from and ts < $to;`

## Importing the Dashboard

You can install the TDinsight dashboard from the data source configuration page (like `http://localhost:3000/datasources/edit/1/dashboards`) as a monitoring visualization tool for the TDengine cluster. Ensure that you use TDinsight for 3.x. Please note that TDinsight for 3.x requires taosKeeper to be configured and running correctly.



A dashboard for TDengine 2.x has been published on Grafana: [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167)).
A dashboard for TDengine 2.x has been published on Grafana: [Dashboard 15167 - TDinsight](https://grafana.com/grafana/dashboards/15167).

For more dashboards using the TDengine data source, [search here in Grafana](https://grafana.com/grafana/dashboards/?dataSource=tdengine-datasource). Here is a sub list:

@ -230,3 +290,137 @@ For more dashboards using TDengine data source, [search here in Grafana](https:/

- [15155](https://grafana.com/grafana/dashboards/15155): TDengine alert demo.
- [15167](https://grafana.com/grafana/dashboards/15167): TDinsight.
- [16388](https://grafana.com/grafana/dashboards/16388): Telegraf node metrics dashboard using TDengine data source.

## Alert Configuration Introduction
### Alert Configuration Steps
The TDengine Grafana plugin supports alerts. To configure alerts, the following steps are required:
1. Configure Contact Points: Set up notification channels, including DingDing, Email, Slack, WebHook, Prometheus Alertmanager, etc.
2. Configure Notification Policies: Set up routing for which channel to send alerts to, as well as the timing and frequency of notifications.
3. Configure "Alert rules": Set up detailed alert rules.
   3.1 Configure the alert name.
   3.2 Configure the query and alert trigger conditions.
   3.3 Configure the evaluation behavior.
   3.4 Configure labels and notifications.
   3.5 Configure annotations.

### Alert Configuration Web UI
In Grafana 11, the alert Web UI has 6 tabs: "Alert rules", "Contact points", "Notification policies", "Silences", "Groups", and "Settings".
- "Alert rules" displays and configures alert rules.
- "Contact points" supports notification channels such as DingDing, Email, Slack, WebHook, Prometheus Alertmanager, etc.
- "Notification policies" sets up routing for which channel to send alerts to, as well as the timing and frequency of notifications.
- "Silences" configures silent periods for alerts.
- "Groups" displays grouped alerts after they are triggered.
- "Settings" allows modifying alert configurations through JSON.

## Configuring Email Contact Point
### Modifying Grafana Server Configuration File
Add the SMTP/Emailing and Alerting modules to the Grafana service configuration file. For Linux systems, the configuration file is usually located at `/etc/grafana/grafana.ini`.
Add the following content to the configuration file:

```ini
#################################### SMTP / Emailing ##########################
[smtp]
enabled = true
host = smtp.qq.com:465  # email service to use
user = receiver@foxmail.com
password = ***********  # use the mail authorization code
skip_verify = true
from_address = sender@foxmail.com
```

Then restart the Grafana service. For example, on a Linux system, execute `systemctl restart grafana-server.service`.

### Grafana Configuration for Email Contact Point

On the Grafana page, find "Home" -> "Alerting" -> "Contact points" to create a new contact point.
"Name": Email Contact Point
"Integration": Select the contact type, here choose Email, fill in the email receiving address, and save the contact point after completion.



## Configuring Feishu Contact Point

### Feishu Robot Configuration
1. "Feishu Workspace" -> "Get Apps" -> "Search for Feishu Robot Assistant" -> "Create Command"
2. Choose Trigger: Grafana
3. Choose Action: Send a message through the official robot, filling in the recipient and message content



### Grafana Configuration for Feishu Contact Point

On the Grafana page, find "Home" -> "Alerting" -> "Contact points" to create a new contact point.
"Name": Feishu Contact Point
"Integration": Select the contact type, here choose Webhook, and fill in the URL (the Grafana trigger Webhook address in Feishu Robot Assistant), then save the contact point.



## Notification Policy
After configuring the contact points, you can see there is a Default Policy.



Click the "..." on the right -> "Edit", then edit the default notification policy; a configuration window pops up:



Configure the parameters as shown in the screenshot above.

## Configuring Alert Rules

### Define Query and Alert Conditions

Select "Edit" -> "Alert" -> "New alert rule" in the panel where you want to configure the alert.

1. "Enter alert rule name": Here, enter `power meters alert` as an example for smart meters.
2. "Define query and alert condition":
   2.1 Choose data source: `TDengine Datasource`
   2.2 Query statement:
```sql
select _wstart as ts, groupid, avg(current) as current from power.meters where ts > $from and ts < $to partition by groupid interval($interval) fill(null)
```
   2.3 Set "Expression": `Threshold is above 100`
   2.4 Click "Set as alert condition"
   2.5 "Preview": View the results of the set rules

After completing the settings, you can see the image displayed below:



### Configuring Expressions and Calculation Rules

Grafana's "Expression" supports various operations and calculations on data, which are divided into:
1. "Reduce": Aggregates the values of a time series within the selected time range into a single value
   1.1 "Function" is used to set the aggregation method, supporting Min, Max, Last, Mean, Sum, and Count.
   1.2 "Mode" supports the following three:
   - "Strict": If no data is queried, the data will be assigned NaN.
   - "Drop Non-numeric Value": Remove illegal data results.
   - "Replace Non-numeric Value": If it is illegal data, replace it with a constant value.
2. "Threshold": Checks whether the time series data meets the threshold judgment condition. Returns 0 when the condition is false, and 1 when true. Supports the following methods:
   - Is above (x > y)
   - Is below (x < y)
   - Is within range (x > y1 AND x < y2)
   - Is outside range (x < y1 OR x > y2)
3. "Math": Performs mathematical operations on the data of the time series.
4. "Resample": Changes the timestamps in each time series to have a consistent interval, so that mathematical operations can be performed between them.
5. "Classic condition (legacy)": Multiple logical conditions can be configured to determine whether to trigger an alert.

As shown in the screenshot above, here we set the alert to trigger when the maximum value exceeds 100.

### Configuring Evaluation Behavior



Configure the parameters as shown in the screenshot above.

### Configuring Labels and Notifications


Configure the parameters as shown in the screenshot above.

### Configuring Annotations



After setting "Summary" and "Description", you will receive an alert notification if the alert is triggered.

@ -94,7 +94,7 @@ The output as bellow:

The role of the TDengine Sink Connector is to synchronize the data of the specified topic to TDengine. Users do not need to create databases and super tables in advance. The name of the target database can be specified manually (see the configuration parameter connection.database), or it can be generated according to specific rules (see the configuration parameter connection.database.prefix).

TDengine Sink Connector internally uses TDengine [modeless write interface](../../client-libraries/cpp#modeless write-api) to write data to TDengine, currently supports data in three formats: [InfluxDB line protocol format](../../develop/insert-data/influxdb-line), [OpenTSDB Telnet protocol format](../../develop/insert-data/opentsdb-telnet), and [OpenTSDB JSON protocol format](../../develop/insert-data/opentsdb-json).
The TDengine Sink Connector internally uses the TDengine [schemaless write interface](../../client-libraries/cpp/#schemaless-writing-api) to write data to TDengine; it currently supports data in three formats: [InfluxDB line protocol format](../../develop/insert-data/influxdb-line), [OpenTSDB Telnet protocol format](../../develop/insert-data/opentsdb-telnet), and [OpenTSDB JSON protocol format](../../develop/insert-data/opentsdb-json).

The following example synchronizes the data of the topic meters to the target database power. The data format is the InfluxDB Line protocol format.

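A sketch of such a sink configuration, submitted through the Kafka Connect REST API. The connector class and the `connection.*`/`db.schemaless` option names follow the TDengine connector's documented naming, but treat the exact values here as illustrative assumptions for a local setup (Kafka Connect on localhost:8083, TDengine on 6030):

```shell
# Create a sink connector that writes topic `meters` into database `power`
# using the InfluxDB line protocol
curl -X POST -H "Content-Type: application/json" http://localhost:8083/connectors -d '{
  "name": "TDengineSinkConnector",
  "config": {
    "connector.class": "com.taosdata.kafka.connect.sink.TDengineSinkConnector",
    "tasks.max": "1",
    "topics": "meters",
    "connection.url": "jdbc:TAOS://127.0.0.1:6030",
    "connection.user": "root",
    "connection.password": "taosdata",
    "connection.database": "power",
    "db.schemaless": "line",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter"
  }
}'
```
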
@ -4,33 +4,43 @@ title: Power BI

description: Use PowerBI and TDengine to analyze time series data
---

## Introduction
# Tools - Power BI

With the TDengine ODBC driver, Power BI can access time series data stored in TDengine. You can import tag data, original time series data, or aggregated data into Power BI from TDengine, to create reports or dashboards without any coding effort.


## Steps

[Power BI](https://powerbi.microsoft.com/) is a business analytics tool provided by Microsoft. With the TDengine ODBC driver, Power BI can access time series data stored in TDengine. You can import tag data, original time series data, or aggregated data into Power BI from TDengine, to create reports or dashboards without any coding effort.

### Prerequisites
### Prerequisite
1. TDengine server software is installed and running.
2. Power BI Desktop has been installed and running (if not, please download and install the latest Windows X64 version from [PowerBI](https://www.microsoft.com/zh-cn/download/details.aspx?id=58494)).

1. The TDengine server has been installed and is running well.
2. Power BI Desktop has been installed and is running. (If not, please download and install the latest Windows X64 version from [PowerBI](https://www.microsoft.com/download/details.aspx?id=58494).)
### Install ODBC connector
1. Only the Windows operating system is supported, and you need to install the [VC Runtime Library](https://learn.microsoft.com/zh-cn/cpp/windows/latest-supported-vc-redist?view=msvc-170) first. If it is already installed, please ignore this step.
2. Install the [TDengine Windows client installation package](https://docs.taosdata.com/get-started/package/).

### Configure ODBC DataSource
1. Click the "Start" menu, search for "ODBC", and choose "ODBC Data Source (64-bit)" (Note: don't choose 32-bit).
2. Select the "User DSN" tab, and click the "Add" button to enter the "Create Data Source" page.
3. Choose the data source to be added; here we choose "TDengine", click "Finish", and enter the configuration page for the "TDengine ODBC Data Source", filling in the required fields as follows:

   [DSN]: Data Source Name, required field, such as "MyTDengine"

Depending on your TDengine server version, download appropriate version of TDengine client package from TDengine website [Download Link](https://docs.tdengine.com/get-started/package/), or TDengine explorer if you are using a local TDengine cluster. Install the TDengine client package on same Windows machine where PowerBI is running.


## Install Driver
   [URL]: taos://localhost:6041

Depending on your TDengine server version, download appropriate version of TDengine client package from TDengine website [Download Link](../../get-started/package/), or TDengine explorer if you are using a local TDengine cluster. Install the TDengine client package on same Windows machine where PowerBI is running.
   [Database]: optional field, the default database to access, such as "test"

### Configure Data Source
   [UserID]: Enter the user name. If this parameter is not specified, the user name is root by default

Please refer to [ODBC](../../client-libraries/odbc) to configure TDengine ODBC Driver with WebSocket connection.
   [Password]: Enter the user password. If not specified, the default is taosdata

4. Click "Test Connection" to test whether the data source can be connected; if successful, it will prompt "Successfully connected to taos://root:taosdata@localhost:6041".

### Import Data from TDengine to Power BI

1. Open Power BI and log on, add the data source following the steps "Home Page" -> "Get Data" -> "Others" -> "ODBC" -> "Connect"

2. Choose the data source name, connect to the configured data source, go to the navigator, browse tables of the selected database and load data

1. Open Power BI and log on, add the data source following the steps "Home" -> "Get data" -> "Other" -> "ODBC" -> "Connect".
2. Choose the created data source name, such as "MyTDengine", then click the "OK" button to open the "ODBC Driver" dialog. In the dialog, select the "Default or Custom" left menu and then click the "Connect" button to connect to the configured data source. Then go to the "Navigator", browse tables of the selected database and load data.
3. If you want to input some specific SQL, click "Advanced Options", input your SQL in the open dialog box, and load the data.


@ -49,17 +59,9 @@ To better use Power BI to analyze the data stored in TDengine, you need to under

4. Correlation: Indicates how to correlate data. Dimensions and Metrics can be correlated by tbname; dates and metrics can be correlated by date. All these can cooperate to form visual reports.

### Example - Meters

TDengine has its own specific data model, which uses a supertable as a template and creates a specific table for each device. Each table can have a maximum of 4,096 data columns and 128 tags. In the example of meters, assume each meter generates one record per second; then there will be 86,400 records each day and 31,536,000 records every year, and only 1,000 meters will occupy 500GB of disk space. So, the common usage of Power BI is mapping tags to dimension columns and mapping the aggregation of data columns to metric columns, to provide indicators for decision makers.

1. Import Dimensions

Import the tags of tables in Power BI, and name it "tags"; the SQL is like `select distinct tbname, groupid, location from test.meters;`.

2. Import Metrics

In Power BI, import the average current, average voltage, and average phase with a 1-hour window, and name it "data"; the SQL is like `select tbname, _wstart ws, avg(current), avg(voltage), avg(phase) from test.meters PARTITION by tbname interval(1h)`.

3. Correlate Dimensions and Metrics

In Power BI, open the model view, correlate "tags" and "data", and set "tbname" as the correlation column; then you can use the data in histograms, pie charts, etc. For more information about building visual reports in Power BI, please refer to [Power BI](https://learn.microsoft.com/power-bi/).
TDengine has its own specific data model, which uses a supertable as a template and creates a specific table for each device. Each table can have a maximum of 4,096 data columns and 128 tags. In [the example of meters](https://docs.taosdata.com/concept/), assume each meter generates one record per second; then there will be 86,400 records each day and 31,536,000 records every year, and only 1,000 meters will occupy 500GB of disk space. So, the common usage of Power BI is mapping tags to dimension columns and mapping the aggregation of data columns to metric columns, to provide indicators for decision makers.
1. Import Dimensions: Import the tags of tables in Power BI, and name it "tags"; the SQL is as follows:
   `select distinct tbname, groupid, location from test.meters;`
2. Import Metrics: In Power BI, import the average current, average voltage, and average phase with a 1-hour window, and name it "data"; the SQL is as follows:
   `select tbname, _wstart ws, avg(current), avg(voltage), avg(phase) from test.meters PARTITION by tbname interval(1h)`
3. Correlate Dimensions and Metrics: In Power BI, open the model view, correlate "tags" and "data", and set "tbname" as the correlation column; then you can use the data in histograms, pie charts, etc. For more information about building visual reports in Power BI, please refer to [Power BI](https://learn.microsoft.com/zh-cn/power-bi/).
@ -0,0 +1,64 @@

---
sidebar_label: Yonghong BI
title: Yonghong BI
description: Use YonghongBI and TDengine to analyze time series data
---

# Tools - Yonghong BI



The [Yonghong one-stop big data BI platform](https://www.yonghongtech.com/) provides enterprises of all sizes with flexible and easy-to-use whole-business-chain big data analysis solutions, so that every user can use the platform to easily discover the value of big data and obtain deep insights. TDengine can be added to Yonghong BI as a data source via a JDBC connector. Once the data source is configured, Yonghong BI can read data from TDengine and provide functions such as data presentation, analysis and prediction.

### Prerequisite

1. Yonghong Desktop Basic is installed and running (if not, please download it from the [official download page of Yonghong Technology](https://www.yonghongtech.com/cp/desktop/)).
2. TDengine is installed and running; also ensure that the taosAdapter service is started on the TDengine server side.

### Install JDBC Connector

Go to [maven.org](https://central.sonatype.com/artifact/com.taosdata.jdbc/taos-jdbcdriver/versions) to download the latest TDengine JDBC connector (current version [3.2.7](https://repo1.maven.org/maven2/com/taosdata/jdbc/taos-jdbcdriver/3.2.7/taos-jdbcdriver-3.2.7-dist.jar)) and install it on the machine where the BI tool is running.

### Configure JDBC DataSource

1. In the Yonghong Desktop BI tool, click "Add data source" and select the "GENERIC" type in the SQL data source.
2. Click "Select Custom Driver". In the "Driver Management" dialog box, click "+" next to "Driver List" and enter the name "MyTDengine". Then click the "Upload File" button to upload the TDengine JDBC connector file "taos-jdbcdriver-3.2.7-dist.jar" you just downloaded, select the "com.taosdata.jdbc.rs.RestfulDriver" driver, and finally click the "OK" button to complete the driver addition.
3. Then copy the following into the "URL" field:
```
jdbc:TAOS-RS://localhost:6041?user=root&password=taosdata
```
4. Then select "No Identity Authentication" under "Authentication Mode".
5. In the advanced settings of the data source, change the value of the "Quote symbol" to the backquote "`".
6. Click "Test Connection" and the dialog box "Test success" will pop up. Click the "Save" button and enter "tdengine" to save the TDengine data source.

### Create TDengine datasets

1. Click "Add Data Set" in the BI tool, expand the data source you just created, and browse the super tables in TDengine.
2. You can load all the data of a super table into the BI tool, or you can import part of the data through custom SQL statements.
3. When "Computation in Database" is selected, the BI tool will no longer cache TDengine time-series data and will send SQL requests to TDengine for direct processing when processing queries.

When data is imported, the BI tool automatically sets the numeric type to the "metric" column and the text type to the "dimension" column. In TDengine super tables, ordinary columns are used as data metrics and tag columns are used as data dimensions, so you may need to change the properties of some columns when you create a dataset. On top of supporting standard SQL, TDengine also provides a series of special query syntaxes to meet the requirements of time series business scenarios, such as data partitioning queries and window partitioning queries; see [TDengine Specialized Queries](https://docs.taosdata.com/taos-sql/distinguished/). By using these featured queries, BI tools can greatly improve data access speed and reduce network transmission bandwidth when they send SQL queries to TDengine databases.

In BI tools, you can create "parameters" and use them in SQL statements. These statements can be executed dynamically, manually or periodically, to achieve a visual report refresh effect. Take the following SQL statement:

```sql
select _wstart ws, count(*) cnt from supertable where tbname=?{metric} and ts >= ?{from} and ts < ?{to} interval(?{interval})
```

Data can be read in real time from TDengine, where:

- `_wstart`: Indicates the start time of the time window.
- `count(*)`: Indicates the aggregate value in the time window.
- `?{interval}`: Indicates that the parameter interval is introduced into the SQL statement. When the BI tool queries data, it assigns a value to the parameter interval. If the value is 1m, the sampled data is reduced based on a 1-minute time window.
- `?{metric}`: This parameter is used to specify the name of the data table to be queried. When the ID of a drop-down parameter component is set as metric in the BI tool, the selected items of the drop-down parameter component are bound to this parameter to achieve dynamic selection.
- `?{from}`, `?{to}`: These two parameters are used to represent the time range of the query data set and can be bound with the Text Parameter Component.

You can modify the data type, data range, and default values of parameters in the "Edit Parameters" dialog box of the BI tool, and dynamically set the values of these parameters in the "Visual Report".

### Create a visual report

1. Click "Make Report" in the Yonghong BI tool to create a canvas.
2. Drag visual components, such as Table Components, onto the canvas.
3. Select the data set to be bound in the Data Set sidebar, and bind Dimensions and Measures in the data columns to the Table Components as needed.
4. Click "Save" to view the report.
5. For more information about Yonghong BI tools, please consult the [help documentation](https://www.yonghongtech.com/help/Z-Suite/10.0/ch/).

@ -90,7 +90,7 @@ Through TAOSC caching mechanism, mnode needs to be accessed only when a table is
|
|||
|
||||
### Storage Model
|
||||
|
||||
The data stored by TDengine includes collected time-series data, metadata and tag data related to database and tablesetc. All of the data is specifically divided into three parts:
|
||||
The data stored by TDengine includes collected time-series data, metadata and tag data related to database and tables, etc. All of the data is specifically divided into three parts:
|
||||
|
||||
- Time-series data: stored in vnode and composed of data, head and last files. Normally the amount of time series data is very huge and query amount depends on the application scenario. Out-of-order writing is allowed. By adopting the model with **one table for each data collection point**, the data of a given time period is continuously stored, and the writing against one single table is a simple appending operation. Multiple records can be read at one time, thus ensuring the best performance for both insert and query operations of a single data collection point.
|
||||
- Table metadata: includes tags and table schema, and is stored in the meta file of each vnode. Full CRUD operations are supported on table metadata. There is one record per table, so the amount of metadata depends on the number of tables. Table metadata is kept in an LRU cache and supports indexing of tag data. TDengine can serve multiple queries in parallel. As long as memory is sufficient, all metadata is kept in memory for quick access, and filtering on tens of millions of tags can be finished in a few milliseconds. Even when memory is not sufficient, TDengine can still perform high-speed queries on tens of millions of tables.
|
||||
|
@ -197,20 +197,20 @@ By default, TDengine saves all data in /var/lib/taos directory, and the data fil
|
|||
dataDir format is as follows:
|
||||
|
||||
```
|
||||
dataDir data_path [tier_level]
|
||||
dataDir data_path [tier_level] [primary] [disable_create_new_file]
|
||||
```
|
||||
|
||||
Where data_path is the folder path of mount point and tier_level is the media storage-tier. The higher the media storage-tier, means the older the data file. Multiple hard disks can be mounted at the same storage-tier, and data files on the same storage-tier are distributed on all hard disks within the tier. TDengine supports up to 3 tiers of storage, so tier_level values are 0, 1, and 2. When configuring dataDir, there must be only one mount path without specifying tier_level, which is called special mount disk (path). The mount path defaults to level 0 storage media and contains special file links, which cannot be removed, otherwise it will have a devastating impact on the written data.
|
||||
Where `data_path` is the folder path of the mount point, and `tier_level` is the storage tier of the media: the higher the storage tier, the older the data files stored on it. Multiple hard disks can be mounted at the same storage tier, and data files on the same tier are distributed across all hard disks within that tier. TDengine supports up to 3 tiers of storage, so `tier_level` values are 0, 1, and 2. When configuring dataDir, there must be exactly one mount path without a `tier_level` specified, which is called the special mount disk (path). That mount path defaults to level 0 storage media and contains special file links, which must not be removed, otherwise the written data is irreparably damaged. `primary` indicates whether the data dir is the primary mount point (0 for false, 1 for true; the default is 1). A TDengine cluster can have only one primary mount point, which must be on tier 0. `disable_create_new_file` indicates whether to prohibit the creation of new file sets on the specified mount point (0 for false, 1 for true; the default is 0). Tier 0 storage must have at least one mount point with `disable_create_new_file` set to 0; tier 1 and tier 2 storage have no such restriction.
|
||||
|
||||
Suppose there is a physical node with six mountable hard disks /mnt/disk1, /mnt/disk2, ..., /mnt/disk6, where disk1 and disk2 are to be designated as level 0 storage media, disk3 and disk4 as level 1, and disk5 and disk6 as level 2. Disk1 is the special mount disk; you can configure it in /etc/taos/taos.cfg as follows:
|
||||
|
||||
```
|
||||
dataDir /mnt/disk1/taos
|
||||
dataDir /mnt/disk2/taos 0
|
||||
dataDir /mnt/disk3/taos 1
|
||||
dataDir /mnt/disk4/taos 1
|
||||
dataDir /mnt/disk5/taos 2
|
||||
dataDir /mnt/disk6/taos 2
|
||||
dataDir /mnt/disk1/taos 0 1 0
|
||||
dataDir /mnt/disk2/taos 0 0 0
|
||||
dataDir /mnt/disk3/taos 1 0 0
|
||||
dataDir /mnt/disk4/taos 1 0 1
|
||||
dataDir /mnt/disk5/taos 2 0 0
|
||||
dataDir /mnt/disk6/taos 2 0 0
|
||||
```
|
||||
|
||||
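In the updated example, the columns after the path are `tier_level`, `primary`, and `disable_create_new_file`: /mnt/disk1 is the only primary mount point (on tier 0), and /mnt/disk4 has `disable_create_new_file` set to 1, so it keeps serving the data already on it but receives no new file sets.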
A mounted disk can also be a non-local network disk, as long as the system can access it.
|
||||
|
|
|
@ -4,12 +4,26 @@ sidebar_label: TDengine
|
|||
description: This document provides download links for all released versions of TDengine 3.0.
|
||||
---
|
||||
|
||||
## TDengine Version Rules
|
||||
|
||||
The TDengine version number consists of four numbers separated by `.`, defined as below:
|
||||
- `[Major+].[Major].[Feature].[Maintenance]`
|
||||
- `Major+`: Significant rearchitecture release. You can't upgrade from an old version with a different `Major+` number; if you have such a need, please contact the TDengine support team.
|
||||
- `Major`: Important new feature release. A rolling upgrade from an old version with a different `Major` number is not supported, and the upgrade can't be rolled back. For example, after upgrading from `3.2.3.0` to `3.3.0.0`, you can't roll back to `3.2.3.0`.
|
||||
- `Feature`: New feature release. A rolling upgrade from an old version with a different `Feature` number is not supported, but the upgrade can be rolled back. For example, after upgrading from `3.3.0.0` to `3.3.1.0`, you can roll back to `3.3.0.0`. The client driver (libtaos.so) must be upgraded to the same version as the server side (taosd).
|
||||
- `Maintenance`: Maintenance release with no new features, only bug fixes. A rolling upgrade is supported from an old version that differs only in the `Maintenance` number, and the upgrade can be rolled back.
|
||||
- `Rolling Upgrade`: For a cluster consisting of three or more dnodes with three replicas enabled, you can upgrade one dnode at a time by stopping it, upgrading it, and restarting it, then repeating this process until the whole cluster is upgraded; the cluster stays in service throughout. If a rolling upgrade is not supported under the version rules above, you need to stop the whole cluster, upgrade all dnodes, and restart them; the cluster is out of service during the upgrade.
|
||||
|
||||
TDengine 3.x installation packages can be downloaded at the following links:
|
||||
|
||||
For TDengine 2.x installation packages by version, please visit [here](https://tdengine.com/downloads/historical/).
|
||||
|
||||
import Release from "/components/ReleaseV3";
|
||||
|
||||
## 3.3.2.0
|
||||
|
||||
<Release type="tdengine" version="3.3.2.0" />
|
||||
|
||||
## 3.3.1.0
|
||||
|
||||
<Release type="tdengine" version="3.3.1.0" />
|
||||
|
|
|
@ -0,0 +1,113 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"github.com/taosdata/driver-go/v3/af"
|
||||
"github.com/taosdata/driver-go/v3/af/tmq"
|
||||
tmqcommon "github.com/taosdata/driver-go/v3/common/tmq"
|
||||
)
|
||||
|
||||
func main() {
|
||||
db, err := af.Open("", "root", "taosdata", "", 0)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer db.Close()
|
||||
_, err = db.Exec("create database if not exists power WAL_RETENTION_PERIOD 86400")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
_, err = db.Exec("CREATE STABLE IF NOT EXISTS power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
_, err = db.Exec("create table if not exists power.d001 using power.meters tags(1,'location')")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// ANCHOR: create_topic
|
||||
_, err = db.Exec("CREATE TOPIC IF NOT EXISTS topic_meters AS SELECT ts, current, voltage, phase, groupid, location FROM power.meters")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// ANCHOR_END: create_topic
|
||||
// ANCHOR: create_consumer
|
||||
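// Consumer settings: "auto.offset.reset" of "latest" delivers only messages
// produced after the subscription starts, and with "enable.auto.commit" set
// to "false", offsets are committed manually via consumer.Commit() after each poll.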
consumer, err := tmq.NewConsumer(&tmqcommon.ConfigMap{
|
||||
"group.id": "test",
|
||||
"auto.offset.reset": "latest",
|
||||
"td.connect.ip": "127.0.0.1",
|
||||
"td.connect.user": "root",
|
||||
"td.connect.pass": "taosdata",
|
||||
"td.connect.port": "6030",
|
||||
"client.id": "test_tmq_client",
|
||||
"enable.auto.commit": "false",
|
||||
"msg.with.table.name": "true",
|
||||
})
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// ANCHOR_END: create_consumer
|
||||
// ANCHOR: poll_data
|
||||
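// A background goroutine inserts one row every 100 ms so that the consumer
// below always has fresh data to poll.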
go func() {
|
||||
for {
|
||||
_, err = db.Exec("insert into power.d001 values (now, 1.1, 220, 0.1)")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
time.Sleep(time.Millisecond * 100)
|
||||
}
|
||||
}()
|
||||
|
||||
err = consumer.Subscribe("topic_meters", nil)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
for i := 0; i < 5; i++ {
|
||||
ev := consumer.Poll(500)
|
||||
if ev != nil {
|
||||
switch e := ev.(type) {
|
||||
case *tmqcommon.DataMessage:
|
||||
fmt.Printf("get message:%v\n", e)
|
||||
case tmqcommon.Error:
|
||||
fmt.Fprintf(os.Stderr, "%% Error: %v: %v\n", e.Code(), e)
|
||||
panic(e)
|
||||
}
|
||||
consumer.Commit()
|
||||
}
|
||||
}
|
||||
// ANCHOR_END: poll_data
|
||||
// ANCHOR: consumer_seek
|
||||
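// Fetch the current partition assignment, then seek each partition back to
// offset 0 so the topic can be consumed again from the beginning.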
partitions, err := consumer.Assignment()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
for i := 0; i < len(partitions); i++ {
|
||||
fmt.Println(partitions[i])
|
||||
err = consumer.Seek(tmqcommon.TopicPartition{
|
||||
Topic: partitions[i].Topic,
|
||||
Partition: partitions[i].Partition,
|
||||
Offset: 0,
|
||||
}, 0)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
partitions, err = consumer.Assignment()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// ANCHOR_END: consumer_seek
|
||||
for i := 0; i < len(partitions); i++ {
|
||||
fmt.Println(partitions[i])
|
||||
}
|
||||
// ANCHOR: consumer_close
|
||||
err = consumer.Close()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// ANCHOR_END: consumer_close
|
||||
}
|
|
@ -0,0 +1,115 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"os"
|
||||
"time"
|
||||
|
||||
"github.com/taosdata/driver-go/v3/common"
|
||||
tmqcommon "github.com/taosdata/driver-go/v3/common/tmq"
|
||||
_ "github.com/taosdata/driver-go/v3/taosRestful"
|
||||
"github.com/taosdata/driver-go/v3/ws/tmq"
|
||||
)
|
||||
|
||||
func main() {
|
||||
db, err := sql.Open("taosRestful", "root:taosdata@http(localhost:6041)/")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer db.Close()
|
||||
_, err = db.Exec("create database if not exists power WAL_RETENTION_PERIOD 86400")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
_, err = db.Exec("CREATE STABLE IF NOT EXISTS power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
_, err = db.Exec("create table if not exists power.d001 using power.meters tags(1,'location')")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// ANCHOR: create_topic
|
||||
_, err = db.Exec("CREATE TOPIC IF NOT EXISTS topic_meters AS SELECT ts, current, voltage, phase, groupid, location FROM power.meters")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// ANCHOR_END: create_topic
|
||||
// ANCHOR: create_consumer
|
||||
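// Unlike the native example, this consumer connects over websocket through
// taosAdapter (ws://127.0.0.1:6041) rather than the native port 6030.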
consumer, err := tmq.NewConsumer(&tmqcommon.ConfigMap{
|
||||
"ws.url": "ws://127.0.0.1:6041",
|
||||
"ws.message.channelLen": uint(0),
|
||||
"ws.message.timeout": common.DefaultMessageTimeout,
|
||||
"ws.message.writeWait": common.DefaultWriteWait,
|
||||
"td.connect.user": "root",
|
||||
"td.connect.pass": "taosdata",
|
||||
"group.id": "example",
|
||||
"client.id": "example_consumer",
|
||||
"auto.offset.reset": "latest",
|
||||
})
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// ANCHOR_END: create_consumer
|
||||
// ANCHOR: poll_data
|
||||
go func() {
|
||||
for {
|
||||
_, err = db.Exec("insert into power.d001 values (now, 1.1, 220, 0.1)")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
time.Sleep(time.Millisecond * 100)
|
||||
}
|
||||
}()
|
||||
|
||||
err = consumer.Subscribe("topic_meters", nil)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
|
||||
for i := 0; i < 5; i++ {
|
||||
ev := consumer.Poll(500)
|
||||
if ev != nil {
|
||||
switch e := ev.(type) {
|
||||
case *tmqcommon.DataMessage:
|
||||
fmt.Printf("get message:%v\n", e)
|
||||
case tmqcommon.Error:
|
||||
fmt.Fprintf(os.Stderr, "%% Error: %v: %v\n", e.Code(), e)
|
||||
panic(e)
|
||||
}
|
||||
consumer.Commit()
|
||||
}
|
||||
}
|
||||
// ANCHOR_END: poll_data
|
||||
// ANCHOR: consumer_seek
|
||||
partitions, err := consumer.Assignment()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
for i := 0; i < len(partitions); i++ {
|
||||
fmt.Println(partitions[i])
|
||||
err = consumer.Seek(tmqcommon.TopicPartition{
|
||||
Topic: partitions[i].Topic,
|
||||
Partition: partitions[i].Partition,
|
||||
Offset: 0,
|
||||
}, 0)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
||||
partitions, err = consumer.Assignment()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// ANCHOR_END: consumer_seek
|
||||
for i := 0; i < len(partitions); i++ {
|
||||
fmt.Println(partitions[i])
|
||||
}
|
||||
// ANCHOR: consumer_close
|
||||
err = consumer.Close()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
// ANCHOR_END: consumer_close
|
||||
}
|
|
@ -0,0 +1,76 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"database/sql"
|
||||
"log"
|
||||
"time"
|
||||
|
||||
"github.com/taosdata/driver-go/v3/common"
|
||||
_ "github.com/taosdata/driver-go/v3/taosSql"
|
||||
)
|
||||
|
||||
func main() {
|
||||
var taosDSN = "root:taosdata@tcp(localhost:6030)/"
|
||||
taos, err := sql.Open("taosSql", taosDSN)
|
||||
if err != nil {
|
||||
log.Fatalln("failed to connect TDengine, err:", err)
|
||||
}
|
||||
defer taos.Close()
|
||||
// ANCHOR: create_db_and_table
|
||||
_, err = taos.Exec("CREATE DATABASE if not exists power")
|
||||
if err != nil {
|
||||
log.Fatalln("failed to create database, err:", err)
|
||||
}
|
||||
_, err = taos.Exec("CREATE STABLE IF NOT EXISTS power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))")
|
||||
if err != nil {
|
||||
log.Fatalln("failed to create stable, err:", err)
|
||||
}
|
||||
// ANCHOR_END: create_db_and_table
|
||||
// ANCHOR: insert_data
|
||||
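// A single INSERT statement can auto-create the subtables (USING ... TAGS)
// and write multiple rows into several subtables at once.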
affected, err := taos.Exec("INSERT INTO " +
|
||||
"power.d1001 USING power.meters TAGS(2,'California.SanFrancisco') " +
|
||||
"VALUES " +
|
||||
"(NOW + 1a, 10.30000, 219, 0.31000) " +
|
||||
"(NOW + 2a, 12.60000, 218, 0.33000) " +
|
||||
"(NOW + 3a, 12.30000, 221, 0.31000) " +
|
||||
"power.d1002 USING power.meters TAGS(3, 'California.SanFrancisco') " +
|
||||
"VALUES " +
|
||||
"(NOW + 1a, 10.30000, 218, 0.25000) ")
|
||||
if err != nil {
|
||||
log.Fatalln("failed to insert data, err:", err)
|
||||
}
|
||||
log.Println("affected rows:", affected)
|
||||
// ANCHOR_END: insert_data
|
||||
// ANCHOR: query_data
|
||||
rows, err := taos.Query("SELECT * FROM power.meters")
|
||||
if err != nil {
|
||||
log.Fatalln("failed to select from table, err:", err)
|
||||
}
|
||||
|
||||
defer rows.Close()
|
||||
for rows.Next() {
|
||||
var (
|
||||
ts time.Time
|
||||
current float32
|
||||
voltage int
|
||||
phase float32
|
||||
groupId int
|
||||
location string
|
||||
)
|
||||
err := rows.Scan(&ts, ¤t, &voltage, &phase, &groupId, &location)
|
||||
if err != nil {
|
||||
log.Fatalln("scan error:\n", err)
|
||||
return
|
||||
}
|
||||
log.Println(ts, current, voltage, phase, groupId, location)
|
||||
}
|
||||
// ANCHOR_END: query_data
|
||||
// ANCHOR: with_reqid
|
||||
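// Attach a request ID to the context; request IDs are used in TDengine to
// trace a call across the client and server.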
ctx := context.WithValue(context.Background(), common.ReqIDKey, common.GetReqID())
|
||||
_, err = taos.ExecContext(ctx, "CREATE DATABASE IF NOT EXISTS power")
|
||||
if err != nil {
|
||||
log.Fatalln("failed to create database, err:", err)
|
||||
}
|
||||
// ANCHOR_END: with_reqid
|
||||
}
|
|
@ -0,0 +1,41 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
|
||||
"github.com/taosdata/driver-go/v3/af"
|
||||
)
|
||||
|
||||
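// Sample payloads for the three schemaless write protocols supported by
// TDengine: InfluxDB line protocol, OpenTSDB telnet, and OpenTSDB JSON.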
const LineDemo = "meters,groupid=2,location=California.SanFrancisco current=10.3000002f64,voltage=219i32,phase=0.31f64 1626006833639000000"
|
||||
|
||||
const TelnetDemo = "stb0_0 1707095283260 4 host=host0 interface=eth0"
|
||||
|
||||
const JsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1626846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}"
|
||||
|
||||
func main() {
|
||||
conn, err := af.Open("localhost", "root", "taosdata", "", 6030)
|
||||
if err != nil {
|
||||
fmt.Println("fail to connect, err:", err)
|
||||
}
|
||||
defer conn.Close()
|
||||
_, err = conn.Exec("CREATE DATABASE IF NOT EXISTS power")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
_, err = conn.Exec("use power")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = conn.InfluxDBInsertLines([]string{LineDemo}, "ns")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = conn.OpenTSDBInsertTelnetLines([]string{TelnetDemo})
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = conn.OpenTSDBInsertJsonPayload(JsonDemo)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
|
@ -0,0 +1,54 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"log"
|
||||
"time"
|
||||
|
||||
"github.com/taosdata/driver-go/v3/common"
|
||||
_ "github.com/taosdata/driver-go/v3/taosWS"
|
||||
"github.com/taosdata/driver-go/v3/ws/schemaless"
|
||||
)
|
||||
|
||||
const LineDemo = "meters,groupid=2,location=California.SanFrancisco current=10.3000002f64,voltage=219i32,phase=0.31f64 1626006833639000000"
|
||||
|
||||
const TelnetDemo = "stb0_0 1707095283260 4 host=host0 interface=eth0"
|
||||
|
||||
const JsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1626846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}"
|
||||
|
||||
func main() {
|
||||
db, err := sql.Open("taosWS", "root:taosdata@ws(localhost:6041)/")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
defer db.Close()
|
||||
_, err = db.Exec("create database if not exists power")
|
||||
if err != nil {
|
||||
log.Fatal(err)
|
||||
}
|
||||
s, err := schemaless.NewSchemaless(schemaless.NewConfig("ws://localhost:6041", 1,
|
||||
schemaless.SetDb("power"),
|
||||
schemaless.SetReadTimeout(10*time.Second),
|
||||
schemaless.SetWriteTimeout(10*time.Second),
|
||||
schemaless.SetUser("root"),
|
||||
schemaless.SetPassword("taosdata"),
|
||||
schemaless.SetErrorHandler(func(err error) {
|
||||
log.Fatal(err)
|
||||
}),
|
||||
))
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = s.Insert(LineDemo, schemaless.InfluxDBLineProtocol, "ns", 0, common.GetReqID())
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = s.Insert(TelnetDemo, schemaless.OpenTSDBTelnetLineProtocol, "ms", 0, common.GetReqID())
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = s.Insert(JsonDemo, schemaless.OpenTSDBJsonFormatProtocol, "ms", 0, common.GetReqID())
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
|
@ -0,0 +1,81 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
"github.com/taosdata/driver-go/v3/af"
|
||||
"github.com/taosdata/driver-go/v3/common"
|
||||
"github.com/taosdata/driver-go/v3/common/param"
|
||||
)
|
||||
|
||||
const (
|
||||
NumOfSubTable = 10
|
||||
NumOfRow = 10
|
||||
)
|
||||
|
||||
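// Writes NumOfRow rows into each of NumOfSubTable subtables through a single
// prepared INSERT statement, binding the table name, tags, and column data.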
func main() {
|
||||
prepare()
|
||||
db, err := af.Open("", "root", "taosdata", "power", 0)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer db.Close()
|
||||
stmt := db.InsertStmt()
|
||||
defer stmt.Close()
|
||||
err = stmt.Prepare("INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
for i := 1; i <= NumOfSubTable; i++ {
|
||||
tags := param.NewParam(2).AddInt(i).AddBinary([]byte("location"))
|
||||
err = stmt.SetTableNameWithTags("d_bind_"+strconv.Itoa(i), tags)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
now := time.Now()
|
||||
params := make([]*param.Param, 4)
|
||||
params[0] = param.NewParam(NumOfRow)
|
||||
params[1] = param.NewParam(NumOfRow)
|
||||
params[2] = param.NewParam(NumOfRow)
|
||||
params[3] = param.NewParam(NumOfRow)
|
||||
for i := 0; i < NumOfRow; i++ {
|
||||
params[0].SetTimestamp(i, now.Add(time.Duration(i)*time.Second), common.PrecisionMilliSecond)
|
||||
params[1].SetFloat(i, float32(i))
|
||||
params[2].SetInt(i, i)
|
||||
params[3].SetFloat(i, float32(i))
|
||||
}
|
||||
paramTypes := param.NewColumnType(4).AddTimestamp().AddFloat().AddInt().AddFloat()
|
||||
err = stmt.BindParam(params, paramTypes)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.AddBatch()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.Execute()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
affected := stmt.GetAffectedRows()
|
||||
fmt.Println("affected rows:", affected)
|
||||
}
|
||||
}
|
||||
|
||||
func prepare() {
|
||||
db, err := af.Open("", "root", "taosdata", "", 0)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer db.Close()
|
||||
_, err = db.Exec("CREATE DATABASE IF NOT EXISTS power")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
_, err = db.Exec("CREATE STABLE IF NOT EXISTS power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
|
@ -0,0 +1,95 @@
|
|||
package main
|
||||
|
||||
import (
|
||||
"database/sql"
|
||||
"fmt"
|
||||
"strconv"
|
||||
"time"
|
||||
|
||||
"github.com/taosdata/driver-go/v3/common"
|
||||
"github.com/taosdata/driver-go/v3/common/param"
|
||||
_ "github.com/taosdata/driver-go/v3/taosRestful"
|
||||
"github.com/taosdata/driver-go/v3/ws/stmt"
|
||||
)
|
||||
|
||||
const (
|
||||
NumOfSubTable = 10
|
||||
NumOfRow = 10
|
||||
)
|
||||
|
||||
func main() {
|
||||
prepare()
|
||||
config := stmt.NewConfig("ws://127.0.0.1:6041", 0)
|
||||
config.SetConnectUser("root")
|
||||
config.SetConnectPass("taosdata")
|
||||
config.SetConnectDB("power")
|
||||
config.SetMessageTimeout(common.DefaultMessageTimeout)
|
||||
config.SetWriteWait(common.DefaultWriteWait)
|
||||
config.SetErrorHandler(func(connector *stmt.Connector, err error) {
|
||||
panic(err)
|
||||
})
|
||||
config.SetCloseHandler(func() {
|
||||
fmt.Println("stmt connector closed")
|
||||
})
|
||||
|
||||
connector, err := stmt.NewConnector(config)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
stmt, err := connector.Init()
if err != nil {
panic(err)
}
|
||||
err = stmt.Prepare("INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
for i := 1; i <= NumOfSubTable; i++ {
|
||||
tags := param.NewParam(2).AddInt(i).AddBinary([]byte("location"))
|
||||
err = stmt.SetTableName("d_bind_" + strconv.Itoa(i))
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.SetTags(tags, param.NewColumnType(2).AddInt().AddBinary(8))
if err != nil {
panic(err)
}
|
||||
now := time.Now()
|
||||
params := make([]*param.Param, 4)
|
||||
params[0] = param.NewParam(NumOfRow)
|
||||
params[1] = param.NewParam(NumOfRow)
|
||||
params[2] = param.NewParam(NumOfRow)
|
||||
params[3] = param.NewParam(NumOfRow)
|
||||
for i := 0; i < NumOfRow; i++ {
|
||||
params[0].SetTimestamp(i, now.Add(time.Duration(i)*time.Second), common.PrecisionMilliSecond)
|
||||
params[1].SetFloat(i, float32(i))
|
||||
params[2].SetInt(i, i)
|
||||
params[3].SetFloat(i, float32(i))
|
||||
}
|
||||
paramTypes := param.NewColumnType(4).AddTimestamp().AddFloat().AddInt().AddFloat()
|
||||
err = stmt.BindParam(params, paramTypes)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.AddBatch()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
err = stmt.Exec()
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
affected := stmt.GetAffectedRows()
|
||||
fmt.Println("affected rows:", affected)
|
||||
}
|
||||
}
|
||||
|
||||
func prepare() {
|
||||
db, err := sql.Open("taosRestful", "root:taosdata@http(localhost:6041)/")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
defer db.Close()
|
||||
_, err = db.Exec("CREATE DATABASE IF NOT EXISTS power")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
_, err = db.Exec("CREATE STABLE IF NOT EXISTS power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (groupId INT, location BINARY(24))")
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
}
|
|
@ -22,7 +22,7 @@
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.2.7-SNAPSHOT</version>
|
||||
<version>3.3.0</version>
|
||||
</dependency>
|
||||
<!-- ANCHOR_END: dep-->
|
||||
<dependency>
|
||||
|
|
|
@ -0,0 +1,53 @@
|
|||
const taos = require("@tdengine/websocket");
|
||||
|
||||
var host = null;
|
||||
for(var i = 2; i < global.process.argv.length; i++){
|
||||
var key = global.process.argv[i].split("=")[0];
|
||||
var value = global.process.argv[i].split("=")[1];
|
||||
if("host" == key){
|
||||
host = value;
|
||||
}
|
||||
}
|
||||
|
||||
if(host == null){
|
||||
console.log("Usage: node nodejsChecker.js host=<hostname> port=<port>");
|
||||
process.exit(0);
|
||||
}
|
||||
|
||||
let dbData = ["{\"metric\": \"meter_current\",\"timestamp\": 1626846402,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}",
|
||||
"{\"metric\": \"meter_current\",\"timestamp\": 1626846403,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1002\"}}",
|
||||
"{\"metric\": \"meter_current\",\"timestamp\": 1626846404,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1003\"}}"]
|
||||
|
||||
async function createConnect() {
|
||||
let dsn = 'ws://' + host + ':6041'
|
||||
let conf = new taos.WSConfig(dsn);
|
||||
conf.setUser('root');
|
||||
conf.setPwd('taosdata');
|
||||
conf.setDb('power');
|
||||
return await taos.sqlConnect(conf);
|
||||
}
|
||||
|
||||
async function test() {
|
||||
let wsSql = null;
|
||||
let wsRows = null;
|
||||
let reqId = 0;
|
||||
try {
|
||||
wsSql = await createConnect()
|
||||
await wsSql.exec('CREATE DATABASE IF NOT EXISTS power KEEP 3650 DURATION 10 BUFFER 16 WAL_LEVEL 1;', reqId++);
|
||||
await wsSql.schemalessInsert([dbData], taos.SchemalessProto.OpenTSDBJsonFormatProtocol, taos.Precision.SECONDS, 0);
|
||||
}
|
||||
catch (err) {
|
||||
console.error(err.code, err.message);
|
||||
}
|
||||
finally {
|
||||
if (wsRows) {
|
||||
await wsRows.close();
|
||||
}
|
||||
if (wsSql) {
|
||||
await wsSql.close();
|
||||
}
|
||||
taos.destroy();
|
||||
}
|
||||
}
|
||||
|
||||
test()
|
|
@ -0,0 +1,49 @@
|
|||
const taos = require("@tdengine/websocket");
|
||||
|
||||
let influxdbData = ["meters,location=California.LosAngeles,groupId=2 current=11.8,voltage=221,phase=0.28 1648432611249",
|
||||
"meters,location=California.LosAngeles,groupId=2 current=13.4,voltage=223,phase=0.29 1648432611250",
|
||||
"meters,location=California.LosAngeles,groupId=3 current=10.8,voltage=223,phase=0.29 1648432611249"];
|
||||
|
||||
let jsonData = ["{\"metric\": \"meter_current\",\"timestamp\": 1626846402,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}",
|
||||
"{\"metric\": \"meter_current\",\"timestamp\": 1626846403,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1002\"}}",
|
||||
"{\"metric\": \"meter_current\",\"timestamp\": 1626846404,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1003\"}}"]
|
||||
|
||||
let telnetData = ["meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
|
||||
"meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
|
||||
"meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3"];
|
||||
|
||||
async function createConnect() {
|
||||
let dsn = 'ws://localhost:6041'
|
||||
let conf = new taos.WSConfig(dsn);
|
||||
conf.setUser('root');
|
||||
conf.setPwd('taosdata');
|
||||
let wsSql = await taos.sqlConnect(conf);
|
||||
await wsSql.exec('CREATE DATABASE IF NOT EXISTS power KEEP 3650 DURATION 10 BUFFER 16 WAL_LEVEL 1;');
|
||||
await wsSql.exec('USE power');
|
||||
return wsSql;
|
||||
}
|
||||
|
||||
async function test() {
|
||||
let wsSql = null;
|
||||
let wsRows = null;
|
||||
let ttl = 0;
|
||||
try {
|
||||
wsSql = await createConnect()
|
||||
await wsSql.schemalessInsert(influxdbData, taos.SchemalessProto.InfluxDBLineProtocol, taos.Precision.MILLI_SECONDS, ttl);
|
||||
await wsSql.schemalessInsert(jsonData, taos.SchemalessProto.OpenTSDBJsonFormatProtocol, taos.Precision.SECONDS, ttl);
|
||||
await wsSql.schemalessInsert(telnetData, taos.SchemalessProto.OpenTSDBTelnetLineProtocol, taos.Precision.MILLI_SECONDS, ttl);
|
||||
}
|
||||
catch (err) {
|
||||
console.error(err.code, err.message);
|
||||
}
|
||||
finally {
|
||||
if (wsRows) {
|
||||
await wsRows.close();
|
||||
}
|
||||
if (wsSql) {
|
||||
await wsSql.close();
|
||||
}
|
||||
taos.destroy();
|
||||
}
|
||||
}
|
||||
test()
|
|
@ -0,0 +1,77 @@
|
|||
const taos = require("@tdengine/websocket");
|
||||
|
||||
var host = null;
|
||||
for(var i = 2; i < global.process.argv.length; i++){
|
||||
var key = global.process.argv[i].split("=")[0];
|
||||
var value = global.process.argv[i].split("=")[1];
|
||||
if("host" == key){
|
||||
host = value;
|
||||
}
|
||||
}
|
||||
|
||||
if(host == null){
|
||||
console.log("Usage: node nodejsChecker.js host=<hostname> port=<port>");
|
||||
process.exit(0);
|
||||
}
|
||||
|
||||
|
||||
async function createConnect() {
|
||||
let dsn = 'ws://' + host + ':6041'
|
||||
console.log(dsn)
|
||||
let conf = new taos.WSConfig(dsn);
|
||||
conf.setUser('root')
|
||||
conf.setPwd('taosdata')
|
||||
return await taos.sqlConnect(conf);
|
||||
}
|
||||
|
||||
async function test() {
|
||||
let wsSql = null;
|
||||
let wsRows = null;
|
||||
let reqId = 0;
|
||||
try {
|
||||
wsSql = await createConnect()
|
||||
let version = await wsSql.version();
|
||||
console.log(version);
|
||||
let taosResult = await wsSql.exec('SHOW DATABASES', reqId++);
|
||||
console.log(taosResult);
|
||||
|
||||
taosResult = await wsSql.exec('CREATE DATABASE IF NOT EXISTS power KEEP 3650 DURATION 10 BUFFER 16 WAL_LEVEL 1;', reqId++);
|
||||
console.log(taosResult);
|
||||
|
||||
taosResult = await wsSql.exec('USE power', reqId++);
|
||||
console.log(taosResult);
|
||||
|
||||
taosResult = await wsSql.exec('CREATE STABLE IF NOT EXISTS meters (_ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);', reqId++);
|
||||
console.log(taosResult);
|
||||
|
||||
taosResult = await wsSql.exec('DESCRIBE meters', reqId++);
|
||||
console.log(taosResult);
|
||||
|
||||
taosResult = await wsSql.exec('INSERT INTO d1001 USING meters (location, groupId) TAGS ("California.SanFrancisco", 3) VALUES (NOW, 10.2, 219, 0.32)', reqId++);
|
||||
console.log(taosResult);
|
||||
|
||||
wsRows = await wsSql.query('SELECT * FROM meters', reqId++);
|
||||
let meta = wsRows.getMeta();
|
||||
console.log("wsRow:meta:=>", meta);
|
||||
|
||||
while (await wsRows.next()) {
|
||||
let result = wsRows.getData();
|
||||
console.log('queryRes.Scan().then=>', result);
|
||||
}
|
||||
|
||||
}
|
||||
catch (err) {
|
||||
console.error(err.code, err.message);
|
||||
}
|
||||
finally {
|
||||
if (wsRows) {
|
||||
await wsRows.close();
|
||||
}
|
||||
if (wsSql) {
|
||||
await wsSql.close();
|
||||
}
|
||||
taos.destroy();
|
||||
}
|
||||
}
|
||||
|
||||
test()
|
|
@ -0,0 +1,143 @@
|
|||
const taos = require("@tdengine/websocket");
|
||||
|
||||
// ANCHOR: createConnect
|
||||
async function createConnect() {
|
||||
let dsn = 'ws://localhost:6041';
|
||||
let conf = new taos.WSConfig(dsn);
|
||||
conf.setUser('root');
|
||||
conf.setPwd('taosdata');
|
||||
conf.setDb('power');
|
||||
return await taos.sqlConnect(conf);
|
||||
}
|
||||
// ANCHOR_END: createConnect
|
||||
|
||||
// ANCHOR: create_db_and_table
|
||||
async function createDbAndTable() {
|
||||
let wsSql = null;
|
||||
try {
|
||||
wsSql = await createConnect();
|
||||
await wsSql.exec('CREATE DATABASE IF NOT EXISTS POWER ' +
|
||||
'KEEP 3650 DURATION 10 BUFFER 16 WAL_LEVEL 1;');
|
||||
|
||||
await wsSql.exec('USE power');
|
||||
|
||||
await wsSql.exec('CREATE STABLE IF NOT EXISTS meters ' +
|
||||
'(_ts timestamp, current float, voltage int, phase float) ' +
|
||||
'TAGS (location binary(64), groupId int);');
|
||||
|
||||
taosResult = await wsSql.exec('describe meters');
|
||||
console.log(taosResult);
|
||||
} catch (err) {
|
||||
|
||||
console.error(err.code, err.message);
|
||||
} finally {
|
||||
if (wsSql) {
|
||||
await wsSql.close();
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
// ANCHOR_END: create_db_and_table
|
||||
|
||||
// ANCHOR: insertData
|
||||
async function insertData() {
|
||||
let wsSql = null;
|
||||
try {
|
||||
wsSql = await createConnect();
|
||||
let insertQuery = "INSERT INTO " +
|
||||
"power.d1001 USING power.meters (location, groupId) TAGS('California.SanFrancisco', 2) " +
|
||||
"VALUES " +
|
||||
"(NOW + 1a, 10.30000, 219, 0.31000) " +
|
||||
"(NOW + 2a, 12.60000, 218, 0.33000) " +
|
||||
"(NOW + 3a, 12.30000, 221, 0.31000) " +
|
||||
"power.d1002 USING power.meters TAGS('California.SanFrancisco', 3) " +
|
||||
"VALUES " +
|
||||
"(NOW + 1a, 10.30000, 218, 0.25000) ";
|
||||
taosResult = await wsSql.exec(insertQuery);
|
||||
console.log(taosResult);
|
||||
} catch (err) {
|
||||
console.error(err.code, err.message);
|
||||
} finally {
|
||||
if (wsSql) {
|
||||
await wsSql.close();
|
||||
}
|
||||
}
|
||||
}
|
||||
// ANCHOR_END: insertData
|
||||
|
||||
// ANCHOR: queryData
|
||||
async function queryData() {
|
||||
let wsRows = null;
|
||||
let wsSql = null;
|
||||
try {
|
||||
wsSql = await createConnect();
|
||||
wsRows = await wsSql.query('select * from meters');
|
||||
let meta = wsRows.getMeta();
|
||||
console.log("wsRow:meta:=>", meta);
|
||||
while (await wsRows.next()) {
|
||||
let result = wsRows.getData();
|
||||
console.log('queryRes.Scan().then=>', result);
|
||||
}
|
||||
}
|
||||
catch (err) {
|
||||
console.error(err.code, err.message);
|
||||
}
|
||||
finally {
|
||||
if (wsRows) {
|
||||
await wsRows.close();
|
||||
}
|
||||
if (wsSql) {
|
||||
await wsSql.close();
|
||||
}
|
||||
}
|
||||
}
|
||||
// ANCHOR_END: queryData
|
||||
|
||||
// ANCHOR: sqlWithReqid
|
||||
async function sqlWithReqid() {
|
||||
let insertQuery = "INSERT INTO " +
|
||||
"power.d1001 USING power.meters (location, groupId) TAGS('California.SanFrancisco', 2) " +
|
||||
"VALUES " +
|
||||
"(NOW + 1a, 10.30000, 219, 0.31000) " +
|
||||
"(NOW + 2a, 12.60000, 218, 0.33000) " +
|
||||
"(NOW + 3a, 12.30000, 221, 0.31000) " +
|
||||
"power.d1002 USING power.meters TAGS('California.SanFrancisco', 3) " +
|
||||
"VALUES " +
|
||||
"(NOW + 1a, 10.30000, 218, 0.25000) ";
|
||||
|
||||
let wsRows = null;
|
||||
let wsSql = null;
|
||||
try {
|
||||
wsSql = await createConnect();
|
||||
taosResult = await wsSql.exec(insertQuery, 1);
|
||||
wsRows = await wsSql.query('select * from meters', 2);
|
||||
let meta = wsRows.getMeta();
|
||||
console.log("wsRow:meta:=>", meta);
|
||||
while (await wsRows.next()) {
|
||||
let result = wsRows.getData();
|
||||
console.log('queryRes.Scan().then=>', result);
|
||||
}
|
||||
}
|
||||
catch (err) {
|
||||
console.error(err.code, err.message);
|
||||
}
|
||||
finally {
|
||||
if (wsRows) {
|
||||
await wsRows.close();
|
||||
}
|
||||
if (wsSql) {
|
||||
await wsSql.close();
|
||||
}
|
||||
}
|
||||
}
|
||||
// ANCHOR_END: sqlWithReqid
|
||||
|
||||
async function test() {
|
||||
await createDbAndTable();
|
||||
await insertData();
|
||||
await queryData();
|
||||
await sqlWithReqid();
|
||||
taos.destroy();
|
||||
}
|
||||
|
||||
test()
|
|
@ -0,0 +1,60 @@
|
|||
const taos = require("@tdengine/websocket");
|
||||
|
||||
let db = 'power';
|
||||
let stable = 'meters';
|
||||
let tags = ['California.SanFrancisco', 3];
|
||||
let values = [
|
||||
[1706786044994, 1706786044995, 1706786044996],
|
||||
[10.2, 10.3, 10.4],
|
||||
[292, 293, 294],
|
||||
[0.32, 0.33, 0.34],
|
||||
];
|
||||
|
||||
async function prepare() {
|
||||
let dsn = 'ws://localhost:6041'
|
||||
let conf = new taos.WSConfig(dsn);
|
||||
conf.setUser('root')
|
||||
conf.setPwd('taosdata')
|
||||
conf.setDb(db)
|
||||
let wsSql = await taos.sqlConnect(conf);
|
||||
await wsSql.exec(`CREATE DATABASE IF NOT EXISTS ${db} KEEP 3650 DURATION 10 BUFFER 16 WAL_LEVEL 1;`);
|
||||
await wsSql.exec(`CREATE STABLE IF NOT EXISTS ${db}.${stable} (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);`);
|
||||
return wsSql
|
||||
}
|
||||
|
||||
(async () => {
|
||||
let stmt = null;
|
||||
let connector = null;
|
||||
try {
|
||||
connector = await prepare();
|
||||
stmt = await connector.stmtInit();
|
||||
await stmt.prepare(`INSERT INTO ? USING ${db}.${stable} (location, groupId) TAGS (?, ?) VALUES (?, ?, ?, ?)`);
|
||||
await stmt.setTableName('d1001');
|
||||
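// Tag values and column values are bound through separate StmtParam objects:
// setTags() fixes the subtable's tags, bind() supplies the row data columns.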
let tagParams = stmt.newStmtParam();
|
||||
tagParams.setVarchar([tags[0]]);
|
||||
tagParams.setInt([tags[1]]);
|
||||
await stmt.setTags(tagParams);
|
||||
|
||||
let bindParams = stmt.newStmtParam();
|
||||
bindParams.setTimestamp(values[0]);
|
||||
bindParams.setFloat(values[1]);
|
||||
bindParams.setInt(values[2]);
|
||||
bindParams.setFloat(values[3]);
|
||||
await stmt.bind(bindParams);
|
||||
await stmt.batch();
|
||||
await stmt.exec();
|
||||
console.log(stmt.getLastAffected());
|
||||
}
|
||||
catch (err) {
|
||||
console.error(err.code, err.message);
|
||||
}
|
||||
finally {
|
||||
if (stmt) {
|
||||
await stmt.close();
|
||||
}
|
||||
if (connector) {
|
||||
await connector.close();
|
||||
}
|
||||
taos.destroy();
|
||||
}
|
||||
})();
|
|
@ -0,0 +1,58 @@
|
|||
const taos = require("@tdengine/websocket");
|
||||
|
||||
var host = null;
|
||||
for(var i = 2; i < global.process.argv.length; i++){
|
||||
var key = global.process.argv[i].split("=")[0];
|
||||
var value = global.process.argv[i].split("=")[1];
|
||||
if("host" == key){
|
||||
host = value;
|
||||
}
|
||||
}
|
||||
|
||||
if(host == null){
|
||||
console.log("Usage: node nodejsChecker.js host=<hostname> port=<port>");
|
||||
process.exit(0);
|
||||
}
|
||||
|
||||
let dbData = ["meters.current 1648432611249 10.3 location=California.SanFrancisco groupid=2",
|
||||
"meters.current 1648432611250 12.6 location=California.SanFrancisco groupid=2",
|
||||
"meters.current 1648432611249 10.8 location=California.LosAngeles groupid=3",
|
||||
"meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3",
|
||||
"meters.voltage 1648432611249 219 location=California.SanFrancisco groupid=2",
|
||||
"meters.voltage 1648432611250 218 location=California.SanFrancisco groupid=2",
|
||||
"meters.voltage 1648432611249 221 location=California.LosAngeles groupid=3",
|
||||
"meters.voltage 1648432611250 217 location=California.LosAngeles groupid=3",];
|
||||
|
||||
async function createConnect() {
|
||||
let dsn = 'ws://' + host + ':6041'
|
||||
let conf = new taos.WSConfig(dsn);
|
||||
conf.setUser('root');
|
||||
conf.setPwd('taosdata');
|
||||
|
||||
return await taos.sqlConnect(conf);
|
||||
}
|
||||
|
||||
async function test() {
|
||||
let wsSql = null;
|
||||
let wsRows = null;
|
||||
let reqId = 0;
|
||||
try {
|
||||
wsSql = await createConnect()
|
||||
await wsSql.exec('create database if not exists power KEEP 3650 DURATION 10 BUFFER 16 WAL_LEVEL 1;', reqId++);
|
||||
await wsSql.exec('use power', reqId++);
|
||||
await wsSql.schemalessInsert(dbData, taos.SchemalessProto.OpenTSDBTelnetLineProtocol, taos.Precision.MILLI_SECONDS, 0);
|
||||
}
|
||||
catch (err) {
|
||||
console.error(err.code, err.message);
|
||||
}
|
||||
finally {
|
||||
if (wsRows) {
|
||||
await wsRows.close();
|
||||
}
|
||||
if (wsSql) {
|
||||
await wsSql.close();
|
||||
}
|
||||
taos.destroy();
|
||||
}
|
||||
}
|
||||
test()
|
|
@ -0,0 +1,90 @@
|
|||
const taos = require("@tdengine/websocket");
|
||||
|
||||
const db = 'power';
|
||||
const stable = 'meters';
|
||||
const topics = ['power_meters_topic'];
|
||||
|
||||
// ANCHOR: create_consumer
|
||||
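// Consumer options are passed as a Map keyed by TMQConstants; with
// ENABLE_AUTO_COMMIT set to 'true', offsets are committed automatically
// every AUTO_COMMIT_INTERVAL_MS (1000 ms here).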
async function createConsumer() {
|
||||
let configMap = new Map([
|
||||
[taos.TMQConstants.GROUP_ID, "gId"],
|
||||
[taos.TMQConstants.CONNECT_USER, "root"],
|
||||
[taos.TMQConstants.CONNECT_PASS, "taosdata"],
|
||||
[taos.TMQConstants.AUTO_OFFSET_RESET, "latest"],
|
||||
[taos.TMQConstants.CLIENT_ID, 'test_tmq_client'],
|
||||
[taos.TMQConstants.WS_URL, 'ws://localhost:6041'],
|
||||
[taos.TMQConstants.ENABLE_AUTO_COMMIT, 'true'],
|
||||
[taos.TMQConstants.AUTO_COMMIT_INTERVAL_MS, '1000']
|
||||
]);
|
||||
return await taos.tmqConnect(configMap);
|
||||
}
|
||||
// ANCHOR_END: create_consumer
|
||||
|
||||
async function prepare() {
|
||||
let conf = new taos.WSConfig('ws://localhost:6041');
|
||||
conf.setUser('root')
|
||||
conf.setPwd('taosdata')
|
||||
conf.setDb('power')
|
||||
const createDB = `CREATE DATABASE IF NOT EXISTS ${db} KEEP 3650 DURATION 10 BUFFER 16 WAL_LEVEL 1;`;
|
||||
const createStable = `CREATE STABLE IF NOT EXISTS ${db}.${stable} (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);`;
|
||||
|
||||
let wsSql = await taos.sqlConnect(conf);
|
||||
await wsSql.exec(createDB);
|
||||
await wsSql.exec(createStable);
|
||||
|
||||
// ANCHOR: create_topic
|
||||
let createTopic = `CREATE TOPIC IF NOT EXISTS ${topics[0]} AS SELECT * FROM ${db}.${stable}`;
|
||||
await wsSql.exec(createTopic);
|
||||
// ANCHOR_END: create_topic
|
||||
|
||||
for (let i = 0; i < 10; i++) {
|
||||
await wsSql.exec(`INSERT INTO d1001 USING ${stable} (location, groupId) TAGS ("California.SanFrancisco", 3) VALUES (NOW, ${10 + i}, ${200 + i}, ${0.32 + i})`);
|
||||
}
|
||||
await wsSql.close();
|
||||
}
|
||||
|
||||
// ANCHOR: subscribe
|
||||
async function subscribe(consumer) {
|
||||
await consumer.subscribe(topics);
|
||||
for (let i = 0; i < 5; i++) {
|
||||
let res = await consumer.poll(500);
|
||||
for (let [key, value] of res) {
|
||||
console.log(key, value);
|
||||
}
|
||||
if (res.size == 0) {
|
||||
break;
|
||||
}
|
||||
await consumer.commit();
|
||||
}
|
||||
}
|
||||
// ANCHOR_END: subscribe
|
||||
|
||||
async function test() {
|
||||
let consumer = null;
|
||||
try {
|
||||
await prepare();
|
||||
consumer = await createConsumer();
|
||||
await subscribe(consumer)
|
||||
// ANCHOR: assignment
|
||||
let assignment = await consumer.assignment();
|
||||
console.log(assignment);
|
||||
|
||||
assignment = await consumer.seekToBeginning(assignment);
|
||||
for(let i in assignment) {
|
||||
console.log("seek after:", assignment[i])
|
||||
}
|
||||
// ANCHOR_END: assignment
|
||||
await consumer.unsubscribe();
|
||||
}
|
||||
catch (err) {
|
||||
console.error(err.code, err.message);
|
||||
}
|
||||
finally {
|
||||
if (consumer) {
|
||||
await consumer.close();
|
||||
}
|
||||
taos.destroy();
|
||||
}
|
||||
}
|
||||
|
||||
test()
|
|
@ -1,12 +1,14 @@
|
|||
import taos
|
||||
|
||||
conn: taos.TaosConnection = taos.connect(host="localhost",
|
||||
conn = taos.connect(
|
||||
host="localhost",
|
||||
user="root",
|
||||
password="taosdata",
|
||||
database="test",
|
||||
port=6030,
|
||||
config="/etc/taos", # for windows the default value is C:\TDengine\cfg
|
||||
timezone="Asia/Shanghai") # default your host's timezone
|
||||
timezone="Asia/Shanghai",
|
||||
) # default your host's timezone
|
||||
|
||||
server_version = conn.server_info
|
||||
print("server_version", server_version)
|
||||
|
|
|
@ -0,0 +1,26 @@
|
|||
import taos
|
||||
|
||||
conn = taos.connect(
|
||||
host="localhost",
|
||||
user="root",
|
||||
password="taosdata",
|
||||
port=6030,
|
||||
)
|
||||
|
||||
db = "power"
|
||||
|
||||
conn.execute(f"DROP DATABASE IF EXISTS {db}")
|
||||
conn.execute(f"CREATE DATABASE {db}")
|
||||
|
||||
# change database. same as execute "USE db"
|
||||
conn.select_db(db)
|
||||
|
||||
# create super table
|
||||
conn.execute(
|
||||
"CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) TAGS (`groupid` INT, `location` BINARY(16))"
|
||||
)
|
||||
|
||||
# create table
|
||||
conn.execute("CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')")
|
||||
|
||||
conn.close()
|
|
@ -0,0 +1,18 @@
|
|||
import taosrest
|
||||
|
||||
conn = taosrest.connect(url="http://localhost:6041")
|
||||
|
||||
db = "power"
|
||||
|
||||
conn.execute(f"DROP DATABASE IF EXISTS {db}")
|
||||
conn.execute(f"CREATE DATABASE {db}")
|
||||
|
||||
# create super table
|
||||
conn.execute(
|
||||
f"CREATE TABLE `{db}`.`meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) TAGS (`groupid` INT, `location` BINARY(16))"
|
||||
)
|
||||
|
||||
# create table
|
||||
conn.execute(f"CREATE TABLE `{db}`.`d0` USING `{db}`.`meters` TAGS(0, 'Los Angles')")
|
||||
|
||||
conn.close()
|
|
@ -0,0 +1,22 @@
|
|||
import taosws
|
||||
|
||||
dsn = "taosws://root:taosdata@localhost:6041"
|
||||
conn = taosws.connect(dsn)
|
||||
|
||||
db = "power"
|
||||
|
||||
conn.execute(f"DROP DATABASE IF EXISTS {db}")
|
||||
conn.execute(f"CREATE DATABASE {db}")
|
||||
|
||||
# change database.
|
||||
conn.execute(f"USE {db}")
|
||||
|
||||
# create super table
|
||||
conn.execute(
|
||||
"CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) TAGS (`groupid` INT, `location` BINARY(16))"
|
||||
)
|
||||
|
||||
# create table
|
||||
conn.execute("CREATE TABLE `d0` USING `meters` TAGS(0, 'Los Angles')")
|
||||
|
||||
conn.close()
|
|
@ -0,0 +1,76 @@
|
|||
import taos
|
||||
|
||||
conn = taos.connect(
|
||||
host="localhost",
|
||||
user="root",
|
||||
password="taosdata",
|
||||
port=6030,
|
||||
)
|
||||
|
||||
db = "power"
|
||||
|
||||
conn.execute(f"DROP DATABASE IF EXISTS {db}")
|
||||
conn.execute(f"CREATE DATABASE {db}")
|
||||
|
||||
# change database. same as execute "USE db"
|
||||
conn.select_db(db)
|
||||
|
||||
# create super table
|
||||
conn.execute(
|
||||
"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"
|
||||
)
|
||||
|
||||
# ANCHOR: insert
|
||||
# insert data
|
||||
sql = """
|
||||
INSERT INTO
|
||||
power.d1001 USING power.meters TAGS('California.SanFrancisco', 2)
|
||||
VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000)
|
||||
('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
|
||||
power.d1002 USING power.meters TAGS('California.SanFrancisco', 3)
|
||||
VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
|
||||
power.d1003 USING power.meters TAGS('California.LosAngeles', 2)
|
||||
VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
|
||||
power.d1004 USING power.meters TAGS('California.LosAngeles', 3)
|
||||
VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
|
||||
"""
|
||||
|
||||
inserted = conn.execute(sql)
|
||||
assert inserted == 8
|
||||
# ANCHOR_END: insert
|
||||
|
||||
# ANCHOR: query
|
||||
# Execute a SQL statement and get its result set. Useful for SELECT statements.
|
||||
result = conn.query("SELECT * from meters")
|
||||
|
||||
# Get fields from result
|
||||
fields = result.fields
|
||||
for field in fields:
|
||||
print(field)
|
||||
|
||||
"""
|
||||
output:
|
||||
{name: ts, type: 9, bytes: 8}
|
||||
{name: current, type: 6, bytes: 4}
|
||||
{name: voltage, type: 4, bytes: 4}
|
||||
{name: phase, type: 6, bytes: 4}
|
||||
{name: location, type: 8, bytes: 64}
|
||||
{name: groupid, type: 4, bytes: 4}
|
||||
"""
|
||||
|
||||
# Get data from result as list of tuple
|
||||
data = result.fetch_all()
|
||||
for row in data:
|
||||
print(row)
|
||||
|
||||
"""
|
||||
output:
|
||||
(datetime.datetime(2018, 10, 3, 14, 38, 16, 650000), 10.300000190734863, 218, 0.25, 'California.SanFrancisco', 3)
|
||||
...
|
||||
"""
|
||||
# ANCHOR_END: query
|
||||
|
||||
# ANCHOR: req_id
|
||||
result = conn.query("SELECT * from meters", req_id=1)
|
||||
# ANCHOR_END: req_id
|
||||
conn.close()
|
|
@ -0,0 +1,48 @@
|
|||
import taosrest
|
||||
|
||||
conn = taosrest.connect(url="http://localhost:6041")
|
||||
|
||||
db = "power"
|
||||
|
||||
conn.execute(f"DROP DATABASE IF EXISTS {db}")
|
||||
conn.execute(f"CREATE DATABASE {db}")
|
||||
|
||||
# create super table
|
||||
conn.execute(
|
||||
"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"
|
||||
)
|
||||
|
||||
# ANCHOR: insert
|
||||
# rest insert data
|
||||
sql = """
|
||||
INSERT INTO
|
||||
power.d1001 USING power.meters TAGS('California.SanFrancisco', 2)
|
||||
VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000)
|
||||
('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
|
||||
power.d1002 USING power.meters TAGS('California.SanFrancisco', 3)
|
||||
VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
|
||||
power.d1003 USING power.meters TAGS('California.LosAngeles', 2)
|
||||
VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
|
||||
power.d1004 USING power.meters TAGS('California.LosAngeles', 3)
|
||||
VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
|
||||
"""
|
||||
|
||||
inserted = conn.execute(sql)
|
||||
assert inserted == 8
|
||||
# ANCHOR_END: insert
|
||||
|
||||
# ANCHOR: query
|
||||
client = taosrest.RestClient("http://localhost:6041")
|
||||
result = client.sql(f"SELECT * from {db}.meters")
|
||||
print(result)
|
||||
|
||||
"""
|
||||
output:
|
||||
{'code': 0, 'column_meta': [['ts', 'TIMESTAMP', 8], ['current', 'FLOAT', 4], ['voltage', 'INT', 4], ['phase', 'FLOAT', 4], ['location', 'VARCHAR', 64], ['groupid', 'INT', 4]], 'data': [[datetime.datetime(2018, 10, 3, 14, 38, 5), 10.3, 219, 0.31, 'California.SanFrancisco', 2], [datetime.datetime(2018, 10, 3, 14, 38, 15), 12.6, 218, 0.33, 'California.SanFrancisco', 2], [datetime.datetime(2018, 10, 3, 14, 38, 16, 800000), 12.3, 221, 0.31, 'California.SanFrancisco', 2], [datetime.datetime(2018, 10, 3, 14, 38, 16, 650000), 10.3, 218, 0.25, 'California.SanFrancisco', 3], [datetime.datetime(2018, 10, 3, 14, 38, 5, 500000), 11.8, 221, 0.28, 'California.LosAngeles', 2], [datetime.datetime(2018, 10, 3, 14, 38, 16, 600000), 13.4, 223, 0.29, 'California.LosAngeles', 2], [datetime.datetime(2018, 10, 3, 14, 38, 5), 10.8, 223, 0.29, 'California.LosAngeles', 3], [datetime.datetime(2018, 10, 3, 14, 38, 6, 500000), 11.5, 221, 0.35, 'California.LosAngeles', 3]], 'rows': 8}
|
||||
"""
|
||||
# ANCHOR_END: query
|
||||
|
||||
# ANCHOR: req_id
|
||||
result = client.sql(f"SELECT * from {db}.meters", req_id=1)
|
||||
# ANCHOR_END: req_id
|
||||
conn.close()
|
|
@ -0,0 +1,71 @@
|
|||
import taosws
|
||||
|
||||
dsn = "taosws://root:taosdata@localhost:6041"
|
||||
conn = taosws.connect(dsn)
|
||||
|
||||
db = "power"
|
||||
|
||||
conn.execute(f"DROP DATABASE IF EXISTS {db}")
|
||||
conn.execute(f"CREATE DATABASE {db}")
|
||||
|
||||
# change database.
|
||||
conn.execute(f"USE {db}")
|
||||
|
||||
# create super table
|
||||
conn.execute(
|
||||
"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"
|
||||
)
|
||||
|
||||
# ANCHOR: insert
|
||||
# ws insert data
|
||||
sql = """
|
||||
INSERT INTO
|
||||
power.d1001 USING power.meters TAGS('California.SanFrancisco', 2)
|
||||
VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000)
|
||||
('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
|
||||
power.d1002 USING power.meters TAGS('California.SanFrancisco', 3)
|
||||
VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
|
||||
power.d1003 USING power.meters TAGS('California.LosAngeles', 2)
|
||||
VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
|
||||
power.d1004 USING power.meters TAGS('California.LosAngeles', 3)
|
||||
VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)
|
||||
"""
|
||||
|
||||
inserted = conn.execute(sql)
|
||||
assert inserted == 8
|
||||
# ANCHOR_END: insert
|
||||
|
||||
# ANCHOR: query
|
||||
# Execute a SQL statement and get its result set. Useful for SELECT statements.
|
||||
result = conn.query("SELECT * from meters")
|
||||
|
||||
# Get fields from result
|
||||
fields = result.fields
|
||||
for field in fields:
|
||||
print(field)
|
||||
|
||||
"""
|
||||
output:
|
||||
{name: ts, type: TIMESTAMP, bytes: 8}
|
||||
{name: current, type: FLOAT, bytes: 4}
|
||||
{name: voltage, type: INT, bytes: 4}
|
||||
{name: phase, type: FLOAT, bytes: 4}
|
||||
{name: location, type: BINARY, bytes: 64}
|
||||
{name: groupid, type: INT, bytes: 4}
|
||||
"""
|
||||
|
||||
# Get rows from result
|
||||
for row in result:
|
||||
print(row)
|
||||
|
||||
"""
|
||||
output:
|
||||
('2018-10-03 14:38:05 +08:00', 10.300000190734863, 219, 0.3100000023841858, 'California.SanFrancisco', 2)
|
||||
...
|
||||
"""
|
||||
# ANCHOR_END: query
|
||||
|
||||
# ANCHOR: req_id
|
||||
result = conn.query_with_req_id("SELECT * from meters", req_id=1)
|
||||
# ANCHOR_END: req_id
|
||||
conn.close()
|
|
@ -0,0 +1,36 @@
|
|||
import taos
|
||||
|
||||
conn = taos.connect(
|
||||
host="localhost",
|
||||
user="root",
|
||||
password="taosdata",
|
||||
port=6030,
|
||||
)
|
||||
|
||||
db = "power"
|
||||
|
||||
conn.execute(f"DROP DATABASE IF EXISTS {db}")
|
||||
conn.execute(f"CREATE DATABASE {db}")
|
||||
|
||||
# change database. same as execute "USE db"
|
||||
conn.select_db(db)
|
||||
|
||||
lineDemo = [
|
||||
"meters,groupid=2,location=California.SanFrancisco current=10.3000002f64,voltage=219i32,phase=0.31f64 1626006833639000000"
|
||||
]
|
||||
telnetDemo = ["stb0_0 1707095283260 4 host=host0 interface=eth0"]
|
||||
jsonDemo = [
|
||||
'{"metric": "meter_current","timestamp": 1626846400,"value": 10.3, "tags": {"groupid": 2, "location": "California.SanFrancisco", "id": "d1001"}}'
|
||||
]
|
||||
|
||||
conn.schemaless_insert(
|
||||
lineDemo, taos.SmlProtocol.LINE_PROTOCOL, taos.SmlPrecision.MILLI_SECONDS
|
||||
)
|
||||
conn.schemaless_insert(
|
||||
telnetDemo, taos.SmlProtocol.TELNET_PROTOCOL, taos.SmlPrecision.MICRO_SECONDS
|
||||
)
|
||||
conn.schemaless_insert(
|
||||
jsonDemo, taos.SmlProtocol.JSON_PROTOCOL, taos.SmlPrecision.MILLI_SECONDS
|
||||
)
|
||||
|
||||
conn.close()
|
|
@ -0,0 +1,46 @@
|
|||
import taosws
|
||||
|
||||
dsn = "taosws://root:taosdata@localhost:6041"
|
||||
conn = taosws.connect(dsn)
|
||||
|
||||
db = "power"
|
||||
|
||||
conn.execute(f"DROP DATABASE IF EXISTS {db}")
|
||||
conn.execute(f"CREATE DATABASE {db}")
|
||||
|
||||
# change database.
|
||||
conn = taosws.connect(f"{dsn}/{db}")
|
||||
|
||||
lineDemo = [
|
||||
"meters,groupid=2,location=California.SanFrancisco current=10.3000002f64,voltage=219i32,phase=0.31f64 1626006833639000000"
|
||||
]
|
||||
telnetDemo = ["stb0_0 1707095283260 4 host=host0 interface=eth0"]
|
||||
jsonDemo = [
|
||||
'{"metric": "meter_current","timestamp": 1626846400,"value": 10.3, "tags": {"groupid": 2, "location": "California.SanFrancisco", "id": "d1001"}}'
|
||||
]
|
||||
|
||||
conn.schemaless_insert(
|
||||
lines=lineDemo,
|
||||
protocol=taosws.PySchemalessProtocol.Line,
|
||||
precision=taosws.PySchemalessPrecision.Millisecond,
|
||||
ttl=1,
|
||||
req_id=1,
|
||||
)
|
||||
|
||||
conn.schemaless_insert(
|
||||
lines=telnetDemo,
|
||||
protocol=taosws.PySchemalessProtocol.Telnet,
|
||||
precision=taosws.PySchemalessPrecision.Microsecond,
|
||||
ttl=1,
|
||||
req_id=2,
|
||||
)
|
||||
|
||||
conn.schemaless_insert(
|
||||
lines=jsonDemo,
|
||||
protocol=taosws.PySchemalessProtocol.Json,
|
||||
precision=taosws.PySchemalessPrecision.Millisecond,
|
||||
ttl=1,
|
||||
req_id=3,
|
||||
)
|
||||
|
||||
conn.close()
|
|
@ -0,0 +1,53 @@
|
|||
import taos
|
||||
|
||||
conn = taos.connect(
|
||||
host="localhost",
|
||||
user="root",
|
||||
password="taosdata",
|
||||
port=6030,
|
||||
)
|
||||
|
||||
db = "power"
|
||||
|
||||
conn.execute(f"DROP DATABASE IF EXISTS {db}")
|
||||
conn.execute(f"CREATE DATABASE {db}")
|
||||
|
||||
# change database. same as execute "USE db"
|
||||
conn.select_db(db)
|
||||
|
||||
# create super table
|
||||
conn.execute(
|
||||
"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"
|
||||
)
|
||||
|
||||
# ANCHOR: stmt
|
||||
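# Prepared statement flow: bind the target subtable name and its tags once,
# then bind all column values as a batch and execute.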
sql = "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)"
|
||||
stmt = conn.statement(sql)
|
||||
|
||||
tbname = "power.d1001"
|
||||
|
||||
tags = taos.new_bind_params(2)
|
||||
tags[0].binary(["California.SanFrancisco"])
|
||||
tags[1].int([2])
|
||||
|
||||
stmt.set_tbname_tags(tbname, tags)
|
||||
|
||||
params = taos.new_bind_params(4)
|
||||
params[0].timestamp((1626861392589, 1626861392591, 1626861392592))
|
||||
params[1].float((10.3, 12.6, 12.3))
|
||||
params[2].int([194, 200, 201])
|
||||
params[3].float([0.31, 0.33, 0.31])
|
||||
|
||||
stmt.bind_param_batch(params)
|
||||
|
||||
stmt.execute()
|
||||
|
||||
stmt.close()
|
||||
# ANCHOR_END: stmt
|
||||
|
||||
result = conn.query("SELECT * from meters")
|
||||
|
||||
for row in result.fetch_all():
|
||||
print(row)
|
||||
|
||||
conn.close()
|
|
@ -0,0 +1,52 @@
import taosws

dsn = "taosws://root:taosdata@localhost:6041"
conn = taosws.connect(dsn)

db = "power"

conn.execute(f"DROP DATABASE IF EXISTS {db}")
conn.execute(f"CREATE DATABASE {db}")

# change database.
conn.execute(f"USE {db}")

# create super table
conn.execute(
    "CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"
)

# ANCHOR: stmt
sql = "INSERT INTO ? USING meters TAGS(?,?) VALUES (?,?,?,?)"
stmt = conn.statement()
stmt.prepare(sql)

tbname = "power.d1001"

tags = [
    taosws.varchar_to_tag("California.SanFrancisco"),
    taosws.int_to_tag(2),
]

stmt.set_tbname_tags(tbname, tags)

stmt.bind_param(
    [
        taosws.millis_timestamps_to_column(
            [1626861392589, 1626861392591, 1626861392592]
        ),
        taosws.floats_to_column([10.3, 12.6, 12.3]),
        taosws.ints_to_column([194, 200, 201]),
        taosws.floats_to_column([0.31, 0.33, 0.31]),
    ]
)

stmt.add_batch()
rows = stmt.execute()

assert rows == 3

stmt.close()
# ANCHOR_END: stmt

conn.close()
@ -0,0 +1,84 @@
import taos

conn = taos.connect(
    host="localhost",
    user="root",
    password="taosdata",
    port=6030,
)

db = "power"
topic = "topic_meters"

conn.execute(f"DROP TOPIC IF EXISTS {topic}")
conn.execute(f"DROP DATABASE IF EXISTS {db}")
conn.execute(f"CREATE DATABASE {db}")

# change database. same as execute "USE db"
conn.select_db(db)

# create super table
conn.execute(
    "CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)"
)

# ANCHOR: create_topic
# create topic
conn.execute(
    f"CREATE TOPIC IF NOT EXISTS {topic} AS SELECT ts, current, voltage, phase, groupid, location FROM meters"
)
# ANCHOR_END: create_topic

# ANCHOR: create_consumer
from taos.tmq import Consumer

consumer = Consumer(
    {
        "group.id": "1",
        "td.connect.user": "root",
        "td.connect.pass": "taosdata",
        "enable.auto.commit": "true",
    }
)
# ANCHOR_END: create_consumer

# ANCHOR: subscribe
consumer.subscribe([topic])
# ANCHOR_END: subscribe

try:
    # ANCHOR: consume
    while True:
        res = consumer.poll(1)
        if not res:
            break
        err = res.error()
        if err is not None:
            raise err
        val = res.value()

        for block in val:
            print(block.fetchall())
    # ANCHOR_END: consume

    # ANCHOR: assignment
    assignments = consumer.assignment()
    for assignment in assignments:
        print(assignment)
    # ANCHOR_END: assignment

    # ANCHOR: seek
    offset = taos.tmq.TopicPartition(
        topic=topic,
        partition=assignment.partition,
        offset=0,
    )
    consumer.seek(offset)
    # ANCHOR_END: seek
finally:
    # ANCHOR: unsubscribe
    consumer.unsubscribe()
    consumer.close()
    # ANCHOR_END: unsubscribe

conn.close()
@ -9,5 +9,7 @@ anyhow = "1"
chrono = "0.4"
serde = { version = "1", features = ["derive"] }
tokio = { version = "1", features = ["rt", "macros", "rt-multi-thread"] }
log = "0.4"
pretty_env_logger = "0.5.0"

taos = { version = "0.4.8" }
taos = { version = "0.11.8" }
@ -23,7 +23,7 @@ docker pull tdengine/tdengine:3.0.1.4
Then simply run the following command:

```shell
docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6060:6043-6060 -p 6043-6060:6043-6060/udp tdengine/tdengine
```

Note: TDengine Server 3.0 uses TCP port 6030 only. Port 6041 is used by taosAdapter to provide the REST API service. Ports 6043 through 6049 are used by taosAdapter for third-party application connections; you can open them as needed.
@ -33,7 +33,7 @@ docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043
```shell
docker run -d -v ~/data/taos/dnode/data:/var/lib/taos \
  -v ~/data/taos/dnode/log:/var/log/taos \
  -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
  -p 6030:6030 -p 6041:6041 -p 6043-6060:6043-6060 -p 6043-6060:6043-6060/udp tdengine/tdengine
```

:::note
@ -59,7 +59,7 @@ docker exec -it <container name> bash
Note: For downloading and using the Docker tool itself, see the [official Docker documentation](https://docs.docker.com/get-docker/).

## Run the TDengine CLI
## TDengine Command Line Interface

Enter the container and run `taos`:
@ -69,7 +69,13 @@ $ taos
taos>
```

## Experience Write Speed with taosBenchmark
## TDengine Graphical User Interface

Starting with TDengine 3.3.0.0, the TDengine Docker image includes a new web component, taos-explorer, which you can use to view and manage databases, supertables, subtables, and data. It also offers some advanced features that are only available in TDengine Enterprise Edition; contact the TDengine sales team if you need them.

To use taos-explorer, access the host port mapped from the container through your browser. Assuming the host name is abc.com and the mapped host port is 6060, open http://abc.com:6060. Inside the container, taos-explorer uses port 6060 by default. On first use, you need to register with your enterprise email; after registering, you can log in with a user name and password from the TDengine database management system.
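If you want to script a quick check that the mapped port is reachable before opening a browser, a minimal sketch along these lines can help. The host `abc.com` and port 6060 are the assumptions carried over from the paragraph above, not real endpoints; adjust them to your own mapping:

```python
# Minimal reachability probe for the taos-explorer port.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

try:
    with urlopen("http://abc.com:6060", timeout=5) as resp:
        print("taos-explorer reachable, HTTP status:", resp.status)
except HTTPError as err:
    # Any HTTP response means the port is open, even an error status.
    print("taos-explorer reachable, HTTP status:", err.code)
except URLError as err:
    print("taos-explorer not reachable:", err.reason)
```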
## Experience Write Speed

You can use taosBenchmark, the tool bundled with TDengine, to quickly experience TDengine's write speed.
@ -85,7 +91,7 @@ $ taosBenchmark
The taosBenchmark command itself takes many options for configuring the number of tables, the number of records, and so on. You can experiment with different settings; run `taosBenchmark --help` for the full list. For detailed usage, see [How to Use taosBenchmark to Test the Performance of TDengine](https://www.taosdata.com/2021/10/09/3111.html) and the [taosBenchmark Reference](../../reference/taosbenchmark).

## Experience Query Speed with the TDengine CLI
## Experience Query Speed

After inserting data with `taosBenchmark` as above, you can enter query commands in the TDengine CLI (taos) to experience the query speed.
@ -32,6 +32,10 @@ gcc version - 9.3.1 or above;

## Installation

**Note**

Starting with TDengine 3.0.6.0, a separate taosTools installation package is no longer provided. The tools previously included in the taosTools package are all included in the TDengine-server package; if you need them, simply download the TDengine-server package.

<Tabs>
<TabItem label="Deb Installation" value="debinst">
@ -119,11 +123,17 @@ The apt-get method only applies to Debian or Ubuntu systems.

</TabItem>
<TabItem label="Windows Installation" value="windows">

Note: On the Windows platform, TDengine currently supports only Windows Server 2016/2019 and Windows 10/11.
**Note**
- On the Windows platform, TDengine currently supports only Windows Server 2016/2019 and Windows 10/11.
- Starting with TDengine 3.1.0.0, only a Windows client installation package is provided. If you need a Windows server package, contact the TDengine sales team about upgrading to the Enterprise Edition.
- The VC runtime library must be installed on Windows; you can download and install the [VC Runtime Library](https://learn.microsoft.com/zh-cn/cpp/windows/latest-supported-vc-redist?view=msvc-170) here. If it is already installed, you can skip this step.

Install by following these steps:

1. Download the exe installer from the list;
<PkgListV3 type={3}/>
2. Run the executable to install TDengine.
Note: Starting with 3.0.1.7, only the TDengine client is available for download on Windows. To use the Windows version of the TDengine server, contact sales about upgrading to the Enterprise Edition.

</TabItem>
<TabItem label="macOS Installation" value="macos">
@ -153,38 +163,25 @@ apt-get 方式只适用于 Debian 或 Ubuntu 系统。
```bash
systemctl start taosd
systemctl start taosadapter
systemctl start taoskeeper
systemctl start taos-explorer
```

Check whether the services are working properly:
You can also run the start-all.sh script directly to start all of the above services:
```bash
start-all.sh
```

You can use systemctl to manage each of the above services individually:

```bash
systemctl start taosd
systemctl stop taosd
systemctl restart taosd
systemctl status taosd
```

If the service process is active, the status command displays information like the following:

```
Active: active (running)
```

If the background service process is stopped, the status command displays information like the following:

```
Active: inactive (dead)
```

If the TDengine service is working properly, you can access and experience TDengine through its command line program `taos`.

The following `systemctl` commands can help you manage the TDengine service:

- Start the service process: `systemctl start taosd`

- Stop the service process: `systemctl stop taosd`

- Restart the service process: `systemctl restart taosd`

- Check the service status: `systemctl status taosd`

:::info

- The `systemctl` command requires _root_ privileges. If you are not the _root_ user, add `sudo` before the command.
@ -193,35 +190,39 @@ Active: inactive (dead)
:::

**TDengine Command Line (CLI)**

To make it easy to check TDengine's status and run various ad hoc queries against databases, TDengine provides a command line application (hereafter the TDengine CLI), taos. To enter the TDengine command line, simply run `taos` in a terminal.

</TabItem>

<TabItem label="Windows" value="windows">

After installation, you can run `sc start taosd` in a cmd window with administrator privileges, or run `taosd.exe` in the `C:\TDengine` directory, to start the TDengine service process. To use the http/REST service, run `sc start taosadapter` or run `taosadapter.exe` to start the taosAdapter service process.

**TDengine Command Line (CLI)**

To make it easy to check TDengine's status and run various ad hoc queries against databases, TDengine provides a command line application (hereafter the TDengine CLI), taos. To enter the TDengine command line, simply run `taos` in a terminal.

</TabItem>

<TabItem label="macOS" value="macos">

After installation, double-click the TDengine icon in the Applications directory to start it, or run `sudo launchctl start com.tdengine.taosd` to start the TDengine service process.
After installation, double-click the TDengine icon in the Applications directory to start it, or run the `sudo launchctl start` commands below to start the TDengine service processes.

The following `launchctl` commands are used to manage the TDengine service:

- Start the service process: `sudo launchctl start com.tdengine.taosd`
```bash
sudo launchctl start com.tdengine.taosd
sudo launchctl start com.tdengine.taosadapter
sudo launchctl start com.tdengine.taoskeeper
sudo launchctl start com.tdengine.taos-explorer
```

- Stop the service process: `sudo launchctl stop com.tdengine.taosd`
You can also run the start-all.sh script directly to start all of the above services:
```bash
start-all.sh
```

- Check the service status: `sudo launchctl list | grep taosd`
You can use the `launchctl` command to manage each of the TDengine services mentioned above; the following examples use `taosd`:

- View service details: `launchctl print system/com.tdengine.taosd`
```bash
sudo launchctl start com.tdengine.taosd
sudo launchctl stop com.tdengine.taosd
sudo launchctl list | grep taosd
sudo launchctl print system/com.tdengine.taosd
```

:::info
@ -232,18 +233,13 @@ Active: inactive (dead)
:::

**TDengine Command Line (CLI)**

To make it easy to check TDengine's status and run various ad hoc queries against databases, TDengine provides a command line application (hereafter the TDengine CLI), taos. To enter the TDengine command line, run taos.exe in the C:\TDengine directory of a Windows terminal.

</TabItem>
</Tabs>

```bash
taos
```

If the connection succeeds, a welcome message and version information are printed; if it fails, an error message is printed (see the [FAQ](/train-faq/faq) for help when the terminal cannot connect to the server). The TDengine CLI prompt looks like this:
## TDengine Command Line (CLI)

To make it easy to check TDengine's status and run various ad hoc queries against databases, TDengine provides a command line application (hereafter the TDengine CLI), taos. To enter the TDengine command line, simply run `taos` (Linux/Mac) or `taos.exe` (Windows) in a terminal. The TDengine CLI prompt looks like this:

```cmd
taos>
@ -269,6 +265,12 @@ Query OK, 2 row(s) in set (0.003128s)
In addition to executing SQL statements, system administrators can use the TDengine CLI to check the system's running status, add or delete user accounts, and more. The TDengine CLI, together with the client driver, can also be installed and run on its own machine. For more details, see [TDengine Command Line](../../reference/taos-shell/).

## TDengine Graphical User Interface

Starting with TDengine 3.3.0.0, the TDengine release package includes a new web component, taos-explorer, which you can use to view and manage databases, supertables, subtables, and data. It also offers some advanced features that are only available in TDengine Enterprise Edition; contact the TDengine sales team if you need them.

To use taos-explorer, access its port mapped on the host from a browser. Assuming the host name is abc.com and the mapped host port is 6060, open http://abc.com:6060. taos-explorer uses port 6060 by default in the container. On first use, you need to register with your enterprise email; after registering, you can log in with a user name and password from the TDengine database management system.

## Experience Write Speed with taosBenchmark

You can use taosBenchmark, the tool bundled with TDengine, to quickly experience TDengine's write speed.
@ -1,8 +1,8 @@
```rust title="Native Connection/REST Connection"
```rust title="Native Connection"
{{#include docs/examples/rust/nativeexample/examples/connect.rs}}
```

:::note
For the Rust connector, the difference between connection types shows up only in which features are enabled. If the "rest" feature is enabled, only the RESTful implementation is compiled in.
For the Rust connector, the difference between connection types shows up only in which features are enabled. If the "ws" feature is enabled, only the Websocket implementation is compiled in.

:::

After Width: | Height: | Size: 14 KiB
@ -25,17 +25,24 @@ TDengine provides a rich set of application development interfaces. To help users quickly

## How Connectors Establish Connections

TDengine provides two ways for a connector to establish a connection:
TDengine provides three ways for a connector to establish a connection:

1. Connect to taosd through the REST API provided by the taosAdapter component; this connection type is referred to below as a "REST connection"
2. Connect directly to the server program taosd through the client driver taosc; this connection type is referred to below as a "native connection".
1. Connect directly to the server program taosd through the client driver taosc; this connection type is referred to below as a "native connection".
2. Connect to taosd through the REST API provided by the taosAdapter component; this connection type is referred to below as a "REST connection"
3. Connect to taosd through the Websocket API provided by the taosAdapter component; this connection type is referred to below as a "Websocket connection"

![TDengine connection types](connection-type-zh.webp)

Whichever way the connection is established, the connectors provide the same or similar APIs for working with the database and can all execute SQL statements; only the connection initialization differs slightly, and users will not notice much difference in use.

The key differences are:

1. With a REST connection, users do not need to install the client driver taosc, which gives it the advantage of cross-platform ease of use, but performance drops by about 30%.
2. A native connection gives access to the full functionality of TDengine, such as the [parameter binding interface](../../connector/cpp/#参数绑定-api), [subscription](../../connector/cpp/#订阅和消费-api), and so on.
1. With a native connection, the client driver taosc and the server-side TDengine version must match.
2. With a REST connection, users do not need to install the client driver taosc, which gives it the advantage of cross-platform ease of use, but features such as data subscription and binary data types are unavailable. In addition, compared with native and Websocket connections, a REST connection has the lowest performance.
3. With a Websocket connection, users likewise do not need to install the client driver taosc.
4. To connect to a cloud service instance, you must use a REST connection or a Websocket connection.

In general, we recommend using the **Websocket connection**.
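To make the three styles concrete, here is a minimal Python sketch. It assumes the `taospy` package (which provides the `taos` and `taosrest` modules) and the `taos-ws-py` package (`taosws`) are installed, and that a server is reachable on the default ports with the default credentials; treat it as a sketch rather than a definitive reference:

```python
import taos      # native connection: requires the taosc client driver
import taosrest  # REST connection: no client driver needed
import taosws    # Websocket connection: no client driver needed

# Native connection: taosc talks to taosd directly on port 6030.
conn_native = taos.connect(
    host="localhost", user="root", password="taosdata", port=6030
)

# REST connection: stateless HTTP requests through taosAdapter on port 6041.
conn_rest = taosrest.connect(
    url="http://localhost:6041", user="root", password="taosdata"
)

# Websocket connection: also goes through taosAdapter, on port 6041.
conn_ws = taosws.connect("taosws://root:taosdata@localhost:6041")

conn_native.close()
conn_ws.close()
```

All three objects expose the same basic operations (execute a statement, run a query), so application code largely looks the same whichever connection type is chosen.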
## Install the Client Driver taosc
@ -82,7 +89,7 @@ TDengine provides a rich set of application development interfaces. To help users quickly

<dependency>
  <groupId>com.taosdata.jdbc</groupId>
  <artifactId>taos-jdbcdriver</artifactId>
  <version>3.2.4</version>
  <version>3.3.0</version>
</dependency>
```
@ -122,18 +129,18 @@ driver-go wraps the taosc API with cgo. cgo requires GCC to compile the C

</TabItem>
<TabItem label="Rust" value="rust">

Simply edit `Cargo.toml` to add the `libtaos` dependency.
Simply edit `Cargo.toml` to add the `taos` dependency.

```toml title=Cargo.toml
[dependencies]
libtaos = { version = "0.4.2"}
taos = { version = "*"}
```

:::info
The Rust connector distinguishes connection types through different features. To establish a REST connection, the `rest` feature must be enabled:
The Rust connector distinguishes connection types through different features. Native and Websocket connections are both supported by default; if you only need a Websocket connection, set the `ws` feature:

```toml
libtaos = { version = "*", features = ["rest"] }
taos = { version = "*", default-features = false, features = ["ws"] }
```

:::
@ -16,7 +16,7 @@ TDengine uses a relational-style data model that requires creating databases and tables. Therefore, for
CREATE DATABASE power KEEP 365 DURATION 10 BUFFER 16 WAL_LEVEL 1;
```

The statement above creates a database named power whose data is kept for 365 days (data older than 365 days is automatically deleted), with one data file every 10 days and a 16 MB write buffer per vnode; writes to this database go to the WAL but FSYNC is not executed. For detailed syntax and parameters, see [Database Management](/taos-sql/database).
The statement above creates a database named power whose data is kept for 365 days (data older than 365 days is automatically deleted), with one data file every 10 days and a 16 MB write buffer per vnode; writes to this database go to the WAL but FSYNC is not executed. For detailed syntax and parameters, see [Database Management](../../taos-sql/database).
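For readers following along with the Python connector used elsewhere in this documentation, the same DDL can be issued programmatically; a minimal sketch, with placeholder connection parameters:

```python
import taos

# Placeholder connection parameters; adjust for your deployment.
conn = taos.connect(host="localhost", user="root", password="taosdata", port=6030)

# Same statement as above: keep data 365 days, one data file per 10 days,
# a 16 MB write buffer per vnode, WAL written without fsync.
conn.execute(
    "CREATE DATABASE IF NOT EXISTS power KEEP 365 DURATION 10 BUFFER 16 WAL_LEVEL 1"
)
conn.close()
```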
After creating the database, use the SQL command `USE` to switch to it, for example:
@ -35,13 +35,13 @@ USE power;
## Create a Supertable

An IoT system often has many types of devices; a power grid, for example, has smart meters, transformers, busbars, switches, and so on. To make aggregation across tables easy, TDengine requires a supertable for each type of data collection point. Taking the smart meters in [Table 1](/concept) as an example, the supertable can be created with the following SQL command:
An IoT system often has many types of devices; a power grid, for example, has smart meters, transformers, busbars, switches, and so on. To make aggregation across tables easy, TDengine requires a supertable for each type of data collection point. Taking the smart meters in [Table 1](../../concept) as an example, the supertable can be created with the following SQL command:

```sql
CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
```

As with creating a regular table, creating a supertable requires a table name (meters in the example) and a table schema, that is, the definition of the data columns. The first column must be a timestamp (ts in the example); the other columns are the collected measurements (current, voltage, phase in the example), whose data types can be integer, floating point, string, and so on. In addition, a tag schema must be provided (location, groupId in the example); tag data types can likewise be integer, floating point, string, and so on. The static attributes of a collection point often make good tags, such as its geographic location, device model, device group ID, administrator ID, and so on. Tags in the schema can be added, deleted, or modified later. For the exact definitions and details, see [TDengine SQL Supertable Management](/taos-sql/stable).
As with creating a regular table, creating a supertable requires a table name (meters in the example) and a table schema, that is, the definition of the data columns. The first column must be a timestamp (ts in the example); the other columns are the collected measurements (current, voltage, phase in the example), whose data types can be integer, floating point, string, and so on. In addition, a tag schema must be provided (location, groupId in the example); tag data types can likewise be integer, floating point, string, and so on. The static attributes of a collection point often make good tags, such as its geographic location, device model, device group ID, administrator ID, and so on. Tags in the schema can be added, deleted, or modified later. For the exact definitions and details, see [TDengine SQL Supertable Management](../../taos-sql/stable).

Each type of data collection point needs its own supertable, so an IoT system often has multiple supertables. For a power grid, we need supertables for smart meters, transformers, busbars, switches, and so on. In IoT, a single device may have multiple data collection points (for example, on a wind turbine, some collection points gather electrical parameters such as current and voltage, while others gather environmental parameters such as temperature, humidity, and wind direction); in that case, multiple supertables are needed for that type of device.
@ -49,13 +49,13 @@ CREATE STABLE meters (ts timestamp, current float, voltage int, phase float) TAG
## Create Tables

TDengine requires a separate table for each data collection point. As in a standard relational database, a table has a name and a schema, but it can additionally carry one or more tags. A table is created using a supertable as a template, with concrete values specified for the tags. Taking the smart meters in [Table 1](/concept) as an example, the table can be created with the following SQL command:
TDengine requires a separate table for each data collection point. As in a standard relational database, a table has a name and a schema, but it can additionally carry one or more tags. A table is created using a supertable as a template, with concrete values specified for the tags. Taking the smart meters in [Table 1](../../concept) as an example, the table can be created with the following SQL command:

```sql
CREATE TABLE d1001 USING meters TAGS ("California.SanFrancisco", 2);
```

Here d1001 is the table name and meters is the supertable name, followed by the value "California.SanFrancisco" for the tag Location and the value 2 for the tag groupId. Although tag values must be specified when the table is created, they can be modified later. For the detailed rules, see [TDengine SQL Table Management](/taos-sql/table).
Here d1001 is the table name and meters is the supertable name, followed by the value "California.SanFrancisco" for the tag Location and the value 2 for the tag groupId. Although tag values must be specified when the table is created, they can be modified later. For the detailed rules, see [TDengine SQL Table Management](../../taos-sql/table).

TDengine recommends using the globally unique ID of the data collection point as the table name (for example, the device serial number). In scenarios where no unique ID exists, multiple IDs can be combined into one unique ID. It is not recommended to use a unique ID as a tag value.
@ -69,7 +69,7 @@ INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) VALUES (NOW,
The SQL statement above inserts the record `(NOW, 10.2, 219, 0.32)` into table d1001. If table d1001 does not exist yet, it is created automatically using the supertable meters as a template, with the tag values `"California.SanFrancisco", 2`.
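The same auto-creating insert can be issued from application code. A minimal sketch with the Python connector (it assumes the power database and the meters supertable from the previous sections already exist):

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata", port=6030)
conn.select_db("power")

# If d1001 does not exist yet, it is created from the meters supertable
# with the given tag values before the row is written.
affected = conn.execute(
    'INSERT INTO d1001 USING meters TAGS ("California.SanFrancisco", 2) '
    "VALUES (NOW, 10.2, 219, 0.32)"
)
print("rows written:", affected)
conn.close()
```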
For the detailed syntax of automatic table creation, see [Automatic Table Creation on Insert](/taos-sql/insert#插入记录时自动建表).
For the detailed syntax of automatic table creation, see [Automatic Table Creation on Insert](../../taos-sql/insert#插入记录时自动建表).

## Multi-Column Model vs. Single-Column Model
@ -33,7 +33,7 @@ import PhpStmt from "./_php_stmt.mdx";
INSERT INTO d1001 VALUES (ts1, 10.3, 219, 0.31);
```

Here `ts1` is a Unix timestamp; the oldest record timestamp allowed for insertion is the current server time minus the configured KEEP value. For the detailed timestamp rules, see [TDengine SQL Data Writing, the section on timestamps](/taos-sql/insert)
Here `ts1` is a Unix timestamp; the oldest record timestamp allowed for insertion is the current server time minus the configured KEEP value. For the detailed timestamp rules, see [TDengine SQL Data Writing, the section on timestamps](../../../taos-sql/insert)
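To make the rule concrete: for a database created with `KEEP 365`, the oldest timestamp accepted is roughly the current server time minus 365 days. A small client-side sketch of that arithmetic, assuming millisecond precision:

```python
from datetime import datetime

KEEP_DAYS = 365  # matches the KEEP value the database was created with

now_ms = int(datetime.now().timestamp() * 1000)
oldest_allowed_ms = now_ms - KEEP_DAYS * 24 * 3600 * 1000

# A candidate ts1 must satisfy: oldest_allowed_ms <= ts1
print("oldest insertable timestamp (ms):", oldest_allowed_ms)
```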
### Writing Multiple Records at Once
@ -43,7 +43,7 @@ TDengine supports writing multiple records at once; for example, the following command writes two
INSERT INTO d1001 VALUES (ts1, 10.2, 220, 0.23) (ts2, 10.3, 218, 0.25);
```

Here `ts1` and `ts2` are Unix timestamps; the oldest record timestamp allowed for insertion is the current server time minus the configured KEEP value. For the detailed timestamp rules, see [TDengine SQL Data Writing, the section on timestamps](/taos-sql/insert)
Here `ts1` and `ts2` are Unix timestamps; the oldest record timestamp allowed for insertion is the current server time minus the configured KEEP value. For the detailed timestamp rules, see [TDengine SQL Data Writing, the section on timestamps](../../../taos-sql/insert)

### Writing to Multiple Tables at Once
@ -53,9 +53,9 @@ TDengine also supports writing data to multiple tables at once; for example, the following command
INSERT INTO d1001 VALUES (ts1, 10.3, 219, 0.31) (ts2, 12.6, 218, 0.33) d1002 VALUES (ts3, 12.3, 221, 0.31);
```

Here `ts1`, `ts2`, and `ts3` are Unix timestamps; the oldest record timestamp allowed for insertion is the current server time minus the configured KEEP value. For the detailed timestamp rules, see [TDengine SQL Data Writing, the section on timestamps](/taos-sql/insert)
Here `ts1`, `ts2`, and `ts3` are Unix timestamps; the oldest record timestamp allowed for insertion is the current server time minus the configured KEEP value. For the detailed timestamp rules, see [TDengine SQL Data Writing, the section on timestamps](../../../taos-sql/insert)

For the detailed SQL INSERT syntax, see [TDengine SQL Data Writing](/taos-sql/insert).
For the detailed SQL INSERT syntax, see [TDengine SQL Data Writing](../../../taos-sql/insert).

:::info
@ -35,7 +35,7 @@ bin/kafka-topics.sh --bootstrap-server=localhost:9092 --describe
## Writing to TDengine

TDengine supports data writing in both SQL and schemaless modes. For SQL writing, see [TDengine SQL Writing](/develop/insert-data/sql-writing/) and [TDengine High-Volume Writing](/develop/insert-data/high-volume/); for schemaless writing, see the [TDengine Schemaless Writing](/reference/schemaless/) documentation.
TDengine supports data writing in both SQL and schemaless modes. For SQL writing, see [TDengine SQL Writing](../sql-writing/) and [TDengine High-Volume Writing](../high-volume/); for schemaless writing, see the [TDengine Schemaless Writing](../../../reference/schemaless/) documentation.

## Example Code
@ -37,15 +37,15 @@ meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0
- All data in tag_set is automatically converted to the NCHAR data type
- Each data item in field_set must describe its own data type; for example, 1.2f32 represents a FLOAT value of 1.2, and a value without a type suffix is treated as DOUBLE (see the sketch under Example Code below)
- timestamp supports multiple time precisions. The precision must be specified by a parameter when writing data; six precisions from hours to nanoseconds are supported
- To improve write efficiency, it is assumed by default that the order of field_set within a supertable is the same (the first record contains all fields and subsequent records follow that order). If the order differs, the smlDataFormat parameter must be set to false; otherwise the data is written assuming identical order and the data in the database will be abnormal. (In versions after 3.0.1.3, smlDataFormat defaults to false; from 3.0.3.0 the setting is deprecated.) [TDengine Schemaless Writing Reference](/reference/schemaless/#无模式写入行协议)
- To improve write efficiency, it is assumed by default that the order of field_set within a supertable is the same (the first record contains all fields and subsequent records follow that order). If the order differs, the smlDataFormat parameter must be set to false; otherwise the data is written assuming identical order and the data in the database will be abnormal. (In versions after 3.0.1.3, smlDataFormat defaults to false; from 3.0.3.0 the setting is deprecated.) [TDengine Schemaless Writing Reference](../../../reference/schemaless/#无模式写入行协议)
- Subtable name generation rules
  - By default, the generated subtable name is a unique ID derived by rule.
  - Users can also configure the smlAutoChildTableNameDelimiter parameter in taos.cfg on the client side to specify a delimiter that joins the tag values into the subtable name. For example, with smlAutoChildTableNameDelimiter=-, inserting st,t0=cpu1,t1=4 c1=3 1626006833639000000 creates a subtable named cpu1-4.
  - Users can also configure the smlChildTableName parameter in taos.cfg on the client side to designate a tag value as the subtable name. That tag value should be globally unique. For example, given a tag named tname and the setting smlChildTableName=tname, inserting st,tname=cpu1,t1=4 c1=3 1626006833639000000 creates a subtable named cpu1. Note that if multiple rows have the same tname but different tag_sets, the tag_set specified when the first row auto-created the table is used and the other rows' tag_sets are ignored. [TDengine Schemaless Writing Reference](/reference/schemaless/#无模式写入行协议)
  - Users can also configure the smlChildTableName parameter in taos.cfg on the client side to designate a tag value as the subtable name. That tag value should be globally unique. For example, given a tag named tname and the setting smlChildTableName=tname, inserting st,tname=cpu1,t1=4 c1=3 1626006833639000000 creates a subtable named cpu1. Note that if multiple rows have the same tname but different tag_sets, the tag_set specified when the first row auto-created the table is used and the other rows' tag_sets are ignored. [TDengine Schemaless Writing Reference](../../../reference/schemaless/#无模式写入行协议)

:::

To learn more, see the [official InfluxDB Line Protocol documentation](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and the [TDengine Schemaless Writing Reference](/reference/schemaless/#无模式写入行协议)
To learn more, see the [official InfluxDB Line Protocol documentation](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) and the [TDengine Schemaless Writing Reference](../../../reference/schemaless/#无模式写入行协议)

## Example Code
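As one concrete illustration of the type-suffix and precision rules above, here is a small sketch using the Python connector's schemaless interface (the measurement values are illustrative only, and the connection parameters are placeholders):

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata", port=6030)
conn.execute("CREATE DATABASE IF NOT EXISTS power")
conn.select_db("power")

# current carries an explicit f32 suffix (FLOAT), voltage an i32 suffix (INT);
# phase has no suffix, so it is treated as DOUBLE.
lines = [
    "meters,location=California.LosAngeles,groupid=2 "
    "current=13.4f32,voltage=223i32,phase=0.29 1648432611249"
]

# The 13-digit timestamp is in milliseconds, so millisecond precision is declared.
conn.schemaless_insert(
    lines, taos.SmlProtocol.LINE_PROTOCOL, taos.SmlPrecision.MILLI_SECONDS
)
conn.close()
```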
@ -34,7 +34,7 @@ meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3
- Subtable name generation rules
  - By default, the generated subtable name is a unique ID derived by rule.
  - Users can also configure the smlAutoChildTableNameDelimiter parameter in taos.cfg on the client side to specify a delimiter that joins the tag values into the subtable name. For example, with smlAutoChildTableNameDelimiter=-, inserting st,t0=cpu1,t1=4 c1=3 1626006833639000000 creates a subtable named cpu1-4.
  - Users can also configure the smlChildTableName parameter in taos.cfg on the client side to designate a tag value as the subtable name. That tag value should be globally unique. For example, given a tag named tname and the setting smlChildTableName=tname, inserting st,tname=cpu1,t1=4 c1=3 1626006833639000000 creates a subtable named cpu1. Note that if multiple rows have the same tname but different tag_sets, the tag_set specified when the first row auto-created the table is used and the other rows' tag_sets are ignored. [TDengine Schemaless Writing Reference](/reference/schemaless/#无模式写入行协议)
  - Users can also configure the smlChildTableName parameter in taos.cfg on the client side to designate a tag value as the subtable name. That tag value should be globally unique. For example, given a tag named tname and the setting smlChildTableName=tname, inserting st,tname=cpu1,t1=4 c1=3 1626006833639000000 creates a subtable named cpu1. Note that if multiple rows have the same tname but different tag_sets, the tag_set specified when the first row auto-created the table is used and the other rows' tag_sets are ignored. [TDengine Schemaless Writing Reference](../../../reference/schemaless/#无模式写入行协议)

See the [OpenTSDB Telnet API documentation](http://opentsdb.net/docs/build/html/api_telnet/put.html).
@ -50,7 +50,7 @@ The OpenTSDB JSON protocol uses one JSON string to represent one or more rows of data
- Subtable name generation rules
  - By default, the generated subtable name is a unique ID derived by rule.
  - Users can also configure the smlAutoChildTableNameDelimiter parameter in taos.cfg on the client side to specify a delimiter that joins the tag values into the subtable name. For example, with smlAutoChildTableNameDelimiter=-, inserting st,t0=cpu1,t1=4 c1=3 1626006833639000000 creates a subtable named cpu1-4.
  - Users can also configure the smlChildTableName parameter in taos.cfg on the client side to designate a tag value as the subtable name. That tag value should be globally unique. For example, given a tag named tname and the setting smlChildTableName=tname, inserting st,tname=cpu1,t1=4 c1=3 1626006833639000000 creates a subtable named cpu1. Note that if multiple rows have the same tname but different tag_sets, the tag_set specified when the first row auto-created the table is used and the other rows' tag_sets are ignored. [TDengine Schemaless Writing Reference](/reference/schemaless/#无模式写入行协议)
  - Users can also configure the smlChildTableName parameter in taos.cfg on the client side to designate a tag value as the subtable name. That tag value should be globally unique. For example, given a tag named tname and the setting smlChildTableName=tname, inserting st,tname=cpu1,t1=4 c1=3 1626006833639000000 creates a subtable named cpu1. Note that if multiple rows have the same tname but different tag_sets, the tag_set specified when the first row auto-created the table is used and the other rows' tag_sets are ignored. [TDengine Schemaless Writing Reference](../../../reference/schemaless/#无模式写入行协议)

:::
@ -23,7 +23,7 @@ import CAsync from "./_c_async.mdx";
TDengine uses SQL as its query language. Applications can send SQL statements through the REST API or connectors, and users can also run ad hoc SQL queries manually through the TDengine command line tool taos. TDengine supports the following query features (a concrete sketch follows this list):

- Single-column and multi-column data queries
- Multiple filter conditions on tags and values: >, <, =, <\>, like, and so on
- Multiple filter conditions on tags and values: >, \<, =, \<>, like, and so on
- Grouping (Group by), sorting (Order by), and constrained output (Limit/Offset) of aggregate results
- Windowed aggregate queries using time windows (Interval), session windows (Session), and state windows (State_window)
- Arithmetic operations on numeric columns and aggregate results
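As a concrete illustration of these capabilities, the sketch below runs a filtered, time-windowed aggregate through the Python connector. It assumes the power.meters data written earlier in this guide and default connection parameters; treat it as a sketch, not a definitive reference:

```python
import taos

conn = taos.connect(host="localhost", user="root", password="taosdata", port=6030)

# Average current per 10-second window for one device group.
result = conn.query(
    "SELECT _wstart, AVG(current) FROM power.meters "
    "WHERE groupid = 2 INTERVAL(10s) LIMIT 10"
)
for row in result.fetch_all():
    print(row)

conn.close()
```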
@ -128,7 +128,7 @@ Query OK, 6 rows in database (0.005515s)
### Query Data

In the [SQL Writing](/develop/insert-data/sql-writing) chapter, we created the power database and wrote some data to the meters table. The following example code shows how to query the data in that table.
In the [SQL Writing](../insert-data/sql-writing) chapter, we created the power database and wrote some data to the meters table. The following example code shows how to query the data in that table.

<Tabs defaultValue="java" groupId="lang">
<TabItem label="Java" value="java">
@ -160,7 +160,7 @@ Query OK, 6 rows in database (0.005515s)
:::note

1. The example code above works with both REST connections and native connections.
2. The only caveat: because the REST interface is stateless, the `use db` statement cannot be used to switch databases. Besides specifying the database in the REST parameters, you can also specify the database in the SQL statement with <db_name>.<table_name>.
2. The only caveat: because the REST interface is stateless, the `use db` statement cannot be used to switch databases. Besides specifying the database in the REST parameters, you can also specify the database in the SQL statement with \<db_name>.\<table_name>.

:::
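A minimal sketch of that fully-qualified-name workaround with the `taosrest` module from taospy (the URL and credentials are placeholders, and the result's `.data` attribute is assumed from the taospy REST interface):

```python
import taosrest

conn = taosrest.connect(url="http://localhost:6041", user="root", password="taosdata")

# No "USE power" here: the REST interface is stateless, so the table is
# qualified as <db_name>.<table_name> directly in the SQL statement.
result = conn.query("SELECT ts, current FROM power.meters LIMIT 3")
for row in result.data:
    print(row)
```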
@ -261,7 +261,7 @@ impl AsAsyncConsumer for Consumer
async fn unsubscribe(self);
```

See <https://docs.rs/taos> for the detailed API documentation.
See \<https://docs.rs/taos> for the detailed API documentation.

</TabItem>