Merge branch 'enh/triggerCheckPoint2' into enh/chkpTransfer

@@ -39,7 +39,7 @@ TDengine 是一款开源、高性能、云原生的时序数据库 (Time-Series

 # 构建

-TDengine 目前可以在 Linux、 Windows、macOS 等平台上安装和运行。任何 OS 的应用也可以选择 taosAdapter 的 RESTful 接口连接服务端 taosd。CPU 支持 X64/ARM64,后续会支持 MIPS64、Alpha64、ARM32、RISC-V 等 CPU 架构。
+TDengine 目前可以在 Linux、 Windows、macOS 等平台上安装和运行。任何 OS 的应用也可以选择 taosAdapter 的 RESTful 接口连接服务端 taosd。CPU 支持 X64/ARM64,后续会支持 MIPS64、Alpha64、ARM32、RISC-V 等 CPU 架构。目前不支持使用交叉编译器构建。

 用户可根据需求选择通过源码、[容器](https://docs.taosdata.com/get-started/docker/)、[安装包](https://docs.taosdata.com/get-started/package/)或[Kubernetes](https://docs.taosdata.com/deployment/k8s/)来安装。本快速指南仅适用于通过源码安装。
@@ -175,7 +175,7 @@ cd TDengine
 ```bash
 mkdir debug
 cd debug
-cmake .. -DBUILD_TOOLS=true
+cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
 make
 ```
@@ -352,4 +352,4 @@ TDengine 提供了丰富的应用程序开发接口,其中包括 C/C++、Java

 # 加入技术交流群

-TDengine 官方社群「物联网大数据群」对外开放,欢迎您加入讨论。搜索微信号 "tdengine1",加小 T 为好友,即可入群。
+TDengine 官方社群「物联网大数据群」对外开放,欢迎您加入讨论。搜索微信号 "tdengine",加小 T 为好友,即可入群。
@@ -47,7 +47,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum

 # Building

-At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
+At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPUs and will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Building in a cross-compiling environment is not currently supported.

 You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.
@@ -183,7 +183,7 @@ It is equivalent to executing the following commands:
 ```bash
 mkdir debug
 cd debug
-cmake .. -DBUILD_TOOLS=true
+cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true
 make
 ```
build.sh

@@ -4,5 +4,5 @@ if [ ! -d debug ]; then
   mkdir debug || echo -e "failed to make directory for build"
 fi

-cd debug && cmake .. -DBUILD_TOOLS=true && make
+cd debug && cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true && make
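Taken together, the README and build.sh changes mean a fresh source build now enables both taosTools and the contrib tools. A minimal sketch of the full sequence, assuming a Linux host with git, cmake, and a C/C++ toolchain already installed:

```bash
# Sketch of a full source build using the newly added -DBUILD_CONTRIB flag.
git clone https://github.com/taosdata/TDengine.git
cd TDengine
mkdir -p debug && cd debug
cmake .. -DBUILD_TOOLS=true -DBUILD_CONTRIB=true   # flags from this change
make
```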
@@ -2,7 +2,7 @@
 IF (DEFINED VERNUMBER)
   SET(TD_VER_NUMBER ${VERNUMBER})
 ELSE ()
-  SET(TD_VER_NUMBER "3.1.0.0.alpha")
+  SET(TD_VER_NUMBER "3.1.1.0.alpha")
 ENDIF ()

 IF (DEFINED VERCOMPATIBLE)
@@ -32,6 +32,20 @@ docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043

 Note that TDengine Server 3.0 uses TCP port 6030. Port 6041 is used by taosAdapter for the REST API service. Ports 6043 through 6049 are used by taosAdapter for other connectors. You can open these ports as needed.

+If you need to persist data to a specific directory on your local machine, please run the following command:
+
+```shell
+docker run -d -v ~/data/taos/dnode/data:/var/lib/taos \
+  -v ~/data/taos/dnode/log:/var/log/taos \
+  -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
+```
+
+:::note
+
+- /var/lib/taos: TDengine's default data file directory. The location can be changed via the configuration file. You can also change ~/data/taos/dnode/data to any empty local data directory.
+- /var/log/taos: TDengine's default log file directory. The location can be changed via the configuration file. You can also change ~/data/taos/dnode/log to any empty local log directory.
+
+:::
+
 Run the following command to ensure that your container is running:

 ```shell
@@ -113,4 +127,4 @@ In the query above you are selecting the first timestamp (ts) in the interval, a

 ## Additional Information

-For more information about deploying TDengine in a Docker environment, see [Using TDengine in Docker](../../reference/docker).
+For more information about deploying TDengine in a Docker environment, see [Deploying TDengine with Docker](../../deployment/docker).
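To confirm that a container started with the commands above is actually serving requests, one plausible check (assuming the port mappings shown earlier and the default root/taosdata credentials used elsewhere in these docs) is:

```bash
# Verify the container is up, then hit taosAdapter's REST endpoint on port 6041.
docker ps | grep tdengine
curl -u root:taosdata -d "show databases" http://localhost:6041/rest/sql
```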
@@ -18,7 +18,7 @@ The full package of TDengine includes the TDengine Server (`taosd`), TDengine Cl

 The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download the Lite package that includes only `taosd` and the C/C++ connector.

-The TDengine Community Edition is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/m1 macOS.
+TDengine OSS is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/m1 macOS.

 ## Operating environment requirements

 In the Linux system, the minimum requirements for the operating environment are as follows:
@@ -201,7 +201,7 @@ You can use the TDengine CLI to monitor your TDengine deployment and execute ad

 <TabItem label="Windows" value="windows">

-After the installation is complete, please run `sc start taosd` or run `C:\TDengine\taosd.exe` with administrator privileges to start TDengine Server.
+After the installation is complete, please run `sc start taosd` or run `C:\TDengine\taosd.exe` with administrator privileges to start TDengine Server. Then run `sc start taosadapter` or run `C:\TDengine\taosadapter.exe` with administrator privileges to start taosAdapter, which provides the HTTP/REST service.

 ## Command Line Interface (CLI)
@@ -21,17 +21,6 @@ import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
 <DocCardList items={useCurrentSidebarCategory().items}/>
 ```

-## Study TDengine Knowledge Map
-
-The TDengine Knowledge Map covers the various knowledge points of TDengine, revealing the invocation relationships and data flow between various conceptual entities. Learning and understanding the TDengine Knowledge Map will help you quickly master the TDengine knowledge system.
-
-<figure>
-<center>
-<a href="pathname:///img/tdengine-map.svg" target="_blank"><img src="/img/tdengine-map.svg" width="80%" /></a>
-<figcaption>Diagram 1. TDengine Knowledge Map</figcaption>
-</center>
-</figure>
-
 ## Join TDengine Community

 <table width="100%">
@@ -83,7 +83,7 @@ If `maven` is used to manage the projects, what needs to be done is only adding
 <dependency>
   <groupId>com.taosdata.jdbc</groupId>
   <artifactId>taos-jdbcdriver</artifactId>
-  <version>3.2.1</version>
+  <version>3.2.4</version>
 </dependency>
 ```
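After bumping the dependency, a quick way to confirm which taos-jdbcdriver version Maven actually resolves in your project (a sketch; your project layout is assumed) is:

```bash
# Print the resolved dependency tree, filtered to the TDengine JDBC driver.
mvn dependency:tree -Dincludes=com.taosdata.jdbc:taos-jdbcdriver
```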
@@ -298,7 +298,7 @@ You configure the following parameters when creating a consumer:
 | `td.connect.port` | string | Port of the server side | |
 | `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. Each topic can create up to 100 consumer groups. |
 | `client.id` | string | Client ID | Maximum length: 192. |
-| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
+| `auto.offset.reset` | enum | Initial offset for the consumer group | `earliest`: subscribe from the earliest data, this is the default behavior; `latest`: subscribe from the latest data; `none`: cannot subscribe without a committed offset |
 | `enable.auto.commit` | boolean | Commit automatically; true: user application doesn't need to explicitly commit; false: user application needs to handle commits by itself | Default value is true |
 | `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds | |
 | `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | Default value: false |
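As an illustration of how the revised `auto.offset.reset` semantics combine with the other options, a hypothetical consumer configuration might look like the following (the key names are from the table above; the concrete file format and values depend on the connector you use):

```properties
# Hypothetical TMQ consumer configuration (illustrative only)
td.connect.port=6030
group.id=power_meters          # required, max length 192
client.id=consumer_01
auto.offset.reset=earliest     # earliest is the default behavior
enable.auto.commit=true
auto.commit.interval.ms=5000
msg.with.table.name=true
```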
@@ -403,7 +403,7 @@ In this section we will demonstrate 5 examples of developing UDF in Python langu

 In the guide, some debugging skills of using Python UDF will be explained too.

-We assume you are using a Linux system and already have TDengine 3.0.4.0+ and Python 3.x.
+We assume you are using a Linux system and already have TDengine 3.0.4.0+ and Python 3.7+.

 Note: **You can't use the print() function to output logs inside a UDF; you have to write logs to a specific file or use the logging module of Python.**
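A quick sanity check for the stated Python prerequisite (3.7+) might look like this sketch:

```bash
# Fail fast if the local Python is older than 3.7 (required by the Python UDF guide).
python3 -c 'import sys; assert sys.version_info >= (3, 7), sys.version'
```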
@@ -1,5 +1,6 @@
 ---
+title: Deploying TDengine with Docker
 sidebar_label: Docker
 description: This chapter describes how to start and access TDengine in a Docker container.
 ---
@@ -10,8 +11,17 @@ This chapter describes how to start the TDengine service in a container and acce
 The TDengine image starts with the HTTP service activated by default, using the following command:

 ```shell
-docker run -d --name tdengine -p 6041:6041 tdengine/tdengine
+docker run -d --name tdengine \
+  -v ~/data/taos/dnode/data:/var/lib/taos \
+  -v ~/data/taos/dnode/log:/var/log/taos \
+  -p 6041:6041 tdengine/tdengine
 ```
+
+:::note
+
+* /var/lib/taos: TDengine's default data file directory. The location can be changed via the configuration file. You can also change ~/data/taos/dnode/data to any other empty local data directory.
+* /var/log/taos: TDengine's default log file directory. The location can be changed via the configuration file. You can also change ~/data/taos/dnode/log to any other empty local log directory.
+
+:::

 The above command starts a container named "tdengine" and maps the HTTP service port 6041 to the host port 6041. You can verify that the HTTP service provided in this container is available using the following command.
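With the bind mounts added above, the host directories should start filling with data and log files shortly after the container starts. A sketch of a quick check:

```bash
# The data and log directories on the host should be populated by the container.
docker ps --filter name=tdengine
ls ~/data/taos/dnode/data
ls ~/data/taos/dnode/log
```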
@@ -283,39 +293,38 @@ services:
     environment:
       TAOS_FQDN: "td-1"
       TAOS_FIRST_EP: "td-1"
     ports:
       - 6041:6041
       - 6030:6030
     volumes:
-      - taosdata-td1:/var/lib/taos/
-      - taoslog-td1:/var/log/taos/
+      # /var/lib/taos: TDengine's default data file directory. The location can be changed via the configuration file. You can change ~/data/taos/dnode1/data to your own data directory.
+      - ~/data/taos/dnode1/data:/var/lib/taos
+      # /var/log/taos: TDengine's default log file directory. The location can be changed via the configuration file. You can change ~/data/taos/dnode1/log to your own log directory.
+      - ~/data/taos/dnode1/log:/var/log/taos
   td-2:
     image: tdengine/tdengine:$VERSION
     environment:
       TAOS_FQDN: "td-2"
       TAOS_FIRST_EP: "td-1"
     volumes:
-      - taosdata-td2:/var/lib/taos/
-      - taoslog-td2:/var/log/taos/
+      - ~/data/taos/dnode2/data:/var/lib/taos
+      - ~/data/taos/dnode2/log:/var/log/taos
   td-3:
     image: tdengine/tdengine:$VERSION
     environment:
       TAOS_FQDN: "td-3"
       TAOS_FIRST_EP: "td-1"
     volumes:
-      - taosdata-td3:/var/lib/taos/
-      - taoslog-td3:/var/log/taos/
-volumes:
-  taosdata-td1:
-  taoslog-td1:
-  taosdata-td2:
-  taoslog-td2:
-  taosdata-td3:
-  taoslog-td3:
+      - ~/data/taos/dnode3/data:/var/lib/taos
+      - ~/data/taos/dnode3/log:/var/log/taos
 ```

 :::note

 - The `VERSION` environment variable is used to set the tdengine image tag
 - `TAOS_FIRST_EP` must be set on the newly created instance so that it can join the TDengine cluster; if there is a high availability requirement, `TAOS_SECOND_EP` needs to be used at the same time
-:::
+
 :::

 2. Start the cluster
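Starting the cluster then amounts to exporting the image tag and bringing the services up, for example (the tag value is illustrative; the compose file above reads it from the `VERSION` environment variable):

```bash
# VERSION selects the tdengine image tag used by the compose file above.
VERSION=3.0.7.1 docker compose up -d
docker compose ps
```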
@@ -382,24 +391,22 @@ networks:
 services:
   td-1:
     image: tdengine/tdengine:$VERSION
     networks:
       - inter
     environment:
       TAOS_FQDN: "td-1"
       TAOS_FIRST_EP: "td-1"
     volumes:
-      - taosdata-td1:/var/lib/taos/
-      - taoslog-td1:/var/log/taos/
+      # /var/lib/taos: TDengine's default data file directory. The location can be changed via the configuration file. You can change ~/data/taos/dnode1/data to your own data directory.
+      - ~/data/taos/dnode1/data:/var/lib/taos
+      # /var/log/taos: TDengine's default log file directory. The location can be changed via the configuration file. You can change ~/data/taos/dnode1/log to your own log directory.
+      - ~/data/taos/dnode1/log:/var/log/taos
   td-2:
     image: tdengine/tdengine:$VERSION
     networks:
       - inter
     environment:
       TAOS_FQDN: "td-2"
       TAOS_FIRST_EP: "td-1"
     volumes:
-      - taosdata-td2:/var/lib/taos/
-      - taoslog-td2:/var/log/taos/
+      - ~/data/taos/dnode2/data:/var/lib/taos
+      - ~/data/taos/dnode2/log:/var/log/taos
   adapter:
     image: tdengine/tdengine:$VERSION
     entrypoint: "taosadapter"
@@ -431,11 +438,6 @@ services:
         >> /etc/nginx/nginx.conf;cat /etc/nginx/nginx.conf;
         nginx -g 'daemon off;'",
       ]
-volumes:
-  taosdata-td1:
-  taoslog-td1:
-  taosdata-td2:
-  taoslog-td2:
 ```

 ## Deploy with docker swarm
@@ -4,23 +4,31 @@ sidebar_label: Kubernetes
 description: This document describes how to deploy TDengine on Kubernetes.
 ---

-TDengine is a cloud-native time-series database that can be deployed on Kubernetes. This document gives a step-by-step description of how you can use YAML files to create a TDengine cluster and introduces common operations for TDengine in a Kubernetes environment.
+## Overview
+
+As a time-series database designed for cloud-native architectures, TDengine supports Kubernetes deployment. This document describes, step by step, how to use YAML files to create a highly available TDengine cluster from scratch for production use, and highlights common operations for TDengine in a Kubernetes environment.
+
+To meet [high availability](https://docs.taosdata.com/tdinternal/high-availability/) requirements, the cluster needs to meet the following requirements:
+
+- 3 or more dnodes: multiple vnodes in the same vgroup of TDengine are not allowed to reside on the same dnode at the same time, so if you create a database with 3 replicas, the number of dnodes must be greater than or equal to 3.
+- 3 mnodes: the mnode is responsible for managing the entire TDengine cluster. By default a TDengine cluster has only one mnode; if the dnode where that mnode is located is dropped, the entire cluster becomes unavailable.
+- Database with 3 replicas: the TDengine replica configuration is at the database level, so a database with 3 replicas needs at least 3 dnodes in the cluster. If any one dnode goes offline, the cluster as a whole can still be used normally. **If the number of offline dnodes is 2, the cluster becomes unavailable, because the cluster can no longer complete the RAFT-based election.** (Enterprise edition: in disaster recovery scenarios, if the data files of any node are damaged, the node can be recovered by pulling up a blank dnode again.)

 ## Prerequisites

 Before deploying TDengine on Kubernetes, perform the following:

-* Current steps are compatible with Kubernetes v1.5 and later version.
-* Install and configure minikube, kubectl, and helm.
-* Install and deploy Kubernetes and ensure that it can be accessed and used normally. Update any container registries or other services as necessary.
+- This article applies to Kubernetes 1.19 and above.
+- This article uses the **kubectl** tool for installation and deployment; please install it in advance.
+- Kubernetes must already be installed and deployed, with access to the necessary container registries or other services.

 You can download the configuration files in this document from [GitHub](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).

 ## Configure the service

-Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taos`) for use in the next step. Add the ports required by TDengine:
+Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taos`) for use in the next step. Then add the ports required by TDengine and record the value of the selector label "app" (in this example, `tdengine`) for use in the next step:

-```yaml
+```YAML
 ---
 apiVersion: v1
 kind: Service
@@ -31,10 +39,10 @@ metadata:
 spec:
   ports:
-    - name: tcp6030
-      protocol: "TCP"
+    - protocol: "TCP"
       port: 6030
-    - name: tcp6041
-      protocol: "TCP"
+    - protocol: "TCP"
       port: 6041
   selector:
     app: "tdengine"
@@ -42,10 +50,11 @@ spec:

 ## Configure the service as StatefulSet

-Configure the TDengine service as a StatefulSet.
-Create the `tdengine.yaml` file and set `replicas` to 3. In this example, the region is set to Asia/Shanghai and 10 GB of standard storage are allocated per node. You can change the configuration based on your environment and business requirements.
+Per the Kubernetes recommendations for the various deployment types, we will use a StatefulSet as the deployment resource type for TDengine. Create the file `tdengine.yaml`, where `replicas` defines the number of cluster nodes as 3. The node time zone is China (Asia/Shanghai), and each node is allocated 5 GB of standard storage (refer to the [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) documentation to configure a storage class). You can adjust this according to your actual situation.
+
+Pay special attention to the startupProbe configuration. If a dnode's Pod goes down for a period of time and then restarts, the newly launched dnode will be temporarily unavailable. If the startupProbe configuration is too small, Kubernetes will conclude that the Pod is in an abnormal state and keep trying to restart it, so the dnode's Pod will restart frequently and never reach normal status. Refer to [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).

-```yaml
+```YAML
 ---
 apiVersion: apps/v1
 kind: StatefulSet
@@ -69,14 +78,14 @@ spec:
     spec:
       containers:
         - name: "tdengine"
-          image: "tdengine/tdengine:3.0.0.0"
+          image: "tdengine/tdengine:3.0.7.1"
          imagePullPolicy: "IfNotPresent"
          ports:
-            - name: tcp6030
-              protocol: "TCP"
+            - protocol: "TCP"
              containerPort: 6030
-            - name: tcp6041
-              protocol: "TCP"
+            - protocol: "TCP"
              containerPort: 6041
          env:
            # POD_NAME for FQDN config
@@ -102,12 +111,18 @@ spec:
            # Must set if you want a cluster.
            - name: TAOS_FIRST_EP
              value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
-            # TAOS_FQDN should always be set in k8s env.
+            # TAOS_FQND should always be set in k8s env.
            - name: TAOS_FQDN
              value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
          volumeMounts:
            - name: taosdata
              mountPath: /var/lib/taos
+          startupProbe:
+            exec:
+              command:
+                - taos-check
+            failureThreshold: 360
+            periodSeconds: 10
          readinessProbe:
            exec:
              command:
@@ -129,266 +144,401 @@ spec:
       storageClassName: "standard"
       resources:
         requests:
-          storage: "10Gi"
+          storage: "5Gi"
 ```

 ## Use kubectl to deploy TDengine

-Run the following commands:
+First create the corresponding namespace, and then execute the following commands in sequence:

-```bash
-kubectl apply -f taosd-service.yaml
-kubectl apply -f tdengine.yaml
+```Bash
+kubectl apply -f taosd-service.yaml -n tdengine-test
+kubectl apply -f tdengine.yaml -n tdengine-test
 ```
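The namespace referenced above has to exist before the `-n tdengine-test` applies can succeed; a minimal sketch:

```bash
# Create the namespace first, then apply the service and StatefulSet into it.
kubectl create namespace tdengine-test
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
```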
-The preceding configuration generates a TDengine cluster with three nodes in which dnodes are automatically configured. You can run the `show dnodes` command to query the nodes in the cluster:
+The above configuration generates a three-node TDengine cluster in which the dnodes are configured automatically. You can use the **show dnodes** command to view the nodes of the current cluster:

-```bash
-kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
-kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
-kubectl exec -i -t tdengine-2 -- taos -s "show dnodes"
+```Bash
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
+kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
+kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
 ```
 The output is as follows:

-```
+```Bash
 taos> show dnodes
-     id      |            endpoint            | vnodes | support_vnodes |   status   |       create_time       |              note              |
-============================================================================================================================================
-          1 | tdengine-0.taosd.default.sv... |      0 |            256 | ready      | 2022-08-10 13:14:57.285 |                                |
-          2 | tdengine-1.taosd.default.sv... |      0 |            256 | ready      | 2022-08-10 13:15:11.302 |                                |
-          3 | tdengine-2.taosd.default.sv... |      0 |            256 | ready      | 2022-08-10 13:15:23.290 |                                |
-Query OK, 3 rows in database (0.003655s)
+     id      |   endpoint       | vnodes | support_vnodes |   status   |       create_time       |       reboot_time       | note | active_code | c_active_code |
+=============================================================================================================================================================================================================================================
+          1 | tdengine-0.ta... |      0 |             16 | ready      | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 |      |             |               |
+          2 | tdengine-1.ta... |      0 |             16 | ready      | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 |      |             |               |
+          3 | tdengine-2.ta... |      0 |             16 | ready      | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 |      |             |               |
+Query OK, 3 row(s) in set (0.001853s)
 ```
+View the current mnode:
+
+```Bash
+kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
+
+taos> show mnodes\G
+*************************** 1.row ***************************
+         id: 1
+   endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
+       role: leader
+     status: ready
+create_time: 2023-07-19 17:54:18.559
+reboot_time: 2023-07-19 17:54:19.520
+Query OK, 1 row(s) in set (0.001282s)
+```
+
+## Create mnode
+
+```Bash
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
+```
+View the mnodes:
+
+```Bash
+kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
+
+taos> show mnodes\G
+*************************** 1.row ***************************
+         id: 1
+   endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
+       role: leader
+     status: ready
+create_time: 2023-07-19 17:54:18.559
+reboot_time: 2023-07-20 09:19:36.060
+*************************** 2.row ***************************
+         id: 2
+   endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
+       role: follower
+     status: ready
+create_time: 2023-07-20 09:22:05.600
+reboot_time: 2023-07-20 09:22:12.838
+*************************** 3.row ***************************
+         id: 3
+   endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
+       role: follower
+     status: ready
+create_time: 2023-07-20 09:22:20.042
+reboot_time: 2023-07-20 09:22:23.271
+Query OK, 3 row(s) in set (0.003108s)
+```
 ## Enable port forwarding

-The kubectl port forwarding feature allows applications to access the TDengine cluster running on Kubernetes.
+Kubectl port forwarding enables applications to access a TDengine cluster running in a Kubernetes environment.

-```
-kubectl port-forward tdengine-0 6041:6041 &
+```bash
+kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
 ```

-Use curl to verify that the TDengine REST API is working on port 6041:
+Use **curl** to verify that the TDengine REST API is working on port 6041:

-```
-$ curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
-Handling connection for 6041
-{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["comp","TINYINT",1],["precision","VARCHAR",2],["status","VARCHAR",10],["retention","VARCHAR",60],["single_stable","BOOL",1],["cachemodel","VARCHAR",11],["cachesize","INT",4],["wal_level","TINYINT",1],["wal_fsync_period","INT",4],["wal_retention_period","INT",4],["wal_retention_size","BIGINT",8]],"data":[["information_schema",null,null,16,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null],["performance_schema",null,null,10,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null]],"rows":2}
+```bash
+curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
+{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
 ```
-## Enable the dashboard for visualization
+## Test the cluster

-The minikube dashboard command enables visualized cluster management.
+### Data preparation

-```
-$ minikube dashboard
-* Verifying dashboard health ...
-* Launching proxy ...
-* Verifying proxy health ...
-* Opening http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
-http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
-```
+#### taosBenchmark
+
+Use taosBenchmark to create a database with 3 replicas, write 100 million rows of data, and then view the data:
+
+```Bash
+kubectl exec -it tdengine-0 -n tdengine-test -- taosBenchmark -I stmt -d test -n 10000 -t 10000 -a 3
+
+# query data
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select count(*) from test.meters;"
+
+taos> select count(*) from test.meters;
+       count(*)         |
+========================
+              100000000 |
+Query OK, 1 row(s) in set (0.103537s)
+```
-In some public clouds, minikube cannot be remotely accessed if it is bound to 127.0.0.1. In this case, use the kubectl proxy command to map the port to 0.0.0.0. Then, you can access the dashboard by using a web browser to open the dashboard URL above on the public IP address and port of the virtual machine.
-
-$ kubectl proxy --accept-hosts='^.*$' --address='0.0.0.0'
+View the vnode distribution with "show dnodes":
+
+```Bash
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
+
+taos> show dnodes
+     id      |   endpoint       | vnodes | support_vnodes |   status   |       create_time       |       reboot_time       | note | active_code | c_active_code |
+=============================================================================================================================================================================================================================================
+          1 | tdengine-0.ta... |      8 |             16 | ready      | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 |      |             |               |
+          2 | tdengine-1.ta... |      8 |             16 | ready      | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 |      |             |               |
+          3 | tdengine-2.ta... |      8 |             16 | ready      | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 |      |             |               |
+Query OK, 3 row(s) in set (0.001357s)
+```
+View the vnode distribution with "show test.vgroups":
+
+```Bash
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test.vgroups"
+
+taos> show test.vgroups
+  vgroup_id  |  db_name  |  tables  |  v1_dnode  | v1_status  |  v2_dnode  | v2_status  |  v3_dnode  | v3_status  |  v4_dnode  | v4_status  |  cacheload  | cacheelements |  tsma  |
+==============================================================================================================================================================================================
+           2 | test      |     1267 |          1 | follower   |          2 | follower   |          3 | leader     | NULL       | NULL       |           0 |             0 |      0 |
+           3 | test      |     1215 |          1 | follower   |          2 | leader     |          3 | follower   | NULL       | NULL       |           0 |             0 |      0 |
+           4 | test      |     1215 |          1 | leader     |          2 | follower   |          3 | follower   | NULL       | NULL       |           0 |             0 |      0 |
+           5 | test      |     1307 |          1 | follower   |          2 | leader     |          3 | follower   | NULL       | NULL       |           0 |             0 |      0 |
+           6 | test      |     1245 |          1 | follower   |          2 | follower   |          3 | leader     | NULL       | NULL       |           0 |             0 |      0 |
+           7 | test      |     1275 |          1 | follower   |          2 | leader     |          3 | follower   | NULL       | NULL       |           0 |             0 |      0 |
+           8 | test      |     1231 |          1 | leader     |          2 | follower   |          3 | follower   | NULL       | NULL       |           0 |             0 |      0 |
+           9 | test      |     1245 |          1 | follower   |          2 | follower   |          3 | leader     | NULL       | NULL       |           0 |             0 |      0 |
+Query OK, 8 row(s) in set (0.001488s)
+```
+#### Manually created
+
+Create a database test1 with three replicas, create a table in it, and write 2 rows of data:
+
+```Bash
+kubectl exec -it tdengine-0 -n tdengine-test -- \
+  taos -s \
+  "create database if not exists test1 replica 3;
+   use test1;
+   create table if not exists t1(ts timestamp, n int);
+   insert into t1 values(now, 1)(now+1s, 2);"
+```
+
+View the vnode distribution with "show test1.vgroups":
+
+```Bash
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test1.vgroups"
+
+taos> show test1.vgroups
+  vgroup_id  |  db_name  |  tables  |  v1_dnode  | v1_status  |  v2_dnode  | v2_status  |  v3_dnode  | v3_status  |  v4_dnode  | v4_status  |  cacheload  | cacheelements |  tsma  |
+==============================================================================================================================================================================================
+          10 | test1     |        1 |          1 | follower   |          2 | follower   |          3 | leader     | NULL       | NULL       |           0 |             0 |      0 |
+          11 | test1     |        0 |          1 | follower   |          2 | leader     |          3 | follower   | NULL       | NULL       |           0 |             0 |      0 |
+Query OK, 2 row(s) in set (0.001489s)
+```
+### Test fault tolerance
+
+The dnode where the mnode leader is located goes offline, dnode1:
+
+```Bash
+kubectl get pod -l app=tdengine -n tdengine-test -o wide
+NAME         READY   STATUS         RESTARTS        AGE   IP             NODE     NOMINATED NODE   READINESS GATES
+tdengine-0   0/1     ErrImagePull   2 (2s ago)      20m   10.244.2.75    node86   <none>           <none>
+tdengine-1   1/1     Running        1 (6m48s ago)   20m   10.244.0.59    node84   <none>           <none>
+tdengine-2   1/1     Running        0               21m   10.244.1.223   node85   <none>           <none>
+```
+
+At this point the cluster mnodes hold a re-election, and the mnode on dnode2 becomes the leader:
+```Bash
+kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
+Welcome to the TDengine Command Line Interface, Client Version:3.0.7.1.202307190706
+Copyright (c) 2022 by TDengine, all rights reserved.
+
+taos> show mnodes\G
+*************************** 1.row ***************************
+         id: 1
+   endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
+       role: offline
+     status: offline
+create_time: 2023-07-19 17:54:18.559
+reboot_time: 1970-01-01 08:00:00.000
+*************************** 2.row ***************************
+         id: 2
+   endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
+       role: leader
+     status: ready
+create_time: 2023-07-20 09:22:05.600
+reboot_time: 2023-07-20 09:32:00.227
+*************************** 3.row ***************************
+         id: 3
+   endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
+       role: follower
+     status: ready
+create_time: 2023-07-20 09:22:20.042
+reboot_time: 2023-07-20 09:32:00.026
+Query OK, 3 row(s) in set (0.001513s)
+```
+
+The cluster can still read and write normally:
+```Bash
+# insert
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "insert into test1.t1 values(now, 1)(now+1s, 2);"
+
+taos> insert into test1.t1 values(now, 1)(now+1s, 2);
+Insert OK, 2 row(s) affected (0.002098s)
+
+# select
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select * from test1.t1"
+
+taos> select * from test1.t1
+           ts            |      n      |
+========================================
+ 2023-07-19 18:04:58.104 |           1 |
+ 2023-07-19 18:04:59.104 |           2 |
+ 2023-07-19 18:06:00.303 |           1 |
+ 2023-07-19 18:06:01.303 |           2 |
+Query OK, 4 row(s) in set (0.001994s)
+```
+
+Similarly, if a non-leader mnode is dropped, reads and writes continue to work normally; this is not shown here.
 ## Scaling Out Your Cluster

-TDengine clusters can scale automatically:
+A TDengine cluster supports automatic scale-out:

-```bash
+```Bash
 kubectl scale statefulsets tdengine --replicas=4
 ```

-The preceding command increases the number of replicas to 4. After running this command, query the pod status:
+The parameter `--replicas=4` in the command above indicates that you want to scale the TDengine cluster out to 4 nodes. After execution, first check the status of the Pods:

-```bash
-kubectl get pods -l app=tdengine
+```Bash
+kubectl get pod -l app=tdengine -n tdengine-test -o wide
 ```

 The output is as follows:
-```
-NAME        READY   STATUS    RESTARTS   AGE
-tdengine-0  1/1     Running   0          161m
-tdengine-1  1/1     Running   0          161m
-tdengine-2  1/1     Running   0          32m
-tdengine-3  1/1     Running   0          32m
+```Plain
+NAME         READY   STATUS    RESTARTS        AGE     IP             NODE     NOMINATED NODE   READINESS GATES
+tdengine-0   1/1     Running   4 (6h26m ago)   6h53m   10.244.2.75    node86   <none>           <none>
+tdengine-1   1/1     Running   1 (6h39m ago)   6h53m   10.244.0.59    node84   <none>           <none>
+tdengine-2   1/1     Running   0               5h16m   10.244.1.224   node85   <none>           <none>
+tdengine-3   1/1     Running   0               3m24s   10.244.2.76    node86   <none>           <none>
 ```
-The status of all pods is Running. Once the pod status changes to Ready, you can check the dnode status:
+At this point the Pod status is still Running; the dnode status in the TDengine cluster can only be seen after the Pod status becomes `ready`:

-```bash
-kubectl exec -i -t tdengine-3 -- taos -s "show dnodes"
+```Bash
+kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
 ```

-The following output shows that the TDengine cluster has been expanded to 4 replicas:
+The dnode list of the expanded four-node TDengine cluster:

-```
+```Plain
 taos> show dnodes
-     id      |            endpoint            | vnodes | support_vnodes |   status   |       create_time       |              note              |
-============================================================================================================================================
-          1 | tdengine-0.taosd.default.sv... |      0 |            256 | ready      | 2022-08-10 13:14:57.285 |                                |
-          2 | tdengine-1.taosd.default.sv... |      0 |            256 | ready      | 2022-08-10 13:15:11.302 |                                |
-          3 | tdengine-2.taosd.default.sv... |      0 |            256 | ready      | 2022-08-10 13:15:23.290 |                                |
-          4 | tdengine-3.taosd.default.sv... |      0 |            256 | ready      | 2022-08-10 13:33:16.039 |                                |
-Query OK, 4 rows in database (0.008377s)
+     id      |   endpoint       | vnodes | support_vnodes |   status   |       create_time       |       reboot_time       | note | active_code | c_active_code |
+=============================================================================================================================================================================================================================================
+          1 | tdengine-0.ta... |     10 |             16 | ready      | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 |      |             |               |
+          2 | tdengine-1.ta... |     10 |             16 | ready      | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 |      |             |               |
+          3 | tdengine-2.ta... |     10 |             16 | ready      | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 |      |             |               |
+          4 | tdengine-3.ta... |      0 |             16 | ready      | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 |      |             |               |
+Query OK, 4 row(s) in set (0.003628s)
 ```
 ## Scaling In Your Cluster

-When you scale in a TDengine cluster, your data is migrated to different nodes. You must run the drop dnodes command in TDengine to remove dnodes before scaling in your Kubernetes environment.
+Because TDengine migrates data between nodes during scale-out and scale-in, scaling in with **kubectl** requires first removing the node with "drop dnode" (**if the cluster has databases with 3 replicas, the number of dnodes remaining after the removal must still be greater than or equal to 3, otherwise the drop dnode operation will be aborted**); only after the node removal completes should you scale in the Kubernetes cluster.

-Note: In a Kubernetes StatefulSet service, the newest pods are always removed first. For this reason, when you scale in your TDengine cluster, ensure that you drop the newest dnodes.
+Note: Since Pods in a Kubernetes StatefulSet can only be removed in reverse order of creation, the TDengine drop dnode operation also needs to proceed in reverse order of creation; otherwise the Pod will end up in an error state.

-```
-$ kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 4"
-```
-
-```bash
-$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
+```Bash
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "drop dnode 4"
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"

 taos> show dnodes
-     id      |            endpoint            | vnodes | support_vnodes |   status   |       create_time       |              note              |
-============================================================================================================================================
-          1 | tdengine-0.taosd.default.sv... |      0 |            256 | ready      | 2022-08-10 13:14:57.285 |                                |
-          2 | tdengine-1.taosd.default.sv... |      0 |            256 | ready      | 2022-08-10 13:15:11.302 |                                |
-          3 | tdengine-2.taosd.default.sv... |      0 |            256 | ready      | 2022-08-10 13:15:23.290 |                                |
-Query OK, 3 rows in database (0.004861s)
+     id      |   endpoint       | vnodes | support_vnodes |   status   |       create_time       |       reboot_time       | note | active_code | c_active_code |
+=============================================================================================================================================================================================================================================
+          1 | tdengine-0.ta... |     10 |             16 | ready      | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 |      |             |               |
+          2 | tdengine-1.ta... |     10 |             16 | ready      | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 |      |             |               |
+          3 | tdengine-2.ta... |     10 |             16 | ready      | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 |      |             |               |
+Query OK, 3 row(s) in set (0.003324s)
 ```
-Verify that the dnode have been successfully removed by running the `kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"` command. Then run the following command to remove the pod:
+After confirming that the removal succeeded (use `kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"` to view and confirm the dnode list), use the kubectl command to remove the Pod:

-```
-kubectl scale statefulsets tdengine --replicas=3
+```Plain
+kubectl scale statefulsets tdengine --replicas=3 -n tdengine-test
 ```

-The newest pod in the deployment is removed. Run the `kubectl get pods -l app=tdengine` command to query the pod status:
+The last Pod will be deleted. Use the command `kubectl get pod -l app=tdengine -n tdengine-test` to check the Pod status:

-```
-$ kubectl get pods -l app=tdengine
-NAME        READY   STATUS    RESTARTS   AGE
-tdengine-0  1/1     Running   0          4m7s
-tdengine-1  1/1     Running   0          3m55s
-tdengine-2  1/1     Running   0          2m28s
+```Plain
+kubectl get pod -l app=tdengine -n tdengine-test -o wide
+NAME         READY   STATUS    RESTARTS        AGE     IP             NODE     NOMINATED NODE   READINESS GATES
+tdengine-0   1/1     Running   4 (6h55m ago)   7h22m   10.244.2.75    node86   <none>           <none>
+tdengine-1   1/1     Running   1 (7h9m ago)    7h23m   10.244.0.59    node84   <none>           <none>
+tdengine-2   1/1     Running   0               5h45m   10.244.1.224   node85   <none>           <none>
 ```

-After the pod has been removed, manually delete the PersistentVolumeClaim (PVC). Otherwise, future scale-outs will attempt to use existing data.
+After the Pod is deleted, the PVC needs to be deleted manually; otherwise the previous data will be reused on the next scale-out, and the node will be unable to join the cluster normally.

-```bash
-$ kubectl delete pvc taosdata-tdengine-3
+```Bash
+kubectl delete pvc taosdata-tdengine-3 -n tdengine-test
 ```
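Before deleting, it can help to list the PVCs that the StatefulSet created, to confirm the exact claim name (a sketch; the `app=tdengine` label is the same one used by the cleanup commands later in this document):

```bash
# List the PVCs created for the StatefulSet before removing the orphaned one.
kubectl get pvc -n tdengine-test -l app=tdengine
```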
-Your cluster has now been safely scaled in, and you can scale it out again as necessary.
+The cluster is now in a safe state and can be scaled out again if needed:

-```bash
-$ kubectl scale statefulsets tdengine --replicas=4
+```Bash
+kubectl scale statefulsets tdengine --replicas=4 -n tdengine-test
+statefulset.apps/tdengine scaled
-it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
-NAME         READY   STATUS              RESTARTS   AGE
-tdengine-0   1/1     Running             0          35m
-tdengine-1   1/1     Running             0          34m
-tdengine-2   1/1     Running             0          12m
-tdengine-3   0/1     ContainerCreating   0          4s
-it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
-NAME         READY   STATUS    RESTARTS   AGE
-tdengine-0   1/1     Running   0          35m
-tdengine-1   1/1     Running   0          34m
-tdengine-2   1/1     Running   0          12m
-tdengine-3   0/1     Running   0          7s
-it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
+
+kubectl get pod -l app=tdengine -n tdengine-test -o wide
+NAME         READY   STATUS    RESTARTS        AGE     IP             NODE     NOMINATED NODE   READINESS GATES
+tdengine-0   1/1     Running   4 (6h59m ago)   7h27m   10.244.2.75    node86   <none>           <none>
+tdengine-1   1/1     Running   1 (7h13m ago)   7h27m   10.244.0.59    node84   <none>           <none>
+tdengine-2   1/1     Running   0               5h49m   10.244.1.224   node85   <none>           <none>
+tdengine-3   1/1     Running   0               20s     10.244.2.77    node86   <none>           <none>
+
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"

 taos> show dnodes
-     id      |            endpoint            | vnodes | support_vnodes |   status   |       create_time       |         offline reason         |
-======================================================================================================================================
-          1 | tdengine-0.taosd.default.sv... |      0 |              4 | ready      | 2022-07-25 17:38:49.012 |                                |
-          2 | tdengine-1.taosd.default.sv... |      1 |              4 | ready      | 2022-07-25 17:39:01.517 |                                |
-          5 | tdengine-2.taosd.default.sv... |      0 |              4 | ready      | 2022-07-25 18:01:36.479 |                                |
-          6 | tdengine-3.taosd.default.sv... |      0 |              4 | ready      | 2022-07-25 18:13:54.411 |                                |
-Query OK, 4 row(s) in set (0.001348s)
+     id      |   endpoint       | vnodes | support_vnodes |   status   |       create_time       |       reboot_time       | note | active_code | c_active_code |
+=============================================================================================================================================================================================================================================
+          1 | tdengine-0.ta... |     10 |             16 | ready      | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 |      |             |               |
+          2 | tdengine-1.ta... |     10 |             16 | ready      | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 |      |             |               |
+          3 | tdengine-2.ta... |     10 |             16 | ready      | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 |      |             |               |
+          5 | tdengine-3.ta... |      0 |             16 | ready      | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 |      |             |               |
+Query OK, 4 row(s) in set (0.003881s)
 ```
 ## Remove a TDengine Cluster

-To fully remove a TDengine cluster, you must delete its statefulset, svc, configmap, and pvc entries:
+> **When deleting a PVC, pay attention to the PV's persistentVolumeReclaimPolicy. It is recommended to change it to Delete, so that when the PVC is deleted the PV is cleaned up automatically, together with the underlying CSI storage resources. If the policy of automatically cleaning up the PV when its PVC is deleted is not configured, then after deleting the PVC, manually cleaning up the PV may not release the corresponding CSI storage resources.**
+
+To completely remove the TDengine cluster, you need to clean up the statefulset, svc, configmap, and pvc respectively.

-```bash
-kubectl delete statefulset -l app=tdengine
-kubectl delete svc -l app=tdengine
-kubectl delete pvc -l app=tdengine
-kubectl delete configmap taoscfg
+```Bash
+kubectl delete statefulset -l app=tdengine -n tdengine-test
+kubectl delete svc -l app=tdengine -n tdengine-test
+kubectl delete pvc -l app=tdengine -n tdengine-test
+kubectl delete configmap taoscfg -n tdengine-test
 ```
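To check and, if desired, change the reclaim policy mentioned in the note above, something like the following sketch can be used (the PV name placeholder is hypothetical; take it from the `kubectl get pv` output):

```bash
# Inspect the reclaim policy of the PVs backing the cluster ...
kubectl get pv
# ... and switch a PV to the Delete policy so it is cleaned up along with its PVC.
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```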
 ## Troubleshooting

 ### Error 1

-If you remove a pod without first running `drop dnode`, some TDengine nodes will go offline.
+If you scale in directly without running "drop dnode" first, TDengine has not removed the node, so the deleted Pod causes some nodes of the TDengine cluster to go offline.
-```
-$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
+```Plain
+kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"

 taos> show dnodes
-     id      |            endpoint            | vnodes | support_vnodes |   status   |       create_time       |         offline reason         |
-======================================================================================================================================
-          1 | tdengine-0.taosd.default.sv... |      0 |              4 | ready      | 2022-07-25 17:38:49.012 |                                |
-          2 | tdengine-1.taosd.default.sv... |      1 |              4 | ready      | 2022-07-25 17:39:01.517 |                                |
-          5 | tdengine-2.taosd.default.sv... |      0 |              4 | offline    | 2022-07-25 18:01:36.479 | status msg timeout             |
-          6 | tdengine-3.taosd.default.sv... |      0 |              4 | offline    | 2022-07-25 18:13:54.411 | status msg timeout             |
-Query OK, 4 row(s) in set (0.001323s)
+     id      |   endpoint       | vnodes | support_vnodes |   status   |       create_time       |       reboot_time       |        note        | active_code | c_active_code |
+=============================================================================================================================================================================================================================================
+          1 | tdengine-0.ta... |     10 |             16 | ready      | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 |                    |             |               |
+          2 | tdengine-1.ta... |     10 |             16 | ready      | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 |                    |             |               |
+          3 | tdengine-2.ta... |     10 |             16 | ready      | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 |                    |             |               |
+          5 | tdengine-3.ta... |      0 |             16 | offline    | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | status msg timeout |             |               |
+Query OK, 4 row(s) in set (0.003862s)
 ```
-### Error 2
+## Finally

-If the number of nodes after a scale-in is less than the value of the replica parameter, the cluster will go down:
+For high availability and high reliability of TDengine in a Kubernetes environment, hardware damage and disaster recovery are handled at two levels:

-Create a database with replica set to 2 and add data.
+1. The disaster recovery capability of the underlying distributed block storage, i.e. its multi-replica capability. Popular distributed block storage such as Ceph has this capability, extending storage replicas to different racks, cabinets, server rooms, and data centers (or you can directly use the block storage service provided by public cloud vendors).
+2. TDengine's own disaster recovery: in TDengine Enterprise, when a dnode goes permanently offline (for example, disk damage on a bare-metal machine causing data loss), a blank dnode can be pulled up again to take over the work of the original dnode.

-```bash
-kubectl exec -i -t tdengine-0 -- \
-  taos -s \
-  "create database if not exists test replica 2;
-   use test;
-   create table if not exists t1(ts timestamp, n int);
-   insert into t1 values(now, 1)(now+1s, 2);"
+Finally, welcome to [TDengine Cloud](https://cloud.tdengine.com/) to experience the one-stop fully managed TDengine Cloud as a Service.
-```
-
-Scale in to one node:
-
-```bash
-kubectl scale statefulsets tdengine --replicas=1
-```
-
-In the TDengine CLI, you can see that no database operations succeed:
-
-```
-taos> show dnodes;
-   id   |           end_point            | vnodes | cores  |   status   | role  |       create_time       |      offline reason      |
-======================================================================================================================================
-      1 | tdengine-0.taosd.default.sv... |      2 |     40 | ready      | any   | 2021-06-01 15:55:52.562 |                          |
-      2 | tdengine-1.taosd.default.sv... |      1 |     40 | offline    | any   | 2021-06-01 15:56:07.212 | status msg timeout       |
-Query OK, 2 row(s) in set (0.000845s)
-
-taos> show dnodes;
-   id   |           end_point            | vnodes | cores  |   status   | role  |       create_time       |      offline reason      |
-======================================================================================================================================
-      1 | tdengine-0.taosd.default.sv... |      2 |     40 | ready      | any   | 2021-06-01 15:55:52.562 |                          |
-      2 | tdengine-1.taosd.default.sv... |      1 |     40 | offline    | any   | 2021-06-01 15:56:07.212 | status msg timeout       |
-Query OK, 2 row(s) in set (0.000837s)
-
-taos> use test;
-Database changed.
-
-taos> insert into t1 values(now, 3);
-
-DB error: Unable to resolve FQDN (0.013874s)
-```
+
+> TDengine Cloud is a minimalist fully managed time-series data processing Cloud as a Service platform developed based on the open-source time-series database TDengine. Besides the high-performance time-series database, it also offers system functions such as caching, subscription, and stream computing, and provides convenient and secure data sharing as well as numerous enterprise-grade features. It allows enterprises in the fields of IoT, Industrial Internet, Finance, and IT operation and maintenance monitoring to significantly reduce labor and operating costs in managing time-series data.
@@ -5,7 +5,7 @@ description: This document describes how to deploy a TDengine cluster on a serve

 TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.

-This document describes how to manually deploy a cluster on a host as well as how to deploy on Kubernetes and by using Helm.
+This document describes how to manually deploy a cluster on a host directly and deploy a cluster with Docker, Kubernetes or Helm.

 ```mdx-code-block
 import DocCardList from '@theme/DocCardList';
@ -42,10 +42,20 @@ In TDengine, the data types below can be used when specifying a column or tag.
|
|||
| 14 | NCHAR | User Defined | Multi-byte string that can include multi byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
|
||||
| 15 | JSON | | JSON type can only be used on tags. A tag of json type is excluded with any other tags of any other type. |
|
||||
| 16 | VARCHAR | User-defined | Alias of BINARY |
|
||||
| 17 | GEOMETRY | User-defined | Geometry |
|
||||
:::note
|
||||
|
||||
- Each row of the table cannot be longer than 48KB (64KB since version 3.0.5.0) (note that each BINARY/NCHAR/GEOMETRY column takes up an additional 2 bytes of storage space).
|
||||
- Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
|
||||
- The length of BINARY can be up to 16,374(data column is 65,517 and tag column is 16,382 since version 3.0.5.0) bytes. The string value must be quoted with single quotes. You must specify a length in bytes for a BINARY value, for example binary(20) for up to twenty single-byte characters. If the data exceeds the specified length, an error will occur. The literal single quote inside the string must be preceded with back slash like `\'`
|
||||
- The maximum length of the GEOMETRY data column is 65,517 bytes, and the maximum length of the tag column is 16,382 bytes. Supports POINT, LINESTRING, and POLYGON subtypes of 2D. The following table describes the length calculation method:
|
||||
|
||||
| # | **Syntax** | **MinLen** | **MaxLen** | **Growth of each point** |
|
||||
|---|--------------------------------------|------------|------------|--------------------------|
|
||||
| 1 | POINT(1.0 1.0) | 21 | 21 | NA |
|
||||
| 2 | LINESTRING(1.0 1.0, 2.0 2.0) | 9+2*16 | 9+4094*16 | +16 |
|
||||
| 3 | POLYGON((1.0 1.0, 2.0 2.0, 1.0 1.0)) | 13+3*16 | 13+4094*16 | +16 |
|
||||
|
||||
- Numeric values in SQL statements are treated as integer or floating-point type depending on whether they contain a decimal point or use scientific notation, so attention must be paid to avoid overflow. For example, 9999999999999999999 is considered an overflow because it exceeds the upper limit of a long integer, but 9999999999999999999.0 is considered a legal floating-point number.
|
||||
|
||||
:::
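As a quick illustration of the GEOMETRY type and the WKT literals above, here is a minimal sketch; the database name, table name, and column size are hypothetical.

```sql
-- A GEOMETRY column with its maximum length specified in bytes
CREATE TABLE test.geo_demo (ts TIMESTAMP, geom GEOMETRY(512));

-- Insert 2D geometries as WKT literals
INSERT INTO test.geo_demo VALUES (NOW, 'POINT(1.0 1.0)');
INSERT INTO test.geo_demo VALUES (NOW + 1s, 'LINESTRING(1.0 1.0, 2.0 2.0)');
```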
|
||||
|
|
|
|||
|
|
@ -56,7 +56,7 @@ database_option: {
|
|||
- WAL_FSYNC_PERIOD: specifies the interval (in milliseconds) at which data is written from the WAL to disk. This parameter takes effect only when the WAL parameter is set to 2. The default value is 3000. Enter a value between 0 and 180000. The value 0 indicates that incoming data is immediately written to disk.
|
||||
- MAXROWS: specifies the maximum number of rows recorded in a block. The default value is 4096.
|
||||
- MINROWS: specifies the minimum number of rows recorded in a block. The default value is 100.
|
||||
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default. The Enterprise Edition supports [Tiered Storage](https://docs.tdengine.com/tdinternal/arch/#tiered-storage) function, thus multiple KEEP values (comma separated and up to 3 values supported, and meet keep 0 <= keep 1 <= keep 2, e.g. KEEP 100h,100d,3650d) are supported; the Community Edition does not support Tiered Storage function (although multiple keep values are configured, they do not take effect, only the maximum keep value is used as KEEP).
|
||||
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default. TDengine Enterprise supports the [Tiered Storage](https://docs.tdengine.com/tdinternal/arch/#tiered-storage) function, so multiple KEEP values (comma-separated, up to 3 values, which must satisfy keep0 <= keep1 <= keep2, e.g. KEEP 100h,100d,3650d) are supported; TDengine OSS does not support Tiered Storage (if multiple KEEP values are configured, they do not take effect; only the maximum value is used as KEEP).
|
||||
- PAGES: specifies the number of pages in the metadata storage engine cache on each vnode. Enter a value greater than or equal to 64. The default value is 256. The space occupied by metadata storage on each vnode is equal to the product of the values of the PAGESIZE and PAGES parameters. The space occupied by default is 1 MB.
|
||||
- PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. The default value is 4. Enter a value between 1 and 16384.
|
||||
- PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms.
|
||||
|
|
@ -73,7 +73,7 @@ database_option: {
|
|||
- TABLE_PREFIX: when positive, specifies the length of the prefix in the table name that is ignored when distributing a table to a vgroup; when negative, only the prefix of that length is used for distribution. The default value is 0. For example, for the table name v30001, "0001" is used if TSDB_PREFIX is set to 2, but "v3" is used if TSDB_PREFIX is set to -2. This helps you control the distribution of tables.
|
||||
- TABLE_SUFFIX: when positive, specifies the length of the suffix in the table name that is ignored when distributing a table to a vgroup; when negative, only the suffix of that length is used for distribution. The default value is 0. For example, for the table name v30001, "v300" is used if TSDB_SUFFIX is set to 2, but "01" is used if TSDB_SUFFIX is set to -2. This helps you control the distribution of tables.
|
||||
- TSDB_PAGESIZE: The page size of the data storage engine in a vnode. The unit is KB. The default is 4 KB. The range is 1 to 16384, that is, 1 KB to 16 MB.
|
||||
- WAL_RETENTION_PERIOD: specifies the maximum time of which WAL files are to be kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value 0. A value of 0 indicates that WAL files are not required to keep for consumption. Alter it with a proper value at first to create topics.
|
||||
- WAL_RETENTION_PERIOD: specifies the maximum time for which WAL files are kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value is 3600, which means data from the latest 3600 seconds is kept in the WAL for data subscription. Please adjust this parameter to a value appropriate for your data subscription needs (see the sketch after this list).
|
||||
- WAL_RETENTION_SIZE: specifies the maximum total size of WAL files kept for consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0, which indicates that the total size of WAL files kept for consumption has no upper limit.
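Since the WAL retention options directly affect data subscription, the following is a minimal sketch of setting them; the database name `db1` is hypothetical, and exact defaults may differ between versions.

```sql
-- Keep WAL files for up to 24 hours and at most 1 GB (unit is KB) for consumers
CREATE DATABASE db1 WAL_RETENTION_PERIOD 86400 WAL_RETENTION_SIZE 1048576;

-- An existing database can be adjusted before topics are created
ALTER DATABASE db1 WAL_RETENTION_PERIOD 3600;
```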
|
||||
### Example Statement
|
||||
|
||||
|
|
|
|||
|
|
@ -9,27 +9,27 @@ You create standard tables and subtables with the `CREATE TABLE` statement.
|
|||
|
||||
```sql
|
||||
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...) [table_options]
|
||||
|
||||
|
||||
CREATE TABLE create_subtable_clause
|
||||
|
||||
|
||||
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...)
|
||||
[TAGS (create_definition [, create_definition] ...)]
|
||||
[table_options]
|
||||
|
||||
|
||||
create_subtable_clause: {
|
||||
create_subtable_clause [create_subtable_clause] ...
|
||||
| [IF NOT EXISTS] [db_name.]tb_name USING [db_name.]stb_name [(tag_name [, tag_name] ...)] TAGS (tag_value [, tag_value] ...)
|
||||
}
|
||||
|
||||
|
||||
create_definition:
|
||||
col_name column_definition
|
||||
|
||||
|
||||
column_definition:
|
||||
type_name [comment 'string_value']
|
||||
|
||||
|
||||
table_options:
|
||||
table_option ...
|
||||
|
||||
|
||||
table_option: {
|
||||
COMMENT 'string_value'
|
||||
| WATERMARK duration[,duration]
|
||||
|
|
@ -45,9 +45,9 @@ table_option: {
|
|||
|
||||
1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
|
||||
2. The maximum length of the table name is 192 bytes.
|
||||
3. The maximum length of each row is 48k(64k since version 3.0.5.0) bytes, please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
|
||||
3. The maximum length of each row is 48 KB (64 KB since version 3.0.5.0); please note that the extra 2 bytes used by each BINARY/NCHAR/GEOMETRY column are also counted.
|
||||
4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
|
||||
5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
|
||||
5. The maximum length in bytes must be specified when using BINARY/NCHAR/GEOMETRY types.
|
||||
6. The escape character "\`" can be used to avoid conflicts between table names and reserved keywords; the above rules are bypassed when the escape character is used on a table name, but the upper limit for the name length still applies. Table names specified using the escape character are case sensitive (see the example after this list).
|
||||
For example, \`aBc\` and \`abc\` are different table names, but `abc` and `aBc` are the same table name because both are converted to `abc` internally.
|
||||
Only ASCII visible characters can be used with the escape character.
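A minimal sketch of the escape-character behavior described above; the database and table names are hypothetical.

```sql
-- Without the escape character, the name is converted to lower case:
-- this statement actually creates (or conflicts with) the table abc
CREATE TABLE test.aBc (ts TIMESTAMP, v INT);

-- With the escape character, the name is case sensitive and stored as aBc
CREATE TABLE test.`aBc` (ts TIMESTAMP, v INT);
```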
|
||||
|
|
@ -58,7 +58,7 @@ table_option: {
|
|||
3. MAX_DELAY: specifies the maximum latency for pushing computation results. The default value is 15 minutes or the value of the INTERVAL parameter, whichever is smaller. Enter a value between 0 and 15 minutes in milliseconds, seconds, or minutes. You can enter multiple values separated by commas (,). Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database.
|
||||
4. ROLLUP: specifies aggregate functions to roll up. Rolling up a function provides downsampled results based on multiple axes. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database. You can specify only one function to roll up. The rollup takes effect on all columns except TS. Enter one of the following values: avg, sum, min, max, last, or first.
|
||||
5. SMA: specifies functions on which to enable small materialized aggregates (SMA). SMA is user-defined precomputation of aggregates based on data blocks. Enter one of the following values: max, min, or sum. This parameter can be used with supertables and standard tables.
|
||||
6. TTL: specifies the time to live (TTL) for the table. If TTL is specified when creatinga table, after the time period for which the table has been existing is over TTL, TDengine will automatically delete the table. Please be noted that the system may not delete the table at the exact moment that the TTL expires but guarantee there is such a system and finally the table will be deleted. The unit of TTL is in days. The default value is 0, i.e. never expire.
|
||||
6. TTL: specifies the time to live (TTL) for the table. If TTL is specified when creating a table, TDengine automatically deletes the table once it has existed for longer than the TTL. Note that the system may not delete the table at the exact moment the TTL expires, but it guarantees that the table will eventually be deleted. The unit of TTL is days. The default value is 0, i.e. the table never expires (see the sketch below).
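A minimal sketch of the TTL and COMMENT table options; the database and table names are hypothetical.

```sql
-- The table becomes eligible for automatic deletion 30 days after creation
CREATE TABLE test.tmp_data (ts TIMESTAMP, v INT) TTL 30 COMMENT 'temporary data';
```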
|
||||
|
||||
## Create Subtables
|
||||
|
||||
|
|
@ -88,7 +88,7 @@ You can create multiple subtables in a single SQL statement provided that all su
|
|||
|
||||
```sql
|
||||
ALTER TABLE [db_name.]tb_name alter_table_clause
|
||||
|
||||
|
||||
alter_table_clause: {
|
||||
alter_table_options
|
||||
| ADD COLUMN col_name column_type
|
||||
|
|
@ -96,10 +96,10 @@ alter_table_clause: {
|
|||
| MODIFY COLUMN col_name column_type
|
||||
| RENAME COLUMN old_col_name new_col_name
|
||||
}
|
||||
|
||||
|
||||
alter_table_options:
|
||||
alter_table_option ...
|
||||
|
||||
|
||||
alter_table_option: {
|
||||
TTL value
|
||||
| COMMENT 'string_value'
|
||||
|
|
@ -142,15 +142,15 @@ ALTER TABLE tb_name RENAME COLUMN old_col_name new_col_name
|
|||
|
||||
```sql
|
||||
ALTER TABLE [db_name.]tb_name alter_table_clause
|
||||
|
||||
|
||||
alter_table_clause: {
|
||||
alter_table_options
|
||||
| SET TAG tag_name = new_tag_value
|
||||
}
|
||||
|
||||
|
||||
alter_table_options:
|
||||
alter_table_option ...
|
||||
|
||||
|
||||
alter_table_option: {
|
||||
TTL value
|
||||
| COMMENT 'string_value'
|
||||
|
|
|
|||
|
|
@ -1274,3 +1274,161 @@ SELECT SERVER_STATUS();
|
|||
```
|
||||
|
||||
**Description**: The server status.
|
||||
|
||||
|
||||
## Geometry Functions
|
||||
|
||||
### Geometry Input Functions
|
||||
|
||||
Geometry input functions create geometry data from WKT (Well-Known Text).
|
||||
|
||||
#### ST_GeomFromText
|
||||
|
||||
```sql
|
||||
ST_GeomFromText(VARCHAR WKT expr)
|
||||
```
|
||||
|
||||
**Description**: Returns a GEOMETRY value from its Well-Known Text (WKT) representation.
|
||||
|
||||
**Return value type**: GEOMETRY
|
||||
|
||||
**Applicable data types**: VARCHAR
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**Explanations**:
|
||||
- The input can be any WKT string, such as POINT, LINESTRING, POLYGON, MULTIPOINT, MULTILINESTRING, MULTIPOLYGON, or GEOMETRYCOLLECTION.
|
||||
- The output is the GEOMETRY data type, internally defined as a binary string.
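For example, a minimal sketch that parses a WKT literal (assuming, as with other scalar functions, that a FROM clause is optional):

```sql
SELECT ST_GeomFromText('POINT(1.0 1.0)');
```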
|
||||
|
||||
### Geometry Output Functions
|
||||
|
||||
Geometry output functions convert geometry data into WKT.
|
||||
|
||||
#### ST_AsText
|
||||
|
||||
```sql
|
||||
ST_AsText(GEOMETRY geom)
|
||||
```
|
||||
|
||||
**Description**: Returns the Well-Known Text (WKT) representation of a GEOMETRY value.
|
||||
|
||||
**Return value type**: VARCHAR
|
||||
|
||||
**Applicable data types**: GEOMETRY
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**Explanations**:
|
||||
- The output can be any WKT string, such as POINT, LINESTRING, POLYGON, MULTIPOINT, MULTILINESTRING, MULTIPOLYGON, or GEOMETRYCOLLECTION.
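A minimal round-trip sketch: parse a WKT literal into GEOMETRY, then render it back as WKT.

```sql
SELECT ST_AsText(ST_GeomFromText('LINESTRING(1.0 1.0, 2.0 2.0)'));
```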
|
||||
|
||||
### Geometry Relationship Functions
|
||||
|
||||
Geometry relationship functions determine spatial relationships between geometries.
|
||||
|
||||
#### ST_Intersects
|
||||
|
||||
```sql
|
||||
ST_Intersects(GEOMETRY geomA, GEOMETRY geomB)
|
||||
```
|
||||
|
||||
**Description**: Compares two geometries and returns true if they intersect.
|
||||
|
||||
**Return value type**: BOOL
|
||||
|
||||
**Applicable data types**: GEOMETRY, GEOMETRY
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**Explanations**:
|
||||
- Geometries intersect if they have any point in common.
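For instance, the point below lies on the line, so this minimal sketch should return true:

```sql
SELECT ST_Intersects(ST_GeomFromText('POINT(1.0 1.0)'),
                     ST_GeomFromText('LINESTRING(0.0 0.0, 2.0 2.0)'));
```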
|
||||
|
||||
|
||||
#### ST_Equals
|
||||
|
||||
```sql
|
||||
ST_Equals(GEOMETRY geomA, GEOMETRY geomB)
|
||||
```
|
||||
|
||||
**Description**: Returns TRUE if the given geometries are "spatially equal".
|
||||
|
||||
**Return value type**: BOOL
|
||||
|
||||
**Applicable data types**: GEOMETRY, GEOMETRY
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**Explanations**:
|
||||
- 'Spatially equal' means ST_Contains(A,B) = true and ST_Contains(B,A) = true; the ordering of points may differ as long as they represent the same geometric structure.
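For instance, the same line with its point order reversed is spatially equal, so this sketch should return true:

```sql
SELECT ST_Equals(ST_GeomFromText('LINESTRING(0.0 0.0, 2.0 2.0)'),
                 ST_GeomFromText('LINESTRING(2.0 2.0, 0.0 0.0)'));
```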
|
||||
|
||||
|
||||
#### ST_Touches
|
||||
|
||||
```sql
|
||||
ST_Touches(GEOMETRY geomA, GEOMETRY geomB)
|
||||
```
|
||||
|
||||
**Description**: Returns TRUE if A and B intersect, but their interiors do not intersect.
|
||||
|
||||
**Return value type**: BOOL
|
||||
|
||||
**Applicable data types**: GEOMETRY, GEOMETRY
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**Explanations**:
|
||||
- A and B have at least one point in common, and all common points lie on the boundary of at least one of the two geometries.
|
||||
- For Point/Point inputs the relationship is always FALSE, since points do not have a boundary.
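For instance, two squares sharing only an edge touch (their boundaries intersect but their interiors do not), so this sketch should return true:

```sql
SELECT ST_Touches(ST_GeomFromText('POLYGON((0.0 0.0, 1.0 0.0, 1.0 1.0, 0.0 1.0, 0.0 0.0))'),
                  ST_GeomFromText('POLYGON((1.0 0.0, 2.0 0.0, 2.0 1.0, 1.0 1.0, 1.0 0.0))'));
```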
|
||||
|
||||
|
||||
#### ST_Covers
|
||||
|
||||
```sql
|
||||
ST_Covers(GEOMETRY geomA, GEOMETRY geomB)
|
||||
```
|
||||
|
||||
**Description**: Returns TRUE if every point in Geometry B lies inside (intersects the interior or boundary of) Geometry A.
|
||||
|
||||
**Return value type**: BOOL
|
||||
|
||||
**Applicable data types**: GEOMETRY, GEOMETRY
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**Explanations**:
|
||||
- A covers B means no point of B lies outside (in the exterior of) A.
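For instance, a point on the polygon's boundary is covered (though not contained), so this sketch should return true:

```sql
SELECT ST_Covers(ST_GeomFromText('POLYGON((0.0 0.0, 1.0 0.0, 1.0 1.0, 0.0 1.0, 0.0 0.0))'),
                 ST_GeomFromText('POINT(0.0 0.0)'));
```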
|
||||
|
||||
|
||||
#### ST_Contains
|
||||
|
||||
```sql
|
||||
ST_Contains(GEOMETRY geomA, GEOMETRY geomB)
|
||||
```
|
||||
|
||||
**Description**: Returns TRUE if geometry A contains geometry B.
|
||||
|
||||
**Return value type**: BOOL
|
||||
|
||||
**Applicable data types**: GEOMETRY, GEOMETRY
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**Explanations**:
|
||||
- A contains B if and only if all points of B lie inside (i.e. in the interior or boundary of) A (or equivalently, no points of B lie in the exterior of A), and the interiors of A and B have at least one point in common.
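For instance, a point in the polygon's interior is contained, so this sketch should return true (a point lying only on the boundary would return false):

```sql
SELECT ST_Contains(ST_GeomFromText('POLYGON((0.0 0.0, 1.0 0.0, 1.0 1.0, 0.0 1.0, 0.0 0.0))'),
                   ST_GeomFromText('POINT(0.5 0.5)'));
```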
|
||||
|
||||
|
||||
#### ST_ContainsProperly
|
||||
|
||||
```sql
|
||||
ST_ContainsProperly(GEOMETRY geomA, GEOMETRY geomB)
|
||||
```
|
||||
|
||||
**Description**: Returns TRUE if every point of B lies inside A.
|
||||
|
||||
**Return value type**: BOOL
|
||||
|
||||
**Applicable data types**: GEOMETRY, GEOMETRY
|
||||
|
||||
**Applicable table types**: standard tables and supertables
|
||||
|
||||
**Explanations**:
|
||||
- There is no point of B that lies on the boundary of A or in the exterior of A.
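For instance, a boundary point is not properly contained, so this sketch should return false (contrast with ST_Covers above):

```sql
SELECT ST_ContainsProperly(ST_GeomFromText('POLYGON((0.0 0.0, 1.0 0.0, 1.0 1.0, 0.0 1.0, 0.0 0.0))'),
                           ST_GeomFromText('POINT(0.0 0.0)'));
```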
|
||||
|
|
|
|||
|
|
@ -39,7 +39,7 @@ TDengine supports the `UNION` and `UNION ALL` operations. UNION ALL collects all
|
|||
| 3 | \>, < | All types except BLOB, MEDIUMBLOB, and JSON | Greater than and less than |
|
||||
| 4 | \>=, <= | All types except BLOB, MEDIUMBLOB, and JSON | Greater than or equal to and less than or equal to |
|
||||
| 5 | IS [NOT] NULL | All types | Indicates whether the value is null |
|
||||
| 6 | [NOT] BETWEEN AND | All types except BLOB, MEDIUMBLOB, and JSON | Closed interval comparison |
|
||||
| 6 | [NOT] BETWEEN AND | All types except BLOB, MEDIUMBLOB, JSON and GEOMETRY | Closed interval comparison |
|
||||
| 7 | IN | All types except BLOB, MEDIUMBLOB, and JSON; the primary key (timestamp) is also not supported | Equal to any value in the list |
|
||||
| 8 | LIKE | BINARY, NCHAR, and VARCHAR | Wildcard match |
|
||||
| 9 | MATCH, NMATCH | BINARY, NCHAR, and VARCHAR | Regular expression match |
|
||||
|
|
@ -54,7 +54,7 @@ LIKE is used together with wildcards to match strings. Its usage is described as
|
|||
MATCH and NMATCH are used together with regular expressions to match strings. Their usage is described as follows:
|
||||
|
||||
- Use POSIX regular expression syntax. For more information, see Regular Expressions.
|
||||
- Regular expression can be used against only table names, i.e. `tbname`, and tags of binary/nchar types, but can't be used against data columns.
|
||||
- Regular expression can be used against only table names, i.e. `tbname`, and tags/columns of binary/nchar types.
|
||||
- The maximum length of the regular expression string is 128 bytes. The configuration parameter `maxRegexStringLen` can be used to set the maximum allowed regular expression length. It is a client-side parameter and takes effect after the client is restarted (see the sketch below).
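A minimal sketch of MATCH against table names; the supertable `st1` and its column `c1` are assumed from the earlier examples.

```sql
-- Select rows from all subtables of st1 whose names start with 'd'
SELECT tbname, c1 FROM st1 WHERE tbname MATCH '^d[0-9]+';
```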
|
||||
|
||||
## Logical Operators
|
||||
|
|
|
|||
|
|
@ -178,6 +178,7 @@ The following list shows all reserved keywords:
|
|||
|
||||
- MATCH
|
||||
- MAX_DELAY
|
||||
- MAX_SPEED
|
||||
- MAXROWS
|
||||
- MERGE
|
||||
- META
|
||||
|
|
|
|||
|
|
@ -28,47 +28,47 @@ This document introduces the tables of INFORMATION_SCHEMA and their structure.
|
|||
|
||||
Provides information about dnodes. Similar to SHOW DNODES.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :------------: | ------------ | ------------------------- |
|
||||
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes on the dnode |
|
||||
| 3 | status | BINARY(10) | Current status |
|
||||
| 4 | note | BINARY(256) | Reason for going offline or other information |
|
||||
| 5 | id | SMALLINT | Dnode ID |
|
||||
| 6 | endpoint | BINARY(134) | Dnode endpoint |
|
||||
| 7 | create | TIMESTAMP | Creation time |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :------------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes on the dnode |
|
||||
| 3 | status | BINARY(10) | Current status |
|
||||
| 4 | note | BINARY(256) | Reason for going offline or other information |
|
||||
| 5 | id | SMALLINT | Dnode ID |
|
||||
| 6 | endpoint | BINARY(134) | Dnode endpoint |
|
||||
| 7 | create | TIMESTAMP | Creation time |
|
||||
|
||||
## INS_MNODES
|
||||
|
||||
Provides information about mnodes. Similar to SHOW MNODES.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | ------------------ |
|
||||
| 1 | id | SMALLINT | Mnode ID |
|
||||
| 2 | endpoint | BINARY(134) | Mnode endpoint |
|
||||
| 3 | role | BINARY(10) | Current role |
|
||||
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
|
||||
| 5 | create_time | TIMESTAMP | Creation time |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | ------------------------------------------ |
|
||||
| 1 | id | SMALLINT | Mnode ID |
|
||||
| 2 | endpoint | BINARY(134) | Mnode endpoint |
|
||||
| 3 | role | BINARY(10) | Current role |
|
||||
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
|
||||
| 5 | create_time | TIMESTAMP | Creation time |
|
||||
|
||||
## INS_QNODES
|
||||
|
||||
Provides information about qnodes. Similar to SHOW QNODES.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | ------------ |
|
||||
| 1 | id | SMALLINT | Qnode ID |
|
||||
| 2 | endpoint | BINARY(134) | Qnode endpoint |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | --------------- |
|
||||
| 1 | id | SMALLINT | Qnode ID |
|
||||
| 2 | endpoint | BINARY(134) | Qnode endpoint |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
|
||||
## INS_CLUSTER
|
||||
|
||||
Provides information about the cluster.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | ---------- |
|
||||
| 1 | id | BIGINT | Cluster ID |
|
||||
| 2 | name | BINARY(134) | Cluster name |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | --------------- |
|
||||
| 1 | id | BIGINT | Cluster ID |
|
||||
| 2 | name | BINARY(134) | Cluster name |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
|
||||
## INS_DATABASES
|
||||
|
||||
|
|
@ -98,7 +98,7 @@ Provides information about user-created databases. Similar to SHOW DATABASES.
|
|||
| 21 | cachesize | INT | Memory per vnode used for caching the newest data. It should be noted that `cachesize` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 22 | wal_level | INT | WAL level. It should be noted that `wal_level` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 23 | wal_fsync_period | INT | Interval at which WAL is written to disk. It should be noted that `wal_fsync_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 24 | wal_retention_period | INT | WAL retention period. It should be noted that `wal_retention_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 24 | wal_retention_period | INT | WAL retention period, in seconds. It should be noted that `wal_retention_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 25 | wal_retention_size | INT | Maximum WAL size. It should be noted that `wal_retention_size` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 26 | stt_trigger | SMALLINT | The threshold for number of files to trigger file merging. It should be noted that `stt_trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 27 | table_prefix | SMALLINT | The prefix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_prefix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
|
|
@ -109,191 +109,201 @@ Provides information about user-created databases. Similar to SHOW DATABASES.
|
|||
|
||||
Provides information about user-defined functions.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | -------------- |
|
||||
| 1 | name | BINARY(64) | Function name |
|
||||
| 2 | comment | BINARY(255) | Function description. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 3 | aggregate | INT | Whether the UDF is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 4 | output_type | BINARY(31) | Output data type |
|
||||
| 5 | create_time | TIMESTAMP | Creation time |
|
||||
| 6 | code_len | INT | Length of the source code |
|
||||
| 7 | bufsize | INT | Buffer size |
|
||||
| 8 | func_language | BINARY(31) | UDF programming language |
|
||||
| 9 | func_body | BINARY(16384) | UDF function body |
|
||||
| 10 | func_version | INT | UDF function version. starting from 0. Increasing by 1 each time it is updated|
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | name | BINARY(64) | Function name |
|
||||
| 2 | comment | BINARY(255) | Function description. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 3 | aggregate | INT | Whether the UDF is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 4 | output_type | BINARY(31) | Output data type |
|
||||
| 5 | create_time | TIMESTAMP | Creation time |
|
||||
| 6 | code_len | INT | Length of the source code |
|
||||
| 7 | bufsize | INT | Buffer size |
|
||||
| 8 | func_language | BINARY(31) | UDF programming language |
|
||||
| 9 | func_body | BINARY(16384) | UDF function body |
|
||||
| 10 | func_version | INT | UDF function version, starting from 0 and increased by 1 each time the UDF is updated |
|
||||
|
||||
## INS_INDEXES
|
||||
|
||||
Provides information about user-created indices. Similar to SHOW INDEX.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :--------------: | ------------ | ---------------------------------------------------------------------------------- |
|
||||
| 1 | db_name | BINARY(32) | Database containing the table with the specified index |
|
||||
| 2 | table_name | BINARY(192) | Table containing the specified index |
|
||||
| 3 | index_name | BINARY(192) | Index name |
|
||||
| 4 | db_name | BINARY(64) | Index column |
|
||||
| 5 | index_type | BINARY(10) | SMA or FULLTEXT index |
|
||||
| 6 | index_extensions | BINARY(256) | Other information For SMA indices, this shows a list of functions. For FULLTEXT indices, this is null. |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :--------------: | ------------- | --------------------------------------------------------------------- |
|
||||
| 1 | db_name | BINARY(32) | Database containing the table with the specified index |
|
||||
| 2 | table_name | BINARY(192) | Table containing the specified index |
|
||||
| 3 | index_name | BINARY(192) | Index name |
|
||||
| 4 | column_name | BINARY(64) | Indexed column |
|
||||
| 5 | index_type | BINARY(10) | SMA or tag index |
|
||||
| 6 | index_extensions | BINARY(256) | Other information. For SMA/tag indices, this shows a list of functions |
|
||||
|
||||
## INS_STABLES
|
||||
|
||||
Provides information about supertables.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------ | ------------------------ |
|
||||
| 1 | stable_name | BINARY(192) | Supertable name |
|
||||
| 2 | db_name | BINARY(64) | All databases in the supertable |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 4 | columns | INT | Number of columns |
|
||||
| 5 | tags | INT | Number of tags. It should be noted that `tags` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 6 | last_update | TIMESTAMP | Last updated time |
|
||||
| 7 | table_comment | BINARY(1024) | Table description |
|
||||
| 8 | watermark | BINARY(64) | Window closing time. It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results. It should be noted that `max_delay` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 10 | rollup | BINARY(128) | Rollup aggregate function. It should be noted that `rollup` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | stable_name | BINARY(192) | Supertable name |
|
||||
| 2 | db_name | BINARY(64) | Database containing the supertable |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 4 | columns | INT | Number of columns |
|
||||
| 5 | tags | INT | Number of tags. It should be noted that `tags` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 6 | last_update | TIMESTAMP | Last updated time |
|
||||
| 7 | table_comment | BINARY(1024) | Table description |
|
||||
| 8 | watermark | BINARY(64) | Window closing time. It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results. It should be noted that `max_delay` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 10 | rollup | BINARY(128) | Rollup aggregate function. It should be noted that `rollup` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
|
||||
## INS_TABLES
|
||||
|
||||
Provides information about standard tables and subtables.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------ | ---------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 4 | columns | INT | Number of columns |
|
||||
| 5 | stable_name | BINARY(192) | Supertable name |
|
||||
| 6 | uid | BIGINT | Table ID |
|
||||
| 7 | vgroup_id | INT | Vgroup ID |
|
||||
| 8 | ttl | INT | Table time-to-live. It should be noted that `ttl` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | table_comment | BINARY(1024) | Table description |
|
||||
| 10 | type | BINARY(20) | Table type |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 4 | columns | INT | Number of columns |
|
||||
| 5 | stable_name | BINARY(192) | Supertable name |
|
||||
| 6 | uid | BIGINT | Table ID |
|
||||
| 7 | vgroup_id | INT | Vgroup ID |
|
||||
| 8 | ttl | INT | Table time-to-live. It should be noted that `ttl` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | table_comment | BINARY(1024) | Table description |
|
||||
| 10 | type | BINARY(20) | Table type |
|
||||
|
||||
## INS_TAGS
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | ---------------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | stable_name | BINARY(192) | Supertable name |
|
||||
| 4 | tag_name | BINARY(64) | Tag name |
|
||||
| 5 | tag_type | BINARY(64) | Tag type |
|
||||
| 6 | tag_value | BINARY(16384) | Tag value |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | --------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | stable_name | BINARY(192) | Supertable name |
|
||||
| 4 | tag_name | BINARY(64) | Tag name |
|
||||
| 5 | tag_type | BINARY(64) | Tag type |
|
||||
| 6 | tag_value | BINARY(16384) | Tag value |
|
||||
|
||||
## INS_COLUMNS
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | ---------------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | table_type | BINARY(21) | Table type |
|
||||
| 4 | col_name | BINARY(64) | Column name |
|
||||
| 5 | col_type | BINARY(32) | Column type |
|
||||
| 6 | col_length | INT | Column length |
|
||||
| 7 | col_precision | INT | Column precision |
|
||||
| 8 | col_scale | INT | Column scale |
|
||||
| 9 | col_nullable | INT | Column nullable |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-----------: | ------------- | ---------------- |
|
||||
| 1 | table_name | BINARY(192) | Table name |
|
||||
| 2 | db_name | BINARY(64) | Database name |
|
||||
| 3 | table_type | BINARY(21) | Table type |
|
||||
| 4 | col_name | BINARY(64) | Column name |
|
||||
| 5 | col_type | BINARY(32) | Column type |
|
||||
| 6 | col_length | INT | Column length |
|
||||
| 7 | col_precision | INT | Column precision |
|
||||
| 8 | col_scale | INT | Column scale |
|
||||
| 9 | col_nullable | INT | Column nullable |
|
||||
|
||||
## INS_USERS
|
||||
|
||||
Provides information about TDengine users.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | -------- |
|
||||
| 1 | user_name | BINARY(23) | User name |
|
||||
| 2 | privilege | BINARY(256) | User permissions |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | ---------------- |
|
||||
| 1 | user_name | BINARY(23) | User name |
|
||||
| 2 | privilege | BINARY(256) | User permissions |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
|
||||
## INS_GRANTS
|
||||
|
||||
Provides information about TDengine Enterprise Edition permissions.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | -------------------------------------------------- |
|
||||
| 1 | version | BINARY(9) | Whether the deployment is a licensed or trial version |
|
||||
| 2 | cpu_cores | BINARY(9) | CPU cores included in license |
|
||||
| 3 | dnodes | BINARY(10) | Dnodes included in license. It should be noted that `dnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 4 | streams | BINARY(10) | Streams included in license. It should be noted that `streams` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 5 | users | BINARY(10) | Users included in license. It should be noted that `users` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 6 | accounts | BINARY(10) | Accounts included in license. It should be noted that `accounts` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 7 | storage | BINARY(21) | Storage space included in license. It should be noted that `storage` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 8 | connections | BINARY(21) | Client connections included in license. It should be noted that `connections` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | databases | BINARY(11) | Databases included in license. It should be noted that `databases` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 10 | speed | BINARY(9) | Write speed specified in license (data points per second) |
|
||||
| 11 | querytime | BINARY(9) | Total query time specified in license |
|
||||
| 12 | timeseries | BINARY(21) | Number of metrics included in license |
|
||||
| 13 | expired | BINARY(5) | Whether the license has expired |
|
||||
| 14 | expire_time | BINARY(19) | When the trial period expires |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | version | BINARY(9) | Whether the deployment is a licensed or trial version |
|
||||
| 2 | cpu_cores | BINARY(9) | CPU cores included in license |
|
||||
| 3 | dnodes | BINARY(10) | Dnodes included in license. It should be noted that `dnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 4 | streams | BINARY(10) | Streams included in license. It should be noted that `streams` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 5 | users | BINARY(10) | Users included in license. It should be noted that `users` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 6 | accounts | BINARY(10) | Accounts included in license. It should be noted that `accounts` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 7 | storage | BINARY(21) | Storage space included in license. It should be noted that `storage` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 8 | connections | BINARY(21) | Client connections included in license. It should be noted that `connections` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | databases | BINARY(11) | Databases included in license. It should be noted that `databases` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 10 | speed | BINARY(9) | Write speed specified in license (data points per second) |
|
||||
| 11 | querytime | BINARY(9) | Total query time specified in license |
|
||||
| 12 | timeseries | BINARY(21) | Number of metrics included in license |
|
||||
| 13 | expired | BINARY(5) | Whether the license has expired |
|
||||
| 14 | expire_time | BINARY(19) | When the trial period expires |
|
||||
|
||||
## INS_VGROUPS
|
||||
|
||||
Provides information about vgroups.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :-------: | ------------ | ------------------------------------------------------ |
|
||||
| 1 | vgroup_id | INT | Vgroup ID |
|
||||
| 2 | db_name | BINARY(32) | Database name |
|
||||
| 3 | tables | INT | Tables in vgroup. It should be noted that `tables` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 4 | status | BINARY(10) | Vgroup status |
|
||||
| 5 | v1_dnode | INT | Dnode ID of first vgroup member |
|
||||
| 6 | v1_status | BINARY(10) | Status of first vgroup member |
|
||||
| 7 | v2_dnode | INT | Dnode ID of second vgroup member |
|
||||
| 8 | v2_status | BINARY(10) | Status of second vgroup member |
|
||||
| 9 | v3_dnode | INT | Dnode ID of third vgroup member |
|
||||
| 10 | v3_status | BINARY(10) | Status of third vgroup member |
|
||||
| 11 | nfiles | INT | Number of data and metadata files in the vgroup |
|
||||
| 12 | file_size | INT | Size of the data and metadata files in the vgroup |
|
||||
| 13 | tsma | TINYINT | Whether time-range-wise SMA is enabled. 1 means enabled; 0 means disabled. |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :--------: | ------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | vgroup_id | INT | Vgroup ID |
|
||||
| 2 | db_name | BINARY(32) | Database name |
|
||||
| 3 | tables | INT | Tables in vgroup. It should be noted that `tables` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 4 | status | BINARY(10) | Vgroup status |
|
||||
| 5 | v1_dnode | INT | Dnode ID of first vgroup member |
|
||||
| 6 | v1_status | BINARY(10) | Status of first vgroup member |
|
||||
| 7 | v2_dnode | INT | Dnode ID of second vgroup member |
|
||||
| 8 | v2_status | BINARY(10) | Status of second vgroup member |
|
||||
| 9 | v3_dnode | INT | Dnode ID of third vgroup member |
|
||||
| 10 | v3_status | BINARY(10) | Status of third vgroup member |
|
||||
| 11 | nfiles | INT | Number of data and metadata files in the vgroup |
|
||||
| 12 | file_size | INT | Size of the data and metadata files in the vgroup |
|
||||
| 13 | tsma | TINYINT | Whether time-range-wise SMA is enabled. 1 means enabled; 0 means disabled. |
|
||||
|
||||
## INS_CONFIGS
|
||||
|
||||
Provides system configuration information.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :------: | ------------ | ------------ |
|
||||
| 1 | name | BINARY(32) | Parameter |
|
||||
| 2 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :--------: | ------------- | ----------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | name | BINARY(32) | Parameter |
|
||||
| 2 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
|
||||
## INS_DNODE_VARIABLES
|
||||
|
||||
Provides dnode configuration information.
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :------: | ------------ | ------------ |
|
||||
| 1 | dnode_id | INT | Dnode ID |
|
||||
| 2 | name | BINARY(32) | Parameter |
|
||||
| 3 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :--------: | ------------- | ----------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | dnode_id | INT | Dnode ID |
|
||||
| 2 | name | BINARY(32) | Parameter |
|
||||
| 3 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
|
||||
## INS_TOPICS
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------ | ------------------------------ |
|
||||
| 1 | topic_name | BINARY(192) | Topic name |
|
||||
| 2 | db_name | BINARY(64) | Database for the topic |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :---------: | ------------- | -------------------------------------- |
|
||||
| 1 | topic_name | BINARY(192) | Topic name |
|
||||
| 2 | db_name | BINARY(64) | Database for the topic |
|
||||
| 3 | create_time | TIMESTAMP | Creation time |
|
||||
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
|
||||
|
||||
## INS_SUBSCRIPTIONS
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :------------: | ------------ | ------------------------ |
|
||||
| 1 | topic_name | BINARY(204) | Subscribed topic |
|
||||
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
|
||||
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
|
||||
| 4 | consumer_id | BIGINT | Consumer ID |
|
||||
| 5 | offset | BINARY(64) | Consumption progress |
|
||||
| 6 | rows | BIGINT | Number of consumption items |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :------------: | ------------- | --------------------------- |
|
||||
| 1 | topic_name | BINARY(204) | Subscribed topic |
|
||||
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
|
||||
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
|
||||
| 4 | consumer_id | BIGINT | Consumer ID |
|
||||
| 5 | offset | BINARY(64) | Consumption progress |
|
||||
| 6 | rows | BIGINT | Number of consumption items |
|
||||
|
||||
## INS_STREAMS
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :----------: | ------------ | --------------------------------------- |
|
||||
| 1 | stream_name | BINARY(64) | Stream name |
|
||||
| 2 | create_time | TIMESTAMP | Creation time |
|
||||
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
|
||||
| 4 | status | BINARY(20) | Current status |
|
||||
| 5 | source_db | BINARY(64) | Source database |
|
||||
| 6 | target_db | BINARY(64) | Target database |
|
||||
| 7 | target_table | BINARY(192) | Target table |
|
||||
| 8 | watermark | BIGINT | Watermark (see stream processing documentation). It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation). It should be noted that `trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :----------: | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
|
||||
| 1 | stream_name | BINARY(64) | Stream name |
|
||||
| 2 | create_time | TIMESTAMP | Creation time |
|
||||
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
|
||||
| 4 | status | BINARY(20) | Current status |
|
||||
| 5 | source_db | BINARY(64) | Source database |
|
||||
| 6 | target_db | BINARY(64) | Target database |
|
||||
| 7 | target_table | BINARY(192) | Target table |
|
||||
| 8 | watermark | BIGINT | Watermark (see stream processing documentation). It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation). It should be noted that `trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
|
||||
|
||||
## INS_USER_PRIVILEGES
|
||||
|
||||
| # | **Column** | **Data Type** | **Description** |
|
||||
| --- | :----------: | ------------ | -------------------------------------------|
|
||||
| 1 | user_name | VARCHAR(24) | Username |
|
||||
| 2 | privilege | VARCHAR(10) | Privilege description |
|
||||
| 3 | db_name | VARCHAR(65) | Database name |
|
||||
| 4 | table_name | VARCHAR(193) | Table name |
|
||||
| 5 | condition | VARCHAR(49152) | The privilege filter for child tables |
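To tie together the keyword-escaping notes that recur in the tables above, here is a minimal sketch of querying these system tables; the column choices are illustrative.

```sql
-- Keyword column names such as `vnodes` must be escaped with backticks
SELECT id, endpoint, `vnodes`, support_vnodes FROM information_schema.ins_dnodes;

-- Inspect the privileges granted to each user
SELECT user_name, privilege, db_name FROM information_schema.ins_user_privileges;
```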
|
||||
|
|
|
|||
|
|
@ -4,12 +4,12 @@ sidebar_label: Indexing
|
|||
description: This document describes the SQL statements related to indexing in TDengine.
|
||||
---
|
||||
|
||||
TDengine supports SMA and FULLTEXT indexing.
|
||||
TDengine supports SMA and tag indexing.
|
||||
|
||||
## Create an Index
|
||||
|
||||
```sql
|
||||
CREATE FULLTEXT INDEX index_name ON tb_name (col_name [, col_name] ...)
|
||||
CREATE INDEX index_name ON tb_name (col_name [, col_name] ...)
|
||||
|
||||
CREATE SMA INDEX index_name ON tb_name index_option
|
||||
|
||||
|
|
@ -46,10 +46,6 @@ SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDIN
|
|||
ALTER LOCAL 'querySmaOptimize' '0';
|
||||
```
|
||||
|
||||
### FULLTEXT Indexing
|
||||
|
||||
Creates a text index for the specified column. FULLTEXT indexing improves performance for queries with text filtering. The index_option syntax is not supported for FULLTEXT indexing. FULLTEXT indexing is supported for JSON tag columns only. Multiple columns cannot be indexed together. However, separate indices can be created for each column.
|
||||
|
||||
## Delete an Index
|
||||
|
||||
```sql
|
||||
|
|
@ -18,6 +18,7 @@ description: This document describes how TDengine SQL has changed in version 3.0
|
|||
| 8 | Mixed operations | Enhanced | Mixing scalar and vector operations in queries has been enhanced and is supported in all SELECT clauses.
|
||||
| 9 | Tag operations | Added | Tag columns can be used in queries and clauses like data columns.
|
||||
| 10 | Timeline clauses and time functions in supertables | Enhanced | When PARTITION BY is not used, data in supertables is merged into a single timeline.
|
||||
| 11 | GEOMETRY | Added | Geometry
|
||||
|
||||
## SQL Syntax
|
||||
|
||||
|
|
|
|||
|
|
@ -214,19 +214,6 @@ The data of tdinsight dashboard is stored in `log` database (default. You can ch
|
|||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|
||||
### logs table
|
||||
|
||||
`logs` table contains login information records.
|
||||
|
||||
|field|type|is\_tag|comment|
|
||||
|:----|:---|:-----|:------|
|
||||
|ts|TIMESTAMP||timestamp|
|
||||
|level|VARCHAR||log level|
|
||||
|content|NCHAR||log content|
|
||||
|dnode\_id|INT|TAG|dnode id|
|
||||
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|
||||
|cluster\_id|NCHAR|TAG|cluster id|
|
||||
|
||||
### log\_summary table
|
||||
|
||||
`log_summary` table contains log summary information records.
|
||||
|
|
|
|||
|
|
@ -79,6 +79,12 @@ Parameter Description:
|
|||
- tz: Optional parameter that specifies the timezone of the returned time, following the IANA Time Zone rules, e.g. `America/New_York`.
|
||||
- req_id: Optional parameter that specifies the request id for tracing.
|
||||
|
||||
:::note
|
||||
|
||||
URL encoding. Make sure that parameters are properly encoded. For example, when specifying a timezone you must properly encode special characters. `?tz=Etc/GMT+10` will not work because the plus sign (+) is interpreted as a space in the URL. It is best practice to encode all special characters in a parameter; use `?tz=Etc%2FGMT%2B10` instead.
|
||||
|
||||
:::
|
||||
|
||||
For example, `http://h1.taos.com:6041/rest/sql/test` is a URL to `h1.taos.com:6041` and sets the default database name to `test`.
|
||||
|
||||
TDengine supports both Basic authentication and custom authentication mechanisms, and subsequent versions will provide a standard secure digital signature mechanism for authentication.
|
||||
|
|
|
|||
|
|
@ -31,7 +31,8 @@ Websocket connections are supported on all platforms that can run Go.
|
|||
|
||||
| connector-rust version | TDengine version | major features |
|
||||
| :----------------: | :--------------: | :--------------------------------------------------: |
|
||||
| v0.8.12 | 3.0.5.0 or later | TMQ: Get consuming progress and seek offset to consume. |
|
||||
| v0.9.2 | 3.0.7.0 or later | STMT: Get tag_fields and col_fields under ws. |
|
||||
| v0.8.12 | 3.0.5.0 | TMQ: Get consuming progress and seek offset to consume. |
|
||||
| v0.8.0 | 3.0.4.0 | Support schemaless insert. |
|
||||
| v0.7.6 | 3.0.3.0 | Support req_id in query. |
|
||||
| v0.6.0 | 3.0.0.0 | Base features. |
|
||||
|
|
@ -648,12 +649,12 @@ stmt.execute()?;
|
|||
//stmt.execute()?;
|
||||
```
|
||||
|
||||
For a working example, see [GitHub](https://github.com/taosdata/taos-connector-rust/blob/main/examples/bind.rs).
|
||||
For a working example, see [GitHub](https://github.com/taosdata/taos-connector-rust/blob/main/taos/examples/bind.rs).
|
||||
|
||||
|
||||
For information about other structure APIs, see the [Rust documentation](https://docs.rs/taos).
|
||||
|
||||
[taos]: https://github.com/taosdata/rust-connector-taos
|
||||
[taos]: https://github.com/taosdata/taos-connector-rust
|
||||
[r2d2]: https://crates.io/crates/r2d2
|
||||
[TaosBuilder]: https://docs.rs/taos/latest/taos/struct.TaosBuilder.html
|
||||
[TaosCfg]: https://docs.rs/taos/latest/taos/struct.TaosCfg.html
|
||||
|
|
|
|||
|
|
@ -373,7 +373,7 @@ conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (locat
|
|||
<TabItem value="websocket" label="WebSocket connection">
|
||||
|
||||
```python
|
||||
conn = taosws.connect(url="ws://localhost:6041")
|
||||
conn = taosws.connect("taosws://localhost:6041")
|
||||
# Execute a sql, ignore the result set, just get affected rows. It's useful for DDL and DML statement.
|
||||
conn.execute("DROP DATABASE IF EXISTS test")
|
||||
conn.execute("CREATE DATABASE test")
|
||||
|
|
@ -1007,13 +1007,12 @@ consumer.close()
|
|||
### Other sample programs
|
||||
|
||||
| Example program links | Example program content |
|
||||
| ------------------------------------------------------------------------------------------------------------- | ------------------- ---- |
|
||||
| [bind_multi.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-multi.py) | parameter binding,
|
||||
bind multiple rows at once |
|
||||
| [bind_row.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-row.py) | bind_row.py
|
||||
|-----------------------|-------------------------|
|
||||
| [bind_multi.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-multi.py) | parameter binding, bind multiple rows at once |
|
||||
| [bind_row.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-row.py) | parameter binding, bind one row at once |
|
||||
| [insert_lines.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/insert-lines.py) | InfluxDB line protocol writing |
|
||||
| [json_tag.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/json-tag.py) | Use JSON type tags |
|
||||
| [tmq.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/tmq.py) | TMQ subscription |
|
||||
| [tmq_consumer.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/tmq_consumer.py) | TMQ subscription |
|
||||
|
||||
## Other notes
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,87 @@
|
|||
---
|
||||
toc_max_heading_level: 4
|
||||
sidebar_label: R
|
||||
title: R Language Connector
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
import Rdemo from "../../07-develop/01-connect/_connect_r.mdx"
|
||||
|
||||
By using the RJDBC library in R, you can enable R programs to access TDengine data. This section covers the installation process, the configuration steps, and example R code.
|
||||
|
||||
## Installation Process
|
||||
|
||||
Before getting started, make sure you have installed the R language environment. Then, follow these steps to install and configure the RJDBC library:
|
||||
|
||||
1. Install Java Development Kit (JDK): RJDBC library requires Java environment. Download the appropriate JDK for your operating system from the official Oracle website and follow the installation guide.
|
||||
|
||||
2. Install the RJDBC library: Execute the following command in the R console to install the RJDBC library.
|
||||
|
||||
```r
|
||||
install.packages("RJDBC", repos='http://cran.us.r-project.org')
|
||||
```
|
||||
|
||||
:::note
|
||||
1. The R language package version 4.2 shipped with Ubuntu by default may cause an unresponsiveness bug. Please install the latest version of the R language package from the [official website](https://www.r-project.org/).
|
||||
2. On Linux systems, installing the RJDBC package may require installing the necessary components for compilation. For example, on Ubuntu, you can execute the command ``apt install -y libbz2-dev libpcre2-dev libicu-dev`` to install the required components.
|
||||
3. On Windows systems, you need to set the **JAVA_HOME** environment variable.
|
||||
:::
|
||||
|
||||
3. Download the TDengine JDBC driver: Visit the Maven website and download the TDengine JDBC driver (taos-jdbcdriver-X.X.X-dist.jar) to your local machine.
|
||||
|
||||
## Configuration Process
|
||||
|
||||
Once you have completed the installation steps, you need to do some configuration to enable the RJDBC library to connect and access the TDengine time-series database.
|
||||
|
||||
1. Load the RJDBC library and other necessary libraries in your R script:
|
||||
|
||||
```r
|
||||
library(DBI)
|
||||
library(rJava)
|
||||
library(RJDBC)
|
||||
```
|
||||
|
||||
2. Set the JDBC driver and JDBC URL:
|
||||
|
||||
```r
|
||||
# Set the JDBC driver path (specify the location on your local machine)
|
||||
driverPath <- "/path/to/taos-jdbcdriver-X.X.X-dist.jar"
|
||||
|
||||
# Set the JDBC URL (specify the FQDN and credentials of your TDengine cluster)
|
||||
url <- "jdbc:TAOS://localhost:6030/?user=root&password=taosdata"
|
||||
```
|
||||
|
||||
3. Load the JDBC driver:
|
||||
|
||||
```r
|
||||
# Load the JDBC driver
|
||||
drv <- JDBC("com.taosdata.jdbc.TSDBDriver", driverPath)
|
||||
```
|
||||
|
||||
4. Create a TDengine database connection:
|
||||
|
||||
```r
|
||||
# Create a database connection
|
||||
conn <- dbConnect(drv, url)
|
||||
```
|
||||
|
||||
5. Once the connection is established, you can use the ``conn`` object for various database operations such as querying data and inserting data.
|
||||
|
||||
6. Finally, don't forget to close the database connection after you are done:
|
||||
|
||||
```r
|
||||
# Close the database connection
|
||||
dbDisconnect(conn)
|
||||
```
|
||||
|
||||
## Example Code Using RJDBC in R
|
||||
|
||||
Here's an example code that uses the RJDBC library to connect to a TDengine time-series database and perform a query operation:
|
||||
|
||||
<Rdemo/>
|
||||
|
||||
Please modify the JDBC driver, JDBC URL, username, password, and SQL query statement according to your specific TDengine time-series database environment and requirements.
|
||||
|
||||
By following the steps and using the provided example code, you can use the RJDBC library in the R language to access the TDengine time-series database and perform tasks such as data querying and analysis.
|
||||
|
|
@ -59,9 +59,9 @@ The different database framework specifications for various programming language
|
|||
| -------------------------------------- | ------------- | --------------- | ------------- | ------------- | ------------- | ------------- |
|
||||
| **Connection Management** | Support | Support | Support | Support | Support | Support |
|
||||
| **Regular Query** | Support | Support | Support | Support | Support | Support |
|
||||
| **Parameter Binding** | Supported | Not Supported | Support | Support | Not Supported | Support |
|
||||
| **Parameter Binding** | Supported | Supported | Support | Support | Not Supported | Support |
|
||||
| **Subscription (TMQ)** | Supported | Support | Support | Not Supported | Not Supported | Support |
|
||||
| **Schemaless** | Supported | Not Supported | Supported | Not Supported | Not Supported | Not Supported |
|
||||
| **Schemaless** | Supported | Supported | Supported | Not Supported | Not Supported | Not Supported |
|
||||
| **Bulk Pulling (based on WebSocket)** | Support | Support | Support | Support | Support | Support |
|
||||
|
||||
:::warning
|
||||
|
|
|
|||
|
|
@ -364,6 +364,7 @@ The configuration parameters for specifying super table tag columns and data col
|
|||
- **min**: The minimum value of the column/tag of the data type. The generated value will be greater than or equal to the minimum value.
|
||||
|
||||
- **max**: The maximum value of the column/tag of the data type. The generated value will be less than the maximum value.
|
||||
- **fun**: The data in this column is generated by a function. Currently only the sin and cos functions are supported. The input parameter is the timestamp converted to an angle value; the conversion formula is: angle x = input time column ts value % 360. Coefficient adjustment and a random fluctuation factor are also supported, expressed in a fixed-format expression such as fun="10\*sin(x)+100\*random(5)", where x represents the angle, ranging from 0 to 360 degrees, and the growth step is consistent with the time column step. 10 is the multiplication coefficient, 100 is the addition/subtraction coefficient, and 5 means the value fluctuates randomly within a range of 5%. The currently supported data types are int, bigint, float, and double. Note: the expression format is fixed and the terms cannot be reordered. (See the worked sketch after this list.)
|
||||
|
||||
- **values**: The value pool for an nchar/binary column/tag; generated values are chosen randomly from this list.
|
||||
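As a worked illustration of the fun expression above: with fun="10\*sin(x)+100\*random(5)", a row whose timestamp maps to x = 90 gets a base value of 10\*sin(90°) + 100 = 110, which then fluctuates randomly within ±5%. The hedged sketch below writes such a column definition into a standalone JSON fragment; the file name and the surrounding structure of a full taosBenchmark config are assumptions here, only the column entry itself follows the format documented above.

```bash
# Write a hypothetical column-spec fragment for illustration only; in a real
# taosBenchmark config this object sits inside the "columns" array.
cat <<'EOF' > fun-column.json
{ "type": "float", "name": "current", "fun": "10*sin(x)+100*random(5)" }
EOF
```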
|
||||
|
|
|
|||
|
|
@ -5,12 +5,12 @@ description: This document describes the supported platforms for the TDengine se
|
|||
|
||||
## List of supported platforms for TDengine server
|
||||
|
||||
| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **macOS** |
|
||||
| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18 or later** | **macOS** |
|
||||
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | --------- |
|
||||
| X64 | ● | ● | ● | ● | ● |
|
||||
| X64 | ●/E | ●/E | ● | ● | ● |
|
||||
| ARM64 | | | ● | | ● |
|
||||
|
||||
Note: ● means officially tested and verified, ○ means unofficially tested and verified.
|
||||
Note: 1) ● means officially tested and verified, ○ means unofficially tested and verified, E means only supported by the enterprise edition. 2) The community edition only supports newer versions of mainstream operating systems, including Ubuntu 18+/CentOS 7+/RedHat/Debian/CoreOS/FreeBSD/openSUSE/SUSE Linux/Fedora/macOS, etc. If you have requirements for other operating systems and editions, please contact enterprise edition support.
|
||||
|
||||
## List of supported platforms for TDengine clients and connectors
|
||||
|
||||
|
|
|
|||
|
|
@ -1 +0,0 @@
|
|||
label: TDengine Docker images
|
||||
|
|
@ -166,7 +166,7 @@ Please note the `taoskeeper` needs to be installed and running to create the `lo
|
|||
|
||||
| Attribute | Description |
|
||||
| ------------- | ---------------------------------------------------------------------------- |
|
||||
| Applicable | Server Only |
|
||||
| Applicable | Server and Client |
|
||||
| Meaning | Switch for allowing TDengine to collect and report service usage information |
|
||||
| Value Range | 0: Not allowed; 1: Allowed |
|
||||
| Default Value | 1 |
|
||||
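A hedged sketch of turning this switch off: the parameter heading sits outside this hunk, so the name telemetryReporting below is an assumption to verify against your taos.cfg, and the config path and service name are the usual defaults.

```bash
# Assumption: the switch documented above is telemetryReporting in taos.cfg.
echo "telemetryReporting 0" >> /etc/taos/taos.cfg
systemctl restart taosd
```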
|
|
@ -174,7 +174,7 @@ Please note the `taoskeeper` needs to be installed and running to create the `lo
|
|||
|
||||
| Attribute | Description |
|
||||
| ------------- | ---------------------------------------------------------------------------- |
|
||||
| Applicable | Server Only |
|
||||
| Applicable | Server and Client |
|
||||
| Meaning | Switch for allowing TDengine to collect and report crash related information |
|
||||
| Value Range | 0: Not allowed; 1: Allowed |
|
||||
| Default Value | 1 |
|
||||
|
|
@ -670,6 +670,15 @@ The charset that takes effect is UTF-8.
|
|||
| Value Range | 0: not consistent; 1: consistent. |
|
||||
| Default | 0 |
|
||||
|
||||
### smlTsDefaultName
|
||||
|
||||
| Attribute | Description |
|
||||
| -------- | -------------------------------------------------------- |
|
||||
| Applicable | Client only |
|
||||
| Meaning | Sets the name of the time column for tables created automatically through schemaless writing |
|
||||
| Type | String |
|
||||
| Default Value | _ts |
|
||||
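A minimal sketch of applying this setting: since smlTsDefaultName takes effect on the client side, append it to the taos.cfg read by the client (the path below is the common default and may differ on your installation) and restart the client application.

```bash
# Assumption: the client reads its configuration from /etc/taos/taos.cfg.
echo "smlTsDefaultName ts" >> /etc/taos/taos.cfg
```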
|
||||
## Compress Parameters
|
||||
|
||||
### compressMsgSize
|
||||
|
|
@ -732,6 +741,15 @@ The charset that takes effect is UTF-8.
|
|||
| Value Range | 0-23 |
|
||||
| Default Value | 0 |
|
||||
|
||||
### tmqMaxTopicNum
|
||||
|
||||
| Attribute | Description |
|
||||
| -------- | ------------------ |
|
||||
| Applicable | Server Only |
|
||||
| Meaning | The maximum number of topics |
|
||||
| Value Range | 1-10000 |
|
||||
| Default Value | 20 |
|
||||
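A hedged sketch of raising this limit: add the parameter to taos.cfg on each server node and restart taosd (the file path and service name below are the usual defaults, not guaranteed for every installation).

```bash
# Assumption: server config at /etc/taos/taos.cfg, taosd managed by systemd.
echo "tmqMaxTopicNum 100" >> /etc/taos/taos.cfg
systemctl restart taosd
```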
|
||||
## 3.0 Parameters
|
||||
|
||||
| # | **Parameter** | **Applicable to 2.x** | **Applicable to 3.0** | Current behavior in 3.0 |
|
||||
|
|
|
|||
|
|
@ -34,7 +34,27 @@ In the schemaless writing data line protocol, each data item in the field_set ne
|
|||
|
||||
- If there are English double quotes on both sides, it indicates the BINARY(32) type. For example, `"abc"`.
|
||||
- If there are double quotes on both sides and an L prefix, it means NCHAR(32) type. For example, `L"error message"`.
|
||||
- Spaces, equal signs (=), commas (,), and double quotes (") need to be escaped with a backslash (\\) in front. (All refer to the ASCII character)
|
||||
- Spaces, the equals sign (=), the comma (,), the double quote ("), and the backslash (\\) need to be escaped with a backslash (\\) in front. (All refer to ASCII characters.) The rules are as follows, with a hedged write sketch after the backslash table below:
|
||||
|
||||
| **Serial number** | **Element** | **Escape characters** |
|
||||
| -------- | ----------- | ----------------------------- |
|
||||
| 1 | Measurement | Comma, Space |
|
||||
| 2 | Tag key | Comma, Equals Sign, Space |
|
||||
| 3 | Tag value | Comma, Equals Sign, Space |
|
||||
| 4 | Field key | Comma, Equals Sign, Space |
|
||||
| 5 | Field value | Double quote, Backslash |
|
||||
|
||||
With two contiguous backslashes, the first is interpreted as an escape character. Examples of backslash escape rules are as follows:
|
||||
|
||||
| **Serial number** | **Backslashes** | **Interpreted as** |
|
||||
| -------- | ----------- | ----------------------------- |
|
||||
| 1 | \ | \ |
|
||||
| 2 | \\\\ | \ |
|
||||
| 3 | \\\\\\ | \\\\ |
|
||||
| 4 | \\\\\\\\ | \\\\ |
|
||||
| 5 | \\\\\\\\\\ | \\\\\\ |
|
||||
| 6 | \\\\\\\\\\\\ | \\\\\\ |
|
||||
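As an illustration of the escape rules above, the hedged sketch below writes one line whose tag value contains a comma and a space through taosAdapter's InfluxDB-compatible endpoint; the endpoint path, port, database name, and credentials are common defaults and may differ in your deployment.

```bash
# Tag value "California, SanFrancisco": the comma and the space are each
# escaped with a backslash in the line protocol payload.
curl -i -u root:taosdata \
  "http://localhost:6041/influxdb/v1/write?db=test" \
  --data-binary 'meters,location=California\,\ SanFrancisco current=10.3 1626006833639000000'
```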
|
||||
- Numeric data types are distinguished by their suffix.
|
||||
|
||||
| **Serial number** | **Postfix** | **Mapping type** | **Size (bytes)** |
|
||||
|
|
@ -88,6 +108,8 @@ You can configure smlChildTableName in taos.cfg to specify table names, for exam
|
|||
|
||||
8. It is assumed that the order of field_set in a supertable is consistent, meaning that the first record contains all fields and subsequent records store fields in the same order. If the order is not consistent, set smlDataFormat in taos.cfg to false. Otherwise, data will be written out of order and a database error will occur.
|
||||
Note: TDengine 3.0.3.0 and later automatically detect whether order is consistent. This parameter is no longer used.
|
||||
9. Because SQL table names do not support the period (.), schemaless writing also handles periods. If a table name automatically created by schemaless writing contains a period, it is automatically replaced with an underscore (\_). If you manually specify a subtable name that contains a period, it is likewise converted to an underscore (\_).
|
||||
10. taos.cfg adds the smlTsDefaultName configuration (a string value), which takes effect only on the client side. It sets the name of the time column for tables created automatically through schemaless writing. If it is not configured, the name defaults to _ts.
|
||||
|
||||
:::tip
|
||||
All processing logic of schemaless will still follow TDengine's underlying restrictions on data structures; for example, the total length of each row of data cannot exceed 48 KB (64 KB since version 3.0.5.0), and the total length of a tag value cannot exceed 16 KB. See [TDengine SQL Boundary Limits](/taos-sql/limit) for specific constraints in this area.
|
||||
|
|
|
|||
|
|
@ -0,0 +1,40 @@
|
|||
---
|
||||
sidebar_label: qStudio
|
||||
title: qStudio
|
||||
description: Step-by-Step Guide to Accessing TDengine Data with qStudio
|
||||
---
|
||||
|
||||
qStudio is a free cross-platform SQL data analysis tool that allows easy browsing of tables, variables, functions, and configuration settings in a database. The latest version of qStudio includes built-in support for TDengine.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
To connect TDengine using qStudio, you need to complete the following preparations:
|
||||
|
||||
- Install qStudio: qStudio supports major operating systems, including Windows, macOS, and Linux. Please ensure you download the correct installation package for your platform from the [download page](https://www.timestored.com/qstudio/download/).
|
||||
- Set up TDengine instance: Make sure TDengine is installed and running correctly, and the taosAdapter is installed and running. For detailed information, refer to the taosAdapter User Manual.
|
||||
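A quick hedged check that the taosAdapter REST service is reachable before configuring qStudio (the default port and root/taosdata credentials are assumed):

```bash
curl -u root:taosdata -d "select server_version()" localhost:6041/rest/sql
```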
|
||||
## Connecting to TDengine with qStudio
|
||||
|
||||
1. Launch the qStudio application and select "Server" and then "Add Server..." from the menu. Choose TDengine from the Server Type dropdown.
|
||||
|
||||

|
||||
|
||||
2. Configure the TDengine connection by entering the host address, port number, username, and password. If TDengine is deployed on the local machine, you can fill in the username and password only. The default username is "root," and the default password is "taosdata." Click "Test" to test the connection's availability. If the TDengine Java connector is not installed on the local machine, qStudio will prompt you to download and install it.
|
||||
|
||||

|
||||
|
||||
3. Once connected successfully, the screen will display as shown below. If the connection fails, check that the TDengine service and taosAdapter are running correctly, and ensure that the host address, port number, username, and password are correct.
|
||||
|
||||

|
||||
|
||||
4. Use qStudio to select databases and tables to browse data from the TDengine server.
|
||||
|
||||

|
||||
|
||||
5. You can also perform operations on TDengine data by executing SQL commands.
|
||||
|
||||

|
||||
|
||||
6. qStudio supports charting functions based on the data. For more information, please refer to the [qStudio documentation](https://www.timestored.com/qstudio/help).
|
||||
|
||||

|
||||
|
After Width: | Height: | Size: 94 KiB |
|
After Width: | Height: | Size: 148 KiB |
|
After Width: | Height: | Size: 34 KiB |
|
After Width: | Height: | Size: 93 KiB |
|
After Width: | Height: | Size: 39 KiB |
|
After Width: | Height: | Size: 78 KiB |
|
|
@ -338,7 +338,7 @@ Remark:
|
|||
Equivalent function: sum
|
||||
|
||||
```sql
|
||||
Select max(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
|
||||
Select sum(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
|
||||
```
|
||||
|
||||
Note: This function has no interpolation requirements, so it can be directly calculated.
|
||||
|
|
|
|||
|
|
@ -6,10 +6,19 @@ description: This document provides download links for all released versions of
|
|||
|
||||
TDengine 3.x installation packages can be downloaded at the following links:
|
||||
|
||||
For TDengine 2.x installation packages by version, please visit [here](https://www.taosdata.com/all-downloads).
|
||||
For TDengine 2.x installation packages by version, please visit [here](https://tdengine.com/downloads/historical/).
|
||||
|
||||
import Release from "/components/ReleaseV3";
|
||||
|
||||
## 3.1.0.0
|
||||
|
||||
:::note IMPORTANT
|
||||
- Once you upgrade to TDengine 3.1.0.0, you cannot roll back to any previous version of TDengine. Upgrading to 3.1.0.0 will alter your data such that it cannot be read by previous versions.
|
||||
- You must remove all streams before upgrading to TDengine 3.1.0.0. If you upgrade a deployment that contains streams, the upgrade will fail and your deployment will become nonoperational.
|
||||
:::
|
||||
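Since all streams must be removed first, a hedged sketch of that step with the TDengine CLI follows; the stream name is hypothetical, so list your own streams and drop each one by name.

```bash
# List existing streams, then drop each one before starting the upgrade.
taos -s "show streams;"
taos -s "drop stream if exists my_stream;"
```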
|
||||
<Release type="tdengine" version="3.1.0.0" />
|
||||
|
||||
## 3.0.7.1
|
||||
|
||||
<Release type="tdengine" version="3.0.7.1" />
|
||||
|
|
|
|||
|
|
@ -8,9 +8,13 @@ library("rJava")
|
|||
library("RJDBC")
|
||||
|
||||
args<- commandArgs(trailingOnly = TRUE)
|
||||
driver_path = args[1] # path to jdbc-driver for example: "/root/taos-jdbcdriver-3.0.0-dist.jar"
|
||||
driver_path = args[1] # path to jdbc-driver for example: "/root/taos-jdbcdriver-3.2.4-dist.jar"
|
||||
driver = JDBC("com.taosdata.jdbc.TSDBDriver", driver_path)
|
||||
conn = dbConnect(driver, "jdbc:TAOS://127.0.0.1:6030/?user=root&password=taosdata")
|
||||
dbGetQuery(conn, "SELECT server_version()")
|
||||
dbSendUpdate(conn, "create database if not exists rtest")
|
||||
dbSendUpdate(conn, "create table if not exists rtest.test (ts timestamp, current float, voltage int, devname varchar(20))")
|
||||
dbSendUpdate(conn, "insert into rtest.test values (now, 1.2, 220, 'test')")
|
||||
dbGetQuery(conn, "select * from rtest.test")
|
||||
dbDisconnect(conn)
|
||||
# ANCHOR_END: demo
|
||||
|
|
|
|||
|
|
@ -2,11 +2,19 @@ if (! "RJDBC" %in% installed.packages()[, "Package"]) {
|
|||
install.packages('RJDBC', repos='http://cran.us.r-project.org')
|
||||
}
|
||||
|
||||
# ANCHOR: demo
|
||||
library("DBI")
|
||||
library("rJava")
|
||||
library("RJDBC")
|
||||
driver_path = "/home/debug/build/lib/taos-jdbcdriver-2.0.38-dist.jar"
|
||||
|
||||
args<- commandArgs(trailingOnly = TRUE)
|
||||
driver_path = args[1] # path to jdbc-driver for example: "/root/taos-jdbcdriver-3.2.4-dist.jar"
|
||||
driver = JDBC("com.taosdata.jdbc.rs.RestfulDriver", driver_path)
|
||||
conn = dbConnect(driver, "jdbc:TAOS-RS://localhost:6041?user=root&password=taosdata")
|
||||
dbGetQuery(conn, "SELECT server_version()")
|
||||
dbDisconnect(conn)
|
||||
dbSendUpdate(conn, "create database if not exists rtest")
|
||||
dbSendUpdate(conn, "create table if not exists rtest.test (ts timestamp, current float, voltage int, devname varchar(20))")
|
||||
dbSendUpdate(conn, "insert into rtest.test values (now, 1.2, 220, 'test')")
|
||||
dbGetQuery(conn, "select * from rtest.test")
|
||||
dbDisconnect(conn)
|
||||
# ANCHOR_END: demo
|
||||
|
|
|
|||
|
|
@ -0,0 +1 @@
|
|||
apt install -y libbz2-dev libpcre2-dev libicu-dev
|
||||
|
|
@ -22,7 +22,7 @@
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.2.1</version>
|
||||
<version>3.2.4</version>
|
||||
</dependency>
|
||||
<!-- ANCHOR_END: dep-->
|
||||
<dependency>
|
||||
|
|
@ -33,4 +33,4 @@
|
|||
</dependency>
|
||||
</dependencies>
|
||||
|
||||
</project>
|
||||
</project>
|
||||
|
|
|
|||
|
|
@ -28,6 +28,21 @@ docker run -d -p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043
|
|||
|
||||
Note: TDengine Server 3.0 uses TCP port 6030 only. Port 6041 is the REST service port used by taosAdapter. Ports 6043-6049 are used by taosAdapter for third-party application connectors and can be opened as needed.
|
||||
|
||||
If you need to persist data to a specific directory on your local machine, run the following command:
|
||||
|
||||
```shell
|
||||
docker run -d -v ~/data/taos/dnode/data:/var/lib/taos \
|
||||
-v ~/data/taos/dnode/log:/var/log/taos \
|
||||
-p 6030:6030 -p 6041:6041 -p 6043-6049:6043-6049 -p 6043-6049:6043-6049/udp tdengine/tdengine
|
||||
```
|
||||
|
||||
:::note
|
||||
|
||||
- /var/lib/taos: default TDengine data directory. The location can be changed via the [configuration file]. You can replace ~/data/taos/dnode/data with your own data directory.
|
||||
- /var/log/taos: default TDengine log directory. The location can be changed via the [configuration file]. You can replace ~/data/taos/dnode/log with your own log directory.
|
||||
|
||||
:::
|
||||
|
||||
Confirm that the container has started and is running normally.
|
||||
|
||||
```shell
|
||||
|
|
@ -108,4 +123,4 @@ SELECT FIRST(ts), AVG(current), MAX(voltage), MIN(phase) FROM test.d10 INTERVAL(
|
|||
|
||||
## Others
|
||||
|
||||
For more details on using TDengine in a Docker environment, see [Using TDengine in Docker](../../reference/docker).
|
||||
For more details on using TDengine in a Docker environment, see [Deploying TDengine with Docker](../../deployment/docker).
|
||||
|
|
|
|||
|
|
@ -201,7 +201,7 @@ Active: inactive (dead)
|
|||
|
||||
<TabItem label="Windows 系统" value="windows">
|
||||
|
||||
After installation, you can start the TDengine service process by running `sc start taosd` in a cmd window with administrator privileges, or by running `taosd.exe` in the `C:\TDengine` directory.
|
||||
After installation, you can start the TDengine service process by running `sc start taosd` in a cmd window with administrator privileges, or by running `taosd.exe` in the `C:\TDengine` directory. To use the HTTP/REST service, run `sc start taosadapter` or run `taosadapter.exe` to start the taosAdapter service process.
|
||||
|
||||
**TDengine Command Line Interface (CLI)**
|
||||
|
||||
|
|
|
|||
|
|
@ -82,7 +82,7 @@ TDengine provides a rich set of application development interfaces; to help users quickly
|
|||
<dependency>
|
||||
<groupId>com.taosdata.jdbc</groupId>
|
||||
<artifactId>taos-jdbcdriver</artifactId>
|
||||
<version>3.2.1</version>
|
||||
<version>3.2.4</version>
|
||||
</dependency>
|
||||
```
|
||||
|
||||
|
|
|
|||
|
|
@ -398,7 +398,7 @@ def finish(buf: bytes) -> output_type:
|
|||
3. Define a scalar function that takes a timestamp as input and outputs the next Sunday closest to that time. Completing this function requires the third-party library moment; this example explains the considerations for using third-party libraries.
|
||||
4. Define an aggregate function that computes the difference between the maximum and minimum values of a column, i.e. an implementation of TDengine's built-in spread function.
|
||||
It also includes many practical debugging tips.
|
||||
This document assumes you are using Linux and have installed TDengine 3.0.4.0+ and Python 3.x.
|
||||
This document assumes you are using Linux and have installed TDengine 3.0.4.0+ and Python 3.7+.
|
||||
|
||||
Note: **You cannot output logs from within a UDF via the print function; you must write them to a file yourself or use Python's built-in logging library.**
|
||||
|
||||
|
|
|
|||
|
|
@ -446,7 +446,7 @@ TDengine's JDBC native connection implementation greatly improves how parameter binding handles data
|
|||
**Note**:
|
||||
|
||||
- JDBC REST connections do not currently support parameter binding
|
||||
- The following sample code is based on taos-jdbcdriver-3.2.1
|
||||
- The following sample code is based on taos-jdbcdriver-3.2.4
|
||||
- Call the setString method for binary data and the setNString method for nchar data
|
||||
- When specifying the database and subtable name in a prepared statement, do not use `db.?`; use `?` directly and specify the database in setTableName, e.g. `prepareStatement.setTableName("db.t1")`.
|
||||
|
||||
|
|
|
|||
|
|
@ -30,7 +30,8 @@ WebSocket connections are supported on all platforms that can run Rust.
|
|||
|
||||
| Rust connector version | TDengine version | Major features |
|
||||
| :----------------: | :--------------: | :--------------------------------------------------: |
|
||||
| v0.8.12 | 3.0.5.0 or later | Message subscription: get consumption progress and start consuming from a specified offset. |
|
||||
| v0.9.2 | 3.0.7.0 or later | STMT: get tag_fields and col_fields over WebSocket. |
|
||||
| v0.8.12 | 3.0.5.0 | Message subscription: get consumption progress and start consuming from a specified offset. |
|
||||
| v0.8.0 | 3.0.4.0 | Schemaless writing support. |
|
||||
| v0.7.6 | 3.0.3.0 | Support for req_id in requests. |
|
||||
| v0.6.0 | 3.0.0.0 | Basic features. |
|
||||
|
|
|
|||
|
|
@ -375,7 +375,7 @@ conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (locat
|
|||
<TabItem value="websocket" label="WebSocket 连接">
|
||||
|
||||
```python
|
||||
conn = taosws.connect(url="ws://localhost:6041")
|
||||
conn = taosws.connect("taosws://localhost:6041")
|
||||
# Execute a sql, ignore the result set, just get affected rows. It's useful for DDL and DML statement.
|
||||
conn.execute("DROP DATABASE IF EXISTS test")
|
||||
conn.execute("CREATE DATABASE test")
|
||||
|
|
|
|||
|
|
@ -0,0 +1,89 @@
|
|||
---
|
||||
toc_max_heading_level: 4
|
||||
sidebar_label: R
|
||||
title: R Language Connector
|
||||
---
|
||||
|
||||
import Tabs from '@theme/Tabs';
|
||||
import TabItem from '@theme/TabItem';
|
||||
|
||||
import Rdemo from "../07-develop/01-connect/_connect_r.mdx"
|
||||
|
||||
By using the RJDBC library in R, R programs can access data in TDengine. Below are the installation process, configuration steps, and R sample code.
|
||||
|
||||
## Installation Process
|
||||
|
||||
Before getting started, make sure the R language environment is installed. Then follow these steps to install and configure the RJDBC library:
|
||||
|
||||
1. Install the Java Development Kit (JDK): the RJDBC library depends on a Java environment. Download the JDK appropriate for your operating system from the official Oracle website and follow its installation guide.
|
||||
|
||||
2. Install the RJDBC library: run the following command in the R console to install it.
|
||||
|
||||
```r
|
||||
install.packages("RJDBC", repos='http://cran.us.r-project.org')
|
||||
```
|
||||
|
||||
:::note
|
||||
1. The R language package version 4.2 that ships with Ubuntu hangs when calling the RJDBC library; please install the R language package from the [official website](https://www.r-project.org/).
|
||||
2. Installing the RJDBC package on Linux may require installing components needed for compilation; on Ubuntu, for example, run `apt install -y libbz2-dev libpcre2-dev libicu-dev` to install them.
|
||||
3. On Windows you need to set the JAVA_HOME environment variable.
|
||||
:::
|
||||
|
||||
3. Download the TDengine JDBC driver: visit maven.org and download the TDengine JDBC driver (taos-jdbcdriver-X.X.X-dist.jar).
|
||||
|
||||
4. Place the TDengine JDBC driver in a suitable location: choose an appropriate directory on your machine and save the driver file (taos-jdbcdriver-X.X.X-dist.jar) there.
|
||||
|
||||
## Configuration Process
|
||||
|
||||
After completing the installation steps, you need to do some configuration so that the RJDBC library can connect to and access the TDengine time-series database.
|
||||
|
||||
1. Load RJDBC and the other necessary libraries in your R script:
|
||||
|
||||
```r
|
||||
library(DBI)
|
||||
library(rJava)
|
||||
library(RJDBC)
|
||||
```
|
||||
|
||||
2. Set the JDBC driver and the JDBC URL:
|
||||
|
||||
```r
|
||||
# Set the JDBC driver path (modify according to where you actually saved it)
|
||||
driverPath <- "/path/to/taos-jdbcdriver-X.X.X-dist.jar"
|
||||
|
||||
# Set the JDBC URL (modify according to your environment)
|
||||
url <- "jdbc:TAOS://localhost:6030/?user=root&password=taosdata"
|
||||
```
|
||||
|
||||
3. Load the JDBC driver:
|
||||
|
||||
```r
|
||||
# Load the JDBC driver
|
||||
drv <- JDBC("com.taosdata.jdbc.TSDBDriver", driverPath)
|
||||
```
|
||||
|
||||
4. Create a TDengine database connection:
|
||||
|
||||
```r
|
||||
# Create a database connection
|
||||
conn <- dbConnect(drv, url)
|
||||
```
|
||||
|
||||
5. Once the connection is established, you can use the conn object for various database operations such as querying and inserting data.
|
||||
|
||||
6. Finally, don't forget to close the database connection when you are done:
|
||||
|
||||
```r
|
||||
# Close the database connection
|
||||
dbDisconnect(conn)
|
||||
```
|
||||
|
||||
## Example Code Using RJDBC in R
|
||||
|
||||
The following sample code uses the RJDBC library to connect to a TDengine time-series database and run a query:
|
||||
|
||||
<Rdemo/>
|
||||
|
||||
Please modify the JDBC driver, JDBC URL, username, password, and SQL query statement as needed for your TDengine time-series database environment and requirements.
|
||||
|
||||
By following the steps and sample code above, you can use the RJDBC library in R to access the TDengine time-series database for data querying and analysis.
|
||||
|
|
@ -58,9 +58,9 @@ TDengine version updates often add new features; the connectors in the list
|
|||
| ------------------------------ | -------- | ---------- | -------- | -------- | ----------- | -------- |
|
||||
| **Connection Management** | Support | Support | Support | Support | Support | Support |
|
||||
| **Regular Query** | Support | Support | Support | Support | Support | Support |
|
||||
| **Parameter Binding** | Support | Not Supported | Support | Support | Not Supported | Support |
|
||||
| **Parameter Binding** | Support | Support | Support | Support | Not Supported | Support |
|
||||
| **Data Subscription (TMQ)** | Support | Support | Support | Not Supported | Not Supported | Support |
|
||||
| **Schemaless** | Support | Not Supported | Support | Not Supported | Not Supported | Support |
|
||||
| **Schemaless** | Support | Support | Support | Not Supported | Not Supported | Support |
|
||||
| **Bulk Pulling (based on WebSocket)** | Support | Support | Support | Support | Support | Support |
|
||||
|
||||
:::warning
|
||||
|
|
|
|||
|
|
@ -1,5 +1,6 @@
|
|||
---
|
||||
title: Deploying TDengine with Docker
|
||||
sidebar_label: Docker
|
||||
description: 'This chapter describes how to start the TDengine service in a container and access it'
|
||||
---
|
||||
|
||||
|
|
@ -10,8 +11,17 @@ description: 'This chapter describes how to start the TDengine service in a container
|
|||
The TDengine image activates the HTTP service by default at startup. Use the following command:
|
||||
|
||||
```shell
|
||||
docker run -d --name tdengine -p 6041:6041 tdengine/tdengine
|
||||
docker run -d --name tdengine \
|
||||
-v ~/data/taos/dnode/data:/var/lib/taos \
|
||||
-v ~/data/taos/dnode/log:/var/log/taos \
|
||||
-p 6041:6041 tdengine/tdengine
|
||||
```
|
||||
:::note
|
||||
|
||||
- /var/lib/taos: default TDengine data directory. The location can be changed via the [configuration file]. You can replace ~/data/taos/dnode/data with your own data directory.
|
||||
- /var/log/taos: default TDengine log directory. The location can be changed via the [configuration file]. You can replace ~/data/taos/dnode/log with your own log directory.
|
||||
|
||||
:::
|
||||
|
||||
The command above starts a container named "tdengine" and maps its HTTP service port 6041 to host port 6041. Use the following command to verify that the HTTP service provided by the container is available:
|
||||
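A hedged sketch of such a check (root/taosdata are the default credentials; adjust if you have changed them):

```shell
# Query the REST endpoint exposed on the mapped port 6041.
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
```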
|
||||
|
|
@ -291,38 +301,37 @@ services:
|
|||
environment:
|
||||
TAOS_FQDN: "td-1"
|
||||
TAOS_FIRST_EP: "td-1"
|
||||
ports:
|
||||
- 6041:6041
|
||||
- 6030:6030
|
||||
volumes:
|
||||
- taosdata-td1:/var/lib/taos/
|
||||
- taoslog-td1:/var/log/taos/
|
||||
# /var/lib/taos: default TDengine data directory. The location can be changed via the [configuration file]. You can replace ~/data/taos/dnode1/data with your own data directory
|
||||
- ~/data/taos/dnode1/data:/var/lib/taos
|
||||
# /var/log/taos: default TDengine log directory. The location can be changed via the [configuration file]. You can replace ~/data/taos/dnode1/log with your own log directory
|
||||
- ~/data/taos/dnode1/log:/var/log/taos
|
||||
td-2:
|
||||
image: tdengine/tdengine:$VERSION
|
||||
environment:
|
||||
TAOS_FQDN: "td-2"
|
||||
TAOS_FIRST_EP: "td-1"
|
||||
volumes:
|
||||
- taosdata-td2:/var/lib/taos/
|
||||
- taoslog-td2:/var/log/taos/
|
||||
- ~/data/taos/dnode2/data:/var/lib/taos
|
||||
- ~/data/taos/dnode2/log:/var/log/taos
|
||||
td-3:
|
||||
image: tdengine/tdengine:$VERSION
|
||||
environment:
|
||||
TAOS_FQDN: "td-3"
|
||||
TAOS_FIRST_EP: "td-1"
|
||||
volumes:
|
||||
- taosdata-td3:/var/lib/taos/
|
||||
- taoslog-td3:/var/log/taos/
|
||||
volumes:
|
||||
taosdata-td1:
|
||||
taoslog-td1:
|
||||
taosdata-td2:
|
||||
taoslog-td2:
|
||||
taosdata-td3:
|
||||
taoslog-td3:
|
||||
- ~/data/taos/dnode3/data:/var/lib/taos
|
||||
- ~/data/taos/dnode3/log:/var/log/taos
|
||||
```
|
||||
|
||||
:::note
|
||||
|
||||
* The `VERSION` environment variable is used to set the tdengine image tag
|
||||
* `TAOS_FIRST_EP` must be set on a newly created instance so that it can join the TDengine cluster; if high availability is required, `TAOS_SECOND_EP` needs to be set as well
|
||||
|
||||
:::
|
||||
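A hedged sketch of supplying VERSION when starting the cluster: docker compose reads it from the shell environment or an .env file, and the tag shown is illustrative.

```shell
# Older installations may need the docker-compose binary instead.
VERSION=3.0.7.1 docker compose up -d
```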
|
||||
2. Start the cluster
|
||||
|
|
@ -397,24 +406,22 @@ networks:
|
|||
services:
|
||||
td-1:
|
||||
image: tdengine/tdengine:$VERSION
|
||||
networks:
|
||||
- inter
|
||||
environment:
|
||||
TAOS_FQDN: "td-1"
|
||||
TAOS_FIRST_EP: "td-1"
|
||||
volumes:
|
||||
- taosdata-td1:/var/lib/taos/
|
||||
- taoslog-td1:/var/log/taos/
|
||||
# /var/lib/taos: default TDengine data directory. The location can be changed via the [configuration file]. You can replace ~/data/taos/dnode1/data with your own data directory
|
||||
- ~/data/taos/dnode1/data:/var/lib/taos
|
||||
# /var/log/taos: default TDengine log directory. The location can be changed via the [configuration file]. You can replace ~/data/taos/dnode1/log with your own log directory
|
||||
- ~/data/taos/dnode1/log:/var/log/taos
|
||||
td-2:
|
||||
image: tdengine/tdengine:$VERSION
|
||||
networks:
|
||||
- inter
|
||||
environment:
|
||||
TAOS_FQDN: "td-2"
|
||||
TAOS_FIRST_EP: "td-1"
|
||||
volumes:
|
||||
- taosdata-td2:/var/lib/taos/
|
||||
- taoslog-td2:/var/log/taos/
|
||||
- ~/data/taos/dnode2/data:/var/lib/taos
|
||||
- ~/data/taos/dnode2/log:/var/log/taos
|
||||
adapter:
|
||||
image: tdengine/tdengine:$VERSION
|
||||
entrypoint: "taosadapter"
|
||||
|
|
@ -446,11 +453,6 @@ services:
|
|||
>> /etc/nginx/nginx.conf;cat /etc/nginx/nginx.conf;
|
||||
nginx -g 'daemon off;'",
|
||||
]
|
||||
volumes:
|
||||
taosdata-td1:
|
||||
taoslog-td1:
|
||||
taosdata-td2:
|
||||
taoslog-td2:
|
||||
```
|
||||
|
||||
## Deploying with Docker Swarm
|
||||
|
|
@ -4,23 +4,31 @@ title: Deploying a TDengine Cluster on Kubernetes
|
|||
description: A detailed guide to deploying a TDengine cluster with Kubernetes
|
||||
---
|
||||
|
||||
As a time-series database designed for cloud-native architectures, TDengine supports Kubernetes deployment. Here we introduce how to create a TDengine cluster from scratch, step by step, using YAML files, with a focus on common TDengine operations in a Kubernetes environment.
|
||||
## Overview
|
||||
|
||||
As a time-series database designed for cloud-native architectures, TDengine itself supports Kubernetes deployment. Here we introduce how to use YAML files to create a highly available TDengine cluster suitable for production use, step by step from scratch, with a focus on common TDengine operations in a Kubernetes environment.
|
||||
|
||||
To meet the [high availability](https://docs.taosdata.com/tdinternal/high-availability/) requirements, the cluster needs to satisfy the following:
|
||||
|
||||
- 3 or more dnodes: multiple vnodes in the same TDengine vgroup are not allowed to reside on the same dnode, so a database with 3 replicas requires at least 3 dnodes
|
||||
- 3 mnodes: the mnodes manage the entire cluster; TDengine defaults to a single mnode. If the dnode hosting that mnode goes offline, the entire cluster becomes unavailable.
|
||||
- 3 database replicas: TDengine's replica setting is per database, so with 3 replicas in a 3-dnode cluster, any single dnode can go offline without affecting normal use of the cluster. **If 2 dnodes go offline, the cluster becomes unavailable, because RAFT cannot complete the leader election.** (Enterprise edition: in disaster recovery scenarios, if any node's data files are damaged, the node can be recovered by relaunching the dnode.)
|
||||
|
||||
## Prerequisites
|
||||
|
||||
To deploy and manage a TDengine cluster with Kubernetes, the following preparations need to be made.
|
||||
|
||||
* This document applies to Kubernetes v1.5 and above
|
||||
* This chapter and the next use minikube, kubectl, helm, and other tools for installation and deployment; please install the corresponding software in advance
|
||||
* Kubernetes has been installed and deployed and can access or update the necessary container registries or other services
|
||||
- This document applies to Kubernetes v1.19 and above
|
||||
- This document uses the kubectl tool for installation and deployment; please install the corresponding software in advance
|
||||
- Kubernetes has been installed and deployed and can access or update the necessary container registries or other services
|
||||
|
||||
The following configuration files can also be downloaded from the [GitHub repository](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).
|
||||
|
||||
## Configure the Service
|
||||
|
||||
Create a Service configuration file `taosd-service.yaml`; the service name `metadata.name` (here "taosd") will be used in the next step. Add the ports used by TDengine:
|
||||
Create a Service configuration file `taosd-service.yaml`; the service name `metadata.name` (here "taosd") will be used in the next step. First add the ports used by TDengine, then set a fixed label app (here "tdengine") in the selector.
|
||||
|
||||
```yaml
|
||||
```YAML
|
||||
---
|
||||
apiVersion: v1
|
||||
kind: Service
|
||||
|
|
@ -42,10 +50,11 @@ spec:
|
|||
|
||||
## StatefulSet
|
||||
|
||||
According to Kubernetes instructions for the various deployment types, we will use a StatefulSet as the service type for TDengine.
|
||||
Create the file `tdengine.yaml`, where replicas defines the number of cluster nodes as 3. The node time zone is China (Asia/Shanghai), and each node is allocated 10G of standard storage. You can modify this according to your actual situation.
|
||||
According to Kubernetes instructions for the various deployment types, we will use a StatefulSet as the deployment resource type for TDengine. Create the file `tdengine.yaml`, where replicas defines the number of cluster nodes as 3. The node time zone is China (Asia/Shanghai), and each node is allocated 5G of standard storage (see [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) for configuring a storage class). You can modify this according to your actual situation.
|
||||
|
||||
```yaml
|
||||
Pay special attention to the startupProbe configuration. When a dnode's Pod goes offline for a period of time and then restarts, the newly launched dnode is briefly unavailable. If the startupProbe window is too short, Kubernetes will consider the Pod unhealthy and try to restart it; the dnode's Pod will then restart repeatedly and never reach a normal state. See [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/)
|
||||
|
||||
```YAML
|
||||
---
|
||||
apiVersion: apps/v1
|
||||
kind: StatefulSet
|
||||
|
|
@ -69,7 +78,7 @@ spec:
|
|||
spec:
|
||||
containers:
|
||||
- name: "tdengine"
|
||||
image: "tdengine/tdengine:3.0.0.0"
|
||||
image: "tdengine/tdengine:3.0.7.1"
|
||||
imagePullPolicy: "IfNotPresent"
|
||||
ports:
|
||||
- name: tcp6030
|
||||
|
|
@ -108,6 +117,12 @@ spec:
|
|||
volumeMounts:
|
||||
- name: taosdata
|
||||
mountPath: /var/lib/taos
|
||||
startupProbe:
|
||||
exec:
|
||||
command:
|
||||
- taos-check
|
||||
failureThreshold: 360
|
||||
periodSeconds: 10
|
||||
readinessProbe:
|
||||
exec:
|
||||
command:
|
||||
|
|
@ -129,199 +144,373 @@ spec:
|
|||
storageClassName: "standard"
|
||||
resources:
|
||||
requests:
|
||||
storage: "10Gi"
|
||||
storage: "5Gi"
|
||||
```
|
||||
|
||||
## Deploy the TDengine cluster with kubectl
|
||||
|
||||
Execute the following commands in order.
|
||||
First create the corresponding namespace, then execute the following commands in order:
|
||||
|
||||
```bash
|
||||
kubectl apply -f taosd-service.yaml
|
||||
kubectl apply -f tdengine.yaml
|
||||
```Bash
|
||||
kubectl apply -f taosd-service.yaml -n tdengine-test
|
||||
kubectl apply -f tdengine.yaml -n tdengine-test
|
||||
```
|
||||
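The namespace itself is created with a plain kubectl command; a minimal sketch, with tdengine-test matching the namespace used throughout this section:

```Bash
kubectl create namespace tdengine-test
```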
|
||||
The configuration above creates a three-node TDengine cluster. The dnodes are configured automatically; you can use the show dnodes command to view the current cluster nodes:
|
||||
|
||||
```bash
|
||||
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
|
||||
kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
|
||||
kubectl exec -i -t tdengine-2 -- taos -s "show dnodes"
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
|
||||
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
|
||||
```
|
||||
|
||||
The output is as follows:
|
||||
|
||||
```
|
||||
```Bash
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
============================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
|
||||
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
|
||||
Query OK, 3 rows in database (0.003655s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
|
||||
2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
|
||||
3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
|
||||
Query OK, 3 row(s) in set (0.001853s)
|
||||
```
|
||||
|
||||
Check the current mnode:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
|
||||
taos> show mnodes\G
|
||||
*************************** 1.row ***************************
|
||||
id: 1
|
||||
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: leader
|
||||
status: ready
|
||||
create_time: 2023-07-19 17:54:18.559
|
||||
reboot_time: 2023-07-19 17:54:19.520
|
||||
Query OK, 1 row(s) in set (0.001282s)
|
||||
```
|
||||
|
||||
## Create mnodes
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
|
||||
```
|
||||
|
||||
Check the mnodes:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
|
||||
|
||||
taos> show mnodes\G
|
||||
*************************** 1.row ***************************
|
||||
id: 1
|
||||
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: leader
|
||||
status: ready
|
||||
create_time: 2023-07-19 17:54:18.559
|
||||
reboot_time: 2023-07-20 09:19:36.060
|
||||
*************************** 2.row ***************************
|
||||
id: 2
|
||||
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: follower
|
||||
status: ready
|
||||
create_time: 2023-07-20 09:22:05.600
|
||||
reboot_time: 2023-07-20 09:22:12.838
|
||||
*************************** 3.row ***************************
|
||||
id: 3
|
||||
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: follower
|
||||
status: ready
|
||||
create_time: 2023-07-20 09:22:20.042
|
||||
reboot_time: 2023-07-20 09:22:23.271
|
||||
Query OK, 3 row(s) in set (0.003108s)
|
||||
```
|
||||
|
||||
## Enable port forwarding
|
|
|
|
Use kubectl port forwarding so that applications can access the TDengine cluster running in the Kubernetes environment.
|
||||
|
||||
```
|
||||
kubectl port-forward tdengine-0 6041:6041 &
|
||||
```Plain
|
||||
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
|
||||
```
|
||||
|
||||
Use the curl command to verify port 6041, which is used by the TDengine REST API.
|
||||
|
||||
```
|
||||
$ curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
|
||||
Handling connection for 6041
|
||||
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["comp","TINYINT",1],["precision","VARCHAR",2],["status","VARCHAR",10],["retention","VARCHAR",60],["single_stable","BOOL",1],["cachemodel","VARCHAR",11],["cachesize","INT",4],["wal_level","TINYINT",1],["wal_fsync_period","INT",4],["wal_retention_period","INT",4],["wal_retention_size","BIGINT",8]],"data":[["information_schema",null,null,16,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null],["performance_schema",null,null,10,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null]],"rows":2}
|
||||
```Plain
|
||||
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
|
||||
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
|
||||
```
|
||||
|
||||
## Graphical management with the dashboard
|
||||
## Cluster test
|
||||
|
||||
minikube provides the dashboard command to support a graphical management interface.
|
||||
### Data preparation
|
||||
|
||||
```
|
||||
$ minikube dashboard
|
||||
* Verifying dashboard health ...
|
||||
* Launching proxy ...
|
||||
* Verifying proxy health ...
|
||||
* Opening http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
|
||||
http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
|
||||
#### taosBenchmark
|
||||
|
||||
Use taosBenchmark to create a database with 3 replicas, write 100 million rows of data into it, and then examine the data:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taosBenchmark -I stmt -d test -n 10000 -t 10000 -a 3
|
||||
|
||||
# query data
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select count(*) from test.meters;"
|
||||
|
||||
taos> select count(*) from test.meters;
|
||||
count(*) |
|
||||
========================
|
||||
100000000 |
|
||||
Query OK, 1 row(s) in set (0.103537s)
|
||||
```
|
||||
|
||||
In some public cloud environments, minikube is bound to the 127.0.0.1 IP address and cannot be accessed remotely. In that case, use the kubectl proxy command to map the port to the 0.0.0.0 IP address, then access the dashboard remotely through a browser with the VM's public IP and port and the same dashboard URL path.
|
||||
Check the vnode distribution with show dnodes:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 8 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
|
||||
2 | tdengine-1.ta... | 8 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
|
||||
3 | tdengine-2.ta... | 8 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
|
||||
Query OK, 3 row(s) in set (0.001357s)
|
||||
```
|
||||
$ kubectl proxy --accept-hosts='^.*$' --address='0.0.0.0'
|
||||
|
||||
Check the vnode distribution with show vgroups:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test.vgroups"
|
||||
|
||||
taos> show test.vgroups
|
||||
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
|
||||
==============================================================================================================================================================================================
|
||||
2 | test | 1267 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
|
||||
3 | test | 1215 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
4 | test | 1215 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
5 | test | 1307 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
6 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
|
||||
7 | test | 1275 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
8 | test | 1231 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
9 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
|
||||
Query OK, 8 row(s) in set (0.001488s)
|
||||
```
|
||||
|
||||
#### Manual creation
|
||||
|
||||
Create a database test1 with three replicas, create a table, and write 2 rows of data:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- \
|
||||
taos -s \
|
||||
"create database if not exists test1 replica 3;
|
||||
use test1;
|
||||
create table if not exists t1(ts timestamp, n int);
|
||||
insert into t1 values(now, 1)(now+1s, 2);"
|
||||
```
|
||||
|
||||
Check the vnode distribution with show test1.vgroups:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test1.vgroups"
|
||||
|
||||
taos> show test1.vgroups
|
||||
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
|
||||
==============================================================================================================================================================================================
|
||||
10 | test1 | 1 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
|
||||
11 | test1 | 0 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
|
||||
Query OK, 2 row(s) in set (0.001489s)
|
||||
```
|
||||
|
||||
### Fault tolerance test
|
||||
|
||||
The dnode hosting the mnode leader, dnode 1, goes offline:
|
||||
|
||||
```Bash
|
||||
kubectl get pod -l app=tdengine -n tdengine-test -o wide
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
tdengine-0 0/1 ErrImagePull 2 (2s ago) 20m 10.244.2.75 node86 <none> <none>
|
||||
tdengine-1 1/1 Running 1 (6m48s ago) 20m 10.244.0.59 node84 <none> <none>
|
||||
tdengine-2 1/1 Running 0 21m 10.244.1.223 node85 <none> <none>
|
||||
```
|
||||
|
||||
At this point the cluster mnodes hold a new election, and the mnode on dnode 2 becomes the leader:
|
||||
|
||||
```Bash
|
||||
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
|
||||
Welcome to the TDengine Command Line Interface, Client Version:3.0.7.1.202307190706
|
||||
Copyright (c) 2022 by TDengine, all rights reserved.
|
||||
|
||||
taos> show mnodes\G
|
||||
*************************** 1.row ***************************
|
||||
id: 1
|
||||
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: offline
|
||||
status: offline
|
||||
create_time: 2023-07-19 17:54:18.559
|
||||
reboot_time: 1970-01-01 08:00:00.000
|
||||
*************************** 2.row ***************************
|
||||
id: 2
|
||||
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: leader
|
||||
status: ready
|
||||
create_time: 2023-07-20 09:22:05.600
|
||||
reboot_time: 2023-07-20 09:32:00.227
|
||||
*************************** 3.row ***************************
|
||||
id: 3
|
||||
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
|
||||
role: follower
|
||||
status: ready
|
||||
create_time: 2023-07-20 09:22:20.042
|
||||
reboot_time: 2023-07-20 09:32:00.026
|
||||
Query OK, 3 row(s) in set (0.001513s)
|
||||
```
|
||||
|
||||
The cluster can still read and write normally:
|
||||
|
||||
```Bash
|
||||
# insert
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "insert into test1.t1 values(now, 1)(now+1s, 2);"
|
||||
|
||||
taos> insert into test1.t1 values(now, 1)(now+1s, 2);
|
||||
Insert OK, 2 row(s) affected (0.002098s)
|
||||
|
||||
# select
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select *from test1.t1"
|
||||
|
||||
taos> select *from test1.t1
|
||||
ts | n |
|
||||
========================================
|
||||
2023-07-19 18:04:58.104 | 1 |
|
||||
2023-07-19 18:04:59.104 | 2 |
|
||||
2023-07-19 18:06:00.303 | 1 |
|
||||
2023-07-19 18:06:01.303 | 2 |
|
||||
Query OK, 4 row(s) in set (0.001994s)
|
||||
```
|
||||
|
||||
Similarly, if a non-leader mnode goes offline, reads and writes of course continue to work, so this is not demonstrated further here.
|
||||
|
||||
## Scale out the cluster
|
||||
|
||||
The TDengine cluster supports automatic scale-out:
|
||||
|
||||
```bash
|
||||
```Bash
|
||||
kubectl scale statefulsets tdengine --replicas=4
|
||||
```
|
||||
|
||||
The parameter `--replicas=4` in the command above means scaling the TDengine cluster out to 4 nodes. After running it, first check the status of the Pods:
|
||||
|
||||
```bash
|
||||
kubectl get pods -l app=tdengine
|
||||
```Bash
|
||||
kubectl get pod -l app=tdengine -n tdengine-test -o wide
|
||||
```
|
||||
|
||||
The output is as follows:
|
||||
|
||||
```
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
tdengine-0 1/1 Running 0 161m
|
||||
tdengine-1 1/1 Running 0 161m
|
||||
tdengine-2 1/1 Running 0 32m
|
||||
tdengine-3 1/1 Running 0 32m
|
||||
```Plain
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
tdengine-0 1/1 Running 4 (6h26m ago) 6h53m 10.244.2.75 node86 <none> <none>
|
||||
tdengine-1 1/1 Running 1 (6h39m ago) 6h53m 10.244.0.59 node84 <none> <none>
|
||||
tdengine-2 1/1 Running 0 5h16m 10.244.1.224 node85 <none> <none>
|
||||
tdengine-3 1/1 Running 0 3m24s 10.244.2.76 node86 <none> <none>
|
||||
```
|
||||
|
||||
At this point the status of the POD is still Running; the dnode status in the TDengine cluster can only be seen after the POD status becomes `ready`:
|
||||
At this point the status of the Pod is still Running; the dnode status in the TDengine cluster can only be seen after the Pod status becomes `ready`:
|
||||
|
||||
```bash
|
||||
kubectl exec -i -t tdengine-3 -- taos -s "show dnodes"
|
||||
```Bash
|
||||
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
|
||||
```
|
||||
|
||||
The dnode list of the scaled-out four-node TDengine cluster:
|
||||
|
||||
```
|
||||
```Plain
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
============================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
|
||||
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
|
||||
4 | tdengine-3.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:33:16.039 | |
|
||||
Query OK, 4 rows in database (0.008377s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
|
||||
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
|
||||
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
|
||||
4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
|
||||
Query OK, 4 row(s) in set (0.003628s)
|
||||
```
|
||||
|
||||
## Scale in the cluster
|
||||
|
||||
Since TDengine migrates data between nodes during scale-in and scale-out, scaling in with kubectl requires first using the "drop dnodes" command, and only after the node has been removed can the Kubernetes cluster be scaled in.
|
||||
Since TDengine migrates data between nodes during scale-in and scale-out, scaling in with kubectl requires first using the "drop dnodes" command (**if the cluster contains a database with 3 replicas, the number of dnodes after scale-in must still be greater than or equal to 3; otherwise the drop dnode operation will be aborted**), and only after the node has been removed can the Kubernetes cluster be scaled in.
|
||||
|
||||
Note: since Pods in a Kubernetes StatefulSet can only be removed in reverse order of creation, TDengine drop dnode must also be performed in reverse order of creation; otherwise the Pod will end up in an error state.
|
||||
|
||||
```
|
||||
$ kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 4"
|
||||
```
|
||||
|
||||
```bash
|
||||
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
|
||||
```Bash
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "drop dnode 4"
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | note |
|
||||
============================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
|
||||
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
|
||||
Query OK, 3 rows in database (0.004861s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
|
||||
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
|
||||
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
|
||||
Query OK, 3 row(s) in set (0.003324s)
|
||||
```
|
||||
|
||||
After confirming the removal succeeded (use kubectl exec -i -t tdengine-0 -- taos -s "show dnodes" to check and confirm the dnode list), use the kubectl command to remove the Pod:
|
||||
|
||||
```
|
||||
kubectl scale statefulsets tdengine --replicas=3
|
||||
```Plain
|
||||
kubectl scale statefulsets tdengine --replicas=3 -n tdengine-test
|
||||
```
|
||||
|
||||
The last Pod will be deleted. Use the command kubectl get pods -l app=tdengine to check the Pod status:
|
||||
|
||||
```
|
||||
$ kubectl get pods -l app=tdengine
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
tdengine-0 1/1 Running 0 4m7s
|
||||
tdengine-1 1/1 Running 0 3m55s
|
||||
tdengine-2 1/1 Running 0 2m28s
|
||||
```Plain
|
||||
kubectl get pod -l app=tdengine -n tdengine-test -o wide
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
tdengine-0 1/1 Running 4 (6h55m ago) 7h22m 10.244.2.75 node86 <none> <none>
|
||||
tdengine-1 1/1 Running 1 (7h9m ago) 7h23m 10.244.0.59 node84 <none> <none>
|
||||
tdengine-2 1/1 Running 0 5h45m 10.244.1.224 node85 <none> <none>
|
||||
```
|
||||
|
||||
After the Pod is deleted, the PVC needs to be deleted manually; otherwise the old data will be reused in the next scale-out, and the node will fail to join the cluster.
|
||||
|
||||
```bash
|
||||
$ kubectl delete pvc taosdata-tdengine-3
|
||||
```Bash
|
||||
kubectl delete pvc taosdata-tdengine-3 -n tdengine-test
|
||||
```
|
||||
|
||||
The cluster is now in a safe state and can be scaled out again when needed:
|
||||
|
||||
```bash
|
||||
$ kubectl scale statefulsets tdengine --replicas=4
|
||||
```Bash
|
||||
kubectl scale statefulsets tdengine --replicas=4 -n tdengine-test
|
||||
statefulset.apps/tdengine scaled
|
||||
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
tdengine-0 1/1 Running 0 35m
|
||||
tdengine-1 1/1 Running 0 34m
|
||||
tdengine-2 1/1 Running 0 12m
|
||||
tdengine-3 0/1 ContainerCreating 0 4s
|
||||
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
|
||||
NAME READY STATUS RESTARTS AGE
|
||||
tdengine-0 1/1 Running 0 35m
|
||||
tdengine-1 1/1 Running 0 34m
|
||||
tdengine-2 1/1 Running 0 12m
|
||||
tdengine-3 0/1 Running 0 7s
|
||||
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
|
||||
|
||||
kubectl get pod -l app=tdengine -n tdengine-test -o wide
|
||||
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
tdengine-0 1/1 Running 4 (6h59m ago) 7h27m 10.244.2.75 node86 <none> <none>
|
||||
tdengine-1 1/1 Running 1 (7h13m ago) 7h27m 10.244.0.59 node84 <none> <none>
|
||||
tdengine-2 1/1 Running 0 5h49m 10.244.1.224 node85 <none> <none>
|
||||
tdengine-3 1/1 Running 0 20s 10.244.2.77 node86 <none> <none>
|
||||
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
|
||||
======================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
|
||||
5 | tdengine-2.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:01:36.479 | |
|
||||
6 | tdengine-3.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:13:54.411 | |
|
||||
Query OK, 4 row(s) in set (0.001348s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
|
||||
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
|
||||
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
|
||||
5 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | | | |
|
||||
Query OK, 4 row(s) in set (0.003881s)
|
||||
```
|
||||
|
||||
## Clean up the TDengine cluster
|
||||
|
||||
> **When deleting a PVC, pay attention to the PV persistentVolumeReclaimPolicy. It is recommended to set it to Delete, so that deleting the PVC automatically cleans up the PV along with the underlying CSI storage resources. Without a policy that automatically deletes the PV when the PVC is deleted, the CSI storage resources corresponding to the PV may not be released even after you delete the PVC and manually clean up the PV.**
|
||||
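A hedged sketch of setting that policy on an existing PV; the PV name below is hypothetical, so list yours with kubectl get pv first.

```Bash
# Switch the reclaim policy so that deleting the PVC also releases the PV
# and its underlying CSI storage.
kubectl patch pv pvc-d8a91f0e-0000-0000-0000-000000000000 \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```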
|
||||
To remove a TDengine cluster completely, clean up the statefulset, svc, configmap, and pvc respectively.
|
||||
|
||||
```bash
|
||||
kubectl delete statefulset -l app=tdengine
|
||||
kubectl delete svc -l app=tdengine
|
||||
kubectl delete pvc -l app=tdengine
|
||||
kubectl delete configmap taoscfg
|
||||
|
||||
```Bash
|
||||
kubectl delete statefulset -l app=tdengine -n tdengine-test
|
||||
kubectl delete svc -l app=tdengine -n tdengine-test
|
||||
kubectl delete pvc -l app=tdengine -n tdengine-test
|
||||
kubectl delete configmap taoscfg -n tdengine-test
|
||||
```
|
||||
|
||||
## Common errors
|
||||
|
|
@ -330,65 +519,26 @@ kubectl delete configmap taoscfg
|
|||
|
||||
Scaling in directly without executing "drop dnode": because TDengine has not yet removed the node, scaling in the Pods leaves some nodes in the TDengine cluster in the offline state.
|
||||
|
||||
```
|
||||
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
|
||||
```Plain
|
||||
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
|
||||
|
||||
taos> show dnodes
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
|
||||
======================================================================================================================================
|
||||
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
|
||||
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
|
||||
5 | tdengine-2.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:01:36.479 | status msg timeout |
|
||||
6 | tdengine-3.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:13:54.411 | status msg timeout |
|
||||
Query OK, 4 row(s) in set (0.001323s)
|
||||
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
|
||||
=============================================================================================================================================================================================================================================
|
||||
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
|
||||
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
|
||||
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
|
||||
5 | tdengine-3.ta... | 0 | 16 | offline | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | status msg timeout | | |
|
||||
Query OK, 4 row(s) in set (0.003862s)
|
||||
```
|
||||
|
||||
### Error two

## Closing remarks

The TDengine cluster honors the replica parameter; if the number of nodes after scale-in falls below this value, the cluster becomes unusable:

For high availability and high reliability of TDengine in a Kubernetes environment, hardware failure and disaster recovery are addressed on two levels:

Create a database with the replica parameter set to 2 and insert some data:

1. The underlying distributed block storage provides disaster recovery through storage replicas; popular distributed block storage such as Ceph offers multi-replica capability, spreading storage replicas across racks, cabinets, server rooms, and data centers (or you can simply use the block storage services offered by public cloud vendors)
2. TDengine's own disaster recovery: TDengine Enterprise can bring up a blank dnode to take over the work of a dnode that has gone permanently offline (physical disk damage, data loss).

```bash
kubectl exec -i -t tdengine-0 -- \
  taos -s \
  "create database if not exists test replica 2;
   use test;
   create table if not exists t1(ts timestamp, n int);
   insert into t1 values(now, 1)(now+1s, 2);"
```

Finally, welcome to [TDengine Cloud](https://cloud.taosdata.com/) to experience the one-stop, fully managed TDengine cloud service.

Scale in to a single node:

```bash
kubectl scale statefulsets tdengine --replicas=1
```

All database operations in the TDengine CLI will then fail:
```
taos> show dnodes;
id | end_point                      | vnodes | cores | status  | role |      create_time        |   offline reason   |
======================================================================================================================================
 1 | tdengine-0.taosd.default.sv... |      2 |    40 | ready   | any  | 2021-06-01 15:55:52.562 |                    |
 2 | tdengine-1.taosd.default.sv... |      1 |    40 | offline | any  | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000845s)

taos> show dnodes;
id | end_point                      | vnodes | cores | status  | role |      create_time        |   offline reason   |
======================================================================================================================================
 1 | tdengine-0.taosd.default.sv... |      2 |    40 | ready   | any  | 2021-06-01 15:55:52.562 |                    |
 2 | tdengine-1.taosd.default.sv... |      1 |    40 | offline | any  | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000837s)

taos> use test;
Database changed.

taos> insert into t1 values(now, 3);

DB error: Unable to resolve FQDN (0.013874s)
```

> TDengine Cloud is a minimalist, fully managed time-series data processing cloud platform built on the open source time-series database TDengine. In addition to the high-performance time-series database, it provides system features such as caching, subscription, and stream computing, along with convenient and secure data sharing and many enterprise-grade capabilities. It lets enterprises in IoT, Industrial Internet, finance, IT operations monitoring, and other fields dramatically reduce the labor and operating costs of managing time-series data.
@@ -6,7 +6,7 @@ description: Multiple ways to deploy a TDengine cluster

TDengine supports clustering and provides horizontal scalability: to gain more processing power, simply add nodes. TDengine uses virtual-node technology, virtualizing one node into multiple virtual nodes to achieve load balancing. Virtual nodes on different nodes can also form virtual node groups, and the multi-replica mechanism guarantees high availability of the system. TDengine's clustering capability is fully open source.

This chapter mainly describes how to deploy a cluster manually on hosts, and how to deploy a cluster with Kubernetes and Helm.
This chapter mainly describes how to deploy a cluster manually on hosts, with Docker, and with Kubernetes and Helm.

```mdx-code-block
import DocCardList from '@theme/DocCardList';
@@ -42,12 +42,21 @@ CREATE DATABASE db_name PRECISION 'ns';
| 14 | NCHAR | user-defined | Records strings that may contain multi-byte characters, such as Chinese characters. Each NCHAR character occupies 4 bytes of storage. Strings are quoted with single quotes, and single quotes inside a string are escaped as `\'`. A size must be specified when using NCHAR: a column of type NCHAR(10) stores at most 10 NCHAR characters. An error is reported if the input exceeds the declared length. |
| 15 | JSON | | JSON data type; only tags can be in JSON format |
| 16 | VARCHAR | user-defined | Alias for the BINARY type |
| 17 | GEOMETRY | user-defined | Geometry type |

:::note

- The length of each table row cannot exceed 48 KB (64 KB since version 3.0.5.0). (Note: each BINARY/NCHAR column takes 2 extra bytes of storage.)
- The length of each table row cannot exceed 48 KB (64 KB since version 3.0.5.0). (Note: each BINARY/NCHAR/GEOMETRY column takes 2 extra bytes of storage.)
- Although the BINARY type supports byte-level binary characters at the storage layer, different programming languages do not guarantee consistent handling of binary data, so it is recommended to store only visible ASCII characters in BINARY columns and to avoid invisible characters. Multi-byte data such as Chinese characters should be stored with the NCHAR type. If Chinese characters are forced into a BINARY column, reads and writes may sometimes work, but the data carries no character-set information and can easily become garbled or even corrupted.
- A BINARY value can theoretically be up to 16,374 bytes long (since 3.0.5.0: 65,517 bytes for data columns, 16,382 bytes for tag columns). BINARY supports only string input, and strings must be quoted with single quotes. A size must be specified: BINARY(20) defines a string of at most 20 single-byte characters, each occupying 1 byte of storage, for a fixed total of 20 bytes; an error is reported if the input exceeds 20 bytes. Single quotes inside the string are written with the escape character backslash plus single quote, i.e. `\'`.
- GEOMETRY data columns can be at most 65,517 bytes long, and tag columns at most 16,382 bytes. 2D POINT, LINESTRING, and POLYGON subtype data are supported. Lengths are computed as in the following table (see the sketch after this note):

| # | **Syntax** | **Min length** | **Max length** | **Growth per coordinate pair** |
|---|--------------------------------------|----------|------------|--------------|
| 1 | POINT(1.0 1.0) | 21 | 21 | none |
| 2 | LINESTRING(1.0 1.0, 2.0 2.0) | 9+2*16 | 9+4094*16 | +16 |
| 3 | POLYGON((1.0 1.0, 2.0 2.0, 1.0 1.0)) | 13+3*16 | 13+4094*16 | +16 |

- Numeric values in SQL statements are treated as integer or floating point depending on whether they contain a decimal point or use scientific notation, so beware of type overflow. For example, 9999999999999999999 is considered to exceed the upper bound of a signed long integer and overflows, while 9999999999999999999.0 is treated as a valid floating-point number.

:::
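Since GEOMETRY is new here, a minimal usage sketch may help; the table and column names are hypothetical, and this assumes WKT literals are accepted on insert:

```sql
-- Hypothetical table with a GEOMETRY data column; the length bound follows the table above.
CREATE TABLE points (ts TIMESTAMP, location GEOMETRY(64));
-- A 2D point written as a WKT literal (21 bytes per the length table above).
INSERT INTO points VALUES (NOW, 'POINT(1.0 1.0)');
```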
|
@ -73,7 +73,7 @@ database_option: {
|
|||
- TABLE_PREFIX:当其为正值时,在决定把一个表分配到哪个 vgroup 时要忽略表名中指定长度的前缀;当其为负值时,在决定把一个表分配到哪个 vgroup 时只使用表名中指定长度的前缀;例如,假定表名为 "v30001",当 TSDB_PREFIX = 2 时 使用 "0001" 来决定分配到哪个 vgroup ,当 TSDB_PREFIX = -2 时使用 "v3" 来决定分配到哪个 vgroup
|
||||
- TABLE_SUFFIX:当其为正值时,在决定把一个表分配到哪个 vgroup 时要忽略表名中指定长度的后缀;当其为负值时,在决定把一个表分配到哪个 vgroup 时只使用表名中指定长度的后缀;例如,假定表名为 "v30001",当 TSDB_SUFFIX = 2 时 使用 "v300" 来决定分配到哪个 vgroup ,当 TSDB_SUFFIX = -2 时使用 "01" 来决定分配到哪个 vgroup。
|
||||
- TSDB_PAGESIZE:一个 VNODE 中时序数据存储引擎的页大小,单位为 KB,默认为 4 KB。范围为 1 到 16384,即 1 KB到 16 MB。
|
||||
- WAL_RETENTION_PERIOD: 为了数据订阅消费,需要WAL日志文件额外保留的最大时长策略。WAL日志清理,不受订阅客户端消费状态影响。单位为 s。默认为 0,表示无需为订阅保留。新建订阅,应先设置恰当的时长策略。
|
||||
- WAL_RETENTION_PERIOD: 为了数据订阅消费,需要WAL日志文件额外保留的最大时长策略。WAL日志清理,不受订阅客户端消费状态影响。单位为 s。默认为 3600,表示在 WAL 保留最近 3600 秒的数据,请根据数据订阅的需要修改这个参数为适当值。
|
||||
- WAL_RETENTION_SIZE:为了数据订阅消费,需要WAL日志文件额外保留的最大累计大小策略。单位为 KB。默认为 0,表示累计大小无上限。
|
||||
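A hedged sketch of the two subscription-related options above (the database name is assumed):

```sql
-- Retain one hour of WAL data for subscribers, with no cumulative size cap.
CREATE DATABASE tmq_db WAL_RETENTION_PERIOD 3600 WAL_RETENTION_SIZE 0;
```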
### Example: creating a database
@@ -82,7 +82,7 @@ create database if not exists db vgroups 10 buffer 10
```

The example above creates a database named db with 10 vgroups, where each vnode is also allocated a 10 MB write cache
The example above creates a database named db with 10 vgroups, where each vnode is allocated a 10 MB write cache

### Using the database
@@ -43,12 +43,11 @@ table_option: {

1. The first column of a table must be of TIMESTAMP type; the system automatically makes it the primary key;
2. The maximum length of a table name is 192;
3. The length of each table row cannot exceed 48 KB (64 KB since version 3.0.5.0); (note: each BINARY/NCHAR column takes 2 extra bytes of storage)
3. The length of each table row cannot exceed 48 KB (64 KB since version 3.0.5.0); (note: each BINARY/NCHAR/GEOMETRY column takes 2 extra bytes of storage)
4. Subtable names may only contain letters, digits, and underscores, must not start with a digit, and are case-insensitive
5. When using the data types binary or nchar, the maximum number of bytes must be specified, e.g. binary(20) means 20 bytes;
5. When using the data types BINARY/NCHAR/GEOMETRY, the maximum number of bytes must be specified, e.g. BINARY(20) means 20 bytes;
6. To support more forms of table names, TDengine introduces the escape character "\`", which lets table names avoid conflicts with keywords while bypassing the name-legality checks above. The length limit still applies. With escaping, the content inside the escape characters is no longer case-normalized.
   For example: \`aBc\` and \`abc\` are different table names, whereas abc and aBc are the same table name (see the sketch after this list).
   Note that the content inside the escape characters must be printable characters.
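A small sketch of rule 6 (table schemas assumed):

```sql
-- With escaping, case is preserved, so these create two distinct tables.
CREATE TABLE `aBc` (ts TIMESTAMP, n INT);
CREATE TABLE `abc` (ts TIMESTAMP, n INT);
```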
**Parameter description**
@@ -82,7 +82,7 @@ INSERT INTO d1001 (ts, current, phase) VALUES ('2021-07-13 14:06:33.196', 10.27,

```sql
INSERT INTO d1001 VALUES ('2021-07-13 14:06:34.630', 10.2, 219, 0.32) ('2021-07-13 14:06:35.779', 10.15, 217, 0.33)
            d1002 (ts, current, phase) VALUES ('2021-07-13 14:06:34.255', 10.27, 0.31);
```

## Automatic table creation on insert
@@ -315,7 +315,7 @@ WHERE (column|tbname) match/MATCH/nmatch/NMATCH _regex_

### Usage restrictions

Regular-expression filtering can only be applied to table names (i.e. tbname filtering) and tag values of binary/nchar type; filtering on ordinary columns is not supported.
Regular-expression filtering can only be applied to table names (i.e. tbname filtering) and values of binary/nchar type.

The regular-expression string cannot exceed 128 bytes. The maximum allowed length can be adjusted with the parameter _maxRegexStringLen_, which is a client-side configuration parameter and requires a restart to take effect, as illustrated below.
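A hedged example of tbname filtering with a regular expression (the supertable name is assumed):

```sql
-- Count rows only from subtables whose names look like d1001, d1002, ...
SELECT COUNT(*) FROM meters WHERE tbname MATCH '^d[0-9]+$';
```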
@@ -1265,3 +1265,140 @@ SELECT SERVER_STATUS();
```

**Description**: checks whether all dnodes of the server are online; if so, success is returned, otherwise an error stating that the connection cannot be established is returned.
## Geometry functions

### Geometry input functions:

#### ST_GeomFromText

```sql
ST_GeomFromText(VARCHAR WKT expr)
```

**Description**: creates geometry data from the specified geometry value based on its Well-Known Text (WKT) representation.

**Return type**: GEOMETRY

**Applicable data types**: VARCHAR

**Applicable table types**: standard tables and supertables

**Usage notes**: the input can be one of the WKT strings, e.g. POINT, LINESTRING, POLYGON, MULTIPOINT, MULTILINESTRING, MULTIPOLYGON, GEOMETRYCOLLECTION. The output is the GEOMETRY data type, defined as a binary string.
### Geometry output functions:

#### ST_AsText

```sql
ST_AsText(GEOMETRY geom)
```

**Description**: returns the specified Well-Known Text (WKT) representation of the geometry data.

**Return type**: VARCHAR

**Applicable data types**: GEOMETRY

**Applicable table types**: standard tables and supertables

**Usage notes**: the output can be one of the WKT strings, e.g. POINT, LINESTRING, POLYGON, MULTIPOINT, MULTILINESTRING, MULTIPOLYGON, GEOMETRYCOLLECTION.
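A small round-trip sketch combining the input and output functions above:

```sql
-- Parse WKT into GEOMETRY and convert it back; the result is a WKT string such as 'POINT(1.000000 1.000000)'.
SELECT ST_AsText(ST_GeomFromText('POINT(1.0 1.0)'));
```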
### Geometry relation functions:

#### ST_Intersects

```sql
ST_Intersects(GEOMETRY geomA, GEOMETRY geomB)
```

**Description**: compares two geometry objects and returns true if they intersect.

**Return type**: BOOL

**Applicable data types**: GEOMETRY, GEOMETRY

**Applicable table types**: standard tables and supertables

**Usage notes**: two geometry objects intersect if they share any single point.
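For instance (literal values assumed):

```sql
-- The point lies on the line segment, so the two geometries share a point and this returns true.
SELECT ST_Intersects(ST_GeomFromText('POINT(1.0 1.0)'), ST_GeomFromText('LINESTRING(0.0 0.0, 2.0 2.0)'));
```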
#### ST_Equals

```sql
ST_Equals(GEOMETRY geomA, GEOMETRY geomB)
```

**Description**: returns TRUE if the given geometry objects are "spatially equal".

**Return type**: BOOL

**Applicable data types**: GEOMETRY, GEOMETRY

**Applicable table types**: standard tables and supertables

**Usage notes**: "spatially equal" means ST_Contains(A,B) = true and ST_Contains(B,A) = true; the order of points may differ, but they represent the same geometric structure.

#### ST_Touches

```sql
ST_Touches(GEOMETRY geomA, GEOMETRY geomB)
```

**Description**: returns TRUE if A and B intersect but their interiors do not.

**Return type**: BOOL

**Applicable data types**: GEOMETRY, GEOMETRY

**Applicable table types**: standard tables and supertables

**Usage notes**: A and B share at least one common point, and those common points lie in at least one boundary. For point/point input the relation is always FALSE, since points have no boundary.

#### ST_Covers

```sql
ST_Covers(GEOMETRY geomA, GEOMETRY geomB)
```

**Description**: returns TRUE if every point of B lies within geometry A (intersects the interior or the boundary).

**Return type**: BOOL

**Applicable data types**: GEOMETRY, GEOMETRY

**Applicable table types**: standard tables and supertables

**Usage notes**: A covers B means no point of B lies outside (in the exterior of) A.

#### ST_Contains

```sql
ST_Contains(GEOMETRY geomA, GEOMETRY geomB)
```

**Description**: returns TRUE if geometry A contains geometry B.

**Return type**: BOOL

**Applicable data types**: GEOMETRY, GEOMETRY

**Applicable table types**: standard tables and supertables

**Usage notes**: A contains B if and only if all points of B lie inside A (i.e. in the interior or on the boundary) (or, equivalently, no point of B lies in the exterior of A), and the interiors of A and B have at least one common point.

#### ST_ContainsProperly

```sql
ST_ContainsProperly(GEOMETRY geomA, GEOMETRY geomB)
```

**Description**: returns TRUE if every point of B lies in the interior of A.

**Return type**: BOOL

**Applicable data types**: GEOMETRY, GEOMETRY

**Applicable table types**: standard tables and supertables

**Usage notes**: no point of B lies on the boundary or in the exterior of A.
@@ -39,7 +39,7 @@ TDengine supports the `UNION ALL` and `UNION` operators. UNION ALL combines the rows returned by the queries
| 3 | \>, < | all types except BLOB, MEDIUMBLOB, and JSON | greater than, less than |
| 4 | \>=, <= | all types except BLOB, MEDIUMBLOB, and JSON | greater than or equal, less than or equal |
| 5 | IS [NOT] NULL | all types | whether the value is NULL |
| 6 | [NOT] BETWEEN AND | all types except BOOL, BLOB, MEDIUMBLOB, and JSON | closed-interval comparison |
| 6 | [NOT] BETWEEN AND | all types except BOOL, BLOB, MEDIUMBLOB, JSON, and GEOMETRY | closed-interval comparison |
| 7 | IN | all types except BLOB, MEDIUMBLOB, and JSON; cannot be the table's timestamp primary-key column | equal to any value in the list |
| 8 | LIKE | BINARY, NCHAR, and VARCHAR | wildcard match |
| 9 | MATCH, NMATCH | BINARY, NCHAR, and VARCHAR | regular-expression match |
@@ -10,11 +10,9 @@ description: Legal character set and naming restrictions
2. Names may begin with an English letter or an underscore, but not with a digit
3. Case-insensitive
4. Rules for escaped table (column) names:
   To support more forms of table (column) names, TDengine introduces the escape character "`", which lets names avoid conflicts with keywords while bypassing the name-legality checks above
   Escaped table (column) names are still subject to the length limit, and the escape characters do not count toward the length. With escaping, the content inside the escape characters is no longer case-normalized
   To support more forms of table (column) names, TDengine introduces the escape character "`". With escaping, the content inside the escape characters is no longer case-normalized, i.e. the case of the user-specified name is preserved.

   For example: \`aBc\` and \`abc\` are different table (column) names, whereas abc and aBc are the same table (column) name.
   Note that the content inside the escape characters must be printable characters.

## Legal character set for passwords
@@ -48,13 +46,13 @@ description: Legal character set and naming restrictions

### Rules for escaped table (column) names:

To support more forms of table (column) names, TDengine introduces the escape character "`", which avoids conflicts between table names and keywords while bypassing the name-legality checks above; the escape characters do not count toward the name length.
To support more forms of table (column) names, TDengine introduces the escape character "`", which avoids conflicts between table names and keywords; the escape characters do not count toward the name length.
Escaped table (column) names are still subject to the length limit, and the escape characters do not count toward the length. With escaping, the content inside the escape characters is no longer case-normalized.

For example:
\`aBc\` and \`abc\` are different table (column) names, whereas abc and aBc are the same table (column) name.

:::note
The content inside the escape characters must be printable characters.
The content inside the escape characters must conform to the character constraints of the naming rules.

:::
@@ -178,6 +178,7 @@ description: Detailed list of TDengine reserved keywords

- MATCH
- MAX_DELAY
- MAX_SPEED
- MAXROWS
- MERGE
- META
@@ -28,15 +28,15 @@ TDengine provides a built-in database named `INFORMATION_SCHEMA`

Provides information about dnodes. The same information can be queried with SHOW DNODES.

| # | **Column** | **Data type** | **Description** |
| --- | :------------: | ------------ | ------------------------- |
| 1 | vnodes | SMALLINT | Actual number of vnodes in the dnode. Note that `vnodes` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes supported |
| 3 | status | BINARY(10) | Current status |
| 4 | note | BINARY(256) | Information such as the offline reason |
| 5 | id | SMALLINT | dnode id |
| 6 | endpoint | BINARY(134) | Address of the dnode |
| 7 | create | TIMESTAMP | Creation time |
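A hedged query sketch against this table:

```sql
-- `vnodes` is a keyword, so it must be backtick-escaped when used as a column name.
SELECT id, endpoint, `vnodes`, support_vnodes FROM information_schema.ins_dnodes;
```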
## INS_MNODES
@@ -98,7 +98,7 @@ TDengine provides a built-in database named `INFORMATION_SCHEMA`
| 21 | cachesize | INT | Memory size per vnode used to cache the most recent data of subtables. Note that `cachesize` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 22 | wal_level | INT | WAL level. Note that `wal_level` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 23 | wal_fsync_period | INT | Data flush-to-disk period. Note that `wal_fsync_period` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 24 | wal_retention_period | INT | WAL retention duration. Note that `wal_retention_period` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 24 | wal_retention_period | INT | WAL retention duration, in seconds. Note that `wal_retention_period` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 25 | wal_retention_size | INT | WAL retention size cap. Note that `wal_retention_size` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 26 | stt_trigger | SMALLINT | Number of on-disk files that triggers file merging. Note that `stt_trigger` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 27 | table_prefix | SMALLINT | Length of the prefix the internal storage engine ignores in a table name when assigning the VNODE that stores the table's data. Note that `table_prefix` is a TDengine keyword and must be escaped with ` when used as a column name. |
@@ -109,66 +109,66 @@ TDengine provides a built-in database named `INFORMATION_SCHEMA`

Provides information about user-created user-defined functions.

| # | **Column** | **Data type** | **Description** |
| --- | :-----------: | ------------- | -------------- |
| 1 | name | BINARY(64) | Function name |
| 2 | comment | BINARY(255) | Additional notes. Note that `comment` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 3 | aggregate | INT | Whether it is an aggregate function. Note that `aggregate` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 4 | output_type | BINARY(31) | Output type |
| 5 | create_time | TIMESTAMP | Creation time |
| 6 | code_len | INT | Code length |
| 7 | bufsize | INT | Buffer size |
| 8 | func_language | BINARY(31) | Programming language of the user-defined function |
| 9 | func_body | BINARY(16384) | Function body definition |
| 10 | func_version | INT | Function version. The initial version is 0; each replacement/update increases the version by 1. |
## INS_INDEXES

Provides information about user-created indexes. The same information can be queried with SHOW INDEX.

| # | **Column** | **Data type** | **Description** |
| --- | :--------------: | ------------ | ------------------------------------------------------- |
| 1 | db_name | BINARY(32) | Name of the database containing the indexed table |
| 2 | table_name | BINARY(192) | Name of the table containing the index |
| 3 | index_name | BINARY(192) | Index name |
| 4 | column_name | BINARY(64) | Name of the indexed column |
| 5 | index_type | BINARY(10) | Currently SMA and FULLTEXT |
| 6 | index_extensions | BINARY(256) | Extra index information. For SMA indexes, the list of function names; NULL for FULLTEXT indexes. |
| 5 | index_type | BINARY(10) | Currently SMA and tag |
| 6 | index_extensions | BINARY(256) | Extra index information. For SMA/tag indexes, the list of function names. |
## INS_STABLES

Provides information about user-created supertables.

| # | **Column** | **Data type** | **Description** |
| --- | :-----------: | ------------ | ------------------------ |
| 1 | stable_name | BINARY(192) | Supertable name |
| 2 | db_name | BINARY(64) | Name of the database the supertable belongs to |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | tags | INT | Number of tags. Note that `tags` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 6 | last_update | TIMESTAMP | Last update time |
| 7 | table_comment | BINARY(1024) | Table comment |
| 8 | watermark | BINARY(64) | Window close time. Note that `watermark` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 9 | max_delay | BINARY(64) | Maximum delay for pushing computation results. Note that `max_delay` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 10 | rollup | BINARY(128) | Rollup aggregate function. Note that `rollup` is a TDengine keyword and must be escaped with ` when used as a column name. |
## INS_TABLES

Provides information about user-created ordinary tables and subtables

| # | **Column** | **Data type** | **Description** |
| --- | :-----------: | ------------ | ---------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | stable_name | BINARY(192) | Name of the supertable it belongs to |
| 6 | uid | BIGINT | Table id |
| 7 | vgroup_id | INT | vgroup id |
| 8 | ttl | INT | Table time-to-live. Note that `ttl` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 9 | table_comment | BINARY(1024) | Table comment |
| 10 | type | BINARY(21) | Table type |
## INS_TAGS
@@ -183,17 +183,17 @@ TDengine provides a built-in database named `INFORMATION_SCHEMA`

## INS_COLUMNS

| # | **Column** | **Data type** | **Description** |
| --- | :-----------: | ------------ | ---------------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Name of the database the table is in |
| 3 | table_type | BINARY(21) | Table type |
| 4 | col_name | BINARY(64) | Column name |
| 5 | col_type | BINARY(32) | Column type |
| 6 | col_length | INT | Column length |
| 7 | col_precision | INT | Column precision |
| 8 | col_scale | INT | Column scale |
| 9 | col_nullable | INT | Whether the column can be NULL |
## INS_USERS
@@ -209,60 +209,60 @@ TDengine provides a built-in database named `INFORMATION_SCHEMA`

Provides information about Enterprise Edition licenses.

| # | **Column** | **Data type** | **Description** |
| --- | :---------: | ------------ | -------------------------------------------------- |
| 1 | version | BINARY(9) | Enterprise license type: official or trial |
| 2 | cpu_cores | BINARY(9) | Number of CPU cores licensed |
| 3 | dnodes | BINARY(10) | Number of dnodes licensed. Note that `dnodes` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 4 | streams | BINARY(10) | Number of streams licensed. Note that `streams` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 5 | users | BINARY(10) | Number of users licensed. Note that `users` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 6 | accounts | BINARY(10) | Number of accounts licensed. Note that `accounts` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 7 | storage | BINARY(21) | Storage space licensed. Note that `storage` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 8 | connections | BINARY(21) | Number of client connections licensed. Note that `connections` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 9 | databases | BINARY(11) | Number of databases licensed. Note that `databases` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 10 | speed | BINARY(9) | Licensed write rate in data points per second |
| 11 | querytime | BINARY(9) | Total query time licensed |
| 12 | timeseries | BINARY(21) | Number of time series licensed |
| 13 | expired | BINARY(5) | Whether expired: true means expired, false means not expired |
| 14 | expire_time | BINARY(19) | Trial expiration time |
## INS_VGROUPS

Information about all vgroups in the system.

| # | **Column** | **Data type** | **Description** |
| --- | :-------: | ------------ | ------------------------------------------------------ |
| 1 | vgroup_id | INT | vgroup id |
| 2 | db_name | BINARY(32) | Database name |
| 3 | tables | INT | Number of tables in this vgroup. Note that `tables` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 4 | status | BINARY(10) | Status of this vgroup |
| 5 | v1_dnode | INT | id of the dnode hosting the first member |
| 6 | v1_status | BINARY(10) | Status of the first member |
| 7 | v2_dnode | INT | id of the dnode hosting the second member |
| 8 | v2_status | BINARY(10) | Status of the second member |
| 9 | v3_dnode | INT | id of the dnode hosting the third member |
| 10 | v3_status | BINARY(10) | Status of the third member |
| 11 | nfiles | INT | Number of data/metadata files in this vgroup |
| 12 | file_size | INT | Size of the data/metadata files in this vgroup |
| 13 | tsma | TINYINT | Whether this vgroup is dedicated to time-range-wise SMA, 1: yes, 0: no |
## INS_CONFIGS

System configuration parameters.

| # | **Column** | **Data type** | **Description** |
| --- | :------: | ------------ | ------------ |
| 1 | name | BINARY(32) | Configuration item name |
| 2 | value | BINARY(64) | Value of the configuration item. Note that `value` is a TDengine keyword and must be escaped with ` when used as a column name. |
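A hedged lookup sketch:

```sql
-- `value` is a keyword, so escape it when selecting from ins_configs.
SELECT name, `value` FROM information_schema.ins_configs;
```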
## INS_DNODE_VARIABLES

Configuration parameters of each dnode in the system.

| # | **Column** | **Data type** | **Description** |
| --- | :------: | ------------ | ------------ |
| 1 | dnode_id | INT | dnode ID |
| 2 | name | BINARY(32) | Configuration item name |
| 3 | value | BINARY(64) | Value of the configuration item. Note that `value` is a TDengine keyword and must be escaped with ` when used as a column name. |
## INS_TOPICS

@@ -282,19 +282,29 @@ TDengine provides a built-in database named `INFORMATION_SCHEMA`
| 2 | consumer_group | BINARY(193) | Consumer group of the subscriber |
| 3 | vgroup_id | INT | vgroup id assigned to the consumer |
| 4 | consumer_id | BIGINT | Unique id of the consumer |
| 5 | offset | BINARY(64) | Consumption progress of the consumer |
| 6 | rows | BIGINT | Number of rows consumed by the consumer |
## INS_STREAMS

| # | **Column** | **Data type** | **Description** |
| --- | :----------: | ------------ | --------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream computing name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement provided when the stream was created |
| 4 | status | BINARY(20) | Current status of the stream |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table the stream writes to |
| 8 | watermark | BIGINT | Watermark; see the stream computing section of the SQL manual. Note that `watermark` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 9 | trigger | INT | Result push mode; see the stream computing section of the SQL manual. Note that `trigger` is a TDengine keyword and must be escaped with ` when used as a column name. |
## INS_USER_PRIVILEGES

| # | **Column** | **Data type** | **Description** |
| --- | :----------: | -------------- | ----------------------------------- |
| 1 | user_name | VARCHAR(24) | User name |
| 2 | privilege | VARCHAR(10) | Privilege description |
| 3 | db_name | VARCHAR(65) | Database name |
| 4 | table_name | VARCHAR(193) | Table name |
| 5 | condition | VARCHAR(49152) | Subtable privilege filter condition |
@@ -4,12 +4,13 @@ title: Indexes
description: Details on using the indexing feature
---

TDengine introduced indexing in version 3.0.0.0, supporting SMA indexes and FULLTEXT indexes.
TDengine introduced indexing in version 3.0.0.0, supporting SMA indexes and tag indexes.

## Creating an index

```sql
CREATE FULLTEXT INDEX index_name ON tb_name (col_name [, col_name] ...)

CREATE INDEX index_name ON tb_name index_option

CREATE SMA INDEX index_name ON tb_name index_option
@@ -46,10 +47,6 @@ SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDIN
ALTER LOCAL 'querySmaOptimize' '0';
```
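A hedged SMA index sketch that matches the query above (the index name is assumed):

```sql
-- Pre-aggregate max(c2)/min(c1) over 5-minute windows on supertable st1.
CREATE SMA INDEX sma_idx_1 ON st1 FUNCTION(max(c2), min(c1)) INTERVAL(5m) SLIDING(10s);
```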
### FULLTEXT indexes

Creates a text index on the specified columns, which can improve the performance of queries with text filters. FULLTEXT indexes do not support the index_option syntax. At this stage, FULLTEXT indexes can only be created on tag columns of JSON type. Joint multi-column indexes are not supported, but a FULLTEXT index can be created separately for each column.

## Dropping an index

```sql
@@ -18,6 +18,7 @@ description: "Syntax changes in TDengine 3.0"
| 8 | Mixed operations | Enhanced | Mixed operations in queries (scalar and vector operations combined) are comprehensively enhanced; every SELECT clause fully supports mixed operations that conform to the syntax and semantics.
| 9 | Tag operations | New | In queries, tag columns can take part in operations like ordinary columns and can be used in all clauses.
| 10 | Timeline clauses and time functions on supertables | Enhanced | Without PARTITION BY, supertable data is merged into a single timeline.
| 11 | GEOMETRY | New | Geometry type.

## SQL statement changes
|
|
@ -362,6 +362,8 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
|
|||
|
||||
- **max** : 数据类型的 列/标签 的最大值。生成的值将小于最小值。
|
||||
|
||||
- **fun** : 此列数据以函数填充,目前只支持 sin 和 cos 两函数,输入参数为时间戳换算成角度值,换算公式: 角度 x = 输入的时间列ts值 % 360。同时支持系数调节,随机波动因子调节,以固定格式的表达式展现,如 fun=“10\*sin(x)+100\*random(5)” , x 表示角度,取值 0 ~ 360度,增长步长与时间列步长一致。10 表示乘的系数,100 表示加或减的系数,5 表示波动幅度在 5% 的随机范围内。目前支持的数据类型为 int, bigint, float, double 四种数据类型。注意:表达式为固定模式,不可前后颠倒。
|
||||
|
||||
- **values** : nchar/binary 列/标签的值域,将从值中随机选择。
|
||||
|
||||
- **sma**: 将该列加入 SMA 中,值为 "yes" 或者 "no",默认为 "no"。
|
||||
|
|
|
|||
|
|
@@ -5,14 +5,15 @@ description: "List of platforms supported by the TDengine server, clients, and connectors"

## Platforms supported by the TDengine server

|              | **Windows server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **UnionTech UOS** | **Galaxy/NeoKylin** | **Linx V60/V80** | **macOS** |
|              | **Windows server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18 and later** | **UnionTech UOS** | **Galaxy/NeoKylin** | **Linx V60/V80** | **macOS** |
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | ------------ | ----------------- | ---------------- | --------- |
| X64          | ●                            | ●                 | ●                | ●                | ●            | ●                 | ●                | ●         |
| X64          | ●/E                          | ●/E               | ●                | ●                | ●/E          | ●/E               | ●/E              | ●         |
| Raspberry Pi ARM64 |                        |                   | ●                |                  |              |                   |                  |           |
| Huawei Cloud ARM64 |                        |                   |                  | ●                |              |                   |                  |           |
| M1           |                              |                   |                  |                  |              |                   |                  | ●         |

Note: ● means officially tested and verified, ○ means unofficially tested.
Note: 1) ● means officially tested and verified, ○ means unofficially tested, E means supported only by the Enterprise Edition.
2) The Community Edition supports only recent versions of mainstream operating systems, including Ubuntu 18+/CentOS 7+/RedHat/Debian/CoreOS/FreeBSD/OpenSUSE/SUSE Linux/Fedora/macOS, etc. For other operating systems and versions, please contact Enterprise support.

## Platforms supported by TDengine clients and connectors
@@ -1 +0,0 @@
label: TDengine Docker images
@@ -95,30 +95,11 @@ taos -C
### maxShellConns

| Attribute | Description                                      |
| --------- | ------------------------------------------------ |
| Scope     | Server only                                       |
| Meaning   | Maximum number of connections allowed per dnode   |
| Range     | 10-50000000                                       |
| Default   | 5000                                              |

### numOfRpcSessions

| Attribute | Description                                        |
| --------- | -------------------------------------------------- |
| Scope     | Client and server                                   |
| Meaning   | Maximum number of connections a client can create   |
| Range     | 100-100000                                          |
| Default   | 10000                                               |

### timeToGetAvailableConn

| Attribute | Description                                          |
| --------- | ---------------------------------------------------- |
| Scope     | Client and server                                     |
| Meaning   | Maximum wait time to obtain an available connection   |
| Range     | 10-50000000 (in milliseconds)                         |
| Default   | 500000                                                |

### numOfRpcSessions
@@ -127,7 +108,7 @@ taos -C
| Scope     | Client and server                                 |
| Meaning   | Maximum number of connections a client can create |
| Range     | 100-100000                                        |
| Default   | 10000                                             |
| Default   | 30000                                             |

### timeToGetAvailableConn
@@ -184,7 +165,7 @@ taos -C

| Attribute | Description                    |
| --------- | ------------------------------ |
| Scope     | Server only                    |
| Scope     | Client and server              |
| Meaning   | Whether to upload telemetry    |
| Range     | 0: do not upload; 1: upload    |
| Default   | 1                              |

@@ -193,7 +174,7 @@ taos -C

| Attribute | Description                         |
| --------- | ----------------------------------- |
| Scope     | Server only                         |
| Scope     | Client and server                   |
| Meaning   | Whether to upload crash information |
| Range     | 0: do not upload; 1: upload         |
| Default   | 1                                   |
@@ -392,12 +373,12 @@ The valid value of charset is UTF-8.

### metaCacheMaxSize

| Attribute | Description                                   |
| --------- | --------------------------------------------- |
| Scope     | Client only                                    |
| Meaning   | Maximum metadata cache size per single client  |
| Unit      | MB                                             |
| Default   | -1 (unlimited)                                 |

## Cluster-related
@@ -479,13 +460,13 @@ The valid value of charset is UTF-8.

### slowLogScope

| Attribute | Description                                                                 |
| --------- | --------------------------------------------------------------------------- |
| Scope     | Client only                                                                  |
| Meaning   | Specifies which types of slow queries are recorded                           |
| Options   | ALL, QUERY, INSERT, OTHERS, NONE                                             |
| Default   | ALL                                                                          |
| Note      | By default all types of slow queries are recorded; configure this to record only one type |

### debugFlag
@@ -687,6 +668,15 @@ The valid value of charset is UTF-8.
| Range   | 0: inconsistent; 1: consistent |
| Default | 0                              |

### smlTsDefaultName

| Attribute | Description                                                                     |
| --------- | -------------------------------------------------------------------------------- |
| Scope     | Client only                                                                       |
| Meaning   | Sets the name of the timestamp column auto-created by schemaless table creation   |
| Type      | String                                                                            |
| Default   | _ts                                                                               |

## Others

### enableCoreFile
@@ -719,22 +709,31 @@ The valid value of charset is UTF-8.

### ttlChangeOnWrite

| Attribute | Description                                                                   |
| --------- | ----------------------------------------------------------------------------- |
| Scope     | Server only                                                                    |
| Meaning   | Whether the ttl expiration time changes along with table modification operations |
| Range     | 0: does not change; 1: changes                                                 |
| Default   | 0                                                                              |

### keepTimeOffset

| Attribute | Description                   |
| --------- | ----------------------------- |
| Scope     | Server only                   |
| Meaning   | Delay of migration operations |
| Unit      | hours                         |
| Range     | 0-23                          |
| Default   | 0                             |

### tmqMaxTopicNum

| Attribute | Description                                                   |
| --------- | ------------------------------------------------------------- |
| Scope     | Server only                                                    |
| Meaning   | Maximum number of topics that can be created for subscriptions |
| Range     | 1-10000                                                        |
| Default   | 20                                                             |

## Compression parameters
@@ -35,12 +35,32 @@ All data in tag_set is automatically converted to the nchar data type and does not need

- Double quotes on both sides indicate the BINARY(32) type, e.g. `"abc"`.
- Double quotes with an L prefix indicate the NCHAR(32) type, e.g. `L"error message"`.
- Spaces, equals signs (=), commas (,), and double quotes (") must be escaped with a preceding backslash (\). (All refer to half-width ASCII characters.)
- Spaces, equals signs (=), commas (,), double quotes ("), and backslashes (\) must be escaped with a preceding backslash (\). (All refer to half-width ASCII characters.) The detailed escaping rules are as follows:

| **#** | **Field** | **Characters to escape** |
| -------- | ----------- | ----------------------------- |
| 1 | supertable name | comma, space |
| 2 | tag name | comma, equals sign, space |
| 3 | tag value | comma, equals sign, space |
| 4 | column name | comma, equals sign, space |
| 5 | column value | double quote, backslash |

With two consecutive backslashes, the first acts as the escape character; a single backslash needs no escaping. Examples of the backslash escaping rules:

| **#** | **Backslashes** | **Escaped to** |
| -------- | ----------- | ----------------------------- |
| 1 | \ | \ |
| 2 | \\\\ | \ |
| 3 | \\\\\\ | \\\\ |
| 4 | \\\\\\\\ | \\\\ |
| 5 | \\\\\\\\\\ | \\\\\\ |
| 6 | \\\\\\\\\\\\ | \\\\\\ |

- Numeric types are distinguished by suffix:

| **#** | **Suffix** | **Mapped type** | **Size (bytes)** |
| -------- | ----------- | ----------------------------- | -------------- |
| 1 | none or f64 | double | 8 |
| 2 | f32 | float | 4 |
| 3 | i8/u8 | TinyInt/UTinyInt | 1 |
| 4 | i16/u16 | SmallInt/USmallInt | 2 |
@@ -84,7 +104,9 @@ st,t1=3,t2=4,t3=t3 c1=3i64,c3="passit",c2=false,c4=4f64 1626006833639000000
6. For BINARY or NCHAR columns, if the length of a value in a data row exceeds the limit of the column type, the maximum character length the column can store is increased automatically (it only grows, never shrinks) to guarantee complete data storage.
7. Any error encountered during processing aborts the write and returns an error code.
8. To improve write efficiency, it is assumed by default that the field_set order within the same supertable is identical (the first row contains all fields, and subsequent rows follow that order). If the order differs, the parameter smlDataFormat must be set to false; otherwise data is written assuming identical order and the data in the database will be corrupted. Starting from 3.0.3.0 the order is detected automatically and this option is obsolete.

9. Because table names created via SQL do not support dots (.), schemaless also handles dots: if a table name auto-created by schemaless contains a dot (.), it is automatically replaced with an underscore (\_). If a subtable name is specified manually and contains a dot (.), it is likewise converted to an underscore (\_).
10. taos.cfg adds the smlTsDefaultName configuration (a string value), effective only on the client side; when set, the name of the timestamp column auto-created by schemaless can be configured through it. If not configured, the default is _ts

:::tip
All schemaless processing logic still follows TDengine's underlying limits on data structures; for example, the total length of each data row cannot exceed
48 KB (64 KB since version 3.0.5.0), and the total length of tag values cannot exceed 16 KB. See [TDengine SQL boundary limits](/taos-sql/limit) for these specific constraints
@@ -210,19 +210,6 @@ TDinsight dashboard data comes from the log database (the default db that stores monitoring data
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|cluster\_id|NCHAR|TAG|cluster id|

### logs table

The `logs` table records login information.

|field|type|is\_tag|comment|
|:----|:---|:-----|:------|
|ts|TIMESTAMP||timestamp|
|level|VARCHAR||log level|
|content|NCHAR||log content, up to 1024 bytes|
|dnode\_id|INT|TAG|dnode id|
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|cluster\_id|NCHAR|TAG|cluster id|

### log\_summary table

`log_summary` records log statistics.
@@ -0,0 +1,41 @@
---
sidebar_label: qStudio
title: qStudio
description: A detailed guide to accessing TDengine data with qStudio
---

qStudio is a free multi-platform SQL data analysis tool that makes it easy to browse tables, variables, functions, and configuration settings in a database. The latest version of qStudio has built-in support for TDengine.

## Prerequisites

Connecting qStudio to TDengine requires the following preparation.

- Install qStudio. qStudio supports mainstream operating systems including Windows, macOS, and Linux. Make sure to [download](https://www.timestored.com/qstudio/download/) the package for the correct platform.
- Install a TDengine instance. Confirm that TDengine is running normally and that taosAdapter is installed and running; see the [taosAdapter user manual](/reference/taosadapter) for details.

## Connecting qStudio to TDengine

1. Start the qStudio application, choose "Server" and "Add Server..." from the menu, then select TDengine in the Server Type drop-down.

   

2. Configure the TDengine connection by entering the host address, port, user name, and password. If TDengine is deployed on the local machine, you can fill in only the user name and password; the default user name is root and the default password is taosdata. Click "Test" to test whether the connection works. If the TDengine Java connector is not installed on this machine, qStudio prompts you to download and install it.

   

3. A successful connection is shown in the figure below. If the connection fails, check that the TDengine service and taosAdapter are running correctly, and that the host address, port, user name, and password are correct.

   

4. Use qStudio to select databases and tables, and browse data from the TDengine service.

   

5. You can also operate on TDengine data by executing SQL commands.

   

6. qStudio supports features such as charting from data; see the [qStudio help documentation](https://www.timestored.com/qstudio/help)

   
|
After Width: | Height: | Size: 148 KiB |
|
After Width: | Height: | Size: 34 KiB |
|
After Width: | Height: | Size: 93 KiB |
|
After Width: | Height: | Size: 39 KiB |
|
After Width: | Height: | Size: 78 KiB |
|
@@ -112,7 +112,7 @@ TDengine 3.0 uses a consistent hashing algorithm to determine the vnode that hosts each data table

### Data partitioning

In addition to vnode sharding, TDengine partitions time-series data by time range. Each data file contains time-series data for only one time range, whose length is determined by the DB configuration parameter days. Partitioning by time range also makes it easy to implement retention policies efficiently: data files older than the configured number of days (system configuration parameter keep) are deleted automatically. Moreover, different time ranges can be stored on different paths and storage media, enabling hot/cold management and tiered storage of big data.
In addition to vnode sharding, TDengine partitions time-series data by time range. Each data file contains time-series data for only one time range, whose length is determined by the DB configuration parameter duration. Partitioning by time range also makes it easy to implement retention policies efficiently: data files older than the configured number of days (system configuration parameter keep) are deleted automatically. Moreover, different time ranges can be stored on different paths and storage media, enabling hot/cold management and tiered storage of big data.

In short, **TDengine slices big data along the two dimensions of vnode and time** for efficient parallel management and horizontal scaling.
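A hedged sketch of the two parameters mentioned above (the database name is assumed):

```sql
-- Each data file covers 10 days; files older than 3650 days are dropped automatically.
CREATE DATABASE db DURATION 10 KEEP 3650;
```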
@@ -371,7 +371,7 @@ Select min(val) from table_name
Equivalent function: sum

```sql
Select max(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
Select sum(value) from (select first(val) value from table_name interval(10s) fill(linear)) interval(10s)
```

Note: this function has no interpolation requirement, so it can be computed directly.
@@ -10,6 +10,10 @@ For installation packages of TDengine 2.x versions, please visit [here](https://www.taosdata.com/all-downloads)

import Release from "/components/ReleaseV3";

## 3.1.0.0

<Release type="tdengine" version="3.1.0.0" />

## 3.0.7.1

<Release type="tdengine" version="3.0.7.1" />
@@ -20,18 +20,12 @@ mvn clean compile exec:java -Dexec.mainClass="com.taosdata.example.JdbcDemo" -De
```

## Compile the Demo Code and Run It
To compile taos-jdbcdriver, go to the source directory ``TDengine/src/connector/jdbc`` and execute
```
mvn clean package -Dmaven.test.skip=true
```

To compile the demo project, go to the source directory ``TDengine/tests/examples/JDBC/JDBCDemo`` and execute
To run JDBCDemo.jar, execute
```
mvn clean package assembly:single
```

To run JDBCDemo.jar, go to ``TDengine/tests/examples/JDBC/JDBCDemo`` and execute
```
java -Djava.ext.dirs=../../../../src/connector/jdbc/target:$JAVA_HOME/jre/lib/ext -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host [HOSTNAME]
java -jar target/JDBCDemo-SNAPSHOT-jar-with-dependencies.jar -host [HOSTNAME]
```
@@ -16,8 +16,6 @@ public class JdbcRestfulDemo {

        Properties properties = new Properties();
        properties.setProperty("charset", "UTF-8");
        properties.setProperty("locale", "en_US.UTF-8");
        properties.setProperty("timezone", "UTC-8");

        Connection conn = DriverManager.getConnection(url, properties);
        Statement stmt = conn.createStatement();
@@ -47,7 +47,7 @@
        <dependency>
            <groupId>com.taosdata.jdbc</groupId>
            <artifactId>taos-jdbcdriver</artifactId>
            <version>3.0.0</version>
            <version>3.2.4</version>
        </dependency>

        <dependency>
@@ -287,11 +287,20 @@ DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t timeout);
DLL_EXPORT int32_t   tmq_consumer_close(tmq_t *tmq);
DLL_EXPORT int32_t   tmq_commit_sync(tmq_t *tmq, const TAOS_RES *msg);
DLL_EXPORT void      tmq_commit_async(tmq_t *tmq, const TAOS_RES *msg, tmq_commit_cb *cb, void *param);
DLL_EXPORT int32_t   tmq_commit_offset_sync(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset);
DLL_EXPORT void      tmq_commit_offset_async(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset, tmq_commit_cb *cb, void *param);
DLL_EXPORT int32_t   tmq_get_topic_assignment(tmq_t *tmq, const char *pTopicName, tmq_topic_assignment **assignment,
                                              int32_t *numOfAssignment);
DLL_EXPORT void      tmq_free_assignment(tmq_topic_assignment* pAssignment);
DLL_EXPORT int32_t   tmq_offset_seek(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset);

DLL_EXPORT const char *tmq_get_topic_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_db_name(TAOS_RES *res);
DLL_EXPORT int32_t     tmq_get_vgroup_id(TAOS_RES *res);
DLL_EXPORT int64_t     tmq_get_vgroup_offset(TAOS_RES* res);
DLL_EXPORT int64_t     tmq_position(tmq_t *tmq, const char *pTopicName, int32_t vgId);
DLL_EXPORT int64_t     tmq_committed(tmq_t *tmq, const char *pTopicName, int32_t vgId);

/* ----------------------TMQ CONFIGURATION INTERFACE---------------------- */

enum tmq_conf_res_t {

@@ -309,11 +318,6 @@ DLL_EXPORT void tmq_conf_set_auto_commit_cb(tmq_conf_t *conf, tmq_comm

/* -------------------------TMQ MSG HANDLE INTERFACE---------------------- */

DLL_EXPORT const char *tmq_get_topic_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_db_name(TAOS_RES *res);
DLL_EXPORT int32_t     tmq_get_vgroup_id(TAOS_RES *res);
DLL_EXPORT int64_t     tmq_get_vgroup_offset(TAOS_RES* res);

/* ------------------------------ TAOSX -----------------------------------*/
// note: following apis are unstable
enum tmq_res_t {
@@ -58,6 +58,7 @@ extern int32_t tsTagFilterResCacheSize;
extern int32_t tsNumOfRpcThreads;
extern int32_t tsNumOfRpcSessions;
extern int32_t tsTimeToGetAvailableConn;
extern int32_t tsKeepAliveIdle;
extern int32_t tsNumOfCommitThreads;
extern int32_t tsNumOfTaskQueueThreads;
extern int32_t tsNumOfMnodeQueryThreads;

@@ -85,8 +86,14 @@ extern int64_t tsVndCommitMaxIntervalMs;
extern int64_t tsMndSdbWriteDelta;
extern int64_t tsMndLogRetention;
extern int8_t  tsGrant;
extern int32_t tsMndGrantMode;
extern bool    tsMndSkipGrant;

// dnode
extern int64_t tsDndStart;
extern int64_t tsDndStartOsUptime;
extern int64_t tsDndUpTime;

// monitor
extern bool    tsEnableMonitor;
extern int32_t tsMonitorInterval;

@@ -163,6 +170,8 @@ extern char tsUdfdLdLibPath[];
// schemaless
extern char tsSmlChildTableName[];
extern char tsSmlTagName[];
extern bool tsSmlDot2Underline;
extern char tsSmlTsDefaultName[];
// extern bool tsSmlDataFormat;
// extern int32_t tsSmlBatchSize;

@@ -175,6 +184,7 @@ extern int64_t tsWalFsyncDataSizeLimit;
extern int32_t tsTransPullupInterval;
extern int32_t tsMqRebalanceInterval;
extern int32_t tsStreamCheckpointTickInterval;
extern int32_t tsStreamNodeCheckInterval;
extern int32_t tsTtlUnit;
extern int32_t tsTtlPushInterval;
extern int32_t tsGrantHBInterval;
@@ -1144,6 +1144,7 @@ typedef struct {
  char    timezone[TD_TIMEZONE_LEN];  // tsTimezone
  char    locale[TD_LOCALE_LEN];      // tsLocale
  char    charset[TD_LOCALE_LEN];     // tsCharset
  int8_t  ttlChangeOnWrite;
} SClusterCfg;

typedef struct {

@@ -1180,6 +1181,8 @@ typedef struct {
typedef struct {
  int8_t  syncState;
  int8_t  syncRestore;
  int64_t syncTerm;
  int64_t roleTimeMs;
} SMnodeLoad;

typedef struct {

@@ -1495,6 +1498,7 @@ int32_t tDeserializeSShowVariablesReq(void* buf, int32_t bufLen, SShowVariablesR
typedef struct {
  char name[TSDB_CONFIG_OPTION_LEN + 1];
  char value[TSDB_CONFIG_VALUE_LEN + 1];
  char scope[TSDB_CONFIG_SCOPE_LEN + 1];
} SVariablesInfo;

typedef struct {

@@ -2763,6 +2767,7 @@ typedef struct {
typedef struct {
  SMsgHead head;
  int64_t  leftForVer;
  int64_t  streamId;
  int32_t  taskId;
} SVDropStreamTaskReq;

@@ -2954,6 +2959,7 @@ int32_t tDecodeMqVgOffset(SDecoder* pDecoder, SMqVgOffset* pOffset);

typedef struct {
  SMsgHead head;
  int64_t  streamId;
  int32_t  taskId;
} SVPauseStreamTaskReq;

@@ -2972,6 +2978,7 @@ int32_t tDeserializeSMPauseStreamReq(void* buf, int32_t bufLen, SMPauseStreamReq
typedef struct {
  SMsgHead head;
  int32_t  taskId;
  int64_t  streamId;
  int8_t   igUntreated;
} SVResumeStreamTaskReq;

@@ -3378,6 +3385,12 @@ typedef struct {
  int8_t reserved;
} SMqHbRsp;

typedef struct {
  SMsgHead head;
  int64_t  consumerId;
  char     subKey[TSDB_SUBSCRIBE_KEY_LEN];
} SMqSeekReq;

#define TD_AUTO_CREATE_TABLE 0x1
typedef struct {
  int64_t suid;

@@ -3507,6 +3520,8 @@ int32_t tSerializeSMqHbReq(void* buf, int32_t bufLen, SMqHbReq* pReq);
int32_t tDeserializeSMqHbReq(void* buf, int32_t bufLen, SMqHbReq* pReq);
int32_t tDeatroySMqHbReq(SMqHbReq* pReq);

int32_t tSerializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq);
int32_t tDeserializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq);

#define SUBMIT_REQ_AUTO_CREATE_TABLE  0x1
#define SUBMIT_REQ_COLUMN_DATA_FORMAT 0x2
|
@ -156,6 +156,7 @@ enum {
|
|||
TD_DEF_MSG_TYPE(TDMT_MND_TRANS_TIMER, "trans-tmr", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_TTL_TIMER, "ttl-tmr", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_GRANT_HB_TIMER, "grant-hb-tmr", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_NODECHECK_TIMER, "node-check-tmr", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_KILL_TRANS, "kill-trans", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_KILL_QUERY, "kill-query", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_KILL_CONN, "kill-conn", NULL, NULL)
|
||||
|
|
@ -174,6 +175,7 @@ enum {
|
|||
TD_DEF_MSG_TYPE(TDMT_MND_SERVER_VERSION, "server-version", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_UPTIME_TIMER, "uptime-timer", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_TMQ_LOST_CONSUMER_CLEAR, "lost-consumer-clear", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_STREAM_HEARTBEAT, "stream-heartbeat", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_MAX_MSG, "mnd-max", NULL, NULL)
|
||||
|
||||
TD_DEF_MSG_TYPE(TDMT_MND_BALANCE_VGROUP_LEADER, "balance-vgroup-leader", NULL, NULL)
|
||||
|
|
@ -302,18 +304,20 @@ enum {
|
|||
TD_DEF_MSG_TYPE(TDMT_VND_STREAM_TRIGGER, "vnode-stream-trigger", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_STREAM_SCAN_HISTORY, "vnode-stream-scan-history", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_STREAM_CHECK_POINT_SOURCE, "vnode-stream-checkpoint-source", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_STREAM_TASK_UPDATE, "vnode-stream-update", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_STREAM_MAX_MSG, "vnd-stream-max", NULL, NULL)
|
||||
|
||||
TD_NEW_MSG_SEG(TDMT_VND_TMQ_MSG)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SUBSCRIBE, "vnode-tmq-subscribe", SMqRebVgReq, SMqRebVgRsp)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_DELETE_SUB, "vnode-tmq-delete-sub", SMqVDeleteReq, SMqVDeleteRsp)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_COMMIT_OFFSET, "vnode-tmq-commit-offset", STqOffset, STqOffset)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SEEK_TO_OFFSET, "vnode-tmq-seekto-offset", STqOffset, STqOffset)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SEEK, "vnode-tmq-seek", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_ADD_CHECKINFO, "vnode-tmq-add-checkinfo", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_DEL_CHECKINFO, "vnode-del-checkinfo", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_CONSUME, "vnode-tmq-consume", SMqPollReq, SMqDataBlkRsp)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_CONSUME_PUSH, "vnode-tmq-consume-push", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_VG_WALINFO, "vnode-tmq-vg-walinfo", SMqPollReq, SMqDataBlkRsp)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_VG_COMMITTEDINFO, "vnode-tmq-committedinfo", NULL, NULL)
|
||||
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_MAX_MSG, "vnd-tmq-max", NULL, NULL)
|
||||
|
||||
|
||||
|
|
|
|||
|
|
@@ -72,7 +72,7 @@ typedef enum {
 * @param vgId
 * @return
 */
qTaskInfo_t qCreateStreamExecTaskInfo(void* msg, SReadHandle* readers, int32_t vgId);
qTaskInfo_t qCreateStreamExecTaskInfo(void* msg, SReadHandle* readers, int32_t vgId, int32_t taskId);

/**
 * Create the exec task for queue mode

@@ -93,8 +93,6 @@ int32_t qGetTableList(int64_t suid, void* pVnode, void* node, SArray **tableList
 */
void qSetTaskId(qTaskInfo_t tinfo, uint64_t taskId, uint64_t queryId);

//void qSetTaskCode(qTaskInfo_t tinfo, int32_t code);

int32_t qSetStreamOpOpen(qTaskInfo_t tinfo);

/**

@@ -216,13 +214,9 @@ int32_t qStreamSourceScanParamForHistoryScanStep2(qTaskInfo_t tinfo, SVersionRan
int32_t qStreamRecoverFinish(qTaskInfo_t tinfo);
int32_t qRestoreStreamOperatorOption(qTaskInfo_t tinfo);
bool    qStreamRecoverScanFinished(qTaskInfo_t tinfo);
bool    qStreamRecoverScanStep1Finished(qTaskInfo_t tinfo);
bool    qStreamRecoverScanStep2Finished(qTaskInfo_t tinfo);
int32_t qStreamRecoverSetAllStepFinished(qTaskInfo_t tinfo);
int32_t qStreamInfoResetTimewindowFilter(qTaskInfo_t tinfo);
void    resetTaskInfo(qTaskInfo_t tinfo);

void    qResetStreamInfoTimeWindow(qTaskInfo_t tinfo);

int32_t qStreamOperatorReleaseState(qTaskInfo_t tInfo);
int32_t qStreamOperatorReloadState(qTaskInfo_t tInfo);
@@ -228,7 +228,7 @@ typedef struct SStoreTqReader {
} SStoreTqReader;

typedef struct SStoreSnapshotFn {
  int32_t (*createSnapshot)(SSnapContext* ctx, int64_t uid);
  int32_t (*setForSnapShot)(SSnapContext* ctx, int64_t uid);
  int32_t (*destroySnapshot)(SSnapContext* ctx);
  SMetaTableInfo (*getMetaTableInfoFromSnapshot)(SSnapContext* ctx);
  int32_t (*getTableInfoFromSnapshot)(SSnapContext* ctx, void** pBuf, int32_t* contLen, int16_t* type, int64_t* uid);

@@ -368,6 +368,8 @@ typedef struct SStateStore {
  bool (*updateInfoIsUpdated)(SUpdateInfo* pInfo, uint64_t tableId, TSKEY ts);
  bool (*updateInfoIsTableInserted)(SUpdateInfo* pInfo, int64_t tbUid);
  void (*updateInfoDestroy)(SUpdateInfo* pInfo);
  void (*windowSBfDelete)(SUpdateInfo *pInfo, uint64_t count);
  void (*windowSBfAdd)(SUpdateInfo *pInfo, uint64_t count);

  SUpdateInfo* (*updateInfoInitP)(SInterval* pInterval, int64_t watermark, bool igUp);
  void (*updateInfoAddCloseWindowSBF)(SUpdateInfo* pInfo);
@ -36,9 +36,10 @@ extern "C" {
|
|||
#define SHOW_CREATE_TB_RESULT_FIELD1_LEN (TSDB_TABLE_NAME_LEN + VARSTR_HEADER_SIZE)
|
||||
#define SHOW_CREATE_TB_RESULT_FIELD2_LEN (TSDB_MAX_ALLOWED_SQL_LEN * 3)
|
||||
|
||||
#define SHOW_LOCAL_VARIABLES_RESULT_COLS 2
|
||||
#define SHOW_LOCAL_VARIABLES_RESULT_COLS 3
|
||||
#define SHOW_LOCAL_VARIABLES_RESULT_FIELD1_LEN (TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE)
|
||||
#define SHOW_LOCAL_VARIABLES_RESULT_FIELD2_LEN (TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE)
|
||||
#define SHOW_LOCAL_VARIABLES_RESULT_FIELD3_LEN (TSDB_CONFIG_SCOPE_LEN + VARSTR_HEADER_SIZE)
|
||||
|
||||
#define SHOW_ALIVE_RESULT_COLS 1
|
||||
|
||||
|
|
@@ -361,7 +362,7 @@ typedef struct SRestoreComponentNodeStmt {
 
 typedef struct SCreateTopicStmt {
   ENodeType type;
-  char topicName[TSDB_TABLE_NAME_LEN];
+  char topicName[TSDB_TOPIC_NAME_LEN];
   char subDbName[TSDB_DB_NAME_LEN];
   char subSTbName[TSDB_TABLE_NAME_LEN];
   bool ignoreExists;
@@ -372,13 +373,13 @@ typedef struct SCreateTopicStmt {
 
 typedef struct SDropTopicStmt {
   ENodeType type;
-  char topicName[TSDB_TABLE_NAME_LEN];
+  char topicName[TSDB_TOPIC_NAME_LEN];
   bool ignoreNotExists;
 } SDropTopicStmt;
 
 typedef struct SDropCGroupStmt {
   ENodeType type;
-  char topicName[TSDB_TABLE_NAME_LEN];
+  char topicName[TSDB_TOPIC_NAME_LEN];
   char cgroup[TSDB_CGROUP_LEN];
   bool ignoreNotExists;
 } SDropCGroupStmt;

@@ -55,6 +55,7 @@ typedef struct SLogicNode {
   EGroupAction groupAction;
   EOrder inputTsOrder;
   EOrder outputTsOrder;
+  bool forceCreateNonBlockingOptr;  // true if the operator can use non-blocking(pipeline) mode
 } SLogicNode;
 
 typedef enum EScanType {
@@ -105,6 +106,7 @@ typedef struct SScanLogicNode {
   bool hasNormalCols;  // neither tag column nor primary key tag column
   bool sortPrimaryKey;
   bool igLastNull;
+  bool groupOrderScan;
 } SScanLogicNode;
 
 typedef struct SJoinLogicNode {
@@ -246,6 +248,8 @@ typedef struct SSortLogicNode {
   SLogicNode node;
   SNodeList* pSortKeys;
   bool groupSort;
+  int64_t maxRows;
+  bool skipPKSortOpt;
 } SSortLogicNode;
 
 typedef struct SPartitionLogicNode {
@@ -316,6 +320,7 @@ typedef struct SPhysiNode {
   struct SPhysiNode* pParent;
   SNode* pLimit;
   SNode* pSlimit;
+  bool forceCreateNonBlockingOptr;
 } SPhysiNode;
 
 typedef struct SScanPhysiNode {
@@ -326,6 +331,7 @@ typedef struct SScanPhysiNode {
   uint64_t suid;
   int8_t tableType;
   SName tableName;
+  bool groupOrderScan;
 } SScanPhysiNode;
 
 typedef SScanPhysiNode STagScanPhysiNode;

@@ -14,6 +14,7 @@
  */
 
 #include "os.h"
+#include "ttimer.h"
 #include "streamState.h"
 #include "tdatablock.h"
 #include "tdbInt.h"
@@ -30,6 +31,7 @@ extern "C" {
 
 typedef struct SStreamTask SStreamTask;
 
+#define SSTREAM_TASK_VER 1
 enum {
   STREAM_STATUS__NORMAL = 0,
   STREAM_STATUS__STOP,
@@ -45,8 +47,8 @@ enum {
   TASK_STATUS__FAIL,
   TASK_STATUS__STOP,
   TASK_STATUS__SCAN_HISTORY,  // stream task scan history data by using tsdbread in the stream scanner
-  TASK_STATUS__HALT,          // stream task will handle all data in the input queue, and then paused
-  TASK_STATUS__PAUSE,
+  TASK_STATUS__HALT,          // pause, but not be manipulated by user command
+  TASK_STATUS__PAUSE,         // pause
   TASK_STATUS__CK,            // stream task is in checkpoint status, no data are allowed to put into inputQ anymore
   TASK_STATUS__CK_READY,
 };
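
With the reworked enum, `HALT` is now an internal pause that user commands must not toggle, and `CK` freezes the input queue for the duration of a checkpoint. An illustrative gate over these states (a sketch based on the comments above, not the library's actual check):

```c
// Sketch only: whether a task may accept new input, per the enum comments.
static bool taskAcceptsInput(int8_t status) {
  switch (status) {
    case TASK_STATUS__CK:     // checkpointing: inputQ frozen
    case TASK_STATUS__HALT:   // paused internally, not via user command
    case TASK_STATUS__PAUSE:  // paused by user command
    case TASK_STATUS__STOP:
    case TASK_STATUS__FAIL:
      return false;
    default:
      return true;
  }
}
```
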
@@ -182,7 +184,8 @@ int32_t streamInit();
 void streamCleanUp();
 
 SStreamQueue* streamQueueOpen(int64_t cap);
-void streamQueueClose(SStreamQueue* queue);
+void streamQueueClose(SStreamQueue* pQueue);
+void streamQueueCleanup(SStreamQueue* pQueue);
 
 static FORCE_INLINE void streamQueueProcessSuccess(SStreamQueue* queue) {
   ASSERT(atomic_load_8(&queue->status) == STREAM_QUEUE__PROCESSING);
@@ -267,7 +270,9 @@ typedef struct SStreamStatus {
   int8_t schedStatus;
   int8_t keepTaskStatus;
   bool transferState;
-  int8_t timerActive;  // timer is active
+  int8_t timerActive;   // timer is active
+  int8_t pauseAllowed;  // allowed task status to be set to be paused
+  int32_t stage;        // rollback will increase this attribute one for each time
 } SStreamStatus;
 
 typedef struct SHistDataRange {
@@ -292,15 +297,22 @@ typedef struct SDispatchMsgInfo {
 } SDispatchMsgInfo;
 
 typedef struct {
-  int8_t outputType;
-  int8_t outputStatus;
-  SStreamQueue* outputQueue;
-} SSTaskOutputInfo;
+  int8_t type;
+  int8_t status;
+  SStreamQueue* queue;
+} STaskOutputInfo;
+
+typedef struct {
+  int64_t init;
+  int64_t step1Start;
+  int64_t step2Start;
+} STaskTimestamp;
+
 struct SStreamTask {
   int64_t ver;
   SStreamId id;
   SSTaskBasicInfo info;
-  int8_t outputType;
+  STaskOutputInfo outputInfo;
   SDispatchMsgInfo msgInfo;
   SStreamStatus status;
   SCheckpointInfo chkInfo;
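
Call sites that used the old flat fields (`pTask->outputType`, `pTask->outputStatus`, `pTask->outputQueue`) now go through the nested `outputInfo`. A one-line sketch of the migration:

```c
// Sketch: accessor reflecting the field move into STaskOutputInfo.
static SStreamQueue* taskOutputQueue(SStreamTask* pTask) {
  // was: return pTask->outputQueue;
  return pTask->outputInfo.queue;
}
```
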
@@ -308,9 +320,12 @@ struct SStreamTask {
   SHistDataRange dataRange;
   SStreamId historyTaskId;
   SStreamId streamTaskId;
-  SArray* pUpstreamInfoList;  // SArray<SStreamChildEpInfo*>, // children info
   int32_t nextCheckId;
   SArray* checkpointInfo;  // SArray<SStreamCheckpointInfo>
+  STaskTimestamp tsInfo;
+  SArray* pReadyMsgList;  // SArray<SStreamChkptReadyInfo*>
+  TdThreadMutex lock;     // secure the operation of set task status and puting data into inputQ
+  SArray* pUpstreamInfoList;
 
   // output
   union {
@@ -322,9 +337,7 @@ struct SStreamTask {
   };
 
   int8_t inputStatus;
-  int8_t outputStatus;
   SStreamQueue* inputQueue;
-  SStreamQueue* outputQueue;
 
   // trigger
   int8_t triggerStatus;
@@ -333,6 +346,7 @@ struct SStreamTask {
   void* launchTaskTimer;
   SMsgCb* pMsgCb;       // msg handle
   SStreamState* pState; // state backend
+  SArray* pRspMsgList;
 
   // the followings attributes don't be serialized
   int32_t notReadyTasks;
@@ -348,6 +362,11 @@ struct SStreamTask {
   SSHashObj* pNameMap;
 };
 
+typedef struct SMgmtInfo {
+  SEpSet epset;
+  int32_t mnodeId;
+} SMgmtInfo;
+
 // meta
 typedef struct SStreamMeta {
   char* path;
@@ -366,6 +385,8 @@ typedef struct SStreamMeta {
   int64_t streamBackendRid;
   SHashObj* pTaskBackendUnique;
   TdThreadMutex backendMutex;
+  tmr_h hbTmr;
+  SMgmtInfo mgmtInfo;
 
   int32_t chkptNotReadyTasks;
 
@@ -384,6 +405,7 @@ SStreamTask* tNewStreamTask(int64_t streamId, int8_t taskLevel, int8_t fillHisto
 int32_t tEncodeStreamTask(SEncoder* pEncoder, const SStreamTask* pTask);
 int32_t tDecodeStreamTask(SDecoder* pDecoder, SStreamTask* pTask);
 void tFreeStreamTask(SStreamTask* pTask);
+int32_t streamTaskInit(SStreamTask* pTask, SStreamMeta* pMeta, SMsgCb* pMsgCb, int64_t ver);
 
 int32_t tDecodeStreamTaskChkInfo(SDecoder* pDecoder, SCheckpointInfo* pChkpInfo);
 
@@ -445,6 +467,7 @@ typedef struct {
   int32_t downstreamNodeId;
   int32_t downstreamTaskId;
   int32_t childId;
+  int32_t stage;
 } SStreamTaskCheckReq;
 
 typedef struct {
@@ -455,6 +478,7 @@ typedef struct {
   int32_t downstreamNodeId;
   int32_t downstreamTaskId;
   int32_t childId;
+  int32_t stage;
   int8_t status;
 } SStreamTaskCheckRsp;
 
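
The new `stage` field carries the sender's restart counter (see the `stage` comment in `SStreamStatus`: it is incremented on each rollback), so a receiver can detect messages from a stale incarnation of an upstream task. A hedged sketch of such a check; the helper name is invented for illustration:

```c
// Sketch: drop check requests whose stage lags the locally recorded one.
static bool checkReqIsStale(const SStreamTaskCheckReq* pReq, int32_t localStage) {
  return pReq->stage < localStage;  // upstream restarted since this was sent
}
```
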
@@ -467,7 +491,9 @@ typedef struct {
 
 typedef struct {
   int64_t streamId;
-  int32_t taskId;
+  int32_t upstreamTaskId;
+  int32_t downstreamTaskId;
+  int32_t upstreamNodeId;
   int32_t childId;
 } SStreamScanHistoryFinishReq, SStreamTransferReq;
 
@@ -479,6 +505,7 @@ typedef struct {
   int64_t checkpointId;
   int32_t taskId;
   int32_t nodeId;
   SEpSet mgmtEps;
+  int32_t mnodeId;
   int64_t expireTime;
 } SStreamCheckpointSourceReq;
@@ -512,6 +539,40 @@ typedef struct {
 int32_t tEncodeStreamCheckpointReadyMsg(SEncoder* pEncoder, const SStreamCheckpointReadyMsg* pRsp);
 int32_t tDecodeStreamCheckpointReadyMsg(SDecoder* pDecoder, SStreamCheckpointReadyMsg* pRsp);
 
+typedef struct {
+  int32_t vgId;
+  int32_t numOfTasks;
+} SStreamHbMsg;
+
+int32_t tEncodeStreamHbMsg(SEncoder* pEncoder, const SStreamHbMsg* pRsp);
+int32_t tDecodeStreamHbMsg(SDecoder* pDecoder, SStreamHbMsg* pRsp);
+
+typedef struct {
+  int64_t streamId;
+  int32_t upstreamTaskId;
+  int32_t upstreamNodeId;
+  int32_t downstreamId;
+  int32_t downstreamNode;
+} SStreamCompleteHistoryMsg;
+
+int32_t tEncodeCompleteHistoryDataMsg(SEncoder* pEncoder, const SStreamCompleteHistoryMsg* pReq);
+int32_t tDecodeCompleteHistoryDataMsg(SDecoder* pDecoder, SStreamCompleteHistoryMsg* pReq);
+
+typedef struct {
+  int32_t nodeId;
+  SEpSet prevEp;
+  SEpSet newEp;
+} SNodeUpdateInfo;
+
+typedef struct {
+  int64_t streamId;
+  int32_t taskId;
+  SArray* pNodeList;  // SArray<SNodeUpdateInfo>
+} SStreamTaskNodeUpdateMsg;
+
+int32_t tEncodeStreamTaskUpdateMsg(SEncoder* pEncoder, const SStreamTaskNodeUpdateMsg* pMsg);
+int32_t tDecodeStreamTaskUpdateMsg(SDecoder* pDecoder, SStreamTaskNodeUpdateMsg* pMsg);
+
 typedef struct {
   int64_t streamId;
   int32_t downstreamTaskId;
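
`SStreamHbMsg` is the new per-vnode heartbeat payload. A sketch of producing one through the declared encoder entry point; `tEncoderInit`/`tEncoderClear` follow TDengine's usual `SEncoder` conventions, and the caller-supplied fixed buffer is a simplification:

```c
// Sketch: serialize a heartbeat reporting numOfTasks tasks on this vgroup.
static int32_t buildHbMsg(void* buf, int32_t bufLen, int32_t vgId, int32_t numOfTasks) {
  SStreamHbMsg msg = {.vgId = vgId, .numOfTasks = numOfTasks};

  SEncoder encoder;
  tEncoderInit(&encoder, (uint8_t*)buf, bufLen);
  int32_t code = tEncodeStreamHbMsg(&encoder, &msg);
  tEncoderClear(&encoder);
  return code;
}
```
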
@@ -545,7 +606,7 @@ int32_t streamProcessRunReq(SStreamTask* pTask);
 int32_t streamProcessDispatchMsg(SStreamTask* pTask, SStreamDispatchReq* pReq, SRpcMsg* pMsg, bool exec);
 int32_t streamProcessDispatchRsp(SStreamTask* pTask, SStreamDispatchRsp* pRsp, int32_t code);
 
-int32_t streamProcessRetrieveReq(SStreamTask* pTask, SStreamRetrieveReq* pReq, SRpcMsg* pMsg);
+int32_t streamProcessRetrieveReq(SStreamTask* pTask, SStreamRetrieveReq* pReq, SRpcMsg* pMsg);
 void streamTaskOpenAllUpstreamInput(SStreamTask* pTask);
 void streamTaskCloseUpstreamInput(SStreamTask* pTask, int32_t taskId);
 SStreamChildEpInfo* streamTaskGetUpstreamTaskEpInfo(SStreamTask* pTask, int32_t taskId);
@@ -557,45 +618,59 @@ int32_t streamTaskOutputResultBlock(SStreamTask* pTask, SStreamDataBlock* pBlock
 bool streamTaskShouldStop(const SStreamStatus* pStatus);
 bool streamTaskShouldPause(const SStreamStatus* pStatus);
 bool streamTaskIsIdle(const SStreamTask* pTask);
+int32_t streamTaskEndScanWAL(SStreamTask* pTask);
 
-int32_t streamScanExec(SStreamTask* pTask, int32_t batchSz);
-SStreamChildEpInfo* streamTaskGetUpstreamTaskEpInfo(SStreamTask* pTask, int32_t taskId);
+int32_t streamScanExec(SStreamTask* pTask, int32_t batchSize);
+void initRpcMsg(SRpcMsg* pMsg, int32_t msgType, void* pCont, int32_t contLen);
 
+char* createStreamTaskIdStr(int64_t streamId, int32_t taskId);
 
 // recover and fill history
-void streamPrepareNdoCheckDownstream(SStreamTask* pTask);
-int32_t streamTaskCheckDownstreamTasks(SStreamTask* pTask);
+void streamTaskCheckDownstreamTasks(SStreamTask* pTask);
+int32_t streamTaskDoCheckDownstreamTasks(SStreamTask* pTask);
 int32_t streamTaskLaunchScanHistory(SStreamTask* pTask);
-int32_t streamTaskCheckStatus(SStreamTask* pTask);
+int32_t streamTaskCheckStatus(SStreamTask* pTask, int32_t stage);
+int32_t streamTaskRestart(SStreamTask* pTask, const char* pDir);
+int32_t streamTaskUpdateEpInfo(SArray* pTaskList, int32_t nodeId, SEpSet* pEpSet);
+int32_t streamTaskStop(SStreamTask* pTask);
+int32_t streamSendCheckRsp(const SStreamMeta* pMeta, const SStreamTaskCheckReq* pReq, SStreamTaskCheckRsp* pRsp,
+                           SRpcHandleInfo* pRpcInfo, int32_t taskId);
+int32_t streamProcessCheckRsp(SStreamTask* pTask, const SStreamTaskCheckRsp* pRsp);
-int32_t streamCheckHistoryTaskDownstream(SStreamTask* pTask);
+int32_t streamLaunchFillHistoryTask(SStreamTask* pTask);
 int32_t streamTaskScanHistoryDataComplete(SStreamTask* pTask);
-int32_t streamStartRecoverTask(SStreamTask* pTask, int8_t igUntreated);
-void streamHistoryTaskSetVerRangeStep2(SStreamTask* pTask);
+int32_t streamStartScanHistoryAsync(SStreamTask* pTask, int8_t igUntreated);
+bool streamHistoryTaskSetVerRangeStep2(SStreamTask* pTask, int64_t latestVer);
+int32_t streamTaskGetInputQItems(const SStreamTask* pTask);
 
-bool streamTaskRecoverScanStep1Finished(SStreamTask* pTask);
-bool streamTaskRecoverScanStep2Finished(SStreamTask* pTask);
-int32_t streamTaskRecoverSetAllStepFinished(SStreamTask* pTask);
-
 // common
 int32_t streamSetParamForScanHistory(SStreamTask* pTask);
 int32_t streamRestoreParam(SStreamTask* pTask);
 int32_t streamSetStatusNormal(SStreamTask* pTask);
 const char* streamGetTaskStatusStr(int32_t status);
 void streamTaskPause(SStreamTask* pTask);
 void streamTaskResume(SStreamTask* pTask);
+void streamTaskHalt(SStreamTask* pTask);
+void streamTaskResumeFromHalt(SStreamTask* pTask);
+void streamTaskDisablePause(SStreamTask* pTask);
+void streamTaskEnablePause(SStreamTask* pTask);
+int32_t streamTaskSetUpstreamInfo(SStreamTask* pTask, const SStreamTask* pUpstreamTask);
+void streamTaskUpdateUpstreamInfo(SStreamTask* pTask, int32_t nodeId, const SEpSet* pEpSet);
+void streamTaskUpdateDownstreamInfo(SStreamTask* pTask, int32_t nodeId, const SEpSet* pEpSet);
+void streamTaskSetFixedDownstreamInfo(SStreamTask* pTask, const SStreamTask* pDownstreamTask);
 
 // source level
 int32_t streamSetParamForStreamScannerStep1(SStreamTask* pTask, SVersionRange* pVerRange, STimeWindow* pWindow);
 int32_t streamSetParamForStreamScannerStep2(SStreamTask* pTask, SVersionRange* pVerRange, STimeWindow* pWindow);
 int32_t streamBuildSourceRecover1Req(SStreamTask* pTask, SStreamScanHistoryReq* pReq, int8_t igUntreated);
 int32_t streamSourceScanHistoryData(SStreamTask* pTask);
 int32_t streamDispatchScanHistoryFinishMsg(SStreamTask* pTask);
 
-int32_t streamDispatchTransferStateMsg(SStreamTask* pTask);
 
 // agg level
-int32_t streamAggScanHistoryPrepare(SStreamTask* pTask);
-int32_t streamProcessScanHistoryFinishReq(SStreamTask* pTask, int32_t childId);
+int32_t streamTaskScanHistoryPrepare(SStreamTask* pTask);
+int32_t streamProcessScanHistoryFinishReq(SStreamTask* pTask, SStreamScanHistoryFinishReq* pReq,
+                                          SRpcHandleInfo* pRpcInfo);
+int32_t streamProcessScanHistoryFinishRsp(SStreamTask* pTask);
 
 // stream task meta
 void streamMetaInit();
@@ -603,20 +678,22 @@ void streamMetaCleanup();
 SStreamMeta* streamMetaOpen(const char* path, void* ahandle, FTaskExpand expandFunc, int32_t vgId);
 
 void streamMetaClose(SStreamMeta* streamMeta);
 
 // save to stream meta store
 int32_t streamMetaSaveTask(SStreamMeta* pMeta, SStreamTask* pTask);
-int32_t streamMetaAddDeployedTask(SStreamMeta* pMeta, int64_t ver, SStreamTask* pTask);
-int32_t streamMetaAddSerializedTask(SStreamMeta* pMeta, int64_t checkpointVer, char* msg, int32_t msgLen);
-int32_t streamMetaGetNumOfTasks(const SStreamMeta* pMeta); // todo remove it
-SStreamTask* streamMetaAcquireTask(SStreamMeta* pMeta, int32_t taskId);
-int32_t streamMetaRemoveTask(SStreamMeta* pMeta, int32_t taskId);
+int32_t streamMetaRegisterTask(SStreamMeta* pMeta, int64_t ver, SStreamTask* pTask, bool* pAdded);
+int32_t streamMetaUnregisterTask(SStreamMeta* pMeta, int64_t streamId, int32_t taskId);
+int32_t streamMetaGetNumOfTasks(SStreamMeta* pMeta); // todo remove it
+SStreamTask* streamMetaAcquireTask(SStreamMeta* pMeta, int64_t streamId, int32_t taskId);
 void streamMetaReleaseTask(SStreamMeta* pMeta, SStreamTask* pTask);
+void streamMetaRemoveTask(SStreamMeta* pMeta, int32_t taskId);
 
+// int32_t streamStateRebuild(SStreamMeta* pMeta, char* path, int64_t chkpId);
+int32_t streamMetaReopen(SStreamMeta* pMeta, int64_t chkpId);
 
 int32_t streamMetaBegin(SStreamMeta* pMeta);
 int32_t streamMetaCommit(SStreamMeta* pMeta);
-int32_t streamLoadTasks(SStreamMeta* pMeta, int64_t ver);
+int32_t streamLoadTasks(SStreamMeta* pMeta);
 
 // checkpoint
 int32_t streamProcessCheckpointSourceReq(SStreamTask* pTask, SStreamCheckpointSourceReq* pReq);
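
Task lookup is now keyed by `(streamId, taskId)` instead of `taskId` alone, and every successful acquire must be paired with a release. A usage sketch built from the declarations above:

```c
// Sketch: acquire/release pattern after the streamMeta API change.
static void withTask(SStreamMeta* pMeta, int64_t streamId, int32_t taskId) {
  SStreamTask* pTask = streamMetaAcquireTask(pMeta, streamId, taskId);
  if (pTask == NULL) {
    return;  // not deployed on this vnode, or already dropped
  }
  // ... operate on pTask while holding the reference ...
  streamMetaReleaseTask(pMeta, pTask);
}
```
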
@@ -53,6 +53,8 @@ void updateInfoAddCloseWindowSBF(SUpdateInfo *pInfo);
 void updateInfoDestoryColseWinSBF(SUpdateInfo *pInfo);
 int32_t updateInfoSerialize(void *buf, int32_t bufLen, const SUpdateInfo *pInfo);
 int32_t updateInfoDeserialize(void *buf, int32_t bufLen, SUpdateInfo *pInfo);
+void windowSBfDelete(SUpdateInfo *pInfo, uint64_t count);
+void windowSBfAdd(SUpdateInfo *pInfo, uint64_t count);
 
 #ifdef __cplusplus
 }
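
`windowSBfAdd`/`windowSBfDelete` expose the rotation of the per-window scalable bloom filters behind update deduplication; the same pair also surfaces through the `SStateStore` vtable earlier in this patch. A hedged pairing sketch, with the surrounding bookkeeping invented for illustration:

```c
// Sketch: retire filters for `closed` expired windows, then pre-allocate
// the same number for windows that are about to open.
static void rotateWindowFilters(SUpdateInfo* pInfo, uint64_t closed) {
  windowSBfDelete(pInfo, closed);
  windowSBfAdd(pInfo, closed);
}
```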