This commit is contained in:
wangmm0220 2023-07-28 15:07:36 +08:00
commit 3cd2b10545
416 changed files with 33433 additions and 11710 deletions

View File

@ -39,7 +39,7 @@ TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series
# Build
TDengine can currently be installed and run on Linux, Windows, macOS, and other platforms. Applications on any OS can also connect to the taosd server through taosAdapter's RESTful interface. X64/ARM64 CPUs are supported, and MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures will be supported later.
TDengine can currently be installed and run on Linux, Windows, macOS, and other platforms. Applications on any OS can also connect to the taosd server through taosAdapter's RESTful interface. X64/ARM64 CPUs are supported, and MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures will be supported later. Building with a cross-compiler is not currently supported.
You can choose to install from source code, a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/). This quick guide applies only to installation from source.
@ -352,4 +352,4 @@ TDengine 提供了丰富的应用程序开发接口,其中包括 C/C++、Java
# Join the Technical Discussion Group
The official TDengine community group "IoT Big Data Group" is open to everyone. Search for the WeChat ID "tdengine1" and add Little T as a friend to join the group.
The official TDengine community group "IoT Big Data Group" is open to everyone. Search for the WeChat ID "tdengine" and add Little T as a friend to join the group.

View File

@ -47,7 +47,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum
# Building
At the moment, TDengine server supports running on Linux, Windows, and macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future.
At the moment, TDengine server supports running on Linux, Windows, and macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect to the taosd service. TDengine supports X64/ARM64 CPUs, and it will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future. Building in a cross-compiling environment is not currently supported.
You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.

View File

@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
GIT_TAG 3.0
GIT_TAG main
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE

View File

@ -18,7 +18,7 @@ The full package of TDengine includes the TDengine Server (`taosd`), TDengine Cl
The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download the Lite package that includes only `taosd` and the C/C++ connector.
The TDengine Community Edition is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/M1 macOS.
TDengine OSS is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/M1 macOS.
## Operating environment requirements
In the Linux system, the minimum requirements for the operating environment are as follows:
@ -201,7 +201,7 @@ You can use the TDengine CLI to monitor your TDengine deployment and execute ad
<TabItem label="Windows" value="windows">
After the installation is complete, please run `sc start taosd` or run `C:\TDengine\taosd.exe` with administrator privileges to start TDengine Server.
After the installation is complete, please run `sc start taosd` or run `C:\TDengine\taosd.exe` with administrator privileges to start TDengine Server. Then run `sc start taosadapter` or run `C:\TDengine\taosadapter.exe` with administrator privileges to start taosAdapter, which provides the HTTP/REST service.
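A quick way to confirm that taosAdapter is serving the HTTP/REST interface is to query it with curl; a minimal check, assuming the default `root:taosdata` credentials and port 6041:
```bash
curl -u root:taosdata -d "show databases" localhost:6041/rest/sql
```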
## Command Line Interface (CLI)

View File

@ -21,17 +21,6 @@ import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
## Study TDengine Knowledge Map
The TDengine Knowledge Map covers the various knowledge points of TDengine, revealing the invocation relationships and data flow between various conceptual entities. Learning and understanding the TDengine Knowledge Map will help you quickly master the TDengine knowledge system.
<figure>
<center>
<a href="pathname:///img/tdengine-map.svg" target="_blank"><img src="/img/tdengine-map.svg" width="80%" /></a>
<figcaption>Diagram 1. TDengine Knowledge Map</figcaption>
</center>
</figure>
## Join TDengine Community
<table width="100%">

View File

@ -298,7 +298,7 @@ You configure the following parameters when creating a consumer:
| `td.connect.port` | string | Port of the server side | |
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. Each topic can create up to 100 consumer groups. |
| `client.id` | string | Client ID | Maximum length: 192. |
| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
| `auto.offset.reset` | enum | Initial offset for the consumer group | `earliest`: subscribe from the earliest data (default); `latest`: subscribe from the latest data; `none`: cannot subscribe without a committed offset |
| `enable.auto.commit` | boolean | Commit automatically; true: the user application does not need to commit explicitly; false: the user application must handle commits itself | Default value is true |
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds | |
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | Default value: false |

View File

@ -403,7 +403,7 @@ In this section we will demonstrate 5 examples of developing UDF in Python langu
In the guide, some debugging skills of using Python UDF will be explained too.
We assume you are using Linux system and already have TDengine 3.0.4.0+ and Python 3.x.
We assume you are using Linux system and already have TDengine 3.0.4.0+ and Python 3.7+.
Note: **You cannot use the print() function to output logs inside a UDF; write logs to a specific file or use Python's logging module.**

View File

@ -4,23 +4,31 @@ sidebar_label: Kubernetes
description: This document describes how to deploy TDengine on Kubernetes.
---
TDengine is a cloud-native time-series database that can be deployed on Kubernetes. This document gives a step-by-step description of how you can use YAML files to create a TDengine cluster and introduces common operations for TDengine in a Kubernetes environment.
## Overview
As a time-series database designed for cloud-native architectures, TDengine supports Kubernetes deployment. Here we first introduce, step by step, how to use YAML files to create a highly available TDengine cluster from scratch for production use, and then highlight common operations on TDengine in a Kubernetes environment.
To meet [high availability](https://docs.taosdata.com/tdinternal/high-availability/) requirements, the cluster must satisfy the following:
- 3 or more dnodes: TDengine does not allow multiple vnodes of the same vgroup to reside on the same dnode, so a database with 3 replicas requires at least 3 dnodes.
- 3 mnodes: the mnode is responsible for managing the entire TDengine cluster. TDengine creates only one mnode by default; if the dnode hosting that mnode is dropped, the entire cluster becomes unavailable.
- 3 database replicas: replication in TDengine is configured at the database level, so a database with 3 replicas requires at least 3 dnodes in the cluster. If any single dnode goes offline, the cluster can still be used normally. **If 2 dnodes are offline, the cluster becomes unavailable, because the RAFT-based election can no longer complete.** (Enterprise edition: in disaster-recovery scenarios, if any node's data files are damaged, the dnode can be restored by relaunching it.)
## Prerequisites
Before deploying TDengine on Kubernetes, perform the following:
* Current steps are compatible with Kubernetes v1.5 and later version.
* Install and configure minikube, kubectl, and helm.
* Install and deploy Kubernetes and ensure that it can be accessed and used normally. Update any container registries or other services as necessary.
- This document applies to Kubernetes v1.19 and above.
- This document uses the **kubectl** tool for installation and deployment; install it in advance.
- Kubernetes must already be installed and deployed, with access to the necessary container registries and other services, which you can update as needed.
You can download the configuration files in this document from [GitHub](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).
## Configure the service
Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taos`) for use in the next step. Add the ports required by TDengine:
Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taos`) for use in the next step. Then add the ports required by TDengine and record the value of the selector label `app` (in this example, `tdengine`) for use later:
```yaml
```YAML
---
apiVersion: v1
kind: Service
@ -31,10 +39,10 @@ metadata:
spec:
ports:
- name: tcp6030
- protocol: "TCP"
protocol: "TCP"
port: 6030
- name: tcp6041
- protocol: "TCP"
protocol: "TCP"
port: 6041
selector:
app: "tdengine"
@ -42,10 +50,11 @@ spec:
## Configure the service as StatefulSet
Configure the TDengine service as a StatefulSet.
Create the `tdengine.yaml` file and set `replicas` to 3. In this example, the region is set to Asia/Shanghai and 10 GB of standard storage are allocated per node. You can change the configuration based on your environment and business requirements.
According to the Kubernetes guidance for the various deployment types, we will use a StatefulSet as the deployment resource type for TDengine. Create the file `tdengine.yaml`, where `replicas` sets the number of cluster nodes to 3. The node time zone is China (Asia/Shanghai), and each node is allocated 5 GB of standard storage (refer to the [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) documentation for configuring a storage class). You can also adjust these values to your actual situation.
```yaml
Pay special attention to the startupProbe configuration. If a dnode's Pod goes down for a period of time and then restarts, the newly launched dnode Pod will be temporarily unavailable. If the startupProbe window is too small, Kubernetes will conclude that the Pod is in an abnormal state and try to restart it, so the dnode's Pod will restart frequently and never return to normal status. Refer to [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).
```YAML
---
apiVersion: apps/v1
kind: StatefulSet
@ -69,14 +78,14 @@ spec:
spec:
containers:
- name: "tdengine"
image: "tdengine/tdengine:3.0.0.0"
image: "tdengine/tdengine:3.0.7.1"
imagePullPolicy: "IfNotPresent"
ports:
- name: tcp6030
- protocol: "TCP"
protocol: "TCP"
containerPort: 6030
- name: tcp6041
- protocol: "TCP"
protocol: "TCP"
containerPort: 6041
env:
# POD_NAME for FQDN config
@ -102,12 +111,18 @@ spec:
# Must set if you want a cluster.
- name: TAOS_FIRST_EP
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
# TAOS_FQDN should always be set in k8s env.
- name: TAOS_FQDN
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
volumeMounts:
- name: taosdata
mountPath: /var/lib/taos
startupProbe:
exec:
command:
- taos-check
failureThreshold: 360
periodSeconds: 10
readinessProbe:
exec:
command:
@ -129,266 +144,401 @@ spec:
storageClassName: "standard"
resources:
requests:
storage: "10Gi"
storage: "5Gi"
```
## Use kubectl to deploy TDengine
Run the following commands:
First create the corresponding namespace, and then execute the following commands in sequence.
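The namespace is not defined in the YAML files themselves; a minimal sketch of creating it, assuming the `tdengine-test` namespace used throughout this document:
```Bash
kubectl create namespace tdengine-test
```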
```bash
kubectl apply -f taosd-service.yaml
kubectl apply -f tdengine.yaml
```Bash
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
```
The preceding configuration generates a TDengine cluster with three nodes in which dnodes are automatically configured. You can run the `show dnodes` command to query the nodes in the cluster:
The above configuration generates a three-node TDengine cluster in which dnodes are configured automatically. You can run the **show dnodes** command to view the nodes in the current cluster:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-2 -- taos -s "show dnodes"
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
```
The output is as follows:
```
```Bash
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
Query OK, 3 rows in database (0.003655s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001853s)
```
View the current mnode:
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-19 17:54:19.520
Query OK, 1 row(s) in set (0.001282s)
```
## Create mnode
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
```
View the mnodes:
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-20 09:19:36.060
*************************** 2.row ***************************
id: 2
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:22:12.838
*************************** 3.row ***************************
id: 3
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:22:23.271
Query OK, 3 row(s) in set (0.003108s)
```
## Enable port forwarding
The kubectl port forwarding feature allows applications to access the TDengine cluster running on Kubernetes.
Kubectl port forwarding enables applications to access TDengine clusters running in Kubernetes environments.
```
kubectl port-forward tdengine-0 6041:6041 &
```bash
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
```
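You can also forward the Service instead of a single Pod, so the tunnel does not break when that Pod restarts; a sketch, assuming the Service name `taos` from `taosd-service.yaml`:
```Bash
kubectl port-forward -n tdengine-test svc/taos 6041:6041 &
```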
Use curl to verify that the TDengine REST API is working on port 6041:
Use **curl** to verify that the TDengine REST API is working on port 6041:
```
$ curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
Handling connection for 6041
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["comp","TINYINT",1],["precision","VARCHAR",2],["status","VARCHAR",10],["retention","VARCHAR",60],["single_stable","BOOL",1],["cachemodel","VARCHAR",11],["cachesize","INT",4],["wal_level","TINYINT",1],["wal_fsync_period","INT",4],["wal_retention_period","INT",4],["wal_retention_size","BIGINT",8],["wal_roll_period","INT",4],["wal_segment_size","BIGINT",8]],"data":[["information_schema",null,null,16,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null],["performance_schema",null,null,10,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null]],"rows":2}
```bash
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
```
## Enable the dashboard for visualization
## Test cluster
The minikube dashboard command enables visualized cluster management.
### Data preparation
```
$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
#### taosBenchmark
Create a database with 3 replicas using taosBenchmark, write 100 million rows (10,000 tables × 10,000 rows each), and then query the data:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taosBenchmark -I stmt -d test -n 10000 -t 10000 -a 3
# query data
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select count(*) from test.meters;"
taos> select count(*) from test.meters;
count(*) |
========================
100000000 |
Query OK, 1 row(s) in set (0.103537s)
```
In some public clouds, minikube cannot be remotely accessed if it is bound to 127.0.0.1. In this case, use the kubectl proxy command to map the port to 0.0.0.0. Then, you can access the dashboard by using a web browser to open the dashboard URL above on the public IP address and port of the virtual machine.
View the vnode distribution via `show dnodes`:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 8 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
2 | tdengine-1.ta... | 8 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
3 | tdengine-2.ta... | 8 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001357s)
```
$ kubectl proxy --accept-hosts='^.*$' --address='0.0.0.0'
View the vnode distribution via `show test.vgroups`:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test.vgroups"
taos> show test.vgroups
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
==============================================================================================================================================================================================
2 | test | 1267 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
3 | test | 1215 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
4 | test | 1215 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
5 | test | 1307 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
6 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
7 | test | 1275 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
8 | test | 1231 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
9 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
Query OK, 8 row(s) in set (0.001488s)
```
#### Manual creation
Create a database `test1` with 3 replicas, then create a table in it and write 2 rows of data:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- \
taos -s \
"create database if not exists test1 replica 3;
use test1;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
```
View the vnode distribution via `show test1.vgroups`:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test1.vgroups"
taos> show test1.vgroups
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
==============================================================================================================================================================================================
10 | test1 | 1 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
11 | test1 | 0 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
Query OK, 2 row(s) in set (0.001489s)
```
### Test fault tolerance
Disconnect the dnode where the mnode leader resides, in this case dnode1:
```Bash
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 0/1 ErrImagePull 2 (2s ago) 20m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (6m48s ago) 20m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 21m 10.244.1.223 node85 <none> <none>
```
At this point, the cluster re-elects its mnode leader, and the mnode on dnode2 becomes the leader.
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
Welcome to the TDengine Command Line Interface, Client Version:3.0.7.1.202307190706
Copyright (c) 2022 by TDengine, all rights reserved.
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: offline
status: offline
create_time: 2023-07-19 17:54:18.559
reboot_time: 1970-01-01 08:00:00.000
*************************** 2.row ***************************
id: 2
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:32:00.227
*************************** 3.row ***************************
id: 3
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:32:00.026
Query OK, 3 row(s) in set (0.001513s)
```
The cluster can still read and write normally:
```Bash
# insert
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "insert into test1.t1 values(now, 1)(now+1s, 2);"
taos> insert into test1.t1 values(now, 1)(now+1s, 2);
Insert OK, 2 row(s) affected (0.002098s)
# select
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select *from test1.t1"
taos> select *from test1.t1
ts | n |
========================================
2023-07-19 18:04:58.104 | 1 |
2023-07-19 18:04:59.104 | 2 |
2023-07-19 18:06:00.303 | 1 |
2023-07-19 18:06:01.303 | 2 |
Query OK, 4 row(s) in set (0.001994s)
```
Similarly, if a dnode hosting a non-leader mnode is dropped, reads and writes of course continue to work normally, so this case is not shown here.
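To reproduce a non-leader failure yourself, a sketch that deletes a follower Pod (here assuming `tdengine-2` currently hosts a follower mnode) and watches the StatefulSet recreate it:
```Bash
# delete a follower Pod; the StatefulSet controller recreates it automatically
kubectl delete pod tdengine-2 -n tdengine-test
# watch the Pod come back to Running/Ready
kubectl get pod -l app=tdengine -n tdengine-test -w
# confirm the mnode roles afterwards
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show mnodes\G"
```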
## Scaling Out Your Cluster
TDengine clusters can scale automatically:
A TDengine cluster supports automatic expansion:
```bash
```Bash
kubectl scale statefulsets tdengine --replicas=4
```
The preceding command increases the number of replicas to 4. After running this command, query the pod status:
The parameter `--replicas=4` in the above command indicates that you want to expand the TDengine cluster to 4 nodes. After execution, first check the Pod status:
```bash
kubectl get pods -l app=tdengine
```Bash
kubectl get pod -l app=tdengine -n tdengine-test -o wide
```
The output is as follows:
```
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 161m
tdengine-1 1/1 Running 0 161m
tdengine-2 1/1 Running 0 32m
tdengine-3 1/1 Running 0 32m
```Plain
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h26m ago) 6h53m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (6h39m ago) 6h53m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h16m 10.244.1.224 node85 <none> <none>
tdengine-3 1/1 Running 0 3m24s 10.244.2.76 node86 <none> <none>
```
The status of all pods is Running. Once the pod status changes to Ready, you can check the dnode status:
At this time, the Pod status is still Running, and the dnode state within the TDengine cluster can be seen only after the Pod status becomes `Ready`:
```bash
kubectl exec -i -t tdengine-3 -- taos -s "show dnodes"
```Bash
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
```
The following output shows that the TDengine cluster has been expanded to 4 replicas:
The dnode list of the expanded four-node TDengine cluster:
```
```Plain
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
4 | tdengine-3.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:33:16.039 | |
Query OK, 4 rows in database (0.008377s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
Query OK, 4 row(s) in set (0.003628s)
```
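To wait for the new Pod to become Ready before querying the dnode list, a sketch using `kubectl wait` (the timeout value is an assumption; adjust it to your environment):
```Bash
kubectl wait --for=condition=Ready pod/tdengine-3 -n tdengine-test --timeout=300s
```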
## Scaling In Your Cluster
When you scale in a TDengine cluster, your data is migrated to different nodes. You must run the drop dnodes command in TDengine to remove dnodes before scaling in your Kubernetes environment.
Since TDengine migrates data between nodes during scale-out and scale-in, scaling in with **kubectl** requires first running the `drop dnode` command (**if the cluster contains databases with 3 replicas, the number of dnodes after scale-in must still be greater than or equal to 3; otherwise the drop dnode operation will be aborted**). Only after the node has been removed from TDengine should you scale in the Kubernetes cluster.
Note: In a Kubernetes StatefulSet service, the newest pods are always removed first. For this reason, when you scale in your TDengine cluster, ensure that you drop the newest dnodes.
Note: Because Pods in a StatefulSet can only be removed in reverse order of creation, TDengine dnodes must also be dropped in reverse order of creation; otherwise the Pod will end up in an error state.
```
$ kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 4"
```
```bash
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "drop dnode 4"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
Query OK, 3 rows in database (0.004861s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
Query OK, 3 row(s) in set (0.003324s)
```
Verify that the dnode have been successfully removed by running the `kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"` command. Then run the following command to remove the pod:
After confirming that the removal succeeded (use `kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"` to view and confirm the dnode list), use kubectl to remove the Pod:
```
kubectl scale statefulsets tdengine --replicas=3
```Plain
kubectl scale statefulsets tdengine --replicas=3 -n tdengine-test
```
The newest pod in the deployment is removed. Run the `kubectl get pods -l app=tdengine` command to query the pod status:
The last Pod will be deleted. Use `kubectl get pod -l app=tdengine -n tdengine-test` to check the Pod status:
```
$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 4m7s
tdengine-1 1/1 Running 0 3m55s
tdengine-2 1/1 Running 0 2m28s
```Plain
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h55m ago) 7h22m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (7h9m ago) 7h23m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h45m 10.244.1.224 node85 <none> <none>
```
After the pod has been removed, manually delete the PersistentVolumeClaim (PVC). Otherwise, future scale-outs will attempt to use existing data.
After the Pod is deleted, the PVC must be deleted manually; otherwise the previous data will be reused during the next scale-out, preventing the new node from joining the cluster normally.
```bash
$ kubectl delete pvc taosdata-tdengine-3
```Bash
kubectl delete pvc taosdata-tdengine-3 -n tdengine-test
```
Your cluster has now been safely scaled in, and you can scale it out again as necessary.
The cluster is now in a safe state and can be scaled out again if needed.
```bash
$ kubectl scale statefulsets tdengine --replicas=4
```Bash
kubectl scale statefulsets tdengine --replicas=4 -n tdengine-test
statefulset.apps/tdengine scaled
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 35m
tdengine-1 1/1 Running 0 34m
tdengine-2 1/1 Running 0 12m
tdengine-3 0/1 ContainerCreating 0 4s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 35m
tdengine-1 1/1 Running 0 34m
tdengine-2 1/1 Running 0 12m
tdengine-3 0/1 Running 0 7s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h59m ago) 7h27m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (7h13m ago) 7h27m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h49m 10.244.1.224 node85 <none> <none>
tdengine-3 1/1 Running 0 20s 10.244.2.77 node86 <none> <none>
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
5 | tdengine-2.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:01:36.479 | |
6 | tdengine-3.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:13:54.411 | |
Query OK, 4 row(s) in set (0.001348s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
5 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | | | |
Query OK, 4 row(s) in set (0.003881s)
```
## Remove a TDengine Cluster
To fully remove a TDengine cluster, you must delete its statefulset, svc, configmap, and pvc entries:
> **When deleting PVCs, pay attention to the PV's persistentVolumeReclaimPolicy. It is recommended to set it to Delete, so that when a PVC is deleted the PV is cleaned up automatically, along with the underlying CSI storage resources. If the policy is not configured to clean up the PV automatically when its PVC is deleted, then after you delete the PVC and clean up the PV manually, the CSI storage resources backing the PV may not be released.**
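A sketch of checking and changing the reclaim policy, where `<pv-name>` is a placeholder for a volume name taken from the `kubectl get pv` output:
```Bash
# list PVs and their current reclaim policy
kubectl get pv
# switch a PV to the Delete policy so it is cleaned up together with its PVC
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```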
```bash
kubectl delete statefulset -l app=tdengine
kubectl delete svc -l app=tdengine
kubectl delete pvc -l app=tdengine
kubectl delete configmap taoscfg
To completely remove the TDengine cluster, you need to clean up the statefulset, svc, configmap, and pvc respectively.
```Bash
kubectl delete statefulset -l app=tdengine -n tdengine-test
kubectl delete svc -l app=tdengine -n tdengine-test
kubectl delete pvc -l app=tdengine -n tdengine-test
kubectl delete configmap taoscfg -n tdengine-test
```
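To confirm that everything has been removed, a quick check using the same label and namespace (the `taoscfg` ConfigMap has no label, so it is checked by name):
```Bash
kubectl get statefulset,svc,pvc -l app=tdengine -n tdengine-test
kubectl get configmap taoscfg -n tdengine-test
```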
## Troubleshooting
### Error 1
If you remove a pod without first running `drop dnode`, some TDengine nodes will go offline.
No "drop dnode" is directly reduced. Since the TDengine has not deleted the node, the reduced pod causes some nodes in the TDengine cluster to be offline.
```
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
```Plain
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
5 | tdengine-2.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:01:36.479 | status msg timeout |
6 | tdengine-3.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:13:54.411 | status msg timeout |
Query OK, 4 row(s) in set (0.001323s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
5 | tdengine-3.ta... | 0 | 16 | offline | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | status msg timeout | | |
Query OK, 4 row(s) in set (0.003862s)
```
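One way to recover, assuming the orphaned dnode is id 5 as in the output above, is to drop it from TDengine so that the cluster metadata matches the Pods that actually exist (depending on the version and node state, dropping an offline dnode may require additional options; see the TDengine SQL reference):
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "drop dnode 5"
```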
### Error 2
## Finally
If the number of nodes after a scale-in is less than the value of the replica parameter, the cluster will go down:
For high availability and high reliability of TDengine in a Kubernetes environment, protection against hardware damage and disaster recovery is handled at two levels:
Create a database with replica set to 2 and add data.
1. Disaster recovery at the underlying distributed block-storage layer: multi-replica block storage. Popular distributed block-storage systems such as Ceph have multi-replica capability, extending storage replicas to different racks, cabinets, server rooms, and data centers (or you can directly use the block-storage services provided by public-cloud vendors).
2. Disaster recovery at the TDengine level: in TDengine Enterprise, when a dnode goes permanently offline (for example, disk damage on a bare-metal node causing data loss), a blank dnode can be relaunched to restore the work of the original dnode.
```bash
kubectl exec -i -t tdengine-0 -- \
taos -s \
"create database if not exists test replica 2;
use test;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
Finally, welcome to [TDengine Cloud](https://cloud.tdengine.com/) to experience the one-stop, fully managed TDengine cloud service.
```
Scale in to one node:
```bash
kubectl scale statefulsets tdengine --replicas=1
```
In the TDengine CLI, you can see that no database operations succeed:
```
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000845s)
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000837s)
taos> use test;
Database changed.
taos> insert into t1 values(now, 3);
DB error: Unable to resolve FQDN (0.013874s)
```
> TDengine Cloud is a fully managed cloud platform for time-series data processing, built on the open-source time-series database TDengine. Beyond a high-performance time-series database, it offers caching, subscription, and stream-computing capabilities, provides convenient and secure data sharing, and includes numerous enterprise-grade features. It allows enterprises in IoT, Industrial Internet, finance, and IT operations monitoring to significantly reduce the labor and operating costs of managing time-series data.

View File

@ -42,10 +42,20 @@ In TDengine, the data types below can be used when specifying a column or tag.
| 14 | NCHAR | User Defined | Multi-byte string that can include multi byte characters like Chinese characters. Each character of NCHAR type consumes 4 bytes storage. The string value should be quoted with single quotes. Literal single quote inside the string must be preceded with backslash, like `\'`. The length must be specified when defining a column or tag of NCHAR type, for example nchar(10) means it can store at most 10 characters of nchar type and will consume fixed storage of 40 bytes. An error will be reported if the string value exceeds the length defined. |
| 15 | JSON | | JSON type can only be used on tags. A tag of JSON type cannot be combined with tags of any other type. |
| 16 | VARCHAR | User-defined | Alias of BINARY |
| 17 | GEOMETRY | User-defined | Geometry |
:::note
- Each row of the table cannot be longer than 48KB (64KB since version 3.0.5.0) (note that each BINARY/NCHAR/GEOMETRY column takes up an additional 2 bytes of storage space).
- Only ASCII visible characters are suggested to be used in a column or tag of BINARY type. Multi-byte characters must be stored in NCHAR type.
- The length of BINARY can be up to 16,374 bytes (since version 3.0.5.0: 65,517 bytes for data columns and 16,382 bytes for tag columns). The string value must be quoted with single quotes. You must specify a length in bytes for a BINARY value, for example binary(20) for up to twenty single-byte characters. If the data exceeds the specified length, an error will occur. A literal single quote inside the string must be preceded by a backslash, like `\'`.
- The maximum length of a GEOMETRY data column is 65,517 bytes, and the maximum length of a GEOMETRY tag column is 16,382 bytes. The 2D subtypes POINT, LINESTRING, and POLYGON are supported (a usage sketch follows this note). The following table describes the length calculation method:
| # | **Syntax** | **MinLen** | **MaxLen** | **Growth of each point** |
|---|--------------------------------------|------------|------------|--------------------------|
| 1 | POINT(1.0 1.0) | 21 | 21 | NA |
| 2 | LINESTRING(1.0 1.0, 2.0 2.0) | 9+2*16 | 9+4094*16 | +16 |
| 3 | POLYGON((1.0 1.0, 2.0 2.0, 1.0 1.0)) | 13+3*16 | 13+4094*16 | +16 |
- Numeric values in SQL statements will be determined as integer or float type according to whether there is decimal point or whether scientific notation is used, so attention must be paid to avoid overflow. For example, 9999999999999999999 will be considered as overflow because it exceeds the upper limit of long integer, but 9999999999999999999.0 will be considered as a legal float number.
:::
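A usage sketch for GEOMETRY under these rules, with illustrative database, table, and column names; values are assumed to be written as WKT string literals, and the declared column length bounds the serialized size:
```bash
taos -s "create database if not exists geodb;
         create table geodb.geo (ts timestamp, g geometry(512));
         insert into geodb.geo values (now, 'POINT(1.0 1.0)');
         insert into geodb.geo values (now+1s, 'LINESTRING(1.0 1.0, 2.0 2.0)');"
```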

View File

@ -36,14 +36,12 @@ database_option: {
| TSDB_PAGESIZE value
| WAL_RETENTION_PERIOD value
| WAL_RETENTION_SIZE value
| WAL_ROLL_PERIOD value
| WAL_SEGMENT_SIZE value
}
```
## Parameters
- BUFFER: specifies the size (in MB) of the write buffer for each vnode. Enter a value between 3 and 16384. The default value is 96.
- BUFFER: specifies the size (in MB) of the write buffer for each vnode. Enter a value between 3 and 16384. The default value is 256.
- CACHEMODEL: specifies how the latest data in subtables is stored in the cache. The default value is none.
- none: The latest data is not cached.
- last_row: The last row of each subtable is cached. This option significantly improves the performance of the LAST_ROW function.
@ -58,7 +56,7 @@ database_option: {
- WAL_FSYNC_PERIOD: specifies the interval (in milliseconds) at which data is written from the WAL to disk. This parameter takes effect only when the WAL parameter is set to 2. The default value is 3000. Enter a value between 0 and 180000. The value 0 indicates that incoming data is immediately written to disk.
- MAXROWS: specifies the maximum number of rows recorded in a block. The default value is 4096.
- MINROWS: specifies the minimum number of rows recorded in a block. The default value is 100.
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default. The Enterprise Edition supports the [Tiered Storage](https://docs.tdengine.com/tdinternal/arch/#tiered-storage) function, so multiple KEEP values are supported (comma separated, up to 3 values, satisfying keep 0 <= keep 1 <= keep 2, e.g. KEEP 100h,100d,3650d); the Community Edition does not support Tiered Storage (even if multiple KEEP values are configured, they do not take effect; only the maximum KEEP value is used).
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default. TDengine Enterprise supports the [Tiered Storage](https://docs.tdengine.com/tdinternal/arch/#tiered-storage) function, so multiple KEEP values are supported (comma separated, up to 3 values, satisfying keep 0 <= keep 1 <= keep 2, e.g. KEEP 100h,100d,3650d); TDengine OSS does not support Tiered Storage (even if multiple KEEP values are configured, they do not take effect; only the maximum KEEP value is used).
- PAGES: specifies the number of pages in the metadata storage engine cache on each vnode. Enter a value greater than or equal to 64. The default value is 256. The space occupied by metadata storage on each vnode is equal to the product of the values of the PAGESIZE and PAGES parameters. The space occupied by default is 1 MB.
- PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. The default value is 4. Enter a value between 1 and 16384.
- PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms.
@ -75,10 +73,8 @@ database_option: {
- TABLE_PREFIX: when positive, a prefix of this length in the table name is ignored when distributing a table to a vgroup; when negative, only a prefix of that length is used. The default value is 0. For example, for the table name v30001, "0001" is used for distribution if TABLE_PREFIX is set to 2, but "v3" is used if TABLE_PREFIX is set to -2. This helps you control the distribution of tables.
- TABLE_SUFFIX: when positive, a suffix of this length in the table name is ignored when distributing a table to a vgroup; when negative, only a suffix of that length is used. The default value is 0. For example, for the table name v30001, "v300" is used for distribution if TABLE_SUFFIX is set to 2, but "01" is used if TABLE_SUFFIX is set to -2. This helps you control the distribution of tables.
- TSDB_PAGESIZE: The page size of the data storage engine in a vnode. The unit is KB. The default is 4 KB. The range is 1 to 16384, that is, 1 KB to 16 MB.
- WAL_RETENTION_PERIOD: specifies the maximum time for which WAL files are kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value is 0. A value of 0 indicates that WAL files are not required to be kept for consumption. Alter it to a proper value first in order to create topics.
- WAL_RETENTION_PERIOD: specifies the maximum time for which WAL files are kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value is 3600, which means the data in the latest 3600 seconds will be kept in the WAL for data subscription. Adjust this parameter to a value appropriate for your data subscription needs.
- WAL_RETENTION_SIZE: specifies the maximum total size of WAL files to be kept for consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that the total size of WAL files kept for consumption has no upper limit.
- WAL_ROLL_PERIOD: specifies the time after which WAL files are rotated. After this period elapses, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after TSDB data in memory are flushed to disk.
- WAL_SEGMENT_SIZE: specifies the maximum size of a WAL file. After the current WAL file reaches this size, a new WAL file is created. The default value is 0. A value of 0 indicates that a new WAL file is created only after TSDB data in memory are flushed to disk.
### Example Statement
```sql

View File

@ -9,27 +9,27 @@ You create standard tables and subtables with the `CREATE TABLE` statement.
```sql
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...) [table_options]
CREATE TABLE create_subtable_clause
CREATE TABLE [IF NOT EXISTS] [db_name.]tb_name (create_definition [, create_definition] ...)
[TAGS (create_definition [, create_definition] ...)]
[table_options]
create_subtable_clause: {
create_subtable_clause [create_subtable_clause] ...
| [IF NOT EXISTS] [db_name.]tb_name USING [db_name.]stb_name [(tag_name [, tag_name] ...)] TAGS (tag_value [, tag_value] ...)
}
create_definition:
col_name column_definition
column_definition:
type_name [comment 'string_value']
table_options:
table_option ...
table_option: {
COMMENT 'string_value'
| WATERMARK duration[,duration]
@ -45,9 +45,9 @@ table_option: {
1. The first column of a table MUST be of type TIMESTAMP. It is automatically set as the primary key.
2. The maximum length of the table name is 192 bytes.
3. The maximum length of each row is 48k(64k since version 3.0.5.0) bytes, please note that the extra 2 bytes used by each BINARY/NCHAR column are also counted.
3. The maximum length of each row is 48k(64k since version 3.0.5.0) bytes, please note that the extra 2 bytes used by each BINARY/NCHAR/GEOMETRY column are also counted.
4. The name of the subtable can only consist of characters from the English alphabet, digits and underscore. Table names can't start with a digit. Table names are case insensitive.
5. The maximum length in bytes must be specified when using BINARY or NCHAR types.
5. The maximum length in bytes must be specified when using BINARY/NCHAR/GEOMETRY types.
6. Escape character "\`" can be used to avoid the conflict between table names and reserved keywords, above rules will be bypassed when using escape character on table names, but the upper limit for the name length is still valid. The table names specified using escape character are case sensitive.
For example \`aBc\` and \`abc\` are different table names but `abc` and `aBc` are same table names because they are both converted to `abc` internally.
Only ASCII visible characters can be used with escape character.
@ -58,7 +58,7 @@ table_option: {
3. MAX_DELAY: specifies the maximum latency for pushing computation results. The default value is 15 minutes or the value of the INTERVAL parameter, whichever is smaller. Enter a value between 0 and 15 minutes in milliseconds, seconds, or minutes. You can enter multiple values separated by commas (,). Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database.
4. ROLLUP: specifies aggregate functions to roll up. Rolling up a function provides downsampled results based on multiple axes. This parameter applies only to supertables and takes effect only when the RETENTIONS parameter has been specified for the database. You can specify only one function to roll up. The rollup takes effect on all columns except TS. Enter one of the following values: avg, sum, min, max, last, or first.
5. SMA: specifies functions on which to enable small materialized aggregates (SMA). SMA is user-defined precomputation of aggregates based on data blocks. Enter one of the following values: max, min, or sum This parameter can be used with supertables and standard tables.
6. TTL: specifies the time to live (TTL) for the table. If TTL is specified when creating a table, then once the table has existed longer than the TTL period, TDengine automatically deletes it. Note that the system may not delete the table at the exact moment the TTL expires, but it guarantees that the table will eventually be deleted. The unit of TTL is days. The default value is 0, i.e. never expire. A usage sketch follows.
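A sketch of the TTL table option in use, with illustrative database and table names; the table is created with a 10-day TTL, after which TDengine eventually deletes it automatically:
```bash
taos -s "create database if not exists test;
         create table test.t_expiring (ts timestamp, v int) ttl 10;"
```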
## Create Subtables
@ -88,7 +88,7 @@ You can create multiple subtables in a single SQL statement provided that all su
```sql
ALTER TABLE [db_name.]tb_name alter_table_clause
alter_table_clause: {
alter_table_options
| ADD COLUMN col_name column_type
@ -96,10 +96,10 @@ alter_table_clause: {
| MODIFY COLUMN col_name column_type
| RENAME COLUMN old_col_name new_col_name
}
alter_table_options:
alter_table_option ...
alter_table_option: {
TTL value
| COMMENT 'string_value'
@ -142,15 +142,15 @@ ALTER TABLE tb_name RENAME COLUMN old_col_name new_col_name
```sql
ALTER TABLE [db_name.]tb_name alter_table_clause
alter_table_clause: {
alter_table_options
| SET TAG tag_name = new_tag_value
}
alter_table_options:
alter_table_option ...
alter_table_option: {
TTL value
| COMMENT 'string_value'

View File

@ -51,6 +51,11 @@ DESCRIBE [db_name.]stb_name;
### View tag information for all child tables in the supertable
```
SHOW TABLE TAGS FROM table_name [FROM db_name];
SHOW TABLE TAGS FROM [db_name.]table_name;
```
```
taos> SHOW TABLE TAGS FROM st1;
tbname | id | loc |

View File

@ -39,7 +39,7 @@ TDengine supports the `UNION` and `UNION ALL` operations. UNION ALL collects all
| 3 | \>, < | All types except BLOB, MEDIUMBLOB, and JSON | Greater than and less than |
| 4 | \>=, <= | All types except BLOB, MEDIUMBLOB, and JSON | Greater than or equal to and less than or equal to |
| 5 | IS [NOT] NULL | All types | Indicates whether the value is null |
| 6 | [NOT] BETWEEN AND | All types except BLOB, MEDIUMBLOB, and JSON | Closed interval comparison |
| 6 | [NOT] BETWEEN AND | All types except BLOB, MEDIUMBLOB, JSON and GEOMETRY | Closed interval comparison |
| 7 | IN | All types except BLOB, MEDIUMBLOB, and JSON; the primary key (timestamp) is also not supported | Equal to any value in the list |
| 8 | LIKE | BINARY, NCHAR, and VARCHAR | Wildcard match |
| 9 | MATCH, NMATCH | BINARY, NCHAR, and VARCHAR | Regular expression match |

View File

@ -36,7 +36,7 @@ The following characters cannot occur in a password: single quotation marks ('),
- The maximum numbers of databases, STables, and tables depend only on system resources.
- The number of replicas can only be 1 or 3.
- The maximum length of a username is 23 bytes.
- The maximum length of a password is 128 bytes.
- The maximum length of a password is 31 bytes.
- The maximum number of rows depends on system resources.
- The maximum number of vnodes in a database is 1024.

View File

@ -334,8 +334,6 @@ The following list shows all reserved keywords:
- WAL_LEVEL
- WAL_RETENTION_PERIOD
- WAL_RETENTION_SIZE
- WAL_ROLL_PERIOD
- WAL_SEGMENT_SIZE
- WATERMARK
- WHERE
- WINDOW_CLOSE

View File

@ -28,47 +28,47 @@ This document introduces the tables of INFORMATION_SCHEMA and their structure.
Provides information about dnodes. Similar to SHOW DNODES.
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | ------------------------- |
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes on the dnode |
| 3 | status | BINARY(10) | Current status |
| 4 | note | BINARY(256) | Reason for going offline or other information |
| 5 | id | SMALLINT | Dnode ID |
| 6 | endpoint | BINARY(134) | Dnode endpoint |
| 7 | create | TIMESTAMP | Creation time |
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes on the dnode |
| 3 | status | BINARY(10) | Current status |
| 4 | note | BINARY(256) | Reason for going offline or other information |
| 5 | id | SMALLINT | Dnode ID |
| 6 | endpoint | BINARY(134) | Dnode endpoint |
| 7 | create | TIMESTAMP | Creation time |
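For example, a sketch of querying this table; note the backtick escaping of `vnodes` described above:

```sql
SELECT `vnodes`, support_vnodes, status, endpoint FROM information_schema.ins_dnodes;
```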
## INS_MNODES
Provides information about mnodes. Similar to SHOW MNODES.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------------ |
| 1 | id | SMALLINT | Mnode ID |
| 2 | endpoint | BINARY(134) | Mnode endpoint |
| 3 | role | BINARY(10) | Current role |
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
| 5 | create_time | TIMESTAMP | Creation time |
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | ------------------------------------------ |
| 1 | id | SMALLINT | Mnode ID |
| 2 | endpoint | BINARY(134) | Mnode endpoint |
| 3 | role | BINARY(10) | Current role |
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
| 5 | create_time | TIMESTAMP | Creation time |
## INS_QNODES
Provides information about qnodes. Similar to SHOW QNODES.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------ |
| 1 | id | SMALLINT | Qnode ID |
| 2 | endpoint | BINARY(134) | Qnode endpoint |
| 3 | create_time | TIMESTAMP | Creation time |
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --------------- |
| 1 | id | SMALLINT | Qnode ID |
| 2 | endpoint | BINARY(134) | Qnode endpoint |
| 3 | create_time | TIMESTAMP | Creation time |
## INS_CLUSTER
Provides information about the cluster.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ---------- |
| 1 | id | BIGINT | Cluster ID |
| 2 | name | BINARY(134) | Cluster name |
| 3 | create_time | TIMESTAMP | Creation time |
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --------------- |
| 1 | id | BIGINT | Cluster ID |
| 2 | name | BINARY(134) | Cluster name |
| 3 | create_time | TIMESTAMP | Creation time |
## INS_DATABASES
@ -100,202 +100,200 @@ Provides information about user-created databases. Similar to SHOW DATABASES.
| 23 | wal_fsync_period | INT | Interval at which WAL is written to disk. It should be noted that `wal_fsync_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 24 | wal_retention_period | INT | WAL retention period. It should be noted that `wal_retention_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 25 | wal_retention_size | INT | Maximum WAL size. It should be noted that `wal_retention_size` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 26 | wal_roll_period | INT | WAL rotation period. It should be noted that `wal_roll_period` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 27 | wal_segment_size | BIGINT | WAL file size. It should be noted that `wal_segment_size` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 28 | stt_trigger | SMALLINT | The threshold for number of files to trigger file merging. It should be noted that `stt_trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 29 | table_prefix | SMALLINT | The prefix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_prefix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 30 | table_suffix | SMALLINT | The suffix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_suffix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 31 | tsdb_pagesize | INT | The page size for internal storage engine, its unit is KB. It should be noted that `tsdb_pagesize` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 26 | stt_trigger | SMALLINT | The threshold for number of files to trigger file merging. It should be noted that `stt_trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 27 | table_prefix | SMALLINT | The prefix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_prefix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 28 | table_suffix | SMALLINT | The suffix length in the table name that is ignored when distributing table to vnode based on table name. It should be noted that `table_suffix` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 29 | tsdb_pagesize | INT | The page size for internal storage engine, its unit is KB. It should be noted that `tsdb_pagesize` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
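A sketch of querying this table, escaping the keyword columns as noted above (the `name` column holds the database name):

```sql
SELECT name, `wal_retention_period`, `stt_trigger` FROM information_schema.ins_databases;
```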
## INS_FUNCTIONS
Provides information about user-defined functions.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | -------------- |
| 1 | name | BINARY(64) | Function name |
| 2 | comment | BINARY(255) | Function description. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 3 | aggregate | INT | Whether the UDF is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | output_type | BINARY(31) | Output data type |
| 5 | create_time | TIMESTAMP | Creation time |
| 6 | code_len | INT | Length of the source code |
| 7 | bufsize | INT | Buffer size |
| 8 | func_language | BINARY(31) | UDF programming language |
| 9 | func_body | BINARY(16384) | UDF function body |
| 10 | func_version | INT | UDF function version. starting from 0. Increasing by 1 each time it is updated|
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | name | BINARY(64) | Function name |
| 2 | comment | BINARY(255) | Function description. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 3 | aggregate | INT | Whether the UDF is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | output_type | BINARY(31) | Output data type |
| 5 | create_time | TIMESTAMP | Creation time |
| 6 | code_len | INT | Length of the source code |
| 7 | bufsize | INT | Buffer size |
| 8 | func_language | BINARY(31) | UDF programming language |
| 9 | func_body | BINARY(16384) | UDF function body |
| 10  | func_version  | INT           | UDF function version, starting from 0 and incremented by 1 each time the function is updated |
## INS_INDEXES
Provides information about user-created indices. Similar to SHOW INDEX.
| # | **Column** | **Data Type** | **Description** |
| --- | :--------------: | ------------ | ---------------------------------------------------------------------------------- |
| 1 | db_name | BINARY(32) | Database containing the table with the specified index |
| 2 | table_name | BINARY(192) | Table containing the specified index |
| 3 | index_name | BINARY(192) | Index name |
| 4 | db_name | BINARY(64) | Index column |
| 5 | index_type | BINARY(10) | SMA or FULLTEXT index |
| 6 | index_extensions | BINARY(256) | Other information For SMA indices, this shows a list of functions. For FULLTEXT indices, this is null. |
| # | **Column** | **Data Type** | **Description** |
| --- | :--------------: | ------------- | --------------------------------------------------------------------- |
| 1 | db_name | BINARY(32) | Database containing the table with the specified index |
| 2 | table_name | BINARY(192) | Table containing the specified index |
| 3 | index_name | BINARY(192) | Index name |
| 4 | db_name | BINARY(64) | Index column |
| 5   | index_type       | BINARY(10)    | SMA or tag index                                                       |
| 6   | index_extensions | BINARY(256)   | Other information. For SMA/tag indices, this shows a list of functions |
## INS_STABLES
Provides information about supertables.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------ | ------------------------ |
| 1 | stable_name | BINARY(192) | Supertable name |
| 2 | db_name | BINARY(64) | All databases in the supertable |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | tags | INT | Number of tags. It should be noted that `tags` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 6 | last_update | TIMESTAMP | Last updated time |
| 7 | table_comment | BINARY(1024) | Table description |
| 8 | watermark | BINARY(64) | Window closing time. It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results. It should be noted that `max_delay` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 10 | rollup | BINARY(128) | Rollup aggregate function. It should be noted that `rollup` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | stable_name | BINARY(192) | Supertable name |
| 2   | db_name       | BINARY(64)    | Database containing the supertable |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | tags | INT | Number of tags. It should be noted that `tags` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 6 | last_update | TIMESTAMP | Last updated time |
| 7 | table_comment | BINARY(1024) | Table description |
| 8 | watermark | BINARY(64) | Window closing time. It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results. It should be noted that `max_delay` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 10 | rollup | BINARY(128) | Rollup aggregate function. It should be noted that `rollup` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
## INS_TABLES
Provides information about standard tables and subtables.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------ | ---------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | stable_name | BINARY(192) | Supertable name |
| 6 | uid | BIGINT | Table ID |
| 7 | vgroup_id | INT | Vgroup ID |
| 8 | ttl | INT | Table time-to-live. It should be noted that `ttl` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | table_comment | BINARY(1024) | Table description |
| 10 | type | BINARY(20) | Table type |
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | stable_name | BINARY(192) | Supertable name |
| 6 | uid | BIGINT | Table ID |
| 7 | vgroup_id | INT | Vgroup ID |
| 8 | ttl | INT | Table time-to-live. It should be noted that `ttl` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | table_comment | BINARY(1024) | Table description |
| 10 | type | BINARY(20) | Table type |
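For example, a sketch listing the subtables of one supertable (the supertable name is illustrative; `ttl` must be escaped):

```sql
SELECT table_name, vgroup_id, `ttl` FROM information_schema.ins_tables WHERE stable_name = 'meters';
```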
## INS_TAGS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | ---------------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | stable_name | BINARY(192) | Supertable name |
| 4 | tag_name | BINARY(64) | Tag name |
| 5 | tag_type | BINARY(64) | Tag type |
| 6 | tag_value | BINARY(16384) | Tag value |
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | stable_name | BINARY(192) | Supertable name |
| 4 | tag_name | BINARY(64) | Tag name |
| 5 | tag_type | BINARY(64) | Tag type |
| 6 | tag_value | BINARY(16384) | Tag value |
## INS_COLUMNS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | ---------------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | table_type | BINARY(21) | Table type |
| 4 | col_name | BINARY(64) | Column name |
| 5 | col_type | BINARY(32) | Column type |
| 6 | col_length | INT | Column length |
| 7 | col_precision | INT | Column precision |
| 8 | col_scale | INT | Column scale |
| 9 | col_nullable | INT | Column nullable |
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | ---------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | table_type | BINARY(21) | Table type |
| 4 | col_name | BINARY(64) | Column name |
| 5 | col_type | BINARY(32) | Column type |
| 6 | col_length | INT | Column length |
| 7 | col_precision | INT | Column precision |
| 8 | col_scale | INT | Column scale |
| 9 | col_nullable | INT | Column nullable |
## INS_USERS
Provides information about TDengine users.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | -------- |
| 1 | user_name | BINARY(23) | User name |
| 2 | privilege | BINARY(256) | User permissions |
| 3 | create_time | TIMESTAMP | Creation time |
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | ---------------- |
| 1 | user_name | BINARY(23) | User name |
| 2 | privilege | BINARY(256) | User permissions |
| 3 | create_time | TIMESTAMP | Creation time |
## INS_GRANTS
Provides information about TDengine Enterprise Edition permissions.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | -------------------------------------------------- |
| 1 | version | BINARY(9) | Whether the deployment is a licensed or trial version |
| 2 | cpu_cores | BINARY(9) | CPU cores included in license |
| 3 | dnodes | BINARY(10) | Dnodes included in license. It should be noted that `dnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | streams | BINARY(10) | Streams included in license. It should be noted that `streams` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 5 | users | BINARY(10) | Users included in license. It should be noted that `users` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 6 | accounts | BINARY(10) | Accounts included in license. It should be noted that `accounts` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 7 | storage | BINARY(21) | Storage space included in license. It should be noted that `storage` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 8 | connections | BINARY(21) | Client connections included in license. It should be noted that `connections` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | databases | BINARY(11) | Databases included in license. It should be noted that `databases` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 10 | speed | BINARY(9) | Write speed specified in license (data points per second) |
| 11 | querytime | BINARY(9) | Total query time specified in license |
| 12 | timeseries | BINARY(21) | Number of metrics included in license |
| 13 | expired | BINARY(5) | Whether the license has expired |
| 14 | expire_time | BINARY(19) | When the trial period expires |
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | version | BINARY(9) | Whether the deployment is a licensed or trial version |
| 2 | cpu_cores | BINARY(9) | CPU cores included in license |
| 3 | dnodes | BINARY(10) | Dnodes included in license. It should be noted that `dnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | streams | BINARY(10) | Streams included in license. It should be noted that `streams` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 5 | users | BINARY(10) | Users included in license. It should be noted that `users` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 6 | accounts | BINARY(10) | Accounts included in license. It should be noted that `accounts` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 7 | storage | BINARY(21) | Storage space included in license. It should be noted that `storage` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 8 | connections | BINARY(21) | Client connections included in license. It should be noted that `connections` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | databases | BINARY(11) | Databases included in license. It should be noted that `databases` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 10 | speed | BINARY(9) | Write speed specified in license (data points per second) |
| 11 | querytime | BINARY(9) | Total query time specified in license |
| 12 | timeseries | BINARY(21) | Number of metrics included in license |
| 13 | expired | BINARY(5) | Whether the license has expired |
| 14 | expire_time | BINARY(19) | When the trial period expires |
## INS_VGROUPS
Provides information about vgroups.
| # | **Column** | **Data Type** | **Description** |
| --- | :-------: | ------------ | ------------------------------------------------------ |
| 1 | vgroup_id | INT | Vgroup ID |
| 2 | db_name | BINARY(32) | Database name |
| 3 | tables | INT | Tables in vgroup. It should be noted that `tables` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | status | BINARY(10) | Vgroup status |
| 5 | v1_dnode | INT | Dnode ID of first vgroup member |
| 6 | v1_status | BINARY(10) | Status of first vgroup member |
| 7 | v2_dnode | INT | Dnode ID of second vgroup member |
| 8 | v2_status | BINARY(10) | Status of second vgroup member |
| 9 | v3_dnode | INT | Dnode ID of third vgroup member |
| 10 | v3_status | BINARY(10) | Status of third vgroup member |
| 11 | nfiles | INT | Number of data and metadata files in the vgroup |
| 12 | file_size | INT | Size of the data and metadata files in the vgroup |
| 13 | tsma | TINYINT | Whether time-range-wise SMA is enabled. 1 means enabled; 0 means disabled. |
| # | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| 1 | vgroup_id | INT | Vgroup ID |
| 2 | db_name | BINARY(32) | Database name |
| 3 | tables | INT | Tables in vgroup. It should be noted that `tables` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | status | BINARY(10) | Vgroup status |
| 5 | v1_dnode | INT | Dnode ID of first vgroup member |
| 6 | v1_status | BINARY(10) | Status of first vgroup member |
| 7 | v2_dnode | INT | Dnode ID of second vgroup member |
| 8 | v2_status | BINARY(10) | Status of second vgroup member |
| 9 | v3_dnode | INT | Dnode ID of third vgroup member |
| 10 | v3_status | BINARY(10) | Status of third vgroup member |
| 11 | nfiles | INT | Number of data and metadata files in the vgroup |
| 12 | file_size | INT | Size of the data and metadata files in the vgroup |
| 13 | tsma | TINYINT | Whether time-range-wise SMA is enabled. 1 means enabled; 0 means disabled. |
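For example, a sketch of checking vgroup status for one database (the database name is illustrative; `tables` must be escaped):

```sql
SELECT vgroup_id, `tables`, status, v1_dnode, v1_status FROM information_schema.ins_vgroups WHERE db_name = 'test';
```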
## INS_CONFIGS
Provides system configuration information.
| # | **Column** | **Data Type** | **Description** |
| --- | :------: | ------------ | ------------ |
| 1 | name | BINARY(32) | Parameter |
| 2 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| # | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | ----------------------------------------------------------------------------------------------------------------------- |
| 1 | name | BINARY(32) | Parameter |
| 2 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
## INS_DNODE_VARIABLES
Provides dnode configuration information.
| # | **Column** | **Data Type** | **Description** |
| --- | :------: | ------------ | ------------ |
| 1 | dnode_id | INT | Dnode ID |
| 2 | name | BINARY(32) | Parameter |
| 3 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| # | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | ----------------------------------------------------------------------------------------------------------------------- |
| 1 | dnode_id | INT | Dnode ID |
| 2 | name | BINARY(32) | Parameter |
| 3 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
## INS_TOPICS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------------------------ |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database for the topic |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | -------------------------------------- |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database for the topic |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
## INS_SUBSCRIPTIONS
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | ------------------------ |
| 1 | topic_name | BINARY(204) | Subscribed topic |
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
| 4 | consumer_id | BIGINT | Consumer ID |
| 5 | offset | BINARY(64) | Consumption progress |
| 6 | rows | BIGINT | Number of consumption items |
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------- | --------------------------- |
| 1 | topic_name | BINARY(204) | Subscribed topic |
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
| 4 | consumer_id | BIGINT | Consumer ID |
| 5 | offset | BINARY(64) | Consumption progress |
| 6 | rows | BIGINT | Number of consumption items |
## INS_STREAMS
| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------ | --------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
| 4 | status | BINARY(20) | Current status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table |
| 8 | watermark | BIGINT | Watermark (see stream processing documentation). It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation). It should be noted that `trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
| 4 | status | BINARY(20) | Current status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table |
| 8 | watermark | BIGINT | Watermark (see stream processing documentation). It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation). It should be noted that `trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |

View File

@ -101,6 +101,7 @@ Note: TDengine Enterprise Edition only.
```sql
SHOW INDEXES FROM tbl_name [FROM db_name];
SHOW INDEXES FROM [db_name.]tbl_name;
```
Shows indices that have been created.
@ -326,6 +327,7 @@ Note that only the information about the data blocks in the data file will be di
```sql
SHOW TAGS FROM child_table_name [FROM db_name];
SHOW TAGS FROM [db_name.]child_table_name;
```
Shows all tag information in a subtable.

View File

@ -16,7 +16,7 @@ This statement creates a user account.
The maximum length of user_name is 23 bytes.
The maximum length of password is 128 bytes. The password can include letters, digits, and special characters excluding single quotation marks, double quotation marks, backticks, backslashes, and spaces. The password cannot be empty.
The maximum length of password is 31 bytes. The password can include letters, digits, and special characters excluding single quotation marks, double quotation marks, backticks, backslashes, and spaces. The password cannot be empty.
`SYSINFO` indicates whether the user is allowed to view system information. `1` means allowed, `0` means not allowed. System information includes server configuration, dnode, vnode, storage. The default value is `1`.
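A minimal sketch (the user name and password are illustrative and must respect the limits above):

```sql
-- create a user that cannot view system information
CREATE USER ops_reader PASS 'Taos_2023' SYSINFO 0;
```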

View File

@ -4,12 +4,12 @@ sidebar_label: Indexing
description: This document describes the SQL statements related to indexing in TDengine.
---
TDengine supports SMA and FULLTEXT indexing.
TDengine supports SMA and tag indexing.
## Create an Index
```sql
CREATE FULLTEXT INDEX index_name ON tb_name (col_name [, col_name] ...)
CREATE INDEX index_name ON tb_name (col_name [, col_name] ...)
CREATE SMA INDEX index_name ON tb_name index_option
@ -28,9 +28,23 @@ Performs pre-aggregation on the specified column over the time window defined by
- WATERMARK: Enter a value between 0ms and 900000ms. The most precise unit supported is milliseconds. The default value is 5 seconds. This option can be used only on supertables.
- MAX_DELAY: Enter a value between 1ms and 900000ms. The most precise unit supported is milliseconds. The default value is the value of interval provided that it does not exceed 900000ms. This option can be used only on supertables. Note: Retain the default value if possible. Configuring a small MAX_DELAY may cause results to be frequently pushed, affecting storage and query performance.
### FULLTEXT Indexing
Creates a text index for the specified column. FULLTEXT indexing improves performance for queries with text filtering. The index_option syntax is not supported for FULLTEXT indexing. FULLTEXT indexing is supported for JSON tag columns only. Multiple columns cannot be indexed together. However, separate indices can be created for each column.
```sql
DROP DATABASE IF EXISTS d0;
CREATE DATABASE d0;
USE d0;
CREATE TABLE IF NOT EXISTS st1 (ts timestamp, c1 int, c2 float, c3 double) TAGS (t1 int unsigned);
CREATE TABLE ct1 USING st1 TAGS(1000);
CREATE TABLE ct2 USING st1 TAGS(2000);
INSERT INTO ct1 VALUES(now+0s, 10, 2.0, 3.0);
INSERT INTO ct1 VALUES(now+1s, 11, 2.1, 3.1)(now+2s, 12, 2.2, 3.2)(now+3s, 13, 2.3, 3.3);
CREATE SMA INDEX sma_index_name1 ON st1 FUNCTION(max(c1),max(c2),min(c1)) INTERVAL(5m,10s) SLIDING(5m) WATERMARK 5s MAX_DELAY 1m;
-- query from SMA Index
ALTER LOCAL 'querySmaOptimize' '1';
SELECT max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDING(5m);
SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDING(5m);
-- query from raw data
ALTER LOCAL 'querySmaOptimize' '0';
```
## Delete an Index
@ -41,8 +55,8 @@ DROP INDEX index_name;
## View Indices
````sql
```sql
SHOW INDEXES FROM tbl_name [FROM db_name];
SHOW INDEXES FROM [db_name.]tbl_name ;
````
Shows indices that have been created for the specified database or table.
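For example, to list the SMA index created on st1 in the example above:

```sql
SHOW INDEXES FROM st1;
```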

View File

@ -18,6 +18,7 @@ description: This document describes how TDengine SQL has changed in version 3.0
| 8 | Mixed operations | Enhanced | Mixing scalar and vector operations in queries has been enhanced and is supported in all SELECT clauses.
| 9 | Tag operations | Added | Tag columns can be used in queries and clauses like data columns.
| 10 | Timeline clauses and time functions in supertables | Enhanced | When PARTITION BY is not used, data in supertables is merged into a single timeline.
| 11 | GEOMETRY | Added | Geometry data type
## SQL Syntax
@ -33,7 +34,7 @@ The following data types can be used in the schema for standard tables.
| 6 | ALTER USER | Modified | Deprecated<ul><li>PRIVILEGE: Specified user permissions. Replaced by GRANT and REVOKE. <br/>Added</li><li>ENABLE: Enables or disables a user. </li><li>SYSINFO: Specifies whether a user can query system information. </li></ul>
| 7 | COMPACT VNODES | Not supported | Compacted the data on a vnode. Not supported.
| 8 | CREATE ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported."
| 9 | CREATE DATABASE | Modified | Deprecated<ul><li>BLOCKS: Specified the number of blocks for each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHE: Specified the size of the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>DAYS: The length of time to store in a single file. Replaced by DURATION. </li><li>FSYNC: Specified the fsync interval when WAL was set to 2. Replaced by WAL_FSYNC_PERIOD. </li><li>QUORUM: Specified the number of confirmations required. STRICT is now used to specify strong or weak consistency. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>WAL: Specified the WAL level. Replaced by WAL_LEVEL. <br/>Added</li><li>BUFFER: Specifies the size of the write cache pool for each vnode. </li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>DURATION: Replaces DAYS. Now supports units. </li><li>PAGES: Specifies the number of pages in the metadata storage engine cache on each vnode. </li><li>PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. </li><li>RETENTIONS: Specifies the aggregation interval and retention period </li><li>STRICT: Specifies whether strong data consistency is enabled. </li><li>SINGLE_STABLE: Specifies whether a database can contain multiple supertables. </li><li>VGROUPS: Specifies the initial number of vgroups when a database is created. </li><li>WAL_FSYNC_PERIOD: Replaces the FSYNC parameter. </li><li>WAL_LEVEL: Replaces the WAL parameter. </li><li>WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. </li><li>WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. </li><li>WAL_ROLL_PERIOD: Specifies the WAL rotation period. </li><li>WAL_SEGMENT_SIZE: specifies the maximum size of a WAL file. <br/>Modified</li><li>KEEP: Now supports units. </li></ul>
| 9 | CREATE DATABASE | Modified | Deprecated<ul><li>BLOCKS: Specified the number of blocks for each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHE: Specified the size of the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode. </li><li>CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST. </li><li>DAYS: The length of time to store in a single file. Replaced by DURATION. </li><li>FSYNC: Specified the fsync interval when WAL was set to 2. Replaced by WAL_FSYNC_PERIOD. </li><li>QUORUM: Specified the number of confirmations required. STRICT is now used to specify strong or weak consistency. </li><li>UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns. </li><li>WAL: Specified the WAL level. Replaced by WAL_LEVEL. <br/>Added</li><li>BUFFER: Specifies the size of the write cache pool for each vnode. </li><li>CACHEMODEL: Specifies whether to cache the latest subtable data. </li><li>CACHESIZE: Specifies the size of the cache for the newest subtable data. </li><li>DURATION: Replaces DAYS. Now supports units. </li><li>PAGES: Specifies the number of pages in the metadata storage engine cache on each vnode. </li><li>PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. </li><li>RETENTIONS: Specifies the aggregation interval and retention period </li><li>STRICT: Specifies whether strong data consistency is enabled. </li><li>SINGLE_STABLE: Specifies whether a database can contain multiple supertables. </li><li>VGROUPS: Specifies the initial number of vgroups when a database is created. </li><li>WAL_FSYNC_PERIOD: Replaces the FSYNC parameter. </li><li>WAL_LEVEL: Replaces the WAL parameter. </li><li>WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription. </li><li>WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription. <br/>Modified</li><li>KEEP: Now supports units. </li></ul>
| 10 | CREATE DNODE | Modified | Now supports specifying hostname and port separately<ul><li>CREATE DNODE dnode_host_name PORT port_val</li></ul>
| 11 | CREATE INDEX | Added | Creates an SMA index.
| 12 | CREATE MNODE | Added | Creates an mnode.

View File

@ -214,19 +214,6 @@ The data of tdinsight dashboard is stored in `log` database (default. You can ch
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|cluster\_id|NCHAR|TAG|cluster id|
### logs table
`logs` table contains log records.
|field|type|is\_tag|comment|
|:----|:---|:-----|:------|
|ts|TIMESTAMP||timestamp|
|level|VARCHAR||log level|
|content|NCHAR||log content|
|dnode\_id|INT|TAG|dnode id|
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|cluster\_id|NCHAR|TAG|cluster id|
### log\_summary table
`log_summary` table contains log summary information records.

View File

@ -36,8 +36,8 @@ REST connection supports all platforms that can run Java.
| taos-jdbcdriver version | major changes | TDengine version |
| :---------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------: | :--------------: |
| 3.2.4 | Subscription add the enable.auto.commit parameter and the unsubscribe() method in the WebSocket connection | 3.0.5.0 or later |
| 3.2.3 | Fixed resultSet data parsing failure in some cases | 3.0.5.0 or later |
| 3.2.4 | Subscription add the enable.auto.commit parameter and the unsubscribe() method in the WebSocket connection | - |
| 3.2.3 | Fixed resultSet data parsing failure in some cases | - |
| 3.2.2 | Subscription add seek function | 3.0.5.0 or later |
| 3.2.1 | JDBC REST connection supports schemaless/prepareStatement over WebSocket | 3.0.3.0 or later |
| 3.2.0 | This version has been deprecated | - |
@ -1019,11 +1019,13 @@ while(true) {
#### Assignment subscription Offset
```java
// get offset
long position(TopicPartition partition) throws SQLException;
Map<TopicPartition, Long> position(String topic) throws SQLException;
Map<TopicPartition, Long> beginningOffsets(String topic) throws SQLException;
Map<TopicPartition, Long> endOffsets(String topic) throws SQLException;
// Overrides the fetch offsets that the consumer will use on the next poll(timeout).
void seek(TopicPartition partition, long offset) throws SQLException;
```

View File

@ -648,12 +648,12 @@ stmt.execute()?;
//stmt.execute()?;
```
For a working example, see [GitHub](https://github.com/taosdata/taos-connector-rust/blob/main/examples/bind.rs).
For a working example, see [GitHub](https://github.com/taosdata/taos-connector-rust/blob/main/taos/examples/bind.rs).
For information about other structure APIs, see the [Rust documentation](https://docs.rs/taos).
[taos]: https://github.com/taosdata/rust-connector-taos
[taos]: https://github.com/taosdata/taos-connector-rust
[r2d2]: https://crates.io/crates/r2d2
[TaosBuilder]: https://docs.rs/taos/latest/taos/struct.TaosBuilder.html
[TaosCfg]: https://docs.rs/taos/latest/taos/struct.TaosCfg.html

View File

@ -87,9 +87,9 @@ TDengine currently supports timestamp, number, character, Boolean type, and the
|NCHAR|str|
|JSON|str|
## Installation
## Installation Steps
### Preparation
### Pre-installation preparation
1. Install Python. The recent taospy package requires Python 3.6.2+. The earlier versions of taospy require Python 3.7+. The taos-ws-py package requires Python 3.7+. If Python is not available on your system, refer to the [Python BeginnersGuide](https://wiki.python.org/moin/BeginnersGuide/Download) to install it.
2. Install [pip](https://pypi.org/project/pip/). In most cases, the Python installer comes with the pip utility. If not, please refer to [pip documentation](https://pip.pypa.io/en/stable/installation/) to install it.
@ -275,7 +275,7 @@ Transfer-Encoding: chunked
</TabItem>
</Tabs>
### Using connectors to establish connections
### Specify the Host and Properties to get the connection
The following example code assumes that TDengine is installed locally and that the default configuration is used for both FQDN and serverPort.
@ -331,7 +331,69 @@ The parameter of `connect()` is the url of TDengine, and the protocol is `taosws
</TabItem>
</Tabs>
## Example program
### Priority of configuration parameters
If the same configuration parameter is supplied both in the `connect` arguments and in the client configuration file, the priority of the parameters, from highest to lowest, is as follows:
1. Parameters in the `connect` function.
2. The configuration file taos.cfg of the TDengine client driver, when using a native connection.
## Usage examples
### Create database and tables
<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">
```python
conn = taos.connect()
# Execute an SQL statement, ignore the result set, and get just the number of affected rows. Useful for DDL and DML statements.
conn.execute("DROP DATABASE IF EXISTS test")
conn.execute("CREATE DATABASE test")
# change database. same as execute "USE db"
conn.select_db("test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
```
</TabItem>
<TabItem value="rest" label="REST connection">
```python
conn = taosrest.connect(url="http://localhost:6041")
# Execute an SQL statement, ignore the result set, and get just the number of affected rows. Useful for DDL and DML statements.
conn.execute("DROP DATABASE IF EXISTS test")
conn.execute("CREATE DATABASE test")
conn.execute("USE test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
```
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
```python
conn = taosws.connect(url="ws://localhost:6041")
# Execute an SQL statement, ignore the result set, and get just the number of affected rows. Useful for DDL and DML statements.
conn.execute("DROP DATABASE IF EXISTS test")
conn.execute("CREATE DATABASE test")
conn.execute("USE test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
```
</TabItem>
</Tabs>
### Insert data
```python
conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m, 24.4)")
```
:::note
`now` is an internal function whose default value is the current time of the client machine. `now + 1s` represents the client's current time plus 1 second; the number may be followed by a unit of time: a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years).
:::
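The same time arithmetic works anywhere a timestamp literal is accepted; for example, the following statement (a sketch reusing the `weather` supertable created above) can be passed to `conn.execute`:

```sql
-- client time minus 2 hours and plus 10 minutes as row timestamps
INSERT INTO t1 USING weather TAGS(1) VALUES (now-2h, 20.0) (now+10m, 21.5);
```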
### Basic Usage
@ -453,7 +515,7 @@ The `query` method of the `TaosConnection` class can be used to query data and r
</TabItem>
</Tabs>
### Usage with req_id
### Execute SQL with reqId
By using the optional req_id parameter, you can specify a request ID that can be used for tracing.
@ -553,221 +615,7 @@ As the way to connect introduced above but add `req_id` argument.
</TabItem>
</Tabs>
### Subscription
The connector supports data subscription. For more information about subscription, please refer to [Data Subscription](../../../develop/tmq/).
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
The `consumer` in the connector contains the subscription api.
##### Create Consumer
The syntax for creating a consumer is `consumer = Consumer(configs)`. For more subscription api parameters, please refer to [Data Subscription](../../../develop/tmq/).
```python
from taos.tmq import Consumer
consumer = Consumer({"group.id": "local", "td.connect.ip": "127.0.0.1"})
```
##### Subscribe topics
The `subscribe` function is used to subscribe to a list of topics.
```python
consumer.subscribe(['topic1', 'topic2'])
```
##### Consume
The `poll` function is used to consume data in tmq. The parameter of the `poll` function is a value of type float representing the timeout in seconds. It returns a `Message` before timing out, or `None` on timing out. You have to handle error messages in response data.
```python
while True:
    res = consumer.poll(1)
    if not res:
        continue
    err = res.error()
    if err is not None:
        raise err
    val = res.value()
    for block in val:
        print(block.fetchall())
```
##### assignment
The `assignment` function is used to get the assignment of the topic.
```python
assignments = consumer.assignment()
```
##### Seek
The `seek` function is used to reset the assignment of the topic.
```python
tp = TopicPartition(topic='topic1', partition=0, offset=0)
consumer.seek(tp)
```
##### After consuming data
You should unsubscribe from the topics and close the consumer after consuming.
```python
consumer.unsubscribe()
consumer.close()
```
##### Tmq subscription example
```python
{{#include docs/examples/python/tmq_example.py}}
```
##### assignment and seek example
```python
{{#include docs/examples/python/tmq_assignment_example.py:taos_get_assignment_and_seek_demo}}
```
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
In addition to native connections, the connector also supports subscriptions via websockets.
##### Create Consumer
The syntax for creating a consumer is `consumer = Consumer(conf=configs)`. You need to specify that the `td.connect.websocket.scheme` parameter is set to "ws" in the configuration. For more subscription API parameters, please refer to [Data Subscription](../../../develop/tmq/#create-a-consumer).
```python
import taosws
consumer = taosws.Consumer(conf={"group.id": "local", "td.connect.websocket.scheme": "ws"})
```
##### subscribe topics
The `subscribe` function is used to subscribe to a list of topics.
```python
consumer.subscribe(['topic1', 'topic2'])
```
##### Consume
The `poll` function is used to consume data in tmq. The parameter of the `poll` function is a value of type float representing the timeout in seconds. It returns a `Message` before timing out, or `None` on timing out. You have to handle error messages in response data.
```python
while True:
    res = consumer.poll(timeout=1.0)
    if not res:
        continue
    err = res.error()
    if err is not None:
        raise err
    for block in res:
        for row in block:
            print(row)
```
##### assignment
The `assignment` function is used to get the assignment of the topic.
```python
assignments = consumer.assignment()
```
##### Seek
The `seek` function is used to reset the assignment of the topic.
```python
consumer.seek(topic='topic1', partition=0, offset=0)
```
##### After consuming data
You should unsubscribe from the topics and close the consumer after consuming.
```python
consumer.unsubscribe()
consumer.close()
```
##### Subscription example
```python
{{#include docs/examples/python/tmq_websocket_example.py}}
```
##### Assignment and seek example
```python
{{#include docs/examples/python/tmq_websocket_assgnment_example.py:taosws_get_assignment_and_seek_demo}}
```
</TabItem>
</Tabs>
### Schemaless Insert
The connector supports schemaless insert.
<Tabs defaultValue="list">
<TabItem value="list" label="List Insert">
##### Simple insert
```python
{{#include docs/examples/python/schemaless_insert.py}}
```
##### Insert with ttl argument
```python
{{#include docs/examples/python/schemaless_insert_ttl.py}}
```
##### Insert with req_id argument
```python
{{#include docs/examples/python/schemaless_insert_req_id.py}}
```
</TabItem>
<TabItem value="raw" label="Raw Insert">
##### Simple insert
```python
{{#include docs/examples/python/schemaless_insert_raw.py}}
```
##### Insert with ttl argument
```python
{{#include docs/examples/python/schemaless_insert_raw_ttl.py}}
```
##### Insert with req_id argument
```python
{{#include docs/examples/python/schemaless_insert_raw_req_id.py}}
```
</TabItem>
</Tabs>
### Parameter Binding
### Writing data via parameter binding
The Python connector provides a parameter binding API for inserting data. Similar to most databases, TDengine currently only supports the question mark `?` to indicate the parameters to be bound.
@ -898,16 +746,273 @@ stmt.close()
</TabItem>
</Tabs>
### Schemaless Writing
The connector supports schemaless insert.
<Tabs defaultValue="list">
<TabItem value="list" label="List Insert">
##### Simple insert
```python
{{#include docs/examples/python/schemaless_insert.py}}
```
##### Insert with ttl argument
```python
{{#include docs/examples/python/schemaless_insert_ttl.py}}
```
##### Insert with req_id argument
```python
{{#include docs/examples/python/schemaless_insert_req_id.py}}
```
</TabItem>
<TabItem value="raw" label="Raw Insert">
##### Simple insert
```python
{{#include docs/examples/python/schemaless_insert_raw.py}}
```
##### Insert with ttl argument
```python
{{#include docs/examples/python/schemaless_insert_raw_ttl.py}}
```
##### Insert with req_id argument
```python
{{#include docs/examples/python/schemaless_insert_raw_req_id.py}}
```
</TabItem>
</Tabs>
### Schemaless with reqId
There is an optional parameter called `req_id` in the `schemaless_insert` and `schemaless_insert_raw` methods. This reqId can be used for request tracing.
```python
{{#include docs/examples/python/schemaless_insert_req_id.py}}
```
```python
{{#include docs/examples/python/schemaless_insert_raw_req_id.py}}
```
### Data Subscription
The connector supports data subscription. For more information about subscription, please refer to [Data Subscription](../../../develop/tmq/).
#### Create a Topic
To create a topic, please refer to [Data Subscription](../../../develop/tmq/#create-a-topic).
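For instance, a topic can be created directly with SQL, e.g. in the TDengine CLI (topic and table names are illustrative; see the linked page for the full syntax):

```sql
-- subscribe to every table in database test
CREATE TOPIC IF NOT EXISTS topic1 AS DATABASE test;
-- or subscribe to the result set of a query
CREATE TOPIC IF NOT EXISTS topic2 AS SELECT ts, temperature FROM test.weather;
```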
#### Create a Consumer
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
The consumer in the connector contains the subscription API. The syntax for creating a consumer is `consumer = Consumer(configs)`. For more subscription API parameters, please refer to [Data Subscription](../../../develop/tmq/#create-a-consumer).
```python
from taos.tmq import Consumer
consumer = Consumer({"group.id": "local", "td.connect.ip": "127.0.0.1"})
```
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
In addition to native connections, the connector also supports subscriptions via websockets.
The syntax for creating a consumer is `consumer = Consumer(conf=configs)`. You need to specify that the `td.connect.websocket.scheme` parameter is set to "ws" in the configuration. For more subscription API parameters, please refer to [Data Subscription](../../../develop/tmq/#create-a-consumer).
```python
import taosws
consumer = taosws.Consumer(conf={"group.id": "local", "td.connect.websocket.scheme": "ws"})
```
</TabItem>
</Tabs>
#### Subscribe to a Topic
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
The `subscribe` function is used to subscribe to a list of topics.
```python
consumer.subscribe(['topic1', 'topic2'])
```
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
The `subscribe` function is used to subscribe to a list of topics.
```python
consumer.subscribe(['topic1', 'topic2'])
```
</TabItem>
</Tabs>
#### Consume messages
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
The `poll` function is used to consume data in tmq. The parameter of the `poll` function is a value of type float representing the timeout in seconds. It returns a `Message` before timing out, or `None` on timing out. You have to handle error messages in response data.
```python
while True:
    res = consumer.poll(1)
    if not res:
        continue
    err = res.error()
    if err is not None:
        raise err
    val = res.value()
    for block in val:
        print(block.fetchall())
```
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
The `poll` function is used to consume data in tmq. The parameter of the `poll` function is a value of type float representing the timeout in seconds. It returns a `Message` before timing out, or `None` on timing out. You have to handle error messages in response data.
```python
while True:
    res = consumer.poll(timeout=1.0)
    if not res:
        continue
    err = res.error()
    if err is not None:
        raise err
    for block in res:
        for row in block:
            print(row)
```
</TabItem>
</Tabs>
#### Assignment subscription Offset
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
The `assignment` function is used to get the assignment of the topic.
```python
assignments = consumer.assignment()
```
The `seek` function is used to reset the assignment of the topic.
```python
tp = TopicPartition(topic='topic1', partition=0, offset=0)
consumer.seek(tp)
```
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
The `assignment` function is used to get the assignment of the topic.
```python
assignments = consumer.assignment()
```
The `seek` function is used to reset the assignment of the topic.
```python
consumer.seek(topic='topic1', partition=0, offset=0)
```
</TabItem>
</Tabs>
#### Close subscriptions
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
You should unsubscribe from the topics and close the consumer after consuming.
```python
consumer.unsubscribe()
consumer.close()
```
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
You should unsubscribe from the topics and close the consumer after consuming.
```python
consumer.unsubscribe()
consumer.close()
```
</TabItem>
</Tabs>
#### Full Sample Code
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
```python
{{#include docs/examples/python/tmq_example.py}}
```
```python
{{#include docs/examples/python/tmq_assignment_example.py:taos_get_assignment_and_seek_demo}}
```
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
```python
{{#include docs/examples/python/tmq_websocket_example.py}}
```
```python
{{#include docs/examples/python/tmq_websocket_assgnment_example.py:taosws_get_assignment_and_seek_demo}}
```
</TabItem>
</Tabs>
### Other sample programs
| Example program links | Example program content |
|-----------------------|-------------------------|
| [bind_multi.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-multi.py) | parameter binding, bind multiple rows at once |
| [bind_row.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/bind-row.py) | parameter binding, bind one row at once |
| [insert_lines.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/insert-lines.py) | InfluxDB line protocol writing |
| [json_tag.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/json-tag.py) | Use JSON type tags |
| [tmq.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/tmq.py) | TMQ subscription |
| [tmq_consumer.py](https://github.com/taosdata/taos-connector-python/blob/main/examples/tmq_consumer.py) | TMQ subscription |
## Other notes

View File

@ -59,9 +59,9 @@ The different database framework specifications for various programming language
| -------------------------------------- | ------------- | --------------- | ------------- | ------------- | ------------- | ------------- |
| **Connection Management** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Regular Query** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Parameter Binding** | Supported | Not Supported | Supported | Supported | Not Supported | Supported |
| **Parameter Binding** | Supported | Supported | Supported | Supported | Not Supported | Supported |
| **Subscription (TMQ)** | Supported | Supported | Supported | Not Supported | Not Supported | Supported |
| **Schemaless** | Supported | Not Supported | Supported | Not Supported | Not Supported | Not Supported |
| **Schemaless** | Supported | Supported | Supported | Not Supported | Not Supported | Not Supported |
| **Bulk Pulling (based on WebSocket)** | Supported | Supported | Supported | Supported | Supported | Supported |
:::warning

View File

@ -364,6 +364,7 @@ The configuration parameters for specifying super table tag columns and data col
- **min**: The minimum value for a column/tag of this data type. Generated values are greater than or equal to the minimum.
- **max**: The maximum value for a column/tag of this data type. Generated values are less than the maximum.
- **fun**: Fill this column with a function. Currently only the sin and cos functions are supported. The input is the timestamp, converted to an angle in degrees: angle x = input timestamp value % 360, where the angle grows with the same step as the timestamp column. Coefficients and a random fluctuation factor can be applied, in a fixed expression format such as fun="10\*sin(x)+100\*random(5)", where x is the angle in the range 0 to 360 degrees, 10 is the multiplier, 100 is the additive offset, and 5 means a random fluctuation within a 5% range. The supported data types are int, bigint, float, and double. Note: the expression format is fixed and its terms cannot be reordered. A sketch of how such an expression is evaluated is shown after this list.
- **values**: The value field of the nchar/binary column/label, which will be chosen randomly from the values.
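Below is a minimal Python sketch of how a `fun` expression such as `10*sin(x)+100*random(5)` could be evaluated. It is illustrative only: the function name and the exact interpretation of the fluctuation term are assumptions, not taosBenchmark internals.

```python
import math
import random

def eval_fun(ts: int, coef: float = 10.0, offset: float = 100.0, fluct_pct: float = 5.0) -> float:
    """Evaluate coef*sin(x) + offset with a random fluctuation of up to fluct_pct percent."""
    x = ts % 360                                      # timestamp reduced to an angle in degrees
    base = coef * math.sin(math.radians(x)) + offset
    fluctuation = base * random.uniform(-fluct_pct, fluct_pct) / 100.0
    return base + fluctuation

# One value per step; the angle grows with the same step as the timestamp column.
values = [eval_fun(ts) for ts in range(0, 60)]
```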
@ -470,3 +471,26 @@ The configuration parameters for subscribing to a super table are set in `super_
- **sql**: The SQL command to be executed. For a query against a super table, keep "xxxx" in the SQL command; the program automatically replaces it with each of the super table's sub-table names.
- **result**: The file to save the query result. If not specified, taosBenchmark will not save result.
#### Data types in taosBenchmark

| #   |    **TDengine**    | **taosBenchmark** |
| --- | :----------------: | :---------------: |
| 1   | TIMESTAMP          | timestamp         |
| 2   | INT                | int               |
| 3   | INT UNSIGNED       | uint              |
| 4   | BIGINT             | bigint            |
| 5   | BIGINT UNSIGNED    | ubigint           |
| 6   | FLOAT              | float             |
| 7   | DOUBLE             | double            |
| 8   | BINARY             | binary            |
| 9   | SMALLINT           | smallint          |
| 10  | SMALLINT UNSIGNED  | usmallint         |
| 11  | TINYINT            | tinyint           |
| 12  | TINYINT UNSIGNED   | utinyint          |
| 13  | BOOL               | bool              |
| 14  | NCHAR              | nchar             |
| 15  | VARCHAR            | varchar           |
| 16  | JSON               | json              |

Note: data type names in the taosBenchmark configuration must be in lowercase.

View File

@ -5,7 +5,7 @@ description: This document describes the supported platforms for the TDengine se
## List of supported platforms for TDengine server
| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **macOS** |
| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18 or later** | **macOS** |
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | --------- |
| X64 | ● | ● | ● | ● | ● |
| ARM64 | | | ● | | ● |

View File

@ -166,7 +166,7 @@ Please note the `taoskeeper` needs to be installed and running to create the `lo
| Attribute | Description |
| ------------- | ---------------------------------------------------------------------------- |
| Applicable | Server Only |
| Applicable | Server and Client |
| Meaning | Switch for allowing TDengine to collect and report service usage information |
| Value Range | 0: Not allowed; 1: Allowed |
| Default Value | 1 |
@ -174,7 +174,7 @@ Please note the `taoskeeper` needs to be installed and running to create the `lo
| Attribute | Description |
| ------------- | ---------------------------------------------------------------------------- |
| Applicable | Server Only |
| Applicable | Server and Client |
| Meaning | Switch for allowing TDengine to collect and report crash related information |
| Value Range | 0,1 0: Not allowed; 1: allowed |
| Default Value | 1 |
@ -722,6 +722,25 @@ The charset that takes effect is UTF-8.
| Value Range | 0: not change; 1: change by modification |
| Default Value | 0 |
### keepTimeOffset
| Attribute | Description |
| ------------- | ------------------------- |
| Applicable | Server Only |
| Meaning | Latency of data migration |
| Unit | hour |
| Value Range | 0-23 |
| Default Value | 0 |
### tmqMaxTopicNum

| Attribute     | Description                  |
| ------------- | ---------------------------- |
| Applicable    | Server Only                  |
| Meaning       | Maximum number of TMQ topics |
| Value Range   | 1-10000                      |
| Default Value | 20                           |
## 3.0 Parameters
| # | **Parameter** | **Applicable to 2.x** | **Applicable to 3.0** | Current behavior in 3.0 |
@ -779,3 +798,4 @@ The charset that takes effect is UTF-8.
| 53 | udf | Yes | Yes | |
| 54 | enableCoreFile | Yes | Yes | |
| 55 | ttlChangeOnWrite | No | Yes | |
| 56 | keepTimeOffset | Yes | Yes | |

View File

@ -363,7 +363,10 @@ The following configuration items apply to TDengine Sink Connector and TDengine
7. `out.format`: Result output format. `line` indicates that the output format is InfluxDB line protocol format, `json` indicates that the output format is json. The default is line.
8. `topic.per.stable`: If it's set to true, it means one super table in TDengine corresponds to a topic in Kafka, the topic naming rule is `<topic.prefix><topic.delimiter><connection.database><topic.delimiter><stable.name>`; if it's set to false, it means the whole DB corresponds to a topic in Kafka, the topic naming rule is `<topic.prefix><topic.delimiter><connection.database>`.
9. `topic.ignore.db`: Whether the topic naming rule contains the database name: true indicates that the rule is `<topic.prefix><topic.delimiter><stable.name>`, false indicates that the rule is `<topic.prefix><topic.delimiter><connection.database><topic.delimiter><stable.name>`, and the default is false. Does not take effect when `topic.per.stable` is set to false.
10. `topic.delimiter`: topic name delimiter, default is `-`
10. `topic.delimiter`: topic name delimiter; the default is `-`.
11. `read.method`: method for reading TDengine data, either query or subscription. The default is subscription.
12. `subscription.group.id`: group id for subscribing to data from TDengine; this field is required when `read.method` is subscription.
13. `subscription.from`: where the subscription starts, latest or earliest. The default is latest.
## Other notes

View File

@ -12,50 +12,25 @@ To use DBeaver to manage TDengine, you need to prepare the following:
- Install DBeaver. DBeaver supports mainstream operating systems including Windows, macOS, and Linux. Please make sure you download and install the correct version (23.1.1+) and platform package. Please refer to the [official DBeaver documentation](https://github.com/dbeaver/dbeaver/wiki/Installation) for detailed installation steps.
- If you use an on-premises TDengine cluster, please make sure that TDengine and taosAdapter are deployed and running properly. For detailed information, please refer to the taosAdapter User Manual.
- If you use TDengine Cloud, please [register](https://cloud.tdengine.com/) for an account.
## Usage
### Use DBeaver to access on-premises TDengine cluster
## Use DBeaver to access on-premises TDengine cluster
1. Start the DBeaver application, click the button or menu item to choose **New Database Connection**, and then select **TDengine** in the **Timeseries** category.
![Connect TDengine with DBeaver](./dbeaver/dbeaver-connect-tdengine-en.webp)
![Connect TDengine with DBeaver](./dbeaver/dbeaver-connect-tdengine-en.webp)
2. Configure the TDengine connection by filling in the host address, port number, username, and password. If TDengine is deployed on the local machine, you are only required to fill in the username and password. The default username is root and the default password is taosdata. Click **Test Connection** to check whether the connection is workable. If you do not have the TDengine Java connector installed on the local machine, DBeaver will prompt you to download and install it.
![Configure the TDengine connection](./dbeaver/dbeaver-config-tdengine-en.webp)
![Configure the TDengine connection](./dbeaver/dbeaver-config-tdengine-en.webp)
3. If the connection is successful, it will be displayed as shown in the following figure. If the connection fails, please check whether the TDengine service and taosAdapter are running correctly and whether the host address, port number, username, and password are correct.
![Connection successful](./dbeaver/dbeaver-connect-tdengine-test-en.webp)
![Connection successful](./dbeaver/dbeaver-connect-tdengine-test-en.webp)
4. Use DBeaver to select databases and tables and browse your data stored in TDengine.
![Browse TDengine data with DBeaver](./dbeaver/dbeaver-browse-data-en.webp)
![Browse TDengine data with DBeaver](./dbeaver/dbeaver-browse-data-en.webp)
5. You can also manipulate TDengine data by executing SQL commands.
![Use SQL commands to manipulate TDengine data in DBeaver](./dbeaver/dbeaver-sql-execution-en.webp)
### Use DBeaver to access TDengine Cloud
1. Log in to the TDengine Cloud service, select **Programming** > **Java** in the management console, and then copy the string value of `TDENGINE_JDBC_URL` displayed in the **Config** section.
![Copy JDBC URL from TDengine Cloud](./dbeaver/tdengine-cloud-jdbc-dsn-en.webp)
2. Start the DBeaver application, click the button or menu item to choose **New Database Connection**, and then select **TDengine Cloud** in the **Timeseries** category.
![Connect TDengine Cloud with DBeaver](./dbeaver/dbeaver-connect-tdengine-cloud-en.webp)
3. Configure the TDengine Cloud connection by filling in the JDBC URL value. Click **Test Connection**. If you do not have the TDengine Java connector installed on the local machine, DBeaver will prompt you to download and install it. If the connection is successful, it will be displayed as shown in the following figure. If the connection fails, please check whether the TDengine Cloud service is running properly and whether the JDBC URL is correct.
![Configure the TDengine Cloud connection](./dbeaver/dbeaver-connect-tdengine-cloud-test-en.webp)
4. Use DBeaver to select databases and tables and browse your data stored in TDengine Cloud.
![Browse TDengine Cloud data with DBeaver](./dbeaver/dbeaver-browse-data-cloud-en.webp)
5. You can also manipulate TDengine Cloud data by executing SQL commands.
![Use SQL commands to manipulate TDengine Cloud data in DBeaver](./dbeaver/dbeaver-sql-execution-cloud-en.webp)
![Use SQL commands to manipulate TDengine data in DBeaver](./dbeaver/dbeaver-sql-execution-en.webp)

View File

@ -6,10 +6,14 @@ description: This document provides download links for all released versions of
TDengine 3.x installation packages can be downloaded at the following links:
For TDengine 2.x installation packages by version, please visit [here](https://www.taosdata.com/all-downloads).
For TDengine 2.x installation packages by version, please visit [here](https://tdengine.com/downloads/historical/).
import Release from "/components/ReleaseV3";
## 3.0.7.1
<Release type="tdengine" version="3.0.7.1" />
## 3.0.7.0
<Release type="tdengine" version="3.0.7.0" />

View File

@ -201,7 +201,7 @@ Active: inactive (dead)
<TabItem label="Windows 系统" value="windows">
安装后,可以在拥有管理员权限的 cmd 窗口执行 `sc start taosd` 或在 `C:\TDengine` 目录下,运行 `taosd.exe` 来启动 TDengine 服务进程。
安装后,可以在拥有管理员权限的 cmd 窗口执行 `sc start taosd` 或在 `C:\TDengine` 目录下,运行 `taosd.exe` 来启动 TDengine 服务进程。如需使用 http/REST 服务,请执行 `sc start taosadapter` 或运行 `taosadapter.exe` 来启动 taosAdapter 服务进程。
**TDengine 命令行CLI**

View File

@ -398,7 +398,7 @@ def finish(buf: bytes) -> output_type:
3. Define a scalar function that takes a timestamp and returns the next Sunday closest to that time. Implementing this function requires the third-party library moment; this example explains the caveats of using third-party libraries.
4. Define an aggregate function that computes the difference between the maximum and minimum values of a column, i.e. a reimplementation of TDengine's built-in spread function.
The examples also include many practical debugging tips.
This document assumes you are on a Linux system with TDengine 3.0.4.0+ and Python 3.x installed.
This document assumes you are on a Linux system with TDengine 3.0.4.0+ and Python 3.7+ installed.
Note: **you cannot write logs from inside a UDF with the print function; write to a file yourself, or use Python's built-in logging library.**
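A minimal sketch of file-based logging from inside a scalar Python UDF (the log path is illustrative; the init/destroy/process hooks follow the scalar UDF convention described in this document):

```python
import logging

def init():
    # print() output is not visible inside a UDF, so write logs to a file instead.
    logging.basicConfig(filename="/tmp/myudf.log", level=logging.DEBUG)

def destroy():
    pass

def process(block):
    rows, _ = block.shape()
    logging.debug("process called with %d rows", rows)
    # Pass the first input column through unchanged.
    return [block.data(i, 0) for i in range(rows)]
```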

View File

@ -1022,11 +1022,13 @@ while(true) {
#### Specify the subscription offset
```java
// Get the offset
long position(TopicPartition partition) throws SQLException;
Map<TopicPartition, Long> position(String topic) throws SQLException;
Map<TopicPartition, Long> beginningOffsets(String topic) throws SQLException;
Map<TopicPartition, Long> endOffsets(String topic) throws SQLException;
// Specify the offset to use in the next poll
void seek(TopicPartition partition, long offset) throws SQLException;
```

View File

@ -1,4 +1,5 @@
---
toc_max_heading_level: 4
sidebar_label: Python
title: TDengine Python Connector
description: "taospy 是 TDengine 的官方 Python 连接器。taospy 提供了丰富的 API 使得 Python 应用可以很方便地使用 TDengine。tasopy 对 TDengine 的原生接口和 REST 接口都进行了封装, 分别对应 tasopy 的两个子模块taos 和 taosrest。除了对原生接口和 REST 接口的封装taospy 还提供了符合 Python 数据访问规范(PEP 249)的编程接口。这使得 taospy 和很多第三方工具集成变得简单,比如 SQLAlchemy 和 pandas"
@ -70,7 +71,7 @@ Python Connector 的所有数据库操作如果出现异常,都会直接抛出
{{#include docs/examples/python/handle_exception.py}}
```
TDengine DataType and Python DataType
## TDengine DataType and Python DataType
TDengine currently supports timestamp, numeric, character, and Boolean types, which map to Python types as follows:
@ -276,7 +277,7 @@ Transfer-Encoding: chunked
</TabItem>
</Tabs>
### Establish a connection using the connector
### Get a connection with host and properties
The following sample code assumes that TDengine is installed locally and that the default FQDN and serverPort are used.
@ -332,8 +333,69 @@ Transfer-Encoding: chunked
</TabItem>
</Tabs>
### Priority of configuration parameters
If a configuration parameter is set both in the connection parameters and in the client configuration file, the priority from high to low is:
1. Connection parameters
2. For native connections, the taos.cfg configuration file of the TDengine client driver
## Usage examples
### Create databases and tables
<Tabs defaultValue="rest">
<TabItem value="native" label="原生连接">
```python
conn = taos.connect()
# Execute a sql, ignore the result set, just get affected rows. It's useful for DDL and DML statement.
conn.execute("DROP DATABASE IF EXISTS test")
conn.execute("CREATE DATABASE test")
# change database. same as execute "USE db"
conn.select_db("test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
```
</TabItem>
<TabItem value="rest" label="REST 连接">
```python
conn = taosrest.connect(url="http://localhost:6041")
# Execute a sql, ignore the result set, just get affected rows. It's useful for DDL and DML statement.
conn.execute("DROP DATABASE IF EXISTS test")
conn.execute("CREATE DATABASE test")
conn.execute("USE test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
```
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
```python
conn = taosws.connect(url="ws://localhost:6041")
# Execute a sql, ignore the result set, just get affected rows. It's useful for DDL and DML statement.
conn.execute("DROP DATABASE IF EXISTS test")
conn.execute("CREATE DATABASE test")
conn.execute("USE test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)")
```
</TabItem>
</Tabs>
### Insert data
```python
conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m, 24.4)")
```
:::note
now is an internal function whose default value is the current time of the machine the client runs on. now + 1s means the client's current time plus 1 second; the number may be followed by a time unit: a (milliseconds), s (seconds), m (minutes), h (hours), d (days), w (weeks), n (months), y (years).
:::
### Basic usage
<Tabs defaultValue="rest">
@ -372,7 +434,6 @@ Transfer-Encoding: chunked
:::note
The TaosCursor class uses a native connection for writes and queries. In a multi-threaded client, a cursor instance must remain dedicated to one thread; sharing it across threads leads to incorrect results.
:::
</TabItem>
@ -455,7 +516,7 @@ RestClient 类是对于 REST API 的直接封装。它只包含一个 sql() 方
</TabItem>
</Tabs>
### Use with req_id
### Execute SQL with reqId
Use the optional req_id parameter to specify a request id, which can be used for tracing.
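A minimal sketch with a native connection (assuming the req_id keyword argument is available in your taospy version; the id value is application-chosen):

```python
import taos

conn = taos.connect()
# Pass a request id so this query can be correlated with logs for tracing.
result = conn.query("SELECT server_version()", req_id=1)
print(result.fetch_all())
conn.close()
```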
@ -556,224 +617,6 @@ RestClient 类是对于 REST API 的直接封装。它只包含一个 sql() 方
</TabItem>
</Tabs>
### Data subscription
The connector supports data subscription. For details, see the [data subscription documentation](../../develop/tmq/).
<Tabs defaultValue="native">
<TabItem value="native" label="原生连接">
`Consumer` 提供了 Python 连接器订阅 TMQ 数据的 API。
##### 创建 Consumer
创建 Consumer 语法为 `consumer = Consumer(configs)`,参数定义请参考 [数据订阅文档](../../develop/tmq/#%E5%88%9B%E5%BB%BA%E6%B6%88%E8%B4%B9%E8%80%85-consumer)。
```python
from taos.tmq import Consumer
consumer = Consumer({"group.id": "local", "td.connect.ip": "127.0.0.1"})
```
##### Subscribe to topics
The `subscribe` method of the Consumer API subscribes to topics; a consumer can subscribe to multiple topics at once.
```python
consumer.subscribe(['topic1', 'topic2'])
```
##### Consume data
The `poll` method of the Consumer API consumes data. It takes a timeout in seconds as a float and returns a Message before the timeout expires, or `None` on timeout. The consumer must check the returned data for errors with the Message's `error()` method.
```python
while True:
    res = consumer.poll(1)
    if not res:
        continue
    err = res.error()
    if err is not None:
        raise err
    val = res.value()
    for block in val:
        print(block.fetchall())
```
##### Get consumption progress
The `assignment` method of the Consumer API returns the consumption progress of all topics the consumer subscribes to, as a list of TopicPartition.
```python
assignments = consumer.assignment()
```
##### Specify the subscription offset
The `seek` method of the Consumer API resets the consumer's consumption progress to a specified position; its parameter is a TopicPartition.
```python
tp = TopicPartition(topic='topic1', partition=0, offset=0)
consumer.seek(tp)
```
##### Close the subscription
After consuming, unsubscribe from the topics and close the Consumer.
```python
consumer.unsubscribe()
consumer.close()
```
##### Full example
```python
{{#include docs/examples/python/tmq_example.py}}
```
##### Sample code for getting and resetting consumption progress
```python
{{#include docs/examples/python/tmq_assignment_example.py:taos_get_assignment_and_seek_demo}}
```
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
除了原生的连接方式Python 连接器还支持通过 websocket 订阅 TMQ 数据,使用 websocket 方式订阅 TMQ 数据需要安装 `taos-ws-py`。
taosws `Consumer` API 提供了基于 Websocket 订阅 TMQ 数据的 API。
##### 创建 Consumer
创建 Consumer 语法为 `consumer = Consumer(conf=configs)`,使用时需要指定 `td.connect.websocket.scheme` 参数值为 "ws",参数定义请参考 [数据订阅文档](../../develop/tmq/#%E5%88%9B%E5%BB%BA%E6%B6%88%E8%B4%B9%E8%80%85-consumer)。
```python
import taosws
consumer = taosws.Consumer(conf={"group.id": "local", "td.connect.websocket.scheme": "ws"})
```
##### Subscribe to topics
The `subscribe` method of the Consumer API subscribes to topics; a consumer can subscribe to multiple topics at once.
```python
consumer.subscribe(['topic1', 'topic2'])
```
##### Consume data
The `poll` method of the Consumer API consumes data. It takes a timeout in seconds as a float and returns a Message before the timeout expires, or `None` on timeout. The consumer must check the returned data for errors with the Message's `error()` method.
```python
while True:
    res = consumer.poll(timeout=1.0)
    if not res:
        continue
    err = res.error()
    if err is not None:
        raise err
    for block in res:
        for row in block:
            print(row)
```
##### Get consumption progress
The `assignment` method of the Consumer API returns the consumption progress of all topics the consumer subscribes to, as a list of TopicPartition.
```python
assignments = consumer.assignment()
```
##### Reset consumption progress
The `seek` method of the Consumer API resets the consumer's consumption progress to a specified position.
```python
consumer.seek(topic='topic1', partition=0, offset=0)
```
##### End consumption
After consuming, unsubscribe from the topics and close the Consumer.
```python
consumer.unsubscribe()
consumer.close()
```
##### TMQ subscription sample code
```python
{{#include docs/examples/python/tmq_websocket_example.py}}
```
The connector provides an `assignment` interface for obtaining the topic assignment, which can be used to query the consumption progress of subscribed topics, and a `seek` interface for resetting a topic's consumption progress.
##### Sample code for getting and resetting consumption progress
```python
{{#include docs/examples/python/tmq_websocket_assgnment_example.py:taosws_get_assignment_and_seek_demo}}
```
</TabItem>
</Tabs>
### Schemaless writing
The connector supports schemaless writing.
<Tabs defaultValue="list">
<TabItem value="list" label="List write">
##### Simple write
```python
{{#include docs/examples/python/schemaless_insert.py}}
```
##### Write with the ttl parameter
```python
{{#include docs/examples/python/schemaless_insert_ttl.py}}
```
##### Write with the req_id parameter
```python
{{#include docs/examples/python/schemaless_insert_req_id.py}}
```
</TabItem>
<TabItem value="raw" label="Raw 写入">
##### 简单写入
```python
{{#include docs/examples/python/schemaless_insert_raw.py}}
```
##### Write with the ttl parameter
```python
{{#include docs/examples/python/schemaless_insert_raw_ttl.py}}
```
##### Write with the req_id parameter
```python
{{#include docs/examples/python/schemaless_insert_raw_req_id.py}}
```
</TabItem>
</Tabs>
### Write data via parameter binding
The TDengine Python connector supports writing data with a parameter-binding style Prepare API. Like most databases, it currently supports only `?` as the placeholder for parameters to be bound.
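A minimal sketch of binding with a native connection (assuming a database test and a table t1(ts TIMESTAMP, n INT) already exist; the names are illustrative):

```python
import taos

conn = taos.connect()
conn.select_db("test")
stmt = conn.statement("INSERT INTO t1 VALUES(?, ?)")

params = taos.new_bind_params(2)    # one bind object per `?`
params[0].timestamp(1626861392589)  # epoch milliseconds
params[1].int(42)
stmt.bind_param(params)

stmt.execute()
stmt.close()
conn.close()
```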
@ -909,6 +752,264 @@ stmt.close()
</TabItem>
</Tabs>
### Schemaless writing
The connector supports schemaless writing.
<Tabs defaultValue="list">
<TabItem value="list" label="List write">
##### Simple write
```python
{{#include docs/examples/python/schemaless_insert.py}}
```
##### Write with the ttl parameter
```python
{{#include docs/examples/python/schemaless_insert_ttl.py}}
```
##### Write with the req_id parameter
```python
{{#include docs/examples/python/schemaless_insert_req_id.py}}
```
</TabItem>
<TabItem value="raw" label="Raw 写入">
##### 简单写入
```python
{{#include docs/examples/python/schemaless_insert_raw.py}}
```
##### Write with the ttl parameter
```python
{{#include docs/examples/python/schemaless_insert_raw_ttl.py}}
```
##### Write with the req_id parameter
```python
{{#include docs/examples/python/schemaless_insert_raw_req_id.py}}
```
</TabItem>
</Tabs>
### Schemaless write with reqId
The connector's `schemaless_insert` and `schemaless_insert_raw` methods support the optional `req_id` parameter, which can be used for request tracing.
```python
{{#include docs/examples/python/schemaless_insert_req_id.py}}
```
```python
{{#include docs/examples/python/schemaless_insert_raw_req_id.py}}
```
### Data subscription
The connector supports data subscription. For details, see the [data subscription documentation](../../develop/tmq/).
#### Create a topic
For creating topics, see the [data subscription documentation](../../develop/tmq/#创建-topic); a quick illustration is sketched below.
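A minimal sketch, assuming a database `power` with a super table `meters` already exists (the names are illustrative):

```python
import taos

conn = taos.connect()
# Expose selected columns of power.meters to subscribers via a topic.
conn.execute("CREATE TOPIC IF NOT EXISTS topic1 AS SELECT ts, current FROM power.meters")
conn.close()
```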
#### Create a Consumer
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
The `Consumer` class provides the Python connector API for subscribing to TMQ data. The syntax for creating a Consumer is `consumer = Consumer(configs)`. For parameter definitions, see the [data subscription documentation](../../develop/tmq/#创建消费者-consumer).
```python
from taos.tmq import Consumer
consumer = Consumer({"group.id": "local", "td.connect.ip": "127.0.0.1"})
```
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
除了原生的连接方式Python 连接器还支持通过 websocket 订阅 TMQ 数据,使用 websocket 方式订阅 TMQ 数据需要安装 `taos-ws-py`。
taosws `Consumer` API 提供了基于 Websocket 订阅 TMQ 数据的 API。创建 Consumer 语法为 `consumer = Consumer(conf=configs)`,使用时需要指定 `td.connect.websocket.scheme` 参数值为 "ws",参数定义请参考 [数据订阅文档](../../develop/tmq/#%E5%88%9B%E5%BB%BA%E6%B6%88%E8%B4%B9%E8%80%85-consumer)。
```python
import taosws
consumer = taosws.Consumer(conf={"group.id": "local", "td.connect.websocket.scheme": "ws"})
```
</TabItem>
</Tabs>
#### Subscribe to topics
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
The `subscribe` method of the Consumer API subscribes to topics; a consumer can subscribe to multiple topics at once.
```python
consumer.subscribe(['topic1', 'topic2'])
```
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
Consumer API 的 `subscribe` 方法用于订阅 topicsconsumer 支持同时订阅多个 topic。
```python
consumer.subscribe(['topic1', 'topic2'])
```
</TabItem>
</Tabs>
#### Consume data
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
The `poll` method of the Consumer API consumes data. It takes a timeout in seconds as a float and returns a Message before the timeout expires, or `None` on timeout. The consumer must check the returned data for errors with the Message's `error()` method.
```python
while True:
    res = consumer.poll(1)
    if not res:
        continue
    err = res.error()
    if err is not None:
        raise err
    val = res.value()
    for block in val:
        print(block.fetchall())
```
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
Consumer API 的 `poll` 方法用于消费数据,`poll` 方法接收一个 float 类型的超时时间超时时间单位为秒s`poll` 方法在超时之前返回一条 Message 类型的数据或超时返回 `None`。消费者必须通过 Message 的 `error()` 方法校验返回数据的 error 信息。
```python
while True:
    res = consumer.poll(timeout=1.0)
    if not res:
        continue
    err = res.error()
    if err is not None:
        raise err
    for block in res:
        for row in block:
            print(row)
```
</TabItem>
</Tabs>
#### Get consumption progress
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
The `assignment` method of the Consumer API returns the consumption progress of all topics the consumer subscribes to, as a list of TopicPartition.
```python
assignments = consumer.assignment()
```
The `seek` method of the Consumer API resets the consumer's consumption progress to a specified position; its parameter is a TopicPartition.
```python
tp = TopicPartition(topic='topic1', partition=0, offset=0)
consumer.seek(tp)
```
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
Consumer API 的 `assignment` 方法用于获取 Consumer 订阅的所有 topic 的消费进度,返回结果类型为 TopicPartition 列表。
```python
assignments = consumer.assignment()
```
The `seek` method of the Consumer API resets the consumer's consumption progress to a specified position.
```python
consumer.seek(topic='topic1', partition=0, offset=0)
```
</TabItem>
</Tabs>
#### Close the subscription
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
After consuming, unsubscribe from the topics and close the Consumer.
```python
consumer.unsubscribe()
consumer.close()
```
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
消费结束后,应当取消订阅,并关闭 Consumer。
```python
consumer.unsubscribe()
consumer.close()
```
</TabItem>
</Tabs>
#### Full example
<Tabs defaultValue="native">
<TabItem value="native" label="native connection">
```python
{{#include docs/examples/python/tmq_example.py}}
```
```python
{{#include docs/examples/python/tmq_assignment_example.py:taos_get_assignment_and_seek_demo}}
```
</TabItem>
<TabItem value="websocket" label="WebSocket 连接">
```python
{{#include docs/examples/python/tmq_websocket_example.py}}
```
```python
{{#include docs/examples/python/tmq_websocket_assgnment_example.py:taosws_get_assignment_and_seek_demo}}
```
</TabItem>
</Tabs>
### More sample programs
| Sample program link | Sample program content |

View File

@ -2,10 +2,10 @@
```text
taos> show databases;
name | create_time | vgroups | ntables | replica | strict | duration | keep | buffer | pagesize | pages | minrows | maxrows | comp | precision | status | retention | single_stable | cachemodel | cachesize | wal_level | wal_fsync_period | wal_retention_period | wal_retention_size | wal_roll_period | wal_seg_size |
=========================================================================================================================================================================================================================================================================================================================================================================================================================================================================
information_schema | NULL | NULL | 14 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
performance_schema | NULL | NULL | 3 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
name | create_time | vgroups | ntables | replica | strict | duration | keep | buffer | pagesize | pages | minrows | maxrows | comp | precision | status | retention | single_stable | cachemodel | cachesize | wal_level | wal_fsync_period | wal_retention_period | wal_retention_size |
===============================================================================================================================================================================================================================================================================================================================================================================================================================
information_schema | NULL | NULL | 14 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
performance_schema | NULL | NULL | 3 | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL | ready | NULL | NULL | NULL | NULL | NULL | NULL | NULL | NULL |
test | 2022-08-04 16:46:40.506 | 2 | 0 | 1 | off | 14400m | 5256000m,5256000m,5256000m | 96 | 4 | 256 | 100 | 4096 | 2 | ms | ready | NULL | false | none | 1 | 1 | 3000 | 0 | 0 | 0 | 0 |
Query OK, 3 rows in database (0.123000s)

View File

@ -58,9 +58,9 @@ TDengine 版本更新往往会增加新的功能特性,列表中的连接器
| ------------------------------ | -------- | ---------- | -------- | -------- | ----------- | -------- |
| **Connection management** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Regular query** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Parameter binding** | Supported | Not supported | Supported | Supported | Not supported | Supported |
| **Parameter binding** | Supported | Supported | Supported | Supported | Not supported | Supported |
| **Data subscription (TMQ)** | Supported | Supported | Supported | Not supported | Not supported | Supported |
| **Schemaless** | Supported | Not supported | Supported | Not supported | Not supported | Supported |
| **Schemaless** | Supported | Supported | Supported | Not supported | Not supported | Supported |
| **Bulk pulling (based on WebSocket)** | Supported | Supported | Supported | Supported | Supported | Supported |
:::warning

View File

@ -4,23 +4,31 @@ title: 在 Kubernetes 上部署 TDengine 集群
description: A detailed guide to deploying a TDengine cluster with Kubernetes
---
As a time-series database designed for cloud-native architectures, TDengine supports Kubernetes deployment. This document describes how to create a TDengine cluster step by step from scratch using YAML files, and focuses on common TDengine operations in a Kubernetes environment.
## Overview
As a time-series database designed for cloud-native architectures, TDengine supports Kubernetes deployment natively. This document describes how to use YAML files to create, step by step from scratch, a highly available TDengine cluster suitable for production use, and focuses on common TDengine operations in a Kubernetes environment.
To meet [high availability](https://docs.taosdata.com/tdinternal/high-availability/) requirements, the cluster needs to satisfy the following:
- 3 or more dnodes: multiple vnodes of the same TDengine vgroup are not allowed on the same dnode, so a database created with 3 replicas requires at least 3 dnodes
- 3 mnodes: the mnodes manage the whole cluster; TDengine defaults to a single mnode, and if the dnode hosting that mnode goes offline, the whole cluster becomes unavailable
- 3 database replicas: TDengine's replica configuration is per database, so 3 replicas in a 3-dnode cluster ensure that any single dnode can go offline without affecting normal cluster operation. **If 2 dnodes go offline, the cluster becomes unavailable, because Raft cannot complete leader election.** (Enterprise edition: in disaster recovery scenarios, if any node's data files are damaged, the dnode can be relaunched to recover.)
## Prerequisites
To deploy and manage a TDengine cluster with Kubernetes, make the following preparations.
* This document applies to Kubernetes v1.5 and above
* This document and the next chapter use tools such as minikube, kubectl, and helm for installation and deployment; install them in advance
* Kubernetes is installed, deployed, and reachable, and the required container registries and other services can be used or updated
- This document applies to Kubernetes v1.19 and above
- This document uses the kubectl tool for installation and deployment; install it in advance
- Kubernetes is installed, deployed, and reachable, and the required container registries and other services can be used or updated
The following configuration files can also be downloaded from the [GitHub repository](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).
## Configure the Service
Create a Service configuration file `taosd-service.yaml`; the service name `metadata.name` ("taosd" here) will be used in the next step. Add the ports used by TDengine:
Create a Service configuration file `taosd-service.yaml`; the service name `metadata.name` ("taosd" here) will be used in the next step. First add the ports used by TDengine, then set a deterministic app label (here "tdengine") in the selector.
```yaml
```YAML
---
apiVersion: v1
kind: Service
@ -42,10 +50,11 @@ spec:
## StatefulSet
According to Kubernetes' guidance for the various deployment types, we will use a StatefulSet as the service type for TDengine.
Create the file `tdengine.yaml`, where replicas sets the number of cluster nodes to 3. The node time zone is China (Asia/Shanghai), and each node is allocated 10 GB of standard storage. You can adjust these values to your situation.
According to Kubernetes' guidance for the various deployment types, we will use a StatefulSet as the deployment resource type for TDengine. Create the file `tdengine.yaml`, where replicas sets the number of cluster nodes to 3. The node time zone is China (Asia/Shanghai), and each node is allocated 5 GB of standard storage (see [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) to configure a storage class). You can adjust these values to your situation.
```yaml
Pay special attention to the startupProbe configuration: after a dnode's Pod goes offline for a while and is restarted, the newly launched dnode is briefly unavailable. If startupProbe is set too small, Kubernetes considers the Pod unhealthy and keeps restarting it, so the dnode's Pod restarts repeatedly and never returns to a normal state. See [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).
```YAML
---
apiVersion: apps/v1
kind: StatefulSet
@ -69,7 +78,7 @@ spec:
spec:
containers:
- name: "tdengine"
image: "tdengine/tdengine:3.0.0.0"
image: "tdengine/tdengine:3.0.7.1"
imagePullPolicy: "IfNotPresent"
ports:
- name: tcp6030
@ -108,6 +117,12 @@ spec:
volumeMounts:
- name: taosdata
mountPath: /var/lib/taos
startupProbe:
exec:
command:
- taos-check
failureThreshold: 360
periodSeconds: 10
readinessProbe:
exec:
command:
@ -129,199 +144,373 @@ spec:
storageClassName: "standard"
resources:
requests:
storage: "10Gi"
storage: "5Gi"
```
## Deploy the TDengine cluster with kubectl
Run the following commands in order.
First create the corresponding namespace, then run the following commands in order:
```bash
kubectl apply -f taosd-service.yaml
kubectl apply -f tdengine.yaml
```Bash
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
```
The configuration above creates a three-node TDengine cluster; the dnodes are configured automatically. You can check the current cluster nodes with the show dnodes command:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-2 -- taos -s "show dnodes"
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
```
The output is as follows:
```
```Bash
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
Query OK, 3 rows in database (0.003655s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001853s)
```
Check the current mnode:
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-19 17:54:19.520
Query OK, 1 row(s) in set (0.001282s)
```
## Create mnodes
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
```
Check the mnodes:
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-20 09:19:36.060
*************************** 2.row ***************************
id: 2
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:22:12.838
*************************** 3.row ***************************
id: 3
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:22:23.271
Query OK, 3 row(s) in set (0.003108s)
```
## Enable port forwarding
Use kubectl port forwarding so that applications can access the TDengine cluster running in the Kubernetes environment.
```
kubectl port-forward tdengine-0 6041:6041 &
```Plain
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
```
Use the curl command to verify the TDengine REST API on port 6041.
```
$ curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
Handling connection for 6041
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["comp","TINYINT",1],["precision","VARCHAR",2],["status","VARCHAR",10],["retention","VARCHAR",60],["single_stable","BOOL",1],["cachemodel","VARCHAR",11],["cachesize","INT",4],["wal_level","TINYINT",1],["wal_fsync_period","INT",4],["wal_retention_period","INT",4],["wal_retention_size","BIGINT",8],["wal_roll_period","INT",4],["wal_segment_size","BIGINT",8]],"data":[["information_schema",null,null,16,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null],["performance_schema",null,null,10,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null]],"rows":2}
```Plain
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
```
## Use the dashboard for graphical management
## Cluster testing
minikube provides the dashboard command for a graphical management interface.
### Data preparation
```
$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
#### taosBenchmark
Use taosBenchmark to create a database with 3 replicas and write 100 million rows, then check the data:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taosBenchmark -I stmt -d test -n 10000 -t 10000 -a 3
# query data
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select count(*) from test.meters;"
taos> select count(*) from test.meters;
count(*) |
========================
100000000 |
Query OK, 1 row(s) in set (0.103537s)
```
In some public cloud environments, minikube binds to the 127.0.0.1 address and cannot be reached remotely; use the kubectl proxy command to map the port to 0.0.0.0, then access the dashboard remotely through the virtual machine's public IP and port plus the same dashboard URL path.
Check the vnode distribution with show dnodes:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 8 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
2 | tdengine-1.ta... | 8 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
3 | tdengine-2.ta... | 8 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001357s)
```
$ kubectl proxy --accept-hosts='^.*$' --address='0.0.0.0'
Check the vnode distribution with show vgroups:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test.vgroups"
taos> show test.vgroups
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
==============================================================================================================================================================================================
2 | test | 1267 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
3 | test | 1215 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
4 | test | 1215 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
5 | test | 1307 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
6 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
7 | test | 1275 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
8 | test | 1231 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
9 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
Query OK, 8 row(s) in set (0.001488s)
```
#### Manual creation
Create a three-replica database test1, create a table, and write 2 rows of data:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- \
taos -s \
"create database if not exists test1 replica 3;
use test1;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
```
Check the vnode distribution with show test1.vgroups:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test1.vgroups"
taos> show test1.vgroups
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
==============================================================================================================================================================================================
10 | test1 | 1 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
11 | test1 | 0 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
Query OK, 2 row(s) in set (0.001489s)
```
### Fault tolerance testing
The dnode hosting the mnode leader goes offline (dnode 1):
```Bash
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 0/1 ErrImagePull 2 (2s ago) 20m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (6m48s ago) 20m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 21m 10.244.1.223 node85 <none> <none>
```
The mnodes in the cluster then re-elect a leader, and the mnode on dnode 2 (Pod tdengine-1) becomes the leader:
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
Welcome to the TDengine Command Line Interface, Client Version:3.0.7.1.202307190706
Copyright (c) 2022 by TDengine, all rights reserved.
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: offline
status: offline
create_time: 2023-07-19 17:54:18.559
reboot_time: 1970-01-01 08:00:00.000
*************************** 2.row ***************************
id: 2
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:32:00.227
*************************** 3.row ***************************
id: 3
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:32:00.026
Query OK, 3 row(s) in set (0.001513s)
```
The cluster can still read and write normally:
```Bash
# insert
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "insert into test1.t1 values(now, 1)(now+1s, 2);"
taos> insert into test1.t1 values(now, 1)(now+1s, 2);
Insert OK, 2 row(s) affected (0.002098s)
# select
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select *from test1.t1"
taos> select *from test1.t1
ts | n |
========================================
2023-07-19 18:04:58.104 | 1 |
2023-07-19 18:04:59.104 | 2 |
2023-07-19 18:06:00.303 | 1 |
2023-07-19 18:06:01.303 | 2 |
Query OK, 4 row(s) in set (0.001994s)
```
同理至于非leader得mnode掉线读写当然可以正常进行这里就不做过多的展示。
## 集群扩容
TDengine 集群支持自动扩容:
```bash
```Bash
kubectl scale statefulsets tdengine --replicas=4
```
The `--replicas=4` parameter in the command above scales the TDengine cluster out to 4 nodes. After running it, first check the Pod status:
```bash
kubectl get pods -l app=tdengine
```Bash
kubectl get pod -l app=tdengine -n tdengine-test -o wide
```
The output is as follows:
```
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 161m
tdengine-1 1/1 Running 0 161m
tdengine-2 1/1 Running 0 32m
tdengine-3 1/1 Running 0 32m
```Plain
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h26m ago) 6h53m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (6h39m ago) 6h53m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h16m 10.244.1.224 node85 <none> <none>
tdengine-3 1/1 Running 0 3m24s 10.244.2.76 node86 <none> <none>
```
At this point the POD status is still Running; the dnode status in the TDengine cluster appears only after the POD status becomes `ready`:
At this point the Pod status is still Running; the dnode status in the TDengine cluster appears only after the Pod status becomes `ready`:
```bash
kubectl exec -i -t tdengine-3 -- taos -s "show dnodes"
```Bash
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
```
The dnode list of the scaled-out four-node TDengine cluster:
```
```Plain
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
4 | tdengine-3.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:33:16.039 | |
Query OK, 4 rows in database (0.008377s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
Query OK, 4 row(s) in set (0.003628s)
```
## Scale in the cluster
Since TDengine migrates data between nodes during scale-in and scale-out, scaling in with kubectl requires first running the "drop dnodes" command, and then shrinking the Kubernetes cluster after the node removal completes.
Since TDengine migrates data between nodes during scale-in and scale-out, scaling in with kubectl requires first running the "drop dnodes" command (**if the cluster contains a database with 3 replicas, the number of dnodes after scale-in must still be at least 3, otherwise the drop dnode operation is aborted**), and then shrinking the Kubernetes cluster after the node removal completes.
Note: since Pods in a Kubernetes StatefulSet can only be removed in reverse creation order, TDengine drop dnode must also be done in reverse creation order; otherwise Pods end up in an error state.
```
$ kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 4"
```
```bash
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "drop dnode 4"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
Query OK, 3 rows in database (0.004861s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
Query OK, 3 row(s) in set (0.003324s)
```
After confirming that the removal succeeded (check the dnode list with kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"), remove the Pod with kubectl:
```
kubectl scale statefulsets tdengine --replicas=3
```Plain
kubectl scale statefulsets tdengine --replicas=3 -n tdengine-test
```
The last Pod will be deleted. Check the Pod status with kubectl get pods -l app=tdengine:
```
$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 4m7s
tdengine-1 1/1 Running 0 3m55s
tdengine-2 1/1 Running 0 2m28s
```Plain
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h55m ago) 7h22m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (7h9m ago) 7h23m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h45m 10.244.1.224 node85 <none> <none>
```
After the Pod is deleted, the PVC must be deleted manually; otherwise the next scale-out will reuse the old data, and the node will fail to join the cluster.
```bash
$ kubectl delete pvc taosdata-tdengine-3
```Bash
kubectl delete pvc taosdata-tdengine-3 -n tdengine-test
```
The cluster is now in a safe state, and you can scale it out again when needed:
```bash
$ kubectl scale statefulsets tdengine --replicas=4
```Bash
kubectl scale statefulsets tdengine --replicas=4 -n tdengine-test
statefulset.apps/tdengine scaled
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 35m
tdengine-1 1/1 Running 0 34m
tdengine-2 1/1 Running 0 12m
tdengine-3 0/1 ContainerCreating 0 4s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 35m
tdengine-1 1/1 Running 0 34m
tdengine-2 1/1 Running 0 12m
tdengine-3 0/1 Running 0 7s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h59m ago) 7h27m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (7h13m ago) 7h27m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h49m 10.244.1.224 node85 <none> <none>
tdengine-3 1/1 Running 0 20s 10.244.2.77 node86 <none> <none>
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
5 | tdengine-2.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:01:36.479 | |
6 | tdengine-3.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:13:54.411 | |
Query OK, 4 row(s) in set (0.001348s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
5 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | | | |
Query OK, 4 row(s) in set (0.003881s)
```
## Clean up the TDengine cluster
> **When deleting PVCs, mind the PV persistentVolumeReclaimPolicy. Setting it to Delete is recommended, so that deleting a PVC automatically cleans up the PV along with the underlying CSI storage resources. If no policy cleans up the PV automatically when the PVC is deleted, then after you delete the PVC and later remove the PV manually, the PV's CSI storage resources may not be released.**
To remove a TDengine cluster completely, clean up the statefulset, svc, configmap, and pvc respectively.
```bash
kubectl delete statefulset -l app=tdengine
kubectl delete svc -l app=tdengine
kubectl delete pvc -l app=tdengine
kubectl delete configmap taoscfg
```Bash
kubectl delete statefulset -l app=tdengine -n tdengine-test
kubectl delete svc -l app=tdengine -n tdengine-test
kubectl delete pvc -l app=tdengine -n tdengine-test
kubectl delete configmap taoscfg -n tdengine-test
```
## Common errors
@ -330,65 +519,26 @@ kubectl delete configmap taoscfg
未进行 "drop dnode" 直接进行缩容,由于 TDengine 尚未删除节点,缩容 pod 导致 TDengine 集群中部分节点处于 offline 状态。
```
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
```Plain
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
5 | tdengine-2.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:01:36.479 | status msg timeout |
6 | tdengine-3.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:13:54.411 | status msg timeout |
Query OK, 4 row(s) in set (0.001323s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
5 | tdengine-3.ta... | 0 | 16 | offline | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | status msg timeout | | |
Query OK, 4 row(s) in set (0.003862s)
```
### Error 2
## Conclusion
The TDengine cluster enforces the replica parameter; if the number of nodes after scale-in is smaller than this value, the cluster becomes unusable:
For high availability and high reliability of TDengine in a Kubernetes environment, hardware failure and disaster recovery are handled on two levels:
Create a database with the replica parameter set to 2 and insert some data:
1. The disaster recovery capability of the underlying distributed block storage, i.e. its replication. Popular distributed block storage such as Ceph provides multi-replica capability that can extend storage replicas to different racks, cabinets, rooms, and data centers, or you can directly use block storage services from public cloud vendors.
2. TDengine's own disaster recovery: in TDengine Enterprise, when a dnode goes permanently offline (physical disk damage with data loss), a fresh blank dnode can be relaunched to take over the work of the original dnode.
```bash
kubectl exec -i -t tdengine-0 -- \
taos -s \
"create database if not exists test replica 2;
use test;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
Finally, you are welcome to try [TDengine Cloud](https://cloud.taosdata.com/), the one-stop, fully managed TDengine cloud service.
```
Scale in to a single node:
```bash
kubectl scale statefulsets tdengine --replicas=1
```
All database operations in the TDengine CLI will then fail:
```
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000845s)
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000837s)
taos> use test;
Database changed.
taos> insert into t1 values(now, 3);
DB error: Unable to resolve FQDN (0.013874s)
```
> TDengine Cloud is a minimalist, fully managed time-series data processing cloud platform built on the open-source time-series database TDengine. Beyond a high-performance time-series database, it offers caching, subscription, and stream processing, together with convenient and secure data sharing and many enterprise-grade features. It helps companies in IoT, Industrial Internet, finance, and IT operations monitoring dramatically reduce the labor and operating costs of managing time-series data.

View File

@ -42,12 +42,21 @@ CREATE DATABASE db_name PRECISION 'ns';
| 14 | NCHAR | User-defined | Stores strings that may contain multi-byte characters such as Chinese. Each NCHAR character takes 4 bytes of storage. Strings are quoted with single quotes; a single quote inside a string is escaped as `\'`. A size must be specified when using NCHAR; a column of type NCHAR(10) stores at most 10 NCHAR characters, and an error is raised if the input exceeds the declared length. |
| 15 | JSON | | JSON data type; only tags can be in JSON format |
| 16 | VARCHAR | User-defined | Alias for BINARY |
| 17 | GEOMETRY | User-defined | Geometry type |
:::note
- A table row cannot exceed 48 KB in length (64 KB starting with version 3.0.5.0). (Note: each BINARY/NCHAR column takes an extra 2 bytes of storage.)
- A table row cannot exceed 48 KB in length (64 KB starting with version 3.0.5.0). (Note: each BINARY/NCHAR/GEOMETRY column takes an extra 2 bytes of storage.)
- Although the BINARY type supports byte-level binary data in the underlying storage, different programming languages do not handle binary data consistently, so it is recommended to store only visible ASCII characters in BINARY columns and avoid invisible characters. Multi-byte data such as Chinese characters should be stored with the NCHAR type. If Chinese characters are forced into a BINARY column, reads and writes may sometimes work, but the data carries no character set information and is prone to garbling or even corruption.
- A BINARY value can theoretically be up to 16,374 bytes long (from version 3.0.5.0: 65,517 bytes for data columns, 16,382 bytes for tag columns). BINARY supports only string input, quoted with single quotes. A size must be specified: BINARY(20) defines a string of at most 20 single-byte characters, with each character taking 1 byte of storage for a fixed total of 20 bytes; an error is raised if the input exceeds 20 bytes. A single quote inside the string is escaped with a backslash and a single quote, i.e. `\'`.
- A GEOMETRY value can be up to 65,517 bytes in a data column and 16,382 bytes in a tag column. 2D POINT, LINESTRING, and POLYGON subtypes are supported, with lengths computed as shown in the table below:
| # | **Syntax** | **Min length** | **Max length** | **Growth per coordinate pair** |
|---|--------------------------------------|----------|------------|--------------|
| 1 | POINT(1.0 1.0) | 21 | 21 | N/A |
| 2 | LINESTRING(1.0 1.0, 2.0 2.0) | 9+2*16 | 9+4094*16 | +16 |
| 3 | POLYGON((1.0 1.0, 2.0 2.0, 1.0 1.0)) | 13+3*16 | 13+4094*16 | +16 |
- The type of a numeric literal in SQL is judged to be integer or floating point by whether it contains a decimal point or uses scientific notation, so beware of type overflow: for example, 9999999999999999999 overflows the upper bound of the long integer type, while 9999999999999999999.0 is treated as a valid floating-point number.
:::

View File

@ -36,13 +36,12 @@ database_option: {
| TSDB_PAGESIZE value
| WAL_RETENTION_PERIOD value
| WAL_RETENTION_SIZE value
| WAL_SEGMENT_SIZE value
}
```
### Parameter descriptions
- BUFFER: 一个 VNODE 写入内存池大小,单位为 MB默认为 96最小为 3最大为 16384。
- BUFFER: 一个 VNODE 写入内存池大小,单位为 MB默认为 256最小为 3最大为 16384。
- CACHEMODEL表示是否在内存中缓存子表的最近数据。默认为 none。
- none表示不缓存。
- last_row表示缓存子表最近一行数据。这将显著改善 LAST_ROW 函数的性能表现。
@ -74,10 +73,8 @@ database_option: {
- TABLE_PREFIX: when positive, ignore a prefix of the given length in the table name when deciding which vgroup a table is assigned to; when negative, use only a prefix of the given length. For example, for table name "v30001": with TSDB_PREFIX = 2, "0001" decides the vgroup; with TSDB_PREFIX = -2, "v3" decides the vgroup.
- TABLE_SUFFIX: when positive, ignore a suffix of the given length in the table name when deciding which vgroup a table is assigned to; when negative, use only a suffix of the given length. For example, for table name "v30001": with TSDB_SUFFIX = 2, "v300" decides the vgroup; with TSDB_SUFFIX = -2, "01" decides the vgroup.
- TSDB_PAGESIZE: the page size of a vnode's time-series storage engine in KB. The default is 4 KB; the range is 1 to 16384 (1 KB to 16 MB).
- WAL_RETENTION_PERIOD: the maximum additional retention time of WAL log files, kept for data subscription. WAL cleanup is not affected by the consumption state of subscribed clients. In seconds. The default is 0, meaning no extra retention for subscriptions; set an appropriate retention period before creating subscriptions.
- WAL_RETENTION_PERIOD: the maximum additional retention time of WAL log files, kept for data subscription. WAL cleanup is not affected by the consumption state of subscribed clients. In seconds. The default is 3600, meaning the WAL retains the most recent 3600 seconds of data; adjust this to a value appropriate for your subscriptions.
- WAL_RETENTION_SIZE: the maximum additional cumulative size of WAL log files kept for data subscription, in KB. The default is 0, meaning no upper limit on the cumulative size.
- WAL_ROLL_PERIOD: the WAL file rotation interval in seconds. After a WAL file is created and written, a new WAL file is created automatically once this period elapses. The default is 0, i.e. a new file is created only when the TSDB flushes to disk.
- WAL_SEGMENT_SIZE: the size limit of a single WAL file in KB. When the current file exceeds this limit, a new WAL file is created automatically. The default is 0, i.e. a new file is created only when the TSDB flushes to disk.
### Database creation example
```sql
@ -85,7 +82,7 @@ create database if not exists db vgroups 10 buffer 10
```
The example above creates a database named db with 10 vgroups, where each vnode is allocated 10 MB of write cache
The example above creates a database named db with 10 vgroups, where each vnode is allocated 10 MB of write cache.
### Use the database

View File

@ -43,12 +43,11 @@ table_option: {
1. 表的第一个字段必须是 TIMESTAMP并且系统自动将其设为主键
2. 表名最大长度为 192
3. 表的每行长度不能超过 48KB从 3.0.5.0 版本开始为 64KB;(注意:每个 BINARY/NCHAR 类型的列还会额外占用 2 个字节的存储位置)
3. 表的每行长度不能超过 48KB从 3.0.5.0 版本开始为 64KB;(注意:每个 BINARY/NCHAR/GEOMETRY 类型的列还会额外占用 2 个字节的存储位置)
4. 子表名只能由字母、数字和下划线组成,且不能以数字开头,不区分大小写
5. 使用数据类型 binary 或 nchar需指定其最长的字节数如 binary(20),表示 20 字节;
5. 使用数据类型 BINARY/NCHAR/GEOMETRY需指定其最长的字节数如 BINARY(20),表示 20 字节;
6. 为了兼容支持更多形式的表名TDengine 引入新的转义符 "\`",可以让表名与关键词不冲突,同时不受限于上述表名称合法性约束检查。但是同样具有长度限制要求。使用转义字符以后,不再对转义字符中的内容进行大小写统一。
例如:\`aBc\` 和 \`abc\` 是不同的表名,但是 abc 和 aBc 是相同的表名。
需要注意的是转义字符中的内容必须是可打印字符。
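下面给出一个转义表名的简单示意(表结构为假设):

```sql
CREATE TABLE `aBc` (ts TIMESTAMP, v INT);  -- 保留大小写,表名为 aBc
CREATE TABLE `abc` (ts TIMESTAMP, v INT);  -- 与 `aBc` 是两张不同的表
CREATE TABLE aBc2 (ts TIMESTAMP, v INT);   -- 未转义,实际表名统一为小写 abc2
```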
**参数说明**

View File

@ -51,6 +51,11 @@ DESCRIBE [db_name.]stb_name;
### 获取超级表中所有子表的标签信息
```
SHOW TABLE TAGS FROM table_name [FROM db_name];
SHOW TABLE TAGS FROM [db_name.]table_name;
```
```
taos> SHOW TABLE TAGS FROM st1;
tbname | id | loc |

View File

@ -39,7 +39,7 @@ TDengine 支持 `UNION ALL` 和 `UNION` 操作符。UNION ALL 将查询返回的
| 3 | \>, < | 除 BLOB、MEDIUMBLOB 和 JSON 外的所有类型 | 大于,小于 |
| 4 | \>=, <= | 除 BLOB、MEDIUMBLOB 和 JSON 外的所有类型 | 大于等于,小于等于 |
| 5 | IS [NOT] NULL | 所有类型 | 是否为空值 |
| 6 | [NOT] BETWEEN AND | 除 BOOL、BLOB、MEDIUMBLOB 和 JSON 外的所有类型 | 闭区间比较 |
| 6 | [NOT] BETWEEN AND | 除 BOOL、BLOB、MEDIUMBLOB、JSON 和 GEOMETRY 外的所有类型 | 闭区间比较 |
| 7 | IN | 除 BLOB、MEDIUMBLOB 和 JSON 外的所有类型,且不可以为表的时间戳主键列 | 与列表内的任意值相等 |
| 8 | LIKE | BINARY、NCHAR 和 VARCHAR | 通配符匹配 |
| 9 | MATCH, NMATCH | BINARY、NCHAR 和 VARCHAR | 正则表达式匹配 |
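下面是一个组合使用上述运算符的查询示意(表 meters 及其列均为假设):

```sql
SELECT * FROM meters
WHERE voltage BETWEEN 200 AND 240   -- 闭区间比较
  AND location LIKE 'California%'   -- 通配符匹配
  AND groupid IN (1, 3, 5);         -- 与列表内任意值相等
```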

View File

@ -10,11 +10,9 @@ description: 合法字符集和命名中的限制规则
2. 允许英文字符或下划线开头,不允许以数字开头
3. 不区分大小写
4. 转义后表(列)名规则:
为了兼容支持更多形式的表TDengine 引入新的转义符 "`"。可用让表名与关键词不冲突,同时不受限于上述表名称合法性约束检查
转义后的表(列)名同样受到长度限制要求,且长度计算的时候不计算转义符。使用转义字符以后,不再对转义字符中的内容进行大小写统一
为了兼容支持更多形式的表TDengine 引入新的转义符 "`"。使用转义字符以后,不再对转义字符中的内容进行大小写统一,即可以保留用户指定表名中的大小写属性。
例如:\`aBc\` 和 \`abc\` 是不同的表(列)名,但是 abc 和 aBc 是相同的表(列)名。
需要注意的是转义字符中的内容必须是可打印字符。
## 密码合法字符集
@ -36,7 +34,7 @@ description: 合法字符集和命名中的限制规则
- 库的数目,超级表的数目、表的数目,系统不做限制,仅受系统资源限制
- 数据库的副本数只能设置为 1 或 3
- 用户名的最大长度是 23 字节
- 用户密码的最大长度是 128 字节
- 用户密码的最大长度是 31 字节
- 总数据行数取决于可用资源
- 单个数据库的虚拟结点数上限为 1024
@ -48,13 +46,13 @@ description: 合法字符集和命名中的限制规则
### 转义后表(列)名规则:
为了兼容支持更多形式的表TDengine 引入新的转义符 "`",可以避免表名与关键词的冲突,同时不受限于上述表名合法性约束检查,转义符不计入表名的长度。
为了兼容支持更多形式的表TDengine 引入新的转义符 "`",可以避免表名与关键词的冲突,转义符不计入表名的长度。
转义后的表(列)名同样受到长度限制要求,且长度计算的时候不计算转义符。使用转义字符以后,不再对转义字符中的内容进行大小写统一。
例如:
\`aBc\` 和 \`abc\` 是不同的表(列)名,但是 abc 和 aBc 是相同的表(列)名。
:::note
转义字符中的内容必须是可打印字符
转义字符中的内容必须符合命名规则中的字符约束
:::

View File

@ -334,8 +334,6 @@ description: TDengine 保留关键字的详细列表
- WAL_LEVEL
- WAL_RETENTION_PERIOD
- WAL_RETENTION_SIZE
- WAL_ROLL_PERIOD
- WAL_SEGMENT_SIZE
- WATERMARK
- WHERE
- WINDOW_CLOSE

View File

@ -28,15 +28,15 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
提供 dnode 的相关信息。也可以使用 SHOW DNODES 来查询这些信息。
| # | **列名** | **数据类型** | **说明** |
| --- | :------------: | ------------ | ------------------------- |
| # | **列名** | **数据类型** | **说明** |
| --- | :------------: | ------------ | ----------------------------------------------------------------------------------------------------- |
| 1 | vnodes | SMALLINT | dnode 中的实际 vnode 个数。需要注意,`vnodes` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 2 | support_vnodes | SMALLINT | 最多支持的 vnode 个数 |
| 3 | status | BINARY(10) | 当前状态 |
| 4 | note | BINARY(256) | 离线原因等信息 |
| 5 | id | SMALLINT | dnode id |
| 6 | endpoint | BINARY(134) | dnode 的地址 |
| 7 | create | TIMESTAMP | 创建时间 |
| 2 | support_vnodes | SMALLINT | 最多支持的 vnode 个数 |
| 3 | status | BINARY(10) | 当前状态 |
| 4 | note | BINARY(256) | 离线原因等信息 |
| 5 | id | SMALLINT | dnode id |
| 6 | endpoint | BINARY(134) | dnode 的地址 |
| 7 | create | TIMESTAMP | 创建时间 |
## INS_MNODES
@ -100,77 +100,75 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
| 23 | wal_fsync_period | INT | 数据落盘周期。需要注意,`wal_fsync_period` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 24 | wal_retention_period | INT | WAL 的保存时长。需要注意,`wal_retention_period` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 25 | wal_retention_size | INT | WAL 的保存上限。需要注意,`wal_retention_size` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 26 | wal_roll_period | INT | wal 文件切换时长。需要注意,`wal_roll_period` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 27 | wal_segment_size | BIGINT | wal 单个文件大小。需要注意,`wal_segment_size` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 28 | stt_trigger | SMALLINT | 触发文件合并的落盘文件的个数。需要注意,`stt_trigger` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 29 | table_prefix | SMALLINT | 内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的前缀的长度。需要注意,`table_prefix` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 30 | table_suffix | SMALLINT | 内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的后缀的长度。需要注意,`table_suffix` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 31 | tsdb_pagesize | INT | 时序数据存储引擎中的页大小。需要注意,`tsdb_pagesize` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 26 | stt_trigger | SMALLINT | 触发文件合并的落盘文件的个数。需要注意,`stt_trigger` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 27 | table_prefix | SMALLINT | 内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的前缀的长度。需要注意,`table_prefix` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 28 | table_suffix | SMALLINT | 内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的后缀的长度。需要注意,`table_suffix` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 29 | tsdb_pagesize | INT | 时序数据存储引擎中的页大小。需要注意,`tsdb_pagesize` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
## INS_FUNCTIONS
用户创建的自定义函数的信息。
| # | **列名** | **数据类型** | **说明** |
| --- | :---------: | ------------ | -------------- |
| 1 | name | BINARY(64) | 函数名 |
| 2 | comment | BINARY(255) | 补充说明。需要注意,`comment` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 3 | aggregate | INT | 是否为聚合函数。需要注意,`aggregate` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 4 | output_type | BINARY(31) | 输出类型 |
| 5 | create_time | TIMESTAMP | 创建时间 |
| 6 | code_len | INT | 代码长度 |
| 7 | bufsize | INT | buffer 大小 |
| 8 | func_language | BINARY(31) | 自定义函数编程语言 |
| 9 | func_body | BINARY(16384) | 函数体定义 |
| 10 | func_version | INT | 函数版本号。初始版本为0每次替换更新版本号加1。|
| # | **列名** | **数据类型** | **说明** |
| --- | :-----------: | ------------- | --------------------------------------------------------------------------------------------- |
| 1 | name | BINARY(64) | 函数名 |
| 2 | comment | BINARY(255) | 补充说明。需要注意,`comment` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 3 | aggregate | INT | 是否为聚合函数。需要注意,`aggregate` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 4 | output_type | BINARY(31) | 输出类型 |
| 5 | create_time | TIMESTAMP | 创建时间 |
| 6 | code_len | INT | 代码长度 |
| 7 | bufsize | INT | buffer 大小 |
| 8 | func_language | BINARY(31) | 自定义函数编程语言 |
| 9 | func_body | BINARY(16384) | 函数体定义 |
| 10 | func_version | INT | 函数版本号。初始版本为0每次替换更新版本号加1。 |
## INS_INDEXES
提供用户创建的索引的相关信息。也可以使用 SHOW INDEX 来查询这些信息。
| # | **列名** | **数据类型** | **说明** |
| --- | :--------------: | ------------ | ---------------------------------------------------------------------------------- |
| 1 | db_name | BINARY(32) | 包含此索引的表所在的数据库名 |
| 2 | table_name | BINARY(192) | 包含此索引的表的名称 |
| 3 | index_name | BINARY(192) | 索引名 |
| 4 | column_name | BINARY(64) | 建索引的列的列名 |
| 5 | index_type | BINARY(10) | 目前有 SMA 和 FULLTEXT |
| 6 | index_extensions | BINARY(256) | 索引的额外信息。对 SMA 类型的索引,是函数名的列表。对 FULLTEXT 类型的索引为 NULL。 |
| # | **列名** | **数据类型** | **说明** |
| --- | :--------------: | ------------ | ------------------------------------------------------- |
| 1 | db_name | BINARY(32) | 包含此索引的表所在的数据库名 |
| 2 | table_name | BINARY(192) | 包含此索引的表的名称 |
| 3 | index_name | BINARY(192) | 索引名 |
| 4 | column_name | BINARY(64) | 建索引的列的列名 |
| 5 | index_type | BINARY(10) | 目前有 SMA 和 tag |
| 6 | index_extensions | BINARY(256) | 索引的额外信息。对 SMA/tag 类型的索引,是函数名的列表。 |
## INS_STABLES
提供用户创建的超级表的相关信息。
| # | **列名** | **数据类型** | **说明** |
| --- | :-----------: | ------------ | ------------------------ |
| 1 | stable_name | BINARY(192) | 超级表表名 |
| 2 | db_name | BINARY(64) | 超级表所在的数据库的名称 |
| 3 | create_time | TIMESTAMP | 创建时间 |
| 4 | columns | INT | 列数目 |
| 5 | tags | INT | 标签数目。需要注意,`tags` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 6 | last_update | TIMESTAMP | 最后更新时间 |
| 7 | table_comment | BINARY(1024) | 表注释 |
| 8 | watermark | BINARY(64) | 窗口的关闭时间。需要注意,`watermark` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 9 | max_delay | BINARY(64) | 推送计算结果的最大延迟。需要注意,`max_delay` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 10 | rollup | BINARY(128) | rollup 聚合函数。需要注意,`rollup` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| # | **列名** | **数据类型** | **说明** |
| --- | :-----------: | ------------ | ----------------------------------------------------------------------------------------------------- |
| 1 | stable_name | BINARY(192) | 超级表表名 |
| 2 | db_name | BINARY(64) | 超级表所在的数据库的名称 |
| 3 | create_time | TIMESTAMP | 创建时间 |
| 4 | columns | INT | 列数目 |
| 5 | tags | INT | 标签数目。需要注意,`tags` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 6 | last_update | TIMESTAMP | 最后更新时间 |
| 7 | table_comment | BINARY(1024) | 表注释 |
| 8 | watermark | BINARY(64) | 窗口的关闭时间。需要注意,`watermark` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 9 | max_delay | BINARY(64) | 推送计算结果的最大延迟。需要注意,`max_delay` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 10 | rollup | BINARY(128) | rollup 聚合函数。需要注意,`rollup` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
## INS_TABLES
提供用户创建的普通表和子表的相关信息
| # | **列名** | **数据类型** | **说明** |
| --- | :-----------: | ------------ | ---------------- |
| 1 | table_name | BINARY(192) | 表名 |
| 2 | db_name | BINARY(64) | 数据库名 |
| 3 | create_time | TIMESTAMP | 创建时间 |
| 4 | columns | INT | 列数目 |
| 5 | stable_name | BINARY(192) | 所属的超级表表名 |
| 6 | uid | BIGINT | 表 id |
| 7 | vgroup_id | INT | vgroup id |
| 8 | ttl | INT | 表的生命周期。需要注意,`ttl` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 9 | table_comment | BINARY(1024) | 表注释 |
| 10 | type | BINARY(21) | 表类型 |
| # | **列名** | **数据类型** | **说明** |
| --- | :-----------: | ------------ | ------------------------------------------------------------------------------------- |
| 1 | table_name | BINARY(192) | 表名 |
| 2 | db_name | BINARY(64) | 数据库名 |
| 3 | create_time | TIMESTAMP | 创建时间 |
| 4 | columns | INT | 列数目 |
| 5 | stable_name | BINARY(192) | 所属的超级表表名 |
| 6 | uid | BIGINT | 表 id |
| 7 | vgroup_id | INT | vgroup id |
| 8 | ttl | INT | 表的生命周期。需要注意,`ttl` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 9 | table_comment | BINARY(1024) | 表注释 |
| 10 | type | BINARY(21) | 表类型 |
## INS_TAGS
@ -185,17 +183,17 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
## INS_COLUMNS
| # | **列名** | **数据类型** | **说明** |
| --- | :---------: | ------------- | ---------------------- |
| 1 | table_name | BINARY(192) | 表名 |
| 2 | db_name | BINARY(64) | 该表所在的数据库的名称 |
| 3 | table_type | BINARY(21) | 表类型 |
| 4 | col_name | BINARY(64) | 列 的名称 |
| 5 | col_type | BINARY(32) | 列 的类型 |
| 6 | col_length | INT | 列 的长度 |
| 7 | col_precision | INT | 列 的精度 |
| 8 | col_scale | INT | 列 的比例 |
| 9 | col_nullable | INT | 列 是否可以为空 |
| # | **列名** | **数据类型** | **说明** |
| --- | :-----------: | ------------ | ---------------------- |
| 1 | table_name | BINARY(192) | 表名 |
| 2 | db_name | BINARY(64) | 该表所在的数据库的名称 |
| 3 | table_type | BINARY(21) | 表类型 |
| 4 | col_name | BINARY(64) | 列 的名称 |
| 5 | col_type | BINARY(32) | 列 的类型 |
| 6 | col_length | INT | 列 的长度 |
| 7 | col_precision | INT | 列 的精度 |
| 8 | col_scale | INT | 列 的比例 |
| 9 | col_nullable | INT | 列 是否可以为空 |
## INS_USERS
@ -211,60 +209,60 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
提供企业版授权的相关信息。
| # | **列名** | **数据类型** | **说明** |
| --- | :---------: | ------------ | -------------------------------------------------- |
| 1 | version | BINARY(9) | 企业版授权说明official(官方授权的)/trial(试用的) |
| 2 | cpu_cores | BINARY(9) | 授权使用的 CPU 核心数量 |
| 3 | dnodes | BINARY(10) | 授权使用的 dnode 节点数量。需要注意,`dnodes` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 4 | streams | BINARY(10) | 授权创建的流数量。需要注意,`streams` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 5 | users | BINARY(10) | 授权创建的用户数量。需要注意,`users` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 6 | accounts | BINARY(10) | 授权创建的帐户数量。需要注意,`accounts` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 7 | storage | BINARY(21) | 授权使用的存储空间大小。需要注意,`storage` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 8 | connections | BINARY(21) | 授权使用的客户端连接数量。需要注意,`connections` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 9 | databases | BINARY(11) | 授权使用的数据库数量。需要注意,`databases` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 10 | speed | BINARY(9) | 授权使用的数据点每秒写入数量 |
| 11 | querytime | BINARY(9) | 授权使用的查询总时长 |
| 12 | timeseries | BINARY(21) | 授权使用的测点数量 |
| 13 | expired | BINARY(5) | 是否到期true到期false未到期 |
| 14 | expire_time | BINARY(19) | 试用期到期时间 |
| # | **列名** | **数据类型** | **说明** |
| --- | :---------: | ------------ | --------------------------------------------------------------------------------------------------------- |
| 1 | version | BINARY(9) | 企业版授权说明official(官方授权的)/trial(试用的) |
| 2 | cpu_cores | BINARY(9) | 授权使用的 CPU 核心数量 |
| 3 | dnodes | BINARY(10) | 授权使用的 dnode 节点数量。需要注意,`dnodes` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 4 | streams | BINARY(10) | 授权创建的流数量。需要注意,`streams` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 5 | users | BINARY(10) | 授权创建的用户数量。需要注意,`users` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 6 | accounts | BINARY(10) | 授权创建的帐户数量。需要注意,`accounts` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 7 | storage | BINARY(21) | 授权使用的存储空间大小。需要注意,`storage` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 8 | connections | BINARY(21) | 授权使用的客户端连接数量。需要注意,`connections` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 9 | databases | BINARY(11) | 授权使用的数据库数量。需要注意,`databases` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 10 | speed | BINARY(9) | 授权使用的数据点每秒写入数量 |
| 11 | querytime | BINARY(9) | 授权使用的查询总时长 |
| 12 | timeseries | BINARY(21) | 授权使用的测点数量 |
| 13 | expired | BINARY(5) | 是否到期true到期false未到期 |
| 14 | expire_time | BINARY(19) | 试用期到期时间 |
## INS_VGROUPS
系统中所有 vgroups 的信息。
| # | **列名** | **数据类型** | **说明** |
| --- | :-------: | ------------ | ------------------------------------------------------ |
| 1 | vgroup_id | INT | vgroup id |
| 2 | db_name | BINARY(32) | 数据库名 |
| 3 | tables | INT | 此 vgroup 内有多少表。需要注意,`tables` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 4 | status | BINARY(10) | 此 vgroup 的状态 |
| 5 | v1_dnode | INT | 第一个成员所在的 dnode 的 id |
| 6 | v1_status | BINARY(10) | 第一个成员的状态 |
| 7 | v2_dnode | INT | 第二个成员所在的 dnode 的 id |
| 8 | v2_status | BINARY(10) | 第二个成员的状态 |
| 9 | v3_dnode | INT | 第三个成员所在的 dnode 的 id |
| 10 | v3_status | BINARY(10) | 第三个成员的状态 |
| 11 | nfiles | INT | 此 vgroup 中数据/元数据文件的数量 |
| 12 | file_size | INT | 此 vgroup 中数据/元数据文件的大小 |
| 13 | tsma | TINYINT | 此 vgroup 是否专用于 Time-range-wise SMA1: 是, 0: 否 |
| # | **列名** | **数据类型** | **说明** |
| --- | :-------: | ------------ | ------------------------------------------------------------------------------------------------ |
| 1 | vgroup_id | INT | vgroup id |
| 2 | db_name | BINARY(32) | 数据库名 |
| 3 | tables | INT | 此 vgroup 内有多少表。需要注意,`tables` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 4 | status | BINARY(10) | 此 vgroup 的状态 |
| 5 | v1_dnode | INT | 第一个成员所在的 dnode 的 id |
| 6 | v1_status | BINARY(10) | 第一个成员的状态 |
| 7 | v2_dnode | INT | 第二个成员所在的 dnode 的 id |
| 8 | v2_status | BINARY(10) | 第二个成员的状态 |
| 9 | v3_dnode | INT | 第三个成员所在的 dnode 的 id |
| 10 | v3_status | BINARY(10) | 第三个成员的状态 |
| 11 | nfiles | INT | 此 vgroup 中数据/元数据文件的数量 |
| 12 | file_size | INT | 此 vgroup 中数据/元数据文件的大小 |
| 13 | tsma | TINYINT | 此 vgroup 是否专用于 Time-range-wise SMA1: 是, 0: 否 |
## INS_CONFIGS
系统配置参数。
| # | **列名** | **数据类型** | **说明** |
| --- | :------: | ------------ | ------------ |
| 1 | name | BINARY(32) | 配置项名称 |
| # | **列名** | **数据类型** | **说明** |
| --- | :------: | ------------ | --------------------------------------------------------------------------------------- |
| 1 | name | BINARY(32) | 配置项名称 |
| 2 | value | BINARY(64) | 该配置项的值。需要注意,`value` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
## INS_DNODE_VARIABLES
系统中每个 dnode 的配置参数。
| # | **列名** | **数据类型** | **说明** |
| --- | :------: | ------------ | ------------ |
| 1 | dnode_id | INT | dnode 的 ID |
| 2 | name | BINARY(32) | 配置项名称 |
| # | **列名** | **数据类型** | **说明** |
| --- | :------: | ------------ | --------------------------------------------------------------------------------------- |
| 1 | dnode_id | INT | dnode 的 ID |
| 2 | name | BINARY(32) | 配置项名称 |
| 3 | value | BINARY(64) | 该配置项的值。需要注意,`value` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
## INS_TOPICS
@ -284,19 +282,19 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数
| 2 | consumer_group | BINARY(193) | 订阅者的消费者组 |
| 3 | vgroup_id | INT | 消费者被分配的 vgroup id |
| 4 | consumer_id | BIGINT | 消费者的唯一 id |
| 5 | offset | BINARY(64) | 消费者的消费进度 |
| 6 | rows | BIGINT | 消费者的消费的数据条数 |
| 5 | offset | BINARY(64) | 消费者的消费进度 |
| 6 | rows | BIGINT | 消费者的消费的数据条数 |
## INS_STREAMS
| # | **列名** | **数据类型** | **说明** |
| --- | :----------: | ------------ | --------------------------------------- |
| 1 | stream_name | BINARY(64) | 流计算名称 |
| 2 | create_time | TIMESTAMP | 创建时间 |
| 3 | sql | BINARY(1024) | 创建流计算时提供的 SQL 语句 |
| 4 | status | BINARY(20) | 流当前状态 |
| 5 | source_db | BINARY(64) | 源数据库 |
| 6 | target_db | BINARY(64) | 目的数据库 |
| 7 | target_table | BINARY(192) | 流计算写入的目标表 |
| 8 | watermark | BIGINT | watermark详见 SQL 手册流式计算。需要注意,`watermark` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| # | **列名** | **数据类型** | **说明** |
| --- | :----------: | ------------ | -------------------------------------------------------------------------------------------------------------------- |
| 1 | stream_name | BINARY(64) | 流计算名称 |
| 2 | create_time | TIMESTAMP | 创建时间 |
| 3 | sql | BINARY(1024) | 创建流计算时提供的 SQL 语句 |
| 4 | status | BINARY(20) | 流当前状态 |
| 5 | source_db | BINARY(64) | 源数据库 |
| 6 | target_db | BINARY(64) | 目的数据库 |
| 7 | target_table | BINARY(192) | 流计算写入的目标表 |
| 8 | watermark | BIGINT | watermark详见 SQL 手册流式计算。需要注意,`watermark` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |
| 9 | trigger | INT | 计算结果推送模式,详见 SQL 手册流式计算。需要注意,`trigger` 为 TDengine 关键字,作为列名使用时需要使用 ` 进行转义。 |

View File

@ -101,6 +101,7 @@ SHOW GRANTS;
```sql
SHOW INDEXES FROM tbl_name [FROM db_name];
SHOW INDEXES FROM [db_name.]tbl_name;
```
显示已创建的索引。
@ -269,6 +270,7 @@ Query OK, 24 row(s) in set (0.002444s)
```sql
SHOW TAGS FROM child_table_name [FROM db_name];
SHOW TAGS FROM [db_name.]child_table_name;
```
显示子表的标签信息。

View File

@ -16,7 +16,7 @@ CREATE USER use_name PASS 'password' [SYSINFO {1|0}];
use_name 最长为 23 字节。
password 最长为 128 字节,合法字符包括"a-zA-Z0-9!?$%^&*()_+={[}]:;@~#|<,>.?/",不可以出现单双引号、撇号、反斜杠和空格,且不可以为空。
password 最长为 31 字节,合法字符包括"a-zA-Z0-9!?$%^&*()_+={[}]:;@~#|<,>.?/",不可以出现单双引号、撇号、反斜杠和空格,且不可以为空。
SYSINFO 表示用户是否可以查看系统信息。1 表示可以查看0 表示不可以查看。系统信息包括服务端配置信息、服务端各种节点信息(如 DNODE、QNODE等、存储相关的信息等。默认为可以查看系统信息。
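下面是一个创建用户的简单示意(用户名与密码均为假设,密码不超过 31 字节):

```sql
CREATE USER test_user PASS 'Ab1#xyz' SYSINFO 0;  -- 创建不可查看系统信息的用户
```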

View File

@ -4,12 +4,13 @@ title: 索引
description: 索引功能的使用细节
---
TDengine 从 3.0.0.0 版本开始引入了索引功能,支持 SMA 索引和 FULLTEXT 索引。
TDengine 从 3.0.0.0 版本开始引入了索引功能,支持 SMA 索引和 tag 索引。
## 创建索引
```sql
CREATE FULLTEXT INDEX index_name ON tb_name (col_name [, col_name] ...)
CREATE INDEX index_name ON tb_name index_option
CREATE SMA INDEX index_name ON tb_name index_option
@ -28,9 +29,23 @@ functions:
- WATERMARK: 最小单位毫秒,取值范围 [0ms, 900000ms],默认值为 5 秒,只可用于超级表。
- MAX_DELAY: 最小单位毫秒,取值范围 [1ms, 900000ms],默认值为 interval 的值(但不能超过最大值),只可用于超级表。注:不建议 MAX_DELAY 设置太小,否则会过于频繁的推送结果,影响存储和查询性能,如无特殊需求,取默认值即可。
### FULLTEXT 索引
对指定列建立文本索引可以提升含有文本过滤的查询的性能。FULLTEXT 索引不支持 index_option 语法。现阶段只支持对 JSON 类型的标签列创建 FULLTEXT 索引。不支持多列联合索引,但可以为每个列分别创建 FULLTEXT 索引。
```sql
DROP DATABASE IF EXISTS d0;
CREATE DATABASE d0;
USE d0;
CREATE TABLE IF NOT EXISTS st1 (ts timestamp, c1 int, c2 float, c3 double) TAGS (t1 int unsigned);
CREATE TABLE ct1 USING st1 TAGS(1000);
CREATE TABLE ct2 USING st1 TAGS(2000);
INSERT INTO ct1 VALUES(now+0s, 10, 2.0, 3.0);
INSERT INTO ct1 VALUES(now+1s, 11, 2.1, 3.1)(now+2s, 12, 2.2, 3.2)(now+3s, 13, 2.3, 3.3);
CREATE SMA INDEX sma_index_name1 ON st1 FUNCTION(max(c1),max(c2),min(c1)) INTERVAL(5m,10s) SLIDING(5m) WATERMARK 5s MAX_DELAY 1m;
-- 从 SMA 索引查询
ALTER LOCAL 'querySmaOptimize' '1';
SELECT max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDING(5m);
SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDING(5m);
-- 从原始数据查询
ALTER LOCAL 'querySmaOptimize' '0';
```
## 删除索引
@ -41,8 +56,8 @@ DROP INDEX index_name;
## 查看索引
````sql
```sql
SHOW INDEXES FROM tbl_name [FROM db_name];
SHOW INDEXES FROM [db_name.]tbl_name;
````
显示在所指定的数据库或表上已创建的索引。

View File

@ -18,6 +18,7 @@ description: "TDengine 3.0 版本的语法变更说明"
| 8 | 混合运算 | 增强 | 查询中的混合运算标量运算和矢量运算混合全面增强SELECT的各个子句均全面支持符合语法语义的混合运算。
| 9 | 标签运算 | 新增 |在查询中,标签列可以像普通列一样参与各种运算,用于各种子句。
| 10 | 时间线子句和时间函数用于超级表查询 | 增强 |没有PARTITION BY时超级表的数据会被合并成一条时间线。
| 11 | GEOMETRY | 新增 | 几何类型。
## SQL 语句变更
@ -33,7 +34,7 @@ description: "TDengine 3.0 版本的语法变更说明"
| 6 | ALTER USER | 调整 | 废除<ul><li>PRIVILEGE修改用户权限。3.0版本使用GRANT和REVOKE来授予和回收权限。<br/>新增</li><li>ENABLE启用或停用此用户。</li><li>SYSINFO修改用户是否可查看系统信息。</li></ul>
| 7 | COMPACT VNODES | 暂不支持 | 整理指定VNODE的数据。3.0.0版本暂不支持。
| 8 | CREATE ACCOUNT | 废除 | 2.x中为企业版功能3.0不再支持。语法暂时保留了执行报“This statement is no longer supported”错误。
| 9 | CREATE DATABASE | 调整 | <p>废除</p><ul><li>BLOCKSVNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。</li><li>CACHEVNODE使用的内存块的大小。3.0版本使用BUFFER来表示VNODE写入内存池的大小。</li><li>CACHELAST缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。</li><li>DAYS数据文件存储数据的时间跨度。3.0版本使用DURATION代替。</li><li>FSYNC当 WAL 设置为 2 时,执行 fsync 的周期。3.0版本使用WAL_FSYNC_PERIOD代替。</li><li>QUORUM写入需要的副本确认数。3.0版本使用STRICT来指定强一致还是弱一致。</li><li>UPDATE更新操作的支持模式。3.0版本所有数据库都支持部分列更新。</li><li>WALWAL 级别。3.0版本使用WAL_LEVEL代替。</li></ul><p>新增</p><ul><li>BUFFER一个 VNODE 写入内存池大小。</li><li>CACHEMODEL表示是否在内存中缓存子表的最近数据。</li><li>CACHESIZE表示缓存子表最近数据的内存大小。</li><li>DURATION代替原DAYS参数。新增支持带单位的设置方式。</li><li>PAGES一个 VNODE 中元数据存储引擎的缓存页个数。</li><li>PAGESIZE一个 VNODE 中元数据存储引擎的页大小。</li><li>RETENTIONS表示数据的聚合周期和保存时长。</li><li>STRICT表示数据同步的一致性要求。</li><li>SINGLE_STABLE表示此数据库中是否只可以创建一个超级表。</li><li>VGROUPS数据库中初始VGROUP的数目。</li><li>WAL_FSYNC_PERIOD代替原FSYNC参数。</li><li>WAL_LEVEL代替原WAL参数。</li><li>WAL_RETENTION_PERIODwal文件的额外保留策略用于数据订阅。</li><li>WAL_RETENTION_SIZEwal文件的额外保留策略用于数据订阅。</li><li>WAL_ROLL_PERIODwal文件切换时长。</li><li>WAL_SEGMENT_SIZEwal单个文件大小。</li></ul><p>调整</p><ul><li>KEEP3.0版本新增支持带单位的设置方式。</li></ul>
| 9 | CREATE DATABASE | 调整 | <p>废除</p><ul><li>BLOCKSVNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。</li><li>CACHEVNODE使用的内存块的大小。3.0版本使用BUFFER来表示VNODE写入内存池的大小。</li><li>CACHELAST缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。</li><li>DAYS数据文件存储数据的时间跨度。3.0版本使用DURATION代替。</li><li>FSYNC当 WAL 设置为 2 时,执行 fsync 的周期。3.0版本使用WAL_FSYNC_PERIOD代替。</li><li>QUORUM写入需要的副本确认数。3.0版本使用STRICT来指定强一致还是弱一致。</li><li>UPDATE更新操作的支持模式。3.0版本所有数据库都支持部分列更新。</li><li>WALWAL 级别。3.0版本使用WAL_LEVEL代替。</li></ul><p>新增</p><ul><li>BUFFER一个 VNODE 写入内存池大小。</li><li>CACHEMODEL表示是否在内存中缓存子表的最近数据。</li><li>CACHESIZE表示缓存子表最近数据的内存大小。</li><li>DURATION代替原DAYS参数。新增支持带单位的设置方式。</li><li>PAGES一个 VNODE 中元数据存储引擎的缓存页个数。</li><li>PAGESIZE一个 VNODE 中元数据存储引擎的页大小。</li><li>RETENTIONS表示数据的聚合周期和保存时长。</li><li>STRICT表示数据同步的一致性要求。</li><li>SINGLE_STABLE表示此数据库中是否只可以创建一个超级表。</li><li>VGROUPS数据库中初始VGROUP的数目。</li><li>WAL_FSYNC_PERIOD代替原FSYNC参数。</li><li>WAL_LEVEL代替原WAL参数。</li><li>WAL_RETENTION_PERIODwal文件的额外保留策略用于数据订阅。</li><li>WAL_RETENTION_SIZEwal文件的额外保留策略用于数据订阅。</li></ul><p>调整</p><ul><li>KEEP3.0版本新增支持带单位的设置方式。</li></ul>
| 10 | CREATE DNODE | 调整 | 新增主机名和端口号分开指定语法<ul><li>CREATE DNODE dnode_host_name PORT port_val</li></ul>
| 11 | CREATE INDEX | 新增 | 创建SMA索引。
| 12 | CREATE MNODE | 新增 | 创建管理节点。

View File

@ -362,6 +362,8 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **max** : 数据类型的 列/标签 的最大值。生成的值将小于最大值。
- **fun** : 此列数据以函数填充,目前只支持 sin 和 cos 两个函数,输入参数为时间戳换算成的角度值,换算公式:角度 x = 输入的时间列 ts 值 % 360。同时支持系数调节、随机波动因子调节以固定格式的表达式展现如 fun=“10\*sin(x)+100\*random(5)” ,其中 x 表示角度,取值 0 ~ 360 度,增长步长与时间列步长一致。10 表示乘的系数100 表示加或减的系数5 表示波动幅度在 5% 的随机范围内。目前支持 int、bigint、float、double 四种数据类型。注意:表达式为固定模式,不可前后颠倒。
- **values** : nchar/binary 列/标签的值域,将从值中随机选择。
- **sma**: 将该列加入 SMA 中,值为 "yes" 或者 "no",默认为 "no"。
@ -437,3 +439,29 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **sqls**
- **sql** : 执行的 SQL 命令,必填。
#### 配置文件中数据类型书写对照表
| # | **引擎** | **taosBenchmark**
| --- | :----------------: | :---------------:
| 1 | TIMESTAMP | timestamp
| 2 | INT | int
| 3 | INT UNSIGNED | uint
| 4 | BIGINT | bigint
| 5 | BIGINT UNSIGNED | ubigint
| 6 | FLOAT | float
| 7 | DOUBLE | double
| 8 | BINARY | binary
| 9 | SMALLINT | smallint
| 10 | SMALLINT UNSIGNED | usmallint
| 11 | TINYINT | tinyint
| 12 | TINYINT UNSIGNED | utinyint
| 13 | BOOL | bool
| 14 | NCHAR | nchar
| 15 | VARCHAR | varchar
| 16 | JSON | json
注意taosBenchmark 配置文件中数据类型必须小写方可识别

View File

@ -5,7 +5,7 @@ description: "TDengine 服务端、客户端和连接器支持的平台列表"
## TDengine 服务端支持的平台列表
| | **Windows server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **macOS** |
| | **Windows server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18 以上** | **统信 UOS** | **银河/中标麒麟** | **凝思 V60/V80** | **macOS** |
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | ------------ | ----------------- | ---------------- | --------- |
| X64 | ● | ● | ● | ● | ● | ● | ● | ● |
| 树莓派 ARM64 | | | ● | | | | | |

View File

@ -184,7 +184,7 @@ taos -C
| 属性 | 说明 |
| -------- | ------------------------ |
| 适用范围 | 仅服务端适用 |
| 适用范围 | 客户端和服务端都适用 |
| 含义 | 是否上传 telemetry |
| 取值范围 | 0,1 0: 不上传1上传 |
| 缺省值 | 1 |
@ -193,7 +193,7 @@ taos -C
| 属性 | 说明 |
| -------- | ------------------------ |
| 适用范围 | 仅服务端适用 |
| 适用范围 | 客户端和服务端都适用 |
| 含义 | 是否上传 crash 信息 |
| 取值范围 | 0,1 0: 不上传1上传 |
| 缺省值 | 1 |
@ -726,6 +726,25 @@ charset 的有效值是 UTF-8。
| 取值范围 | 0: 不改变1改变 |
| 缺省值 | 0 |
### keepTimeOffset
| 属性 | 说明 |
| -------- | ------------------ |
| 适用范围 | 仅服务端适用 |
| 含义 | 迁移操作的延时 |
| 单位 | 小时 |
| 取值范围 | 0-23 |
| 缺省值 | 0 |
### tmqMaxTopicNum
| 属性 | 说明 |
| -------- | ------------------ |
| 适用范围 | 仅服务端适用 |
| 含义 | 订阅最多可建立的 topic 数量 |
| 取值范围 | 1-10000 |
| 缺省值 | 20 |
## 压缩参数
### compressMsgSize
@ -794,6 +813,7 @@ charset 的有效值是 UTF-8。
| 53 | udf | 是 | 是 | |
| 54 | enableCoreFile | 是 | 是 | |
| 55 | ttlChangeOnWrite | 否 | 是 | |
| 56 | keepTimeOffset | 是 | 是 | |
## 2.x->3.0 的废弃参数
@ -808,76 +828,74 @@ charset 的有效值是 UTF-8。
| 7 | offlineThreshold | 是 | 否 | 3.0 行为未知 |
| 8 | role | 是 | 否 | 由 supportVnode 决定是否能够创建 |
| 9 | dnodeNopLoop | 是 | 否 | 2.6 文档中未找到此参数 |
| 10 | keepTimeOffset | 是 | 否 | 2.6 文档中未找到此参数 |
| 11 | rpcTimer | 是 | 否 | 3.0 行为未知 |
| 12 | rpcMaxTime | 是 | 否 | 3.0 行为未知 |
| 13 | rpcForceTcp | 是 | 否 | 默认为 TCP |
| 14 | tcpConnTimeout | 是 | 否 | 3.0 行为未知 |
| 15 | syncCheckInterval | 是 | 否 | 3.0 行为未知 |
| 16 | maxTmrCtrl | 是 | 否 | 3.0 行为未知 |
| 17 | monitorReplica | 是 | 否 | 由 RAFT 协议管理多副本 |
| 18 | smlTagNullName | 是 | 否 | 3.0 行为未知 |
| 20 | ratioOfQueryCores | 是 | 否 | 由 线程池 相关配置参数决定 |
| 21 | maxStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 22 | maxFirstStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 23 | retryStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 24 | streamCompDelayRatio | 是 | 否 | 3.0 行为未知 |
| 25 | maxVgroupsPerDb | 是 | 否 | 由 create db 的参数 vgroups 指定实际 vgroups 数量 |
| 26 | maxTablesPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 27 | minTablesPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 28 | tableIncStepPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 29 | cache | 是 | 否 | 由 buffer 代替 cache\*blocks |
| 30 | blocks | 是 | 否 | 由 buffer 代替 cache\*blocks |
| 31 | days | 是 | 否 | 由 create db 的参数 duration 取代 |
| 32 | keep | 是 | 否 | 由 create db 的参数 keep 取代 |
| 33 | minRows | 是 | 否 | 由 create db 的参数 minRows 取代 |
| 34 | maxRows | 是 | 否 | 由 create db 的参数 maxRows 取代 |
| 35 | quorum | 是 | 否 | 由 RAFT 协议决定 |
| 36 | comp | 是 | 否 | 由 create db 的参数 comp 取代 |
| 37 | walLevel | 是 | 否 | 由 create db 的参数 wal_level 取代 |
| 38 | fsync | 是 | 否 | 由 create db 的参数 wal_fsync_period 取代 |
| 39 | replica | 是 | 否 | 由 create db 的参数 replica 取代 |
| 40 | partitions | 是 | 否 | 3.0 行为未知 |
| 41 | update | 是 | 否 | 允许更新部分列 |
| 42 | cachelast | 是 | 否 | 由 create db 的参数 cacheModel 取代 |
| 43 | maxSQLLength | 是 | 否 | SQL 上限为 1MB无需参数控制 |
| 44 | maxWildCardsLength | 是 | 否 | 3.0 行为未知 |
| 45 | maxRegexStringLen | 是 | 否 | 3.0 行为未知 |
| 46 | maxNumOfOrderedRes | 是 | 否 | 3.0 行为未知 |
| 47 | maxConnections | 是 | 否 | 取决于系统配置和系统处理能力,详见后面的 Note |
| 48 | mnodeEqualVnodeNum | 是 | 否 | 3.0 行为未知 |
| 49 | http | 是 | 否 | http 服务由 taosAdapter 提供 |
| 50 | httpEnableRecordSql | 是 | 否 | taosd 不提供 http 服务 |
| 51 | httpMaxThreads | 是 | 否 | taosd 不提供 http 服务 |
| 52 | restfulRowLimit | 是 | 否 | taosd 不提供 http 服务 |
| 53 | httpDbNameMandatory | 是 | 否 | taosd 不提供 http 服务 |
| 54 | httpKeepAlive | 是 | 否 | taosd 不提供 http 服务 |
| 55 | enableRecordSql | 是 | 否 | 3.0 行为未知 |
| 56 | maxBinaryDisplayWidth | 是 | 否 | 3.0 行为未知 |
| 57 | stream | 是 | 否 | 默认启用连续查询 |
| 58 | retrieveBlockingModel | 是 | 否 | 3.0 行为未知 |
| 59 | tsdbMetaCompactRatio | 是 | 否 | 3.0 行为未知 |
| 60 | defaultJSONStrType | 是 | 否 | 3.0 行为未知 |
| 61 | walFlushSize | 是 | 否 | 3.0 行为未知 |
| 62 | keepTimeOffset | 是 | 否 | 3.0 行为未知 |
| 63 | flowctrl | 是 | 否 | 3.0 行为未知 |
| 64 | slaveQuery | 是 | 否 | 3.0 行为未知: slave vnode 是否能够处理查询? |
| 65 | adjustMaster | 是 | 否 | 3.0 行为未知 |
| 66 | topicBinaryLen | 是 | 否 | 3.0 行为未知 |
| 67 | telegrafUseFieldNum | 是 | 否 | 3.0 行为未知 |
| 68 | deadLockKillQuery | 是 | 否 | 3.0 行为未知 |
| 69 | clientMerge | 是 | 否 | 3.0 行为未知 |
| 70 | sdbDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 71 | odbcDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 72 | httpDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 73 | monDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 74 | cqDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 75 | shortcutFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 76 | probeSeconds | 是 | 否 | 3.0 行为未知 |
| 77 | probeKillSeconds | 是 | 否 | 3.0 行为未知 |
| 78 | probeInterval | 是 | 否 | 3.0 行为未知 |
| 79 | lossyColumns | 是 | 否 | 3.0 行为未知 |
| 80 | fPrecision | 是 | 否 | 3.0 行为未知 |
| 81 | dPrecision | 是 | 否 | 3.0 行为未知 |
| 82 | maxRange | 是 | 否 | 3.0 行为未知 |
| 83 | range | 是 | 否 | 3.0 行为未知 |
| 10 | rpcTimer | 是 | 否 | 3.0 行为未知 |
| 11 | rpcMaxTime | 是 | 否 | 3.0 行为未知 |
| 12 | rpcForceTcp | 是 | 否 | 默认为 TCP |
| 13 | tcpConnTimeout | 是 | 否 | 3.0 行为未知 |
| 14 | syncCheckInterval | 是 | 否 | 3.0 行为未知 |
| 15 | maxTmrCtrl | 是 | 否 | 3.0 行为未知 |
| 16 | monitorReplica | 是 | 否 | 由 RAFT 协议管理多副本 |
| 17 | smlTagNullName | 是 | 否 | 3.0 行为未知 |
| 18 | ratioOfQueryCores | 是 | 否 | 由 线程池 相关配置参数决定 |
| 19 | maxStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 20 | maxFirstStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 21 | retryStreamCompDelay | 是 | 否 | 3.0 行为未知 |
| 22 | streamCompDelayRatio | 是 | 否 | 3.0 行为未知 |
| 23 | maxVgroupsPerDb | 是 | 否 | 由 create db 的参数 vgroups 指定实际 vgroups 数量 |
| 24 | maxTablesPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 25 | minTablesPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 26 | tableIncStepPerVnode | 是 | 否 | DB 中的所有表近似平均分配到各个 vgroup |
| 27 | cache | 是 | 否 | 由 buffer 代替 cache\*blocks |
| 28 | blocks | 是 | 否 | 由 buffer 代替 cache\*blocks |
| 29 | days | 是 | 否 | 由 create db 的参数 duration 取代 |
| 30 | keep | 是 | 否 | 由 create db 的参数 keep 取代 |
| 31 | minRows | 是 | 否 | 由 create db 的参数 minRows 取代 |
| 32 | maxRows | 是 | 否 | 由 create db 的参数 maxRows 取代 |
| 33 | quorum | 是 | 否 | 由 RAFT 协议决定 |
| 34 | comp | 是 | 否 | 由 create db 的参数 comp 取代 |
| 35 | walLevel | 是 | 否 | 由 create db 的参数 wal_level 取代 |
| 36 | fsync | 是 | 否 | 由 create db 的参数 wal_fsync_period 取代 |
| 37 | replica | 是 | 否 | 由 create db 的参数 replica 取代 |
| 38 | partitions | 是 | 否 | 3.0 行为未知 |
| 39 | update | 是 | 否 | 允许更新部分列 |
| 40 | cachelast | 是 | 否 | 由 create db 的参数 cacheModel 取代 |
| 41 | maxSQLLength | 是 | 否 | SQL 上限为 1MB无需参数控制 |
| 42 | maxWildCardsLength | 是 | 否 | 3.0 行为未知 |
| 43 | maxRegexStringLen | 是 | 否 | 3.0 行为未知 |
| 44 | maxNumOfOrderedRes | 是 | 否 | 3.0 行为未知 |
| 45 | maxConnections | 是 | 否 | 取决于系统配置和系统处理能力,详见后面的 Note |
| 46 | mnodeEqualVnodeNum | 是 | 否 | 3.0 行为未知 |
| 47 | http | 是 | 否 | http 服务由 taosAdapter 提供 |
| 48 | httpEnableRecordSql | 是 | 否 | taosd 不提供 http 服务 |
| 49 | httpMaxThreads | 是 | 否 | taosd 不提供 http 服务 |
| 50 | restfulRowLimit | 是 | 否 | taosd 不提供 http 服务 |
| 51 | httpDbNameMandatory | 是 | 否 | taosd 不提供 http 服务 |
| 52 | httpKeepAlive | 是 | 否 | taosd 不提供 http 服务 |
| 53 | enableRecordSql | 是 | 否 | 3.0 行为未知 |
| 54 | maxBinaryDisplayWidth | 是 | 否 | 3.0 行为未知 |
| 55 | stream | 是 | 否 | 默认启用连续查询 |
| 56 | retrieveBlockingModel | 是 | 否 | 3.0 行为未知 |
| 57 | tsdbMetaCompactRatio | 是 | 否 | 3.0 行为未知 |
| 58 | defaultJSONStrType | 是 | 否 | 3.0 行为未知 |
| 59 | walFlushSize | 是 | 否 | 3.0 行为未知 |
| 60 | flowctrl | 是 | 否 | 3.0 行为未知 |
| 61 | slaveQuery | 是 | 否 | 3.0 行为未知: slave vnode 是否能够处理查询? |
| 62 | adjustMaster | 是 | 否 | 3.0 行为未知 |
| 63 | topicBinaryLen | 是 | 否 | 3.0 行为未知 |
| 64 | telegrafUseFieldNum | 是 | 否 | 3.0 行为未知 |
| 65 | deadLockKillQuery | 是 | 否 | 3.0 行为未知 |
| 66 | clientMerge | 是 | 否 | 3.0 行为未知 |
| 67 | sdbDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 68 | odbcDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 69 | httpDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 70 | monDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 71 | cqDebugFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 72 | shortcutFlag | 是 | 否 | 参考 3.0 的 DebugFlag 系列参数 |
| 73 | probeSeconds | 是 | 否 | 3.0 行为未知 |
| 74 | probeKillSeconds | 是 | 否 | 3.0 行为未知 |
| 75 | probeInterval | 是 | 否 | 3.0 行为未知 |
| 76 | lossyColumns | 是 | 否 | 3.0 行为未知 |
| 77 | fPrecision | 是 | 否 | 3.0 行为未知 |
| 78 | dPrecision | 是 | 否 | 3.0 行为未知 |
| 79 | maxRange | 是 | 否 | 3.0 行为未知 |
| 80 | range | 是 | 否 | 3.0 行为未知 |

View File

@ -210,19 +210,6 @@ TDinsight dashboard 数据来源于 log 库存放监控数据的默认db
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|cluster\_id|NCHAR|TAG|cluster id|
### logs 表
`logs` 表记录日志信息。
|field|type|is\_tag|comment|
|:----|:---|:-----|:------|
|ts|TIMESTAMP||timestamp|
|level|VARCHAR||log level|
|content|NCHAR||log content长度不超过1024字节|
|dnode\_id|INT|TAG|dnode id|
|dnode\_ep|NCHAR|TAG|dnode endpoint|
|cluster\_id|NCHAR|TAG|cluster id|
### log\_summary 表
`log_summary` 记录日志统计信息。

View File

@ -369,6 +369,9 @@ curl -X DELETE http://localhost:8083/connectors/TDengineSourceConnector
8. `topic.per.stable`: 如果设置为 true表示一个超级表对应一个 Kafka topictopic的命名规则 `<topic.prefix><topic.delimiter><connection.database><topic.delimiter><stable.name>`;如果设置为 false则指定的 DB 中的所有数据进入一个 Kafka topictopic 的命名规则为 `<topic.prefix><topic.delimiter><connection.database>`
9. `topic.ignore.db`: topic 命名规则是否包含 database 名称true 表示规则为 `<topic.prefix><topic.delimiter><stable.name>`false 表示规则为 `<topic.prefix><topic.delimiter><connection.database><topic.delimiter><stable.name>`,默认 false。此配置项在 `topic.per.stable` 设置为 false 时不生效。
10. `topic.delimiter`: topic 名称分割符,默认为 `-`
11. `read.method`: 从 TDengine 读取数据方式query 或是 subscription。默认为 subscription。
12. `subscription.group.id`: 指定 TDengine 数据订阅的组 id`read.method` 为 subscription 时,此项为必填项。
13. `subscription.from`: 指定 TDengine 数据订阅起始位置latest 或是 earliest。默认为 latest。
## 其他说明

View File

@ -8,21 +8,16 @@ DBeaver 是一款流行的跨平台数据库管理工具,方便开发者、数
## 前置条件
### 安装 DBeaver
使用 DBeaver 管理 TDengine 需要以下几方面的准备工作。
- 安装 DBeaver。DBeaver 支持主流操作系统包括 Windows、macOS 和 Linux。请注意[下载](https://dbeaver.io/download/)正确平台和版本23.1.1+)的安装包。详细安装步骤请参考 [DBeaver 官方文档](https://github.com/dbeaver/dbeaver/wiki/Installation)。
- 如果使用独立部署的 TDengine 集群,请确认 TDengine 正常运行,并且 taosAdapter 已经安装并正常运行,具体细节请参考 [taosAdapter 的使用手册](/reference/taosadapter)。
- 如果使用 TDengine Cloud请[注册](https://cloud.taosdata.com/)相应账号。
## 使用步骤
### 使用 DBeaver 访问内部部署的 TDengine
## 使用 DBeaver 访问内部部署的 TDengine
1. 启动 DBeaver 应用,点击按钮或菜单项选择“连接到数据库”,然后在时间序列分类栏中选择 TDengine。
![DBeaver 连接 TDengine](./dbeaver/dbeaver-connect-tdengine-zh.webp)
2. 配置 TDengine 连接,填入主机地址、端口号、用户名和密码。如果 TDengine 部署在本机,可以只填用户名和密码,默认用户名为 root默认密码为 taosdata。点击“测试连接”可以对连接是否可用进行测试。如果本机没有安装 TDengine Java 连接器DBeaver 会提示下载安装。
@ -31,37 +26,12 @@ DBeaver 是一款流行的跨平台数据库管理工具,方便开发者、数
3. 连接成功将显示如下图所示。如果显示连接失败,请检查 TDengine 服务和 taosAdapter 是否正确运行,主机地址、端口号、用户名和密码是否正确。
![连接成功](./dbeaver/dbeaver-connect-tdengine-test-zh.webp)
4. 使用 DBeaver 选择数据库和表可以浏览 TDengine 服务的数据。
![DBeaver 浏览 TDengine 数据](./dbeaver/dbeaver-browse-data-zh.webp)
5. 也可以通过执行 SQL 命令的方式对 TDengine 数据进行操作。
![DBeaver SQL 命令](./dbeaver/dbeaver-sql-execution-zh.webp)
### 使用 DBeaver 访问 TDengine Cloud
1. 登录 TDengine Cloud 服务在管理界面中选择“编程”和“Java”然后复制 TDENGINE_JDBC_URL 的字符串值。
![复制 TDengine Cloud DSN](./dbeaver/tdengine-cloud-jdbc-dsn-zh.webp)
2. 启动 DBeaver 应用,点击按钮或菜单项选择“连接到数据库”,然后在时间序列分类栏中选择 TDengine Cloud。
![DBeaver 连接 TDengine Cloud](./dbeaver/dbeaver-connect-tdengine-cloud-zh.webp)
3. 配置 TDengine Cloud 连接,填入 JDBC_URL 值。点击“测试连接”如果本机没有安装 TDengine Java 连接器DBeaver 会提示下载安装。连接成功将显示如下图所示。如果显示连接失败,请检查 TDengine Cloud 服务是否启动JDBC_URL 是否正确。
![配置 TDengine Cloud 连接](./dbeaver/dbeaver-connect-tdengine-cloud-test-zh.webp)
4. 使用 DBeaver 选择数据库和表可以浏览 TDengine Cloud 服务的数据。
![DBeaver 浏览 TDengine Cloud 数据](./dbeaver/dbeaver-browse-cloud-data-zh.webp)
5. 也可以通过执行 SQL 命令的方式对 TDengine Cloud 数据进行操作。
![DBeaver SQL 命令 操作 TDengine Cloud](./dbeaver/dbeaver-sql-execution-cloud-zh.webp)
![DBeaver SQL 命令](./dbeaver/dbeaver-sql-execution-zh.webp)

View File

@ -112,7 +112,7 @@ TDengine 3.0 采用 hash 一致性算法,确定每张数据表所在的 vnode
### 数据分区
TDengine 除 vnode 分片之外,还对时序数据按照时间段进行分区。每个数据文件只包含一个时间段的时序数据,时间段的长度由 DB 的配置参数 days 决定。这种按时间段分区的方法还便于高效实现数据的保留策略,只要数据文件超过规定的天数(系统配置参数 keep将被自动删除。而且不同的时间段可以存放于不同的路径和存储介质以便于大数据的冷热管理实现多级存储。
TDengine 除 vnode 分片之外,还对时序数据按照时间段进行分区。每个数据文件只包含一个时间段的时序数据,时间段的长度由 DB 的配置参数 duration 决定。这种按时间段分区的方法还便于高效实现数据的保留策略,只要数据文件超过规定的天数(系统配置参数 keep将被自动删除。而且不同的时间段可以存放于不同的路径和存储介质以便于大数据的冷热管理实现多级存储。
总的来说,**TDengine 是通过 vnode 以及时间两个维度,对大数据进行切分**,便于并行高效的管理,实现水平扩展。
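例如,下面的建库语句(仅为示意)将时序数据按每 10 天一个数据文件进行分区,并保留 365 天:

```sql
CREATE DATABASE power DURATION 10d KEEP 365d;
```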

View File

@ -10,6 +10,10 @@ TDengine 2.x 各版本安装包请访问[这里](https://www.taosdata.com/all-do
import Release from "/components/ReleaseV3";
## 3.0.7.1
<Release type="tdengine" version="3.0.7.1" />
## 3.0.7.0
<Release type="tdengine" version="3.0.7.0" />

View File

@ -36,7 +36,11 @@ dotnet build -c Release
## Usage
```
Usage: mono taosdemo.exe [OPTION...]
Usage with mono:
$ mono taosdemo.exe [OPTION...]
Usage with dotnet:
$ .\bin\Release\net5.0\taosdemo.exe [OPTION...]
--help Show usage.

View File

@ -72,7 +72,7 @@ namespace TDengineDriver
{
if ("--help" == argv[i])
{
Console.WriteLine("Usage: mono taosdemo.exe [OPTION...]");
Console.WriteLine("Usage: taosdemo.exe [OPTION...]");
Console.WriteLine("");
HelpPrint("--help", "Show usage.");
Console.WriteLine("");
@ -305,7 +305,7 @@ namespace TDengineDriver
this.conn = TDengine.Connect(this.host, this.user, this.password, db, this.port);
if (this.conn == IntPtr.Zero)
{
Console.WriteLine("Connect to TDengine failed");
Console.WriteLine("Connect to TDengine failed. Reason: {0}\n", TDengine.Error(0));
CleanAndExitProgram(1);
}
else

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.0.0</version>
<version>3.2.4</version>
</dependency>
<dependency>

View File

@ -287,11 +287,20 @@ DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t timeout);
DLL_EXPORT int32_t tmq_consumer_close(tmq_t *tmq);
DLL_EXPORT int32_t tmq_commit_sync(tmq_t *tmq, const TAOS_RES *msg);
DLL_EXPORT void tmq_commit_async(tmq_t *tmq, const TAOS_RES *msg, tmq_commit_cb *cb, void *param);
DLL_EXPORT int32_t tmq_commit_offset_sync(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset);
DLL_EXPORT void tmq_commit_offset_async(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset, tmq_commit_cb *cb, void *param);
DLL_EXPORT int32_t tmq_get_topic_assignment(tmq_t *tmq, const char *pTopicName, tmq_topic_assignment **assignment,
int32_t *numOfAssignment);
DLL_EXPORT void tmq_free_assignment(tmq_topic_assignment* pAssignment);
DLL_EXPORT int32_t tmq_offset_seek(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset);
DLL_EXPORT const char *tmq_get_topic_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_db_name(TAOS_RES *res);
DLL_EXPORT int32_t tmq_get_vgroup_id(TAOS_RES *res);
DLL_EXPORT int64_t tmq_get_vgroup_offset(TAOS_RES* res);
DLL_EXPORT int64_t tmq_position(tmq_t *tmq, const char *pTopicName, int32_t vgId);
DLL_EXPORT int64_t tmq_committed(tmq_t *tmq, const char *pTopicName, int32_t vgId);
/* ----------------------TMQ CONFIGURATION INTERFACE---------------------- */
enum tmq_conf_res_t {
@ -309,11 +318,6 @@ DLL_EXPORT void tmq_conf_set_auto_commit_cb(tmq_conf_t *conf, tmq_comm
/* -------------------------TMQ MSG HANDLE INTERFACE---------------------- */
DLL_EXPORT const char *tmq_get_topic_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_db_name(TAOS_RES *res);
DLL_EXPORT int32_t tmq_get_vgroup_id(TAOS_RES *res);
DLL_EXPORT int64_t tmq_get_vgroup_offset(TAOS_RES* res);
/* ------------------------------ TAOSX -----------------------------------*/
// note: following apis are unstable
enum tmq_res_t {

View File

@ -34,6 +34,7 @@ extern char tsFirst[];
extern char tsSecond[];
extern char tsLocalFqdn[];
extern char tsLocalEp[];
extern char tsVersionName[];
extern uint16_t tsServerPort;
extern int32_t tsVersion;
extern int32_t tsStatusInterval;
@ -48,6 +49,7 @@ extern int32_t tsMaxNumOfDistinctResults;
extern int32_t tsCompatibleModel;
extern bool tsPrintAuth;
extern int64_t tsTickPerMin[3];
extern int64_t tsTickPerHour[3];
extern int32_t tsCountAlwaysReturnValue;
extern float tsSelectivityRatio;
extern int32_t tsTagFilterResCacheSize;
@ -83,8 +85,14 @@ extern int64_t tsVndCommitMaxIntervalMs;
extern int64_t tsMndSdbWriteDelta;
extern int64_t tsMndLogRetention;
extern int8_t tsGrant;
extern int32_t tsMndGrantMode;
extern bool tsMndSkipGrant;
// dnode
extern int64_t tsDndStart;
extern int64_t tsDndStartOsUptime;
extern int64_t tsDndUpTime;
// monitor
extern bool tsEnableMonitor;
extern int32_t tsMonitorInterval;
@ -185,6 +193,7 @@ extern bool tsDisableStream;
extern int64_t tsStreamBufferSize;
extern int64_t tsCheckpointInterval;
extern bool tsFilterScalarMode;
extern int32_t tsKeepTimeOffset;
extern int32_t tsMaxStreamBackendCache;
extern int32_t tsPQSortMemThreshold;

View File

@ -106,7 +106,6 @@ enum {
HEARTBEAT_KEY_DBINFO,
HEARTBEAT_KEY_STBINFO,
HEARTBEAT_KEY_TMQ,
HEARTBEAT_KEY_USER_PASSINFO,
};
typedef enum _mgmt_table {
@ -636,6 +635,7 @@ typedef struct {
SEpSet epSet;
int32_t svrTimestamp;
int32_t passVer;
int32_t authVer;
char sVer[TSDB_VERSION_LEN];
char sDetailVer[128];
} SConnectRsp;
@ -703,6 +703,7 @@ int32_t tDeserializeSGetUserAuthReq(void* buf, int32_t bufLen, SGetUserAuthReq*
typedef struct {
char user[TSDB_USER_LEN];
int32_t version;
int32_t passVer;
int8_t superAuth;
int8_t sysInfo;
int8_t enable;
@ -719,14 +720,6 @@ int32_t tSerializeSGetUserAuthRsp(void* buf, int32_t bufLen, SGetUserAuthRsp* pR
int32_t tDeserializeSGetUserAuthRsp(void* buf, int32_t bufLen, SGetUserAuthRsp* pRsp);
void tFreeSGetUserAuthRsp(SGetUserAuthRsp* pRsp);
typedef struct SUserPassVersion {
char user[TSDB_USER_LEN];
int32_t version;
} SUserPassVersion;
typedef SGetUserAuthReq SGetUserPassReq;
typedef SUserPassVersion SGetUserPassRsp;
/*
* for client side struct, only column id, type, bytes are necessary
* But for data in vnode side, we need all the following information.
@ -1070,14 +1063,6 @@ int32_t tSerializeSUserAuthBatchRsp(void* buf, int32_t bufLen, SUserAuthBatchRsp
int32_t tDeserializeSUserAuthBatchRsp(void* buf, int32_t bufLen, SUserAuthBatchRsp* pRsp);
void tFreeSUserAuthBatchRsp(SUserAuthBatchRsp* pRsp);
typedef struct {
SArray* pArray; // Array of SGetUserPassRsp
} SUserPassBatchRsp;
int32_t tSerializeSUserPassBatchRsp(void* buf, int32_t bufLen, SUserPassBatchRsp* pRsp);
int32_t tDeserializeSUserPassBatchRsp(void* buf, int32_t bufLen, SUserPassBatchRsp* pRsp);
void tFreeSUserPassBatchRsp(SUserPassBatchRsp* pRsp);
typedef struct {
char db[TSDB_DB_FNAME_LEN];
STimeWindow timeRange;
@ -1159,6 +1144,7 @@ typedef struct {
char timezone[TD_TIMEZONE_LEN]; // tsTimezone
char locale[TD_LOCALE_LEN]; // tsLocale
char charset[TD_LOCALE_LEN]; // tsCharset
int8_t ttlChangeOnWrite;
} SClusterCfg;
typedef struct {
@ -1195,6 +1181,8 @@ typedef struct {
typedef struct {
int8_t syncState;
int8_t syncRestore;
int64_t syncTerm;
int64_t roleTimeMs;
} SMnodeLoad;
typedef struct {
@ -1510,6 +1498,7 @@ int32_t tDeserializeSShowVariablesReq(void* buf, int32_t bufLen, SShowVariablesR
typedef struct {
char name[TSDB_CONFIG_OPTION_LEN + 1];
char value[TSDB_CONFIG_VALUE_LEN + 1];
char scope[TSDB_CONFIG_SCOPE_LEN + 1];
} SVariablesInfo;
typedef struct {
@ -3393,6 +3382,12 @@ typedef struct {
int8_t reserved;
} SMqHbRsp;
typedef struct {
SMsgHead head;
int64_t consumerId;
char subKey[TSDB_SUBSCRIBE_KEY_LEN];
} SMqSeekReq;
#define TD_AUTO_CREATE_TABLE 0x1
typedef struct {
int64_t suid;
@ -3522,6 +3517,8 @@ int32_t tSerializeSMqHbReq(void* buf, int32_t bufLen, SMqHbReq* pReq);
int32_t tDeserializeSMqHbReq(void* buf, int32_t bufLen, SMqHbReq* pReq);
int32_t tDeatroySMqHbReq(SMqHbReq* pReq);
int32_t tSerializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq);
int32_t tDeserializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq);
#define SUBMIT_REQ_AUTO_CREATE_TABLE 0x1
#define SUBMIT_REQ_COLUMN_DATA_FORMAT 0x2

View File

@ -307,12 +307,13 @@ enum {
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SUBSCRIBE, "vnode-tmq-subscribe", SMqRebVgReq, SMqRebVgRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_DELETE_SUB, "vnode-tmq-delete-sub", SMqVDeleteReq, SMqVDeleteRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_COMMIT_OFFSET, "vnode-tmq-commit-offset", STqOffset, STqOffset)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SEEK_TO_OFFSET, "vnode-tmq-seekto-offset", STqOffset, STqOffset)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SEEK, "vnode-tmq-seek", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_ADD_CHECKINFO, "vnode-tmq-add-checkinfo", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_DEL_CHECKINFO, "vnode-del-checkinfo", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_CONSUME, "vnode-tmq-consume", SMqPollReq, SMqDataBlkRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_CONSUME_PUSH, "vnode-tmq-consume-push", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_VG_WALINFO, "vnode-tmq-vg-walinfo", SMqPollReq, SMqDataBlkRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_VG_COMMITTEDINFO, "vnode-tmq-committedinfo", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_MAX_MSG, "vnd-tmq-max", NULL, NULL)

View File

@ -16,105 +16,105 @@
#ifndef _TD_COMMON_TOKEN_H_
#define _TD_COMMON_TOKEN_H_
#define TK_OR 1
#define TK_AND 2
#define TK_UNION 3
#define TK_ALL 4
#define TK_MINUS 5
#define TK_EXCEPT 6
#define TK_INTERSECT 7
#define TK_NK_BITAND 8
#define TK_NK_BITOR 9
#define TK_NK_LSHIFT 10
#define TK_NK_RSHIFT 11
#define TK_NK_PLUS 12
#define TK_NK_MINUS 13
#define TK_NK_STAR 14
#define TK_NK_SLASH 15
#define TK_NK_REM 16
#define TK_NK_CONCAT 17
#define TK_CREATE 18
#define TK_ACCOUNT 19
#define TK_NK_ID 20
#define TK_PASS 21
#define TK_NK_STRING 22
#define TK_ALTER 23
#define TK_PPS 24
#define TK_TSERIES 25
#define TK_STORAGE 26
#define TK_STREAMS 27
#define TK_QTIME 28
#define TK_DBS 29
#define TK_USERS 30
#define TK_CONNS 31
#define TK_STATE 32
#define TK_USER 33
#define TK_ENABLE 34
#define TK_NK_INTEGER 35
#define TK_SYSINFO 36
#define TK_DROP 37
#define TK_GRANT 38
#define TK_ON 39
#define TK_TO 40
#define TK_REVOKE 41
#define TK_FROM 42
#define TK_SUBSCRIBE 43
#define TK_NK_COMMA 44
#define TK_READ 45
#define TK_WRITE 46
#define TK_NK_DOT 47
#define TK_WITH 48
#define TK_DNODE 49
#define TK_PORT 50
#define TK_DNODES 51
#define TK_RESTORE 52
#define TK_NK_IPTOKEN 53
#define TK_FORCE 54
#define TK_UNSAFE 55
#define TK_LOCAL 56
#define TK_QNODE 57
#define TK_BNODE 58
#define TK_SNODE 59
#define TK_MNODE 60
#define TK_VNODE 61
#define TK_DATABASE 62
#define TK_USE 63
#define TK_FLUSH 64
#define TK_TRIM 65
#define TK_COMPACT 66
#define TK_IF 67
#define TK_NOT 68
#define TK_EXISTS 69
#define TK_BUFFER 70
#define TK_CACHEMODEL 71
#define TK_CACHESIZE 72
#define TK_COMP 73
#define TK_DURATION 74
#define TK_NK_VARIABLE 75
#define TK_MAXROWS 76
#define TK_MINROWS 77
#define TK_KEEP 78
#define TK_PAGES 79
#define TK_PAGESIZE 80
#define TK_TSDB_PAGESIZE 81
#define TK_PRECISION 82
#define TK_REPLICA 83
#define TK_VGROUPS 84
#define TK_SINGLE_STABLE 85
#define TK_RETENTIONS 86
#define TK_SCHEMALESS 87
#define TK_WAL_LEVEL 88
#define TK_WAL_FSYNC_PERIOD 89
#define TK_WAL_RETENTION_PERIOD 90
#define TK_WAL_RETENTION_SIZE 91
#define TK_WAL_ROLL_PERIOD 92
#define TK_WAL_SEGMENT_SIZE 93
#define TK_STT_TRIGGER 94
#define TK_TABLE_PREFIX 95
#define TK_TABLE_SUFFIX 96
#define TK_NK_COLON 97
#define TK_MAX_SPEED 98
#define TK_START 99
#define TK_OR 1
#define TK_AND 2
#define TK_UNION 3
#define TK_ALL 4
#define TK_MINUS 5
#define TK_EXCEPT 6
#define TK_INTERSECT 7
#define TK_NK_BITAND 8
#define TK_NK_BITOR 9
#define TK_NK_LSHIFT 10
#define TK_NK_RSHIFT 11
#define TK_NK_PLUS 12
#define TK_NK_MINUS 13
#define TK_NK_STAR 14
#define TK_NK_SLASH 15
#define TK_NK_REM 16
#define TK_NK_CONCAT 17
#define TK_CREATE 18
#define TK_ACCOUNT 19
#define TK_NK_ID 20
#define TK_PASS 21
#define TK_NK_STRING 22
#define TK_ALTER 23
#define TK_PPS 24
#define TK_TSERIES 25
#define TK_STORAGE 26
#define TK_STREAMS 27
#define TK_QTIME 28
#define TK_DBS 29
#define TK_USERS 30
#define TK_CONNS 31
#define TK_STATE 32
#define TK_USER 33
#define TK_ENABLE 34
#define TK_NK_INTEGER 35
#define TK_SYSINFO 36
#define TK_DROP 37
#define TK_GRANT 38
#define TK_ON 39
#define TK_TO 40
#define TK_REVOKE 41
#define TK_FROM 42
#define TK_SUBSCRIBE 43
#define TK_NK_COMMA 44
#define TK_READ 45
#define TK_WRITE 46
#define TK_NK_DOT 47
#define TK_WITH 48
#define TK_DNODE 49
#define TK_PORT 50
#define TK_DNODES 51
#define TK_RESTORE 52
#define TK_NK_IPTOKEN 53
#define TK_FORCE 54
#define TK_UNSAFE 55
#define TK_LOCAL 56
#define TK_QNODE 57
#define TK_BNODE 58
#define TK_SNODE 59
#define TK_MNODE 60
#define TK_VNODE 61
#define TK_DATABASE 62
#define TK_USE 63
#define TK_FLUSH 64
#define TK_TRIM 65
#define TK_COMPACT 66
#define TK_IF 67
#define TK_NOT 68
#define TK_EXISTS 69
#define TK_BUFFER 70
#define TK_CACHEMODEL 71
#define TK_CACHESIZE 72
#define TK_COMP 73
#define TK_DURATION 74
#define TK_NK_VARIABLE 75
#define TK_MAXROWS 76
#define TK_MINROWS 77
#define TK_KEEP 78
#define TK_PAGES 79
#define TK_PAGESIZE 80
#define TK_TSDB_PAGESIZE 81
#define TK_PRECISION 82
#define TK_REPLICA 83
#define TK_VGROUPS 84
#define TK_SINGLE_STABLE 85
#define TK_RETENTIONS 86
#define TK_SCHEMALESS 87
#define TK_WAL_LEVEL 88
#define TK_WAL_FSYNC_PERIOD 89
#define TK_WAL_RETENTION_PERIOD 90
#define TK_WAL_RETENTION_SIZE 91
#define TK_WAL_ROLL_PERIOD 92
#define TK_WAL_SEGMENT_SIZE 93
#define TK_STT_TRIGGER 94
#define TK_TABLE_PREFIX 95
#define TK_TABLE_SUFFIX 96
#define TK_NK_COLON 97
#define TK_MAX_SPEED 98
#define TK_START 99
#define TK_TIMESTAMP 100
#define TK_END 101
#define TK_TABLE 102
@ -355,8 +355,6 @@
#define TK_WAL 337
#define TK_NK_SPACE 600
#define TK_NK_COMMENT 601
#define TK_NK_ILLEGAL 602

View File

@ -228,7 +228,7 @@ typedef struct SStoreTqReader {
} SStoreTqReader;
typedef struct SStoreSnapshotFn {
int32_t (*createSnapshot)(SSnapContext* ctx, int64_t uid);
int32_t (*setForSnapShot)(SSnapContext* ctx, int64_t uid);
int32_t (*destroySnapshot)(SSnapContext* ctx);
SMetaTableInfo (*getMetaTableInfoFromSnapshot)(SSnapContext* ctx);
int32_t (*getTableInfoFromSnapshot)(SSnapContext* ctx, void** pBuf, int32_t* contLen, int16_t* type, int64_t* uid);

View File

@ -36,9 +36,10 @@ extern "C" {
#define SHOW_CREATE_TB_RESULT_FIELD1_LEN (TSDB_TABLE_NAME_LEN + VARSTR_HEADER_SIZE)
#define SHOW_CREATE_TB_RESULT_FIELD2_LEN (TSDB_MAX_ALLOWED_SQL_LEN * 3)
#define SHOW_LOCAL_VARIABLES_RESULT_COLS 2
#define SHOW_LOCAL_VARIABLES_RESULT_COLS 3
#define SHOW_LOCAL_VARIABLES_RESULT_FIELD1_LEN (TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE)
#define SHOW_LOCAL_VARIABLES_RESULT_FIELD2_LEN (TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE)
#define SHOW_LOCAL_VARIABLES_RESULT_FIELD3_LEN (TSDB_CONFIG_SCOPE_LEN + VARSTR_HEADER_SIZE)
#define SHOW_ALIVE_RESULT_COLS 1
@ -361,7 +362,7 @@ typedef struct SRestoreComponentNodeStmt {
typedef struct SCreateTopicStmt {
ENodeType type;
char topicName[TSDB_TABLE_NAME_LEN];
char topicName[TSDB_TOPIC_NAME_LEN];
char subDbName[TSDB_DB_NAME_LEN];
char subSTbName[TSDB_TABLE_NAME_LEN];
bool ignoreExists;
@ -372,13 +373,13 @@ typedef struct SCreateTopicStmt {
typedef struct SDropTopicStmt {
ENodeType type;
char topicName[TSDB_TABLE_NAME_LEN];
char topicName[TSDB_TOPIC_NAME_LEN];
bool ignoreNotExists;
} SDropTopicStmt;
typedef struct SDropCGroupStmt {
ENodeType type;
char topicName[TSDB_TABLE_NAME_LEN];
char topicName[TSDB_TOPIC_NAME_LEN];
char cgroup[TSDB_CGROUP_LEN];
bool ignoreNotExists;
} SDropCGroupStmt;

View File

@ -55,6 +55,7 @@ typedef struct SLogicNode {
EGroupAction groupAction;
EOrder inputTsOrder;
EOrder outputTsOrder;
bool forceCreateNonBlockingOptr; // true if the operator can use non-blocking(pipeline) mode
} SLogicNode;
typedef enum EScanType {
@ -105,6 +106,7 @@ typedef struct SScanLogicNode {
bool hasNormalCols; // neither tag column nor primary key tag column
bool sortPrimaryKey;
bool igLastNull;
bool groupOrderScan;
} SScanLogicNode;
typedef struct SJoinLogicNode {
@ -247,6 +249,7 @@ typedef struct SSortLogicNode {
SNodeList* pSortKeys;
bool groupSort;
int64_t maxRows;
bool skipPKSortOpt;
} SSortLogicNode;
typedef struct SPartitionLogicNode {
@ -317,6 +320,7 @@ typedef struct SPhysiNode {
struct SPhysiNode* pParent;
SNode* pLimit;
SNode* pSlimit;
bool forceCreateNonBlockingOptr;
} SPhysiNode;
typedef struct SScanPhysiNode {
@ -327,6 +331,7 @@ typedef struct SScanPhysiNode {
uint64_t suid;
int8_t tableType;
SName tableName;
bool groupOrderScan;
} SScanPhysiNode;
typedef SScanPhysiNode STagScanPhysiNode;
@ -524,7 +529,6 @@ typedef struct SSortPhysiNode {
SNodeList* pExprs; // these are expression list of order_by_clause and parameter expression of aggregate function
SNodeList* pSortKeys; // element is SOrderByExprNode, and SOrderByExprNode::pExpr is SColumnNode
SNodeList* pTargets;
int64_t maxRows;
} SSortPhysiNode;
typedef SSortPhysiNode SGroupSortPhysiNode;

View File

@ -58,6 +58,7 @@ typedef struct SParseContext {
bool isSuperUser;
bool enableSysInfo;
bool async;
bool hasInvisibleCol;
const char* svrVer;
bool nodeOffline;
SArray* pTableMetaPos; // sql table pos => catalog data pos

View File

@ -45,8 +45,9 @@ enum {
TASK_STATUS__FAIL,
TASK_STATUS__STOP,
TASK_STATUS__SCAN_HISTORY, // stream task scan history data by using tsdbread in the stream scanner
TASK_STATUS__HALT, // stream task will handle all data in the input queue, and then paused
TASK_STATUS__PAUSE,
TASK_STATUS__SCAN_HISTORY_WAL, // scan history data in wal
TASK_STATUS__HALT, // pause, but not be manipulated by user command
TASK_STATUS__PAUSE, // pause
};
enum {
@ -272,6 +273,7 @@ typedef struct SStreamStatus {
int8_t keepTaskStatus;
bool transferState;
int8_t timerActive; // timer is active
int8_t pauseAllowed; // allowed task status to be set to be paused
} SStreamStatus;
typedef struct SHistDataRange {
@ -296,15 +298,21 @@ typedef struct SDispatchMsgInfo {
} SDispatchMsgInfo;
typedef struct {
int8_t outputType;
int8_t outputStatus;
SStreamQueue* outputQueue;
} SSTaskOutputInfo;
int8_t type;
int8_t status;
SStreamQueue* queue;
} STaskOutputInfo;
typedef struct {
int64_t init;
int64_t step1Start;
int64_t step2Start;
} STaskTimestamp;
struct SStreamTask {
SStreamId id;
SSTaskBasicInfo info;
int8_t outputType;
STaskOutputInfo outputInfo;
SDispatchMsgInfo msgInfo;
SStreamStatus status;
SCheckpointInfo chkInfo;
@ -315,7 +323,7 @@ struct SStreamTask {
SArray* pUpstreamEpInfoList; // SArray<SStreamChildEpInfo*>, // children info
int32_t nextCheckId;
SArray* checkpointInfo; // SArray<SStreamCheckpointInfo>
STaskTimestamp tsInfo;
// output
union {
STaskDispatcherFixedEp fixedEpDispatcher;
@ -326,9 +334,7 @@ struct SStreamTask {
};
int8_t inputStatus;
int8_t outputStatus;
SStreamQueue* inputQueue;
SStreamQueue* outputQueue;
// trigger
int8_t triggerStatus;
@ -337,6 +343,8 @@ struct SStreamTask {
void* launchTaskTimer;
SMsgCb* pMsgCb; // msg handle
SStreamState* pState; // state backend
SArray* pRspMsgList;
TdThreadMutex lock;
// the followings attributes don't be serialized
int32_t notReadyTasks;
@ -346,6 +354,7 @@ struct SStreamTask {
int32_t refCnt;
int64_t checkpointingId;
int32_t checkpointAlignCnt;
int32_t transferStateAlignCnt;
struct SStreamMeta* pMeta;
SSHashObj* pNameMap;
};
@ -457,7 +466,9 @@ typedef struct {
typedef struct {
int64_t streamId;
int32_t taskId;
int32_t upstreamTaskId;
int32_t downstreamTaskId;
int32_t upstreamNodeId;
int32_t childId;
} SStreamScanHistoryFinishReq, SStreamTransferReq;
@ -518,6 +529,17 @@ int32_t tDecodeSStreamCheckpointReq(SDecoder* pDecoder, SStreamCheckpointReq* pR
int32_t tEncodeSStreamCheckpointRsp(SEncoder* pEncoder, const SStreamCheckpointRsp* pRsp);
int32_t tDecodeSStreamCheckpointRsp(SDecoder* pDecoder, SStreamCheckpointRsp* pRsp);
typedef struct {
int64_t streamId;
int32_t upstreamTaskId;
int32_t upstreamNodeId;
int32_t downstreamId;
int32_t downstreamNode;
} SStreamCompleteHistoryMsg;
int32_t tEncodeCompleteHistoryDataMsg(SEncoder* pEncoder, const SStreamCompleteHistoryMsg* pReq);
int32_t tDecodeCompleteHistoryDataMsg(SDecoder* pDecoder, SStreamCompleteHistoryMsg* pReq);
typedef struct {
int64_t streamId;
int32_t downstreamTaskId;
@ -558,7 +580,6 @@ int32_t streamProcessDispatchMsg(SStreamTask* pTask, SStreamDispatchReq* pReq, S
int32_t streamProcessDispatchRsp(SStreamTask* pTask, SStreamDispatchRsp* pRsp, int32_t code);
int32_t streamProcessRetrieveReq(SStreamTask* pTask, SStreamRetrieveReq* pReq, SRpcMsg* pMsg);
// int32_t streamProcessRetrieveRsp(SStreamTask* pTask, SStreamRetrieveRsp* pRsp);
void streamTaskInputFail(SStreamTask* pTask);
int32_t streamTryExec(SStreamTask* pTask);
@ -567,21 +588,25 @@ int32_t streamTaskOutputResultBlock(SStreamTask* pTask, SStreamDataBlock* pBlock
bool streamTaskShouldStop(const SStreamStatus* pStatus);
bool streamTaskShouldPause(const SStreamStatus* pStatus);
bool streamTaskIsIdle(const SStreamTask* pTask);
int32_t streamTaskEndScanWAL(SStreamTask* pTask);
SStreamChildEpInfo * streamTaskGetUpstreamTaskEpInfo(SStreamTask* pTask, int32_t taskId);
int32_t streamScanExec(SStreamTask* pTask, int32_t batchSz);
char* createStreamTaskIdStr(int64_t streamId, int32_t taskId);
// recover and fill history
void streamPrepareNdoCheckDownstream(SStreamTask* pTask);
int32_t streamTaskCheckDownstreamTasks(SStreamTask* pTask);
void streamTaskCheckDownstreamTasks(SStreamTask* pTask);
int32_t streamTaskDoCheckDownstreamTasks(SStreamTask* pTask);
int32_t streamTaskLaunchScanHistory(SStreamTask* pTask);
int32_t streamTaskCheckStatus(SStreamTask* pTask);
int32_t streamSendCheckRsp(const SStreamMeta* pMeta, const SStreamTaskCheckReq* pReq, SStreamTaskCheckRsp* pRsp,
SRpcHandleInfo* pRpcInfo, int32_t taskId);
int32_t streamProcessCheckRsp(SStreamTask* pTask, const SStreamTaskCheckRsp* pRsp);
int32_t streamCheckHistoryTaskDownstream(SStreamTask* pTask);
int32_t streamLaunchFillHistoryTask(SStreamTask* pTask);
int32_t streamTaskScanHistoryDataComplete(SStreamTask* pTask);
int32_t streamStartRecoverTask(SStreamTask* pTask, int8_t igUntreated);
void streamHistoryTaskSetVerRangeStep2(SStreamTask* pTask);
bool streamHistoryTaskSetVerRangeStep2(SStreamTask* pTask, int64_t latestVer);
bool streamTaskRecoverScanStep1Finished(SStreamTask* pTask);
bool streamTaskRecoverScanStep2Finished(SStreamTask* pTask);
@ -592,6 +617,12 @@ int32_t streamSetParamForScanHistory(SStreamTask* pTask);
int32_t streamRestoreParam(SStreamTask* pTask);
int32_t streamSetStatusNormal(SStreamTask* pTask);
const char* streamGetTaskStatusStr(int32_t status);
void streamTaskPause(SStreamTask* pTask);
void streamTaskResume(SStreamTask* pTask);
void streamTaskHalt(SStreamTask* pTask);
void streamTaskResumeFromHalt(SStreamTask* pTask);
void streamTaskDisablePause(SStreamTask* pTask);
void streamTaskEnablePause(SStreamTask* pTask);
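A hedged sketch of how the pause guards above are meant to pair up; the surrounding trigger logic and the flag are illustrative, not taken from this commit:

static void criticalSectionDemo(SStreamTask *pTask, bool userRequestedPause) {
  streamTaskDisablePause(pTask);  // pauseAllowed cleared: a user PAUSE must wait
  /* ... work that must not be interrupted by a pause ... */
  streamTaskEnablePause(pTask);   // pauseAllowed set: PAUSE may land again

  if (userRequestedPause) {       // hypothetical flag derived from the user command
    streamTaskPause(pTask);       // status -> TASK_STATUS__PAUSE
  }
}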
// source level
int32_t streamSetParamForStreamScannerStep1(SStreamTask* pTask, SVersionRange* pVerRange, STimeWindow* pWindow);
@ -603,21 +634,24 @@ int32_t streamDispatchScanHistoryFinishMsg(SStreamTask* pTask);
int32_t streamDispatchTransferStateMsg(SStreamTask* pTask);
// agg level
int32_t streamAggScanHistoryPrepare(SStreamTask* pTask);
int32_t streamProcessScanHistoryFinishReq(SStreamTask* pTask, int32_t taskId, int32_t childId);
int32_t streamTaskScanHistoryPrepare(SStreamTask* pTask);
int32_t streamProcessScanHistoryFinishReq(SStreamTask* pTask, SStreamScanHistoryFinishReq *pReq, SRpcHandleInfo* pRpcInfo);
int32_t streamProcessScanHistoryFinishRsp(SStreamTask* pTask);
// stream task meta
void streamMetaInit();
void streamMetaCleanup();
SStreamMeta* streamMetaOpen(const char* path, void* ahandle, FTaskExpand expandFunc, int32_t vgId);
void streamMetaClose(SStreamMeta* streamMeta);
// save to b-tree meta store
int32_t streamMetaSaveTask(SStreamMeta* pMeta, SStreamTask* pTask);
int32_t streamMetaAddDeployedTask(SStreamMeta* pMeta, int64_t ver, SStreamTask* pTask);
int32_t streamMetaAddSerializedTask(SStreamMeta* pMeta, int64_t checkpointVer, char* msg, int32_t msgLen);
int32_t streamMetaGetNumOfTasks(const SStreamMeta* pMeta); // todo remove it
int32_t streamMetaRemoveTask(SStreamMeta* pMeta, int32_t taskId);
int32_t streamMetaRegisterTask(SStreamMeta* pMeta, int64_t ver, SStreamTask* pTask, bool* pAdded);
int32_t streamMetaUnregisterTask(SStreamMeta* pMeta, int32_t taskId);
int32_t streamMetaGetNumOfTasks(SStreamMeta* pMeta); // todo remove it
SStreamTask* streamMetaAcquireTask(SStreamMeta* pMeta, int32_t taskId);
void streamMetaReleaseTask(SStreamMeta* pMeta, SStreamTask* pTask);
void streamMetaRemoveTask(SStreamMeta* pMeta, int32_t taskId);
int32_t streamMetaBegin(SStreamMeta* pMeta);
int32_t streamMetaCommit(SStreamMeta* pMeta);
@ -630,6 +664,8 @@ int32_t streamProcessCheckpointRsp(SStreamMeta* pMeta, SStreamTask* pTask, SStre
int32_t streamTaskReleaseState(SStreamTask* pTask);
int32_t streamTaskReloadState(SStreamTask* pTask);
int32_t streamAlignTransferState(SStreamTask* pTask);
#ifdef __cplusplus
}

View File

@ -239,29 +239,31 @@ typedef struct SSyncState {
ESyncState state;
bool restored;
bool canRead;
SyncTerm term;
int64_t roleTimeMs;
} SSyncState;
int32_t syncInit();
void syncCleanUp();
int64_t syncOpen(SSyncInfo* pSyncInfo);
int32_t syncStart(int64_t rid);
void syncStop(int64_t rid);
void syncPreStop(int64_t rid);
void syncPostStop(int64_t rid);
int32_t syncPropose(int64_t rid, SRpcMsg* pMsg, bool isWeak, int64_t* seq);
int32_t syncIsCatchUp(int64_t rid);
int32_t syncInit();
void syncCleanUp();
int64_t syncOpen(SSyncInfo* pSyncInfo);
int32_t syncStart(int64_t rid);
void syncStop(int64_t rid);
void syncPreStop(int64_t rid);
void syncPostStop(int64_t rid);
int32_t syncPropose(int64_t rid, SRpcMsg* pMsg, bool isWeak, int64_t* seq);
int32_t syncIsCatchUp(int64_t rid);
ESyncRole syncGetRole(int64_t rid);
int32_t syncProcessMsg(int64_t rid, SRpcMsg* pMsg);
int32_t syncReconfig(int64_t rid, SSyncCfg* pCfg);
int32_t syncBeginSnapshot(int64_t rid, int64_t lastApplyIndex);
int32_t syncEndSnapshot(int64_t rid);
int32_t syncLeaderTransfer(int64_t rid);
int32_t syncStepDown(int64_t rid, SyncTerm newTerm);
bool syncIsReadyForRead(int64_t rid);
bool syncSnapshotSending(int64_t rid);
bool syncSnapshotRecving(int64_t rid);
int32_t syncSendTimeoutRsp(int64_t rid, int64_t seq);
int32_t syncForceBecomeFollower(SSyncNode* ths, const SRpcMsg* pRpcMsg);
int32_t syncProcessMsg(int64_t rid, SRpcMsg* pMsg);
int32_t syncReconfig(int64_t rid, SSyncCfg* pCfg);
int32_t syncBeginSnapshot(int64_t rid, int64_t lastApplyIndex);
int32_t syncEndSnapshot(int64_t rid);
int32_t syncLeaderTransfer(int64_t rid);
int32_t syncStepDown(int64_t rid, SyncTerm newTerm);
bool syncIsReadyForRead(int64_t rid);
bool syncSnapshotSending(int64_t rid);
bool syncSnapshotRecving(int64_t rid);
int32_t syncSendTimeoutRsp(int64_t rid, int64_t seq);
int32_t syncForceBecomeFollower(SSyncNode* ths, const SRpcMsg* pRpcMsg);
SSyncState syncGetState(int64_t rid);
void syncGetRetryEpSet(int64_t rid, SEpSet* pEpSet);

View File

@ -69,6 +69,13 @@ void tfsUpdateSize(STfs *pTfs);
*/
SDiskSize tfsGetSize(STfs *pTfs);
/**
* @brief Get the number of disks at a given level of multi-tier storage.
*
* @param pTfs The fs object.
* @param level The storage level.
* @return int32_t The number of disks at the level.
*/
int32_t tfsGetDisksAtLevel(STfs *pTfs, int32_t level);
/**
* @brief Get level of multi-tier storage.
*
@ -123,6 +130,15 @@ int32_t tfsMkdir(STfs *pTfs, const char *rname);
*/
int32_t tfsMkdirAt(STfs *pTfs, const char *rname, SDiskID diskId);
/**
* @brief Recursively make a directory at all levels in tfs.
*
* @param pTfs The fs object.
* @param rname The rel name of directory.
* @return int32_t 0 for success, -1 for failure.
*/
int32_t tfsMkdirRecur(STfs *pTfs, const char *rname);
/**
* @brief Recursively create directories in tfs.
*
@ -160,7 +176,17 @@ int32_t tfsRmdir(STfs *pTfs, const char *rname);
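* @param diskPrimary The primary disk id.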
* @param nrname The rel name of new file.
* @return int32_t 0 for success, -1 for failure.
*/
int32_t tfsRename(STfs *pTfs, const char *orname, const char *nrname);
int32_t tfsRename(STfs *pTfs, int32_t diskPrimary, const char *orname, const char *nrname);
/**
* @brief Search for fname at a given level of tfs.
*
* @param pTfs The fs object.
* @param level The level to search on.
* @param fname The relative file name to search for.
* @return int32_t The disk id on success, -1 for failure.
*/
int32_t tfsSearch(STfs *pTfs, int32_t level, const char *fname);
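A short usage sketch for the two additions; ensureAndLocate and the file name are illustrative, and pTfs is assumed to come from the existing tfs open/close lifecycle (not visible in this hunk):

// a minimal sketch, assuming pTfs was opened elsewhere
static int32_t ensureAndLocate(STfs *pTfs) {
  // create vnode2 at all levels of the multi-tier setup
  if (tfsMkdirRecur(pTfs, "vnode2") < 0) {
    return -1;
  }
  // find which disk at level 0 holds the vnode config file
  int32_t diskId = tfsSearch(pTfs, 0, "vnode2/vnode.json");
  return diskId;  // -1 when no disk at this level has the file
}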
/**
* @brief Init file object in tfs.

View File

@ -46,6 +46,7 @@ typedef struct SRpcHandleInfo {
int8_t noResp; // has response or not(default 0, 0: resp, 1: no resp)
int8_t persistHandle; // persist handle or not
int8_t hasEpSet;
int32_t cliVer;
// app info
void *ahandle; // app handle set by client
@ -83,6 +84,7 @@ typedef struct SRpcInit {
int32_t sessions; // number of sessions allowed
int8_t connType; // TAOS_CONN_UDP, TAOS_CONN_TCPC, TAOS_CONN_TCPS
int32_t idleTime; // milliseconds, 0 means idle timer is disabled
int32_t compatibilityVer;
int32_t retryMinInterval; // retry init interval
int32_t retryStepFactor; // retry interval factor

View File

@ -53,6 +53,7 @@ extern "C" {
#else
#include <argp.h>
#include <sys/prctl.h>
#include <sys/sysinfo.h>
#if defined(_TD_X86_)
#include <cpuid.h>
#endif

View File

@ -35,8 +35,9 @@ typedef struct {
bool taosCheckSystemIsLittleEnd();
void taosGetSystemInfo();
int64_t taosGetOsUptime();
int32_t taosGetEmail(char *email, int32_t maxLen);
int32_t taosGetOsReleaseName(char *releaseName, int32_t maxLen);
int32_t taosGetOsReleaseName(char *releaseName, char* sName, char* ver, int32_t maxLen);
int32_t taosGetCpuInfo(char *cpuModel, int32_t maxLen, float *numOfCores);
int32_t taosGetCpuCores(float *numOfCores);
void taosGetCpuUsage(double *cpu_system, double *cpu_engine);

View File

@ -416,7 +416,7 @@ int32_t* taosGetErrno();
// #define TSDB_CODE_VND_MSG_NOT_PROCESSED TAOS_DEF_ERROR_CODE(0, 0x0501) // 2.x
// #define TSDB_CODE_VND_ACTION_NEED_REPROCESS TAOS_DEF_ERROR_CODE(0, 0x0502) // 2.x
#define TSDB_CODE_VND_INVALID_VGROUP_ID TAOS_DEF_ERROR_CODE(0, 0x0503)
// #define TSDB_CODE_VND_INIT_FAILED TAOS_DEF_ERROR_CODE(0, 0x0504) // 2.x
#define TSDB_CODE_VND_INIT_FAILED TAOS_DEF_ERROR_CODE(0, 0x0504)
// #define TSDB_CODE_VND_NO_DISKSPACE TAOS_DEF_ERROR_CODE(0, 0x0505) // 2.x
// #define TSDB_CODE_VND_NO_DISK_PERMISSIONS TAOS_DEF_ERROR_CODE(0, 0x0506) // 2.x
// #define TSDB_CODE_VND_NO_SUCH_FILE_OR_DIR TAOS_DEF_ERROR_CODE(0, 0x0507) // 2.x
@ -757,9 +757,8 @@ int32_t* taosGetErrno();
#define TSDB_CODE_RSMA_INVALID_SCHEMA TAOS_DEF_ERROR_CODE(0, 0x3153)
#define TSDB_CODE_RSMA_STREAM_STATE_OPEN TAOS_DEF_ERROR_CODE(0, 0x3154)
#define TSDB_CODE_RSMA_STREAM_STATE_COMMIT TAOS_DEF_ERROR_CODE(0, 0x3155)
#define TSDB_CODE_RSMA_FS_REF TAOS_DEF_ERROR_CODE(0, 0x3156)
#define TSDB_CODE_RSMA_FS_SYNC TAOS_DEF_ERROR_CODE(0, 0x3157)
#define TSDB_CODE_RSMA_FS_UPDATE TAOS_DEF_ERROR_CODE(0, 0x3158)
#define TSDB_CODE_RSMA_FS_SYNC TAOS_DEF_ERROR_CODE(0, 0x3156)
#define TSDB_CODE_RSMA_RESULT TAOS_DEF_ERROR_CODE(0, 0x3157)
//index
#define TSDB_CODE_INDEX_REBUILDING TAOS_DEF_ERROR_CODE(0, 0x3200)
@ -775,6 +774,12 @@ int32_t* taosGetErrno();
#define TSDB_CODE_TMQ_CONSUMER_ERROR TAOS_DEF_ERROR_CODE(0, 0x4003)
#define TSDB_CODE_TMQ_TOPIC_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4004)
#define TSDB_CODE_TMQ_GROUP_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4005)
#define TSDB_CODE_TMQ_SNAPSHOT_ERROR TAOS_DEF_ERROR_CODE(0, 0x4006)
#define TSDB_CODE_TMQ_VERSION_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4007)
#define TSDB_CODE_TMQ_INVALID_VGID TAOS_DEF_ERROR_CODE(0, 0x4008)
#define TSDB_CODE_TMQ_INVALID_TOPIC TAOS_DEF_ERROR_CODE(0, 0x4009)
#define TSDB_CODE_TMQ_NEED_INITIALIZED TAOS_DEF_ERROR_CODE(0, 0x4010)
#define TSDB_CODE_TMQ_NO_COMMITTED TAOS_DEF_ERROR_CODE(0, 0x4011)
// stream
#define TSDB_CODE_STREAM_TASK_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x4100)

View File

@ -22,7 +22,7 @@
extern "C" {
#endif
#define TARRAY_MIN_SIZE 8
#define TARRAY_MIN_SIZE 4
#define TARRAY_GET_ELEM(array, index) ((void*)((char*)((array)->pData) + (index) * (array)->elemSize))
#define TARRAY_ELEM_IDX(array, ele) (POINTER_DISTANCE(ele, (array)->pData) / (array)->elemSize)
@ -138,7 +138,7 @@ size_t taosArrayGetSize(const SArray* pArray);
* @param index
* @param pData
*/
void* taosArrayInsert(SArray* pArray, size_t index, void* pData);
void* taosArrayInsert(SArray* pArray, size_t index, const void* pData);
/**
* set data in array
@ -204,9 +204,9 @@ void taosArrayClearEx(SArray* pArray, void (*fp)(void*));
void* taosArrayDestroy(SArray* pArray);
void taosArrayDestroyP(SArray* pArray, FDelete fp);
void taosArrayDestroyP(SArray* pArray, FDelete fp);
void taosArrayDestroyEx(SArray* pArray, FDelete fp);
void taosArrayDestroyEx(SArray* pArray, FDelete fp);
void taosArraySwap(SArray* a, SArray* b);

include/util/tarray2.h Normal file
View File

@ -0,0 +1,172 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#include "talgo.h"
#ifndef _TD_UTIL_TARRAY2_H_
#define _TD_UTIL_TARRAY2_H_
#ifdef __cplusplus
extern "C" {
#endif
// a: array
// e: element
// ep: element pointer
// cmp: compare function
// idx: index
// cb: callback function
#define TARRAY2(TYPE) \
struct { \
int32_t size; \
int32_t capacity; \
TYPE *data; \
}
typedef void (*TArray2Cb)(void *);
#define TARRAY2_SIZE(a) ((a)->size)
#define TARRAY2_CAPACITY(a) ((a)->capacity)
#define TARRAY2_DATA(a) ((a)->data)
#define TARRAY2_GET(a, i) ((a)->data[i])
#define TARRAY2_GET_PTR(a, i) ((a)->data + i)
#define TARRAY2_FIRST(a) ((a)->data[0])
#define TARRAY2_LAST(a) ((a)->data[(a)->size - 1])
#define TARRAY2_DATA_LEN(a) ((a)->size * sizeof(((a)->data[0])))
static FORCE_INLINE int32_t tarray2_make_room(void *arr, int32_t expSize, int32_t eleSize) {
TARRAY2(void) *a = arr;
int32_t capacity = (a->capacity > 0) ? (a->capacity << 1) : 32;
while (capacity < expSize) {
capacity <<= 1;
}
void *p = taosMemoryRealloc(a->data, capacity * eleSize);
if (p == NULL) return TSDB_CODE_OUT_OF_MEMORY;
a->capacity = capacity;
a->data = p;
return 0;
}
static FORCE_INLINE int32_t tarray2InsertBatch(void *arr, int32_t idx, const void *elePtr, int32_t numEle,
int32_t eleSize) {
TARRAY2(uint8_t) *a = arr;
int32_t ret = 0;
if (a->size + numEle > a->capacity) {
ret = tarray2_make_room(a, a->size + numEle, eleSize);
}
if (ret == 0) {
if (idx < a->size) {
memmove(a->data + (idx + numEle) * eleSize, a->data + idx * eleSize, (a->size - idx) * eleSize);
}
memcpy(a->data + idx * eleSize, elePtr, numEle * eleSize);
a->size += numEle;
}
return ret;
}
static FORCE_INLINE void *tarray2Search(void *arr, const void *elePtr, int32_t eleSize, __compar_fn_t compar,
int32_t flag) {
TARRAY2(void) *a = arr;
return taosbsearch(elePtr, a->data, a->size, eleSize, compar, flag);
}
static FORCE_INLINE int32_t tarray2SearchIdx(void *arr, const void *elePtr, int32_t eleSize, __compar_fn_t compar,
int32_t flag) {
TARRAY2(void) *a = arr;
void *p = taosbsearch(elePtr, a->data, a->size, eleSize, compar, flag);
if (p == NULL) {
return -1;
} else {
return (int32_t)(((uint8_t *)p - (uint8_t *)a->data) / eleSize);
}
}
static FORCE_INLINE int32_t tarray2SortInsert(void *arr, const void *elePtr, int32_t eleSize, __compar_fn_t compar) {
TARRAY2(void) *a = arr;
int32_t idx = tarray2SearchIdx(arr, elePtr, eleSize, compar, TD_GT);
return tarray2InsertBatch(arr, idx < 0 ? a->size : idx, elePtr, 1, eleSize);
}
#define TARRAY2_INIT_EX(a, size_, capacity_, data_) \
do { \
(a)->size = (size_); \
(a)->capacity = (capacity_); \
(a)->data = (data_); \
} while (0)
#define TARRAY2_INIT(a) TARRAY2_INIT_EX(a, 0, 0, NULL)
#define TARRAY2_CLEAR(a, cb) \
do { \
if ((cb) && (a)->size > 0) { \
TArray2Cb cb_ = (TArray2Cb)(cb); \
for (int32_t i = 0; i < (a)->size; ++i) { \
cb_((a)->data + i); \
} \
} \
(a)->size = 0; \
} while (0)
#define TARRAY2_DESTROY(a, cb) \
do { \
TARRAY2_CLEAR(a, cb); \
if ((a)->data) { \
taosMemoryFree((a)->data); \
(a)->data = NULL; \
} \
(a)->capacity = 0; \
} while (0)
#define TARRAY2_INSERT_PTR(a, idx, ep) tarray2InsertBatch(a, idx, ep, 1, sizeof((a)->data[0]))
#define TARRAY2_APPEND_PTR(a, ep) tarray2InsertBatch(a, (a)->size, ep, 1, sizeof((a)->data[0]))
#define TARRAY2_APPEND_BATCH(a, ep, n) tarray2InsertBatch(a, (a)->size, ep, n, sizeof((a)->data[0]))
#define TARRAY2_APPEND(a, e) TARRAY2_APPEND_PTR(a, &(e))
// return (TYPE *)
#define TARRAY2_SEARCH(a, ep, cmp, flag) tarray2Search(a, ep, sizeof(((a)->data[0])), (__compar_fn_t)cmp, flag)
#define TARRAY2_SEARCH_IDX(a, ep, cmp, flag) tarray2SearchIdx(a, ep, sizeof(((a)->data[0])), (__compar_fn_t)cmp, flag)
#define TARRAY2_SORT_INSERT(a, e, cmp) tarray2SortInsert(a, &(e), sizeof(((a)->data[0])), (__compar_fn_t)cmp)
#define TARRAY2_SORT_INSERT_P(a, ep, cmp) tarray2SortInsert(a, ep, sizeof(((a)->data[0])), (__compar_fn_t)cmp)
#define TARRAY2_REMOVE(a, idx, cb) \
do { \
if ((idx) < (a)->size) { \
if (cb) { \
TArray2Cb cb_ = (TArray2Cb)(cb); \
cb_((a)->data + (idx)); \
} \
if ((idx) < (a)->size - 1) { \
memmove((a)->data + (idx), (a)->data + (idx) + 1, sizeof((*(a)->data)) * ((a)->size - (idx)-1)); \
} \
(a)->size--; \
} \
} while (0)
#define TARRAY2_FOREACH(a, e) for (int32_t __i = 0; __i < (a)->size && ((e) = (a)->data[__i], 1); __i++)
#define TARRAY2_FOREACH_REVERSE(a, e) for (int32_t __i = (a)->size - 1; __i >= 0 && ((e) = (a)->data[__i], 1); __i--)
#define TARRAY2_FOREACH_PTR(a, ep) for (int32_t __i = 0; __i < (a)->size && ((ep) = &(a)->data[__i], 1); __i++)
#define TARRAY2_FOREACH_PTR_REVERSE(a, ep) \
for (int32_t __i = (a)->size - 1; __i >= 0 && ((ep) = &(a)->data[__i], 1); __i--)
#ifdef __cplusplus
}
#endif
#endif /*_TD_UTIL_TARRAY2_H_*/
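A compact usage sketch of the new macro array; the SInt32Array name and values are illustrative:

typedef TARRAY2(int32_t) SInt32Array;

static void tarray2Demo(void) {
  SInt32Array arr;
  TARRAY2_INIT(&arr);

  for (int32_t v = 0; v < 4; ++v) {
    if (TARRAY2_APPEND(&arr, v) != 0) break;  // TSDB_CODE_OUT_OF_MEMORY on alloc failure
  }

  int32_t e = 0;
  TARRAY2_FOREACH(&arr, e) {
    // e takes 0, 1, 2, 3 in order
  }

  TARRAY2_DESTROY(&arr, NULL);  // NULL: no per-element destructor
}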

View File

@ -50,11 +50,17 @@ typedef enum {
CFG_DTYPE_TIMEZONE
} ECfgDataType;
typedef enum {
CFG_SCOPE_SERVER,
CFG_SCOPE_CLIENT,
CFG_SCOPE_BOTH
} ECfgScopeType;
typedef struct SConfigItem {
ECfgSrcType stype;
ECfgDataType dtype;
bool tsc;
char *name;
int8_t scope;
char *name;
union {
bool bval;
float fval;
@ -92,20 +98,21 @@ int32_t cfgGetSize(SConfig *pCfg);
SConfigItem *cfgGetItem(SConfig *pCfg, const char *name);
int32_t cfgSetItem(SConfig *pCfg, const char *name, const char *value, ECfgSrcType stype);
int32_t cfgAddBool(SConfig *pCfg, const char *name, bool defaultVal, bool tsc);
int32_t cfgAddInt32(SConfig *pCfg, const char *name, int32_t defaultVal, int64_t minval, int64_t maxval, bool tsc);
int32_t cfgAddInt64(SConfig *pCfg, const char *name, int64_t defaultVal, int64_t minval, int64_t maxval, bool tsc);
int32_t cfgAddFloat(SConfig *pCfg, const char *name, float defaultVal, double minval, double maxval, bool tsc);
int32_t cfgAddString(SConfig *pCfg, const char *name, const char *defaultVal, bool tsc);
int32_t cfgAddDir(SConfig *pCfg, const char *name, const char *defaultVal, bool tsc);
int32_t cfgAddLocale(SConfig *pCfg, const char *name, const char *defaultVal);
int32_t cfgAddCharset(SConfig *pCfg, const char *name, const char *defaultVal);
int32_t cfgAddTimezone(SConfig *pCfg, const char *name, const char *defaultVal);
int32_t cfgAddBool(SConfig *pCfg, const char *name, bool defaultVal, int8_t scope);
int32_t cfgAddInt32(SConfig *pCfg, const char *name, int32_t defaultVal, int64_t minval, int64_t maxval, int8_t scope);
int32_t cfgAddInt64(SConfig *pCfg, const char *name, int64_t defaultVal, int64_t minval, int64_t maxval, int8_t scope);
int32_t cfgAddFloat(SConfig *pCfg, const char *name, float defaultVal, double minval, double maxval, int8_t scope);
int32_t cfgAddString(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope);
int32_t cfgAddDir(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope);
int32_t cfgAddLocale(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope);
int32_t cfgAddCharset(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope);
int32_t cfgAddTimezone(SConfig *pCfg, const char *name, const char *defaultVal, int8_t scope);
const char *cfgStypeStr(ECfgSrcType type);
const char *cfgDtypeStr(ECfgDataType type);
void cfgDumpItemValue(SConfigItem *pItem, char *buf, int32_t bufSize, int32_t *pLen);
void cfgDumpItemScope(SConfigItem *pItem, char *buf, int32_t bufSize, int32_t *pLen);
void cfgDumpCfg(SConfig *pCfg, bool tsc, bool dump);
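A brief sketch of registering options under the new per-item scope; the option names, defaults, and bounds here are illustrative:

static int32_t registerDemoOptions(SConfig *pCfg) {
  // server-only numeric option with min/max bounds
  if (cfgAddInt32(pCfg, "supportVnodes", 16, 0, 4096, CFG_SCOPE_SERVER) != 0) return -1;
  // boolean option read by both client and server
  if (cfgAddBool(pCfg, "enableCoreFile", true, CFG_SCOPE_BOTH) != 0) return -1;
  return 0;
}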

View File

@ -191,16 +191,16 @@ typedef enum ELogicConditionType {
#define TSDB_MAX_COLUMNS 4096
#define TSDB_MIN_COLUMNS 2 // PRIMARY COLUMN(timestamp) + other columns
#define TSDB_NODE_NAME_LEN 64
#define TSDB_TABLE_NAME_LEN 193 // it is a null-terminated string
#define TSDB_TOPIC_NAME_LEN 193 // it is a null-terminated string
#define TSDB_CGROUP_LEN 193 // it is a null-terminated string
#define TSDB_OFFSET_LEN 64 // it is a null-terminated string
#define TSDB_USER_CGROUP_LEN (TSDB_USER_LEN + TSDB_CGROUP_LEN) // it is a null-terminated string
#define TSDB_STREAM_NAME_LEN 193 // it is a null-terminated string
#define TSDB_DB_NAME_LEN 65
#define TSDB_DB_FNAME_LEN (TSDB_ACCT_ID_LEN + TSDB_DB_NAME_LEN + TSDB_NAME_DELIMITER_LEN)
#define TSDB_PRIVILEDGE_CONDITION_LEN 200
#define TSDB_NODE_NAME_LEN 64
#define TSDB_TABLE_NAME_LEN 193 // it is a null-terminated string
#define TSDB_TOPIC_NAME_LEN 193 // it is a null-terminated string
#define TSDB_CGROUP_LEN 193 // it is a null-terminated string
#define TSDB_OFFSET_LEN 64 // it is a null-terminated string
#define TSDB_USER_CGROUP_LEN (TSDB_USER_LEN + TSDB_CGROUP_LEN) // it is a null-terminated string
#define TSDB_STREAM_NAME_LEN 193 // it is a null-terminated string
#define TSDB_DB_NAME_LEN 65
#define TSDB_DB_FNAME_LEN (TSDB_ACCT_ID_LEN + TSDB_DB_NAME_LEN + TSDB_NAME_DELIMITER_LEN)
#define TSDB_PRIVILEDGE_CONDITION_LEN 200
#define TSDB_FUNC_NAME_LEN 65
#define TSDB_FUNC_COMMENT_LEN 1024 * 1024
@ -249,15 +249,15 @@ typedef enum ELogicConditionType {
#define TSDB_LABEL_LEN 8
#define TSDB_JOB_STATUS_LEN 32
#define TSDB_CLUSTER_ID_LEN 40
#define TSDB_FQDN_LEN 128
#define TSDB_EP_LEN (TSDB_FQDN_LEN + 6)
#define TSDB_IPv4ADDR_LEN 16
#define TSDB_FILENAME_LEN 128
#define TSDB_SHOW_SQL_LEN 2048
#define TSDB_CLUSTER_ID_LEN 40
#define TSDB_FQDN_LEN 128
#define TSDB_EP_LEN (TSDB_FQDN_LEN + 6)
#define TSDB_IPv4ADDR_LEN 16
#define TSDB_FILENAME_LEN 128
#define TSDB_SHOW_SQL_LEN 2048
#define TSDB_SHOW_SCHEMA_JSON_LEN TSDB_MAX_COLUMNS * 256
#define TSDB_SLOW_QUERY_SQL_LEN 512
#define TSDB_SHOW_SUBQUERY_LEN 1000
#define TSDB_SLOW_QUERY_SQL_LEN 512
#define TSDB_SHOW_SUBQUERY_LEN 1000
#define TSDB_TRANS_STAGE_LEN 12
#define TSDB_TRANS_TYPE_LEN 16
@ -370,7 +370,7 @@ typedef enum ELogicConditionType {
#define TSDB_DEFAULT_DB_SCHEMALESS TSDB_DB_SCHEMALESS_OFF
#define TSDB_MIN_STT_TRIGGER 1
#define TSDB_MAX_STT_TRIGGER 16
#define TSDB_DEFAULT_SST_TRIGGER 1
#define TSDB_DEFAULT_SST_TRIGGER 2
#define TSDB_MIN_HASH_PREFIX (2 - TSDB_TABLE_NAME_LEN)
#define TSDB_MAX_HASH_PREFIX (TSDB_TABLE_NAME_LEN - 2)
#define TSDB_DEFAULT_HASH_PREFIX 0
@ -379,8 +379,8 @@ typedef enum ELogicConditionType {
#define TSDB_DEFAULT_HASH_SUFFIX 0
#define TSDB_DB_MIN_WAL_RETENTION_PERIOD -1
#define TSDB_REP_DEF_DB_WAL_RET_PERIOD 0
#define TSDB_REPS_DEF_DB_WAL_RET_PERIOD 0
#define TSDB_REP_DEF_DB_WAL_RET_PERIOD 3600
#define TSDB_REPS_DEF_DB_WAL_RET_PERIOD 3600
#define TSDB_DB_MIN_WAL_RETENTION_SIZE -1
#define TSDB_REP_DEF_DB_WAL_RET_SIZE 0
#define TSDB_REPS_DEF_DB_WAL_RET_SIZE 0
@ -410,10 +410,10 @@ typedef enum ELogicConditionType {
#define TSDB_EXPLAIN_RESULT_ROW_SIZE (16 * 1024)
#define TSDB_EXPLAIN_RESULT_COLUMN_NAME "QUERY_PLAN"
#define TSDB_MAX_FIELD_LEN 65519 // 16384:65519
#define TSDB_MAX_BINARY_LEN TSDB_MAX_FIELD_LEN // 16384-8:65519
#define TSDB_MAX_NCHAR_LEN TSDB_MAX_FIELD_LEN // 16384-8:65519
#define TSDB_MAX_GEOMETRY_LEN TSDB_MAX_FIELD_LEN // 16384-8:65519
#define TSDB_MAX_FIELD_LEN 65519 // 16384:65519
#define TSDB_MAX_BINARY_LEN TSDB_MAX_FIELD_LEN // 16384-8:65519
#define TSDB_MAX_NCHAR_LEN TSDB_MAX_FIELD_LEN // 16384-8:65519
#define TSDB_MAX_GEOMETRY_LEN TSDB_MAX_FIELD_LEN // 16384-8:65519
#define PRIMARYKEY_TIMESTAMP_COL_ID 1
#define COL_REACH_END(colId, maxColId) ((colId) > (maxColId))
@ -492,6 +492,7 @@ enum {
#define TSDB_CONFIG_OPTION_LEN 32
#define TSDB_CONFIG_VALUE_LEN 64
#define TSDB_CONFIG_SCOPE_LEN 8
#define TSDB_CONFIG_NUMBER 8
#define QUERY_ID_SIZE 20

View File

@ -77,7 +77,7 @@ PriorityQueueNode* taosPQTop(PriorityQueue* pq);
size_t taosPQSize(PriorityQueue* pq);
void taosPQPush(PriorityQueue* pq, const PriorityQueueNode* node);
PriorityQueueNode* taosPQPush(PriorityQueue* pq, const PriorityQueueNode* node);
void taosPQPop(PriorityQueue* pq);
@ -89,7 +89,13 @@ void taosBQSetFn(BoundedQueue* q, pq_comp_fn fn);
void destroyBoundedQueue(BoundedQueue* q);
void taosBQPush(BoundedQueue* q, PriorityQueueNode* n);
/*
* Push one node into BQ.
* @retval NULL if n ranks above the top node in q; n is not freed.
* @retval the pushed node if the push succeeded.
* @note if maxSize is exceeded, the original top node is popped and freed with deleteFn.
* */
PriorityQueueNode* taosBQPush(BoundedQueue* q, PriorityQueueNode* n);
PriorityQueueNode* taosBQTop(BoundedQueue* q);
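A hedged sketch of the documented push contract; q is assumed to be created and configured elsewhere in this header (constructor not visible in this hunk), and the void* payload in the node is an assumption:

static void bqPushDemo(BoundedQueue *q, void *pRow) {
  PriorityQueueNode node = {0};
  node.data = pRow;  // assumes the node carries a void* data payload
  PriorityQueueNode *kept = taosBQPush(q, &node);
  if (kept == NULL) {
    // pRow ranked above the current top node and was rejected; it was not
    // freed, so the caller still owns it
  }
}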

View File

@ -241,6 +241,54 @@ void tdListNodeGetData(SList *list, SListNode *node, void *target);
void tdListInitIter(SList *list, SListIter *pIter, TD_LIST_DIRECTION_T direction);
SListNode *tdListNext(SListIter *pIter);
// macros ====================================================================================
// q: for queue
// n: for node
// m: for member
#define LISTD(TYPE) \
struct { \
TYPE *next, *prev; \
}
#define LISTD_NEXT(n, m) ((n)->m.next)
#define LISTD_PREV(n, m) ((n)->m.prev)
#define LISTD_INIT(q, m) (LISTD_NEXT(q, m) = LISTD_PREV(q, m) = (q))
#define LISTD_HEAD(q, m) (LISTD_NEXT(q, m))
#define LISTD_TAIL(q, m) (LISTD_PREV(q, m))
#define LISTD_PREV_NEXT(n, m) (LISTD_NEXT(LISTD_PREV(n, m), m))
#define LISTD_NEXT_PREV(n, m) (LISTD_PREV(LISTD_NEXT(n, m), m))
#define LISTD_INSERT_HEAD(q, n, m) \
do { \
LISTD_NEXT(n, m) = LISTD_NEXT(q, m); \
LISTD_PREV(n, m) = (q); \
LISTD_NEXT_PREV(n, m) = (n); \
LISTD_NEXT(q, m) = (n); \
} while (0)
#define LISTD_INSERT_TAIL(q, n, m) \
do { \
LISTD_NEXT(n, m) = (q); \
LISTD_PREV(n, m) = LISTD_PREV(q, m); \
LISTD_PREV_NEXT(n, m) = (n); \
LISTD_PREV(q, m) = (n); \
} while (0)
#define LISTD_REMOVE(n, m) \
do { \
LISTD_PREV_NEXT(n, m) = LISTD_NEXT(n, m); \
LISTD_NEXT_PREV(n, m) = LISTD_PREV(n, m); \
} while (0)
#define LISTD_FOREACH(q, n, m) for ((n) = LISTD_HEAD(q, m); (n) != (q); (n) = LISTD_NEXT(n, m))
#define LISTD_FOREACH_REVERSE(q, n, m) for ((n) = LISTD_TAIL(q, m); (n) != (q); (n) = LISTD_PREV(n, m))
#define LISTD_FOREACH_SAFE(q, n, t, m) \
for ((n) = LISTD_HEAD(q, m), (t) = LISTD_NEXT(n, m); (n) != (q); (n) = (t), (t) = LISTD_NEXT(n, m))
#define LISTD_FOREACH_REVERSE_SAFE(q, n, t, m) \
for ((n) = LISTD_TAIL(q, m), (t) = LISTD_PREV(n, m); (n) != (q); (n) = (t), (t) = LISTD_PREV(n, m))
#ifdef __cplusplus
}
#endif
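A brief intrusive-list sketch for the new LISTD macros; SJob and its fields are illustrative:

typedef struct SJob {
  int32_t id;
  LISTD(struct SJob) link;  // intrusive node embedded in the element
} SJob;

static void listdDemo(void) {
  SJob head;  // sentinel node, holds no payload
  LISTD_INIT(&head, link);

  SJob a = {.id = 1}, b = {.id = 2};
  LISTD_INSERT_TAIL(&head, &a, link);
  LISTD_INSERT_HEAD(&head, &b, link);  // list order: b, a

  SJob *n = NULL;
  LISTD_FOREACH(&head, n, link) {
    // visits b (id 2) then a (id 1)
  }

  LISTD_REMOVE(&a, link);  // unlink without freeing
}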

View File

@ -39,7 +39,7 @@ void tRBTreeDrop(SRBTree *pTree, SRBTreeNode *z);
SRBTreeNode *tRBTreeDropByKey(SRBTree *pTree, void *pKey);
SRBTreeNode *tRBTreeDropMin(SRBTree *pTree);
SRBTreeNode *tRBTreeDropMax(SRBTree *pTree);
SRBTreeNode *tRBTreeGet(SRBTree *pTree, const SRBTreeNode *pKeyNode);
SRBTreeNode *tRBTreeGet(const SRBTree *pTree, const SRBTreeNode *pKeyNode);
// SRBTreeIter =============================================
#define tRBTreeIterCreate(tree, ascend) \
@ -67,9 +67,9 @@ struct SRBTree {
};
struct SRBTreeIter {
int8_t asc;
SRBTree *pTree;
SRBTreeNode *pNode;
int8_t asc;
const SRBTree *pTree;
SRBTreeNode *pNode;
};
#ifdef __cplusplus

View File

@ -29,7 +29,7 @@ extern "C" {
int32_t strdequote(char *src);
size_t strtrim(char *src);
char *strnchr(const char *haystack, char needle, int32_t len, bool skipquote);
TdUcs4* wcsnchr(const TdUcs4* haystack, TdUcs4 needle, size_t len);
TdUcs4 *wcsnchr(const TdUcs4 *haystack, TdUcs4 needle, size_t len);
char **strsplit(char *src, const char *delim, int32_t *num);
char *strtolower(char *dst, const char *src);
@ -37,11 +37,11 @@ char *strntolower(char *dst, const char *src, int32_t n);
char *strntolower_s(char *dst, const char *src, int32_t n);
int64_t strnatoi(char *num, int32_t len);
size_t tstrncspn(const char *str, size_t ssize, const char *reject, size_t rsize);
size_t twcsncspn(const TdUcs4 *wcs, size_t size, const TdUcs4 *reject, size_t rsize);
size_t tstrncspn(const char *str, size_t ssize, const char *reject, size_t rsize);
size_t twcsncspn(const TdUcs4 *wcs, size_t size, const TdUcs4 *reject, size_t rsize);
char *strbetween(char *string, char *begin, char *end);
char *paGetToken(char *src, char **token, int32_t *tokenLen);
char *strbetween(char *string, char *begin, char *end);
char *paGetToken(char *src, char **token, int32_t *tokenLen);
int32_t taosByteArrayToHexStr(char bytes[], int32_t len, char hexstr[]);
int32_t taosHexStrToByteArray(char hexstr[], char bytes[]);
@ -81,12 +81,13 @@ static FORCE_INLINE void taosEncryptPass_c(uint8_t *inBuf, size_t len, char *tar
static FORCE_INLINE int32_t taosGetTbHashVal(const char *tbname, int32_t tblen, int32_t method, int32_t prefix,
int32_t suffix) {
if ((prefix == 0 && suffix == 0) || (tblen <= (prefix + suffix)) || (tblen <= -1 * (prefix + suffix)) || prefix * suffix < 0) {
if ((prefix == 0 && suffix == 0) || (tblen <= (prefix + suffix)) || (tblen <= -1 * (prefix + suffix)) ||
prefix * suffix < 0) {
return MurmurHash3_32(tbname, tblen);
} else if (prefix > 0 || suffix > 0) {
return MurmurHash3_32(tbname + prefix, tblen - prefix - suffix);
} else {
char tbName[TSDB_TABLE_FNAME_LEN];
char tbName[TSDB_TABLE_FNAME_LEN];
int32_t offset = 0;
if (prefix < 0) {
offset = -1 * prefix;
@ -94,20 +95,33 @@ static FORCE_INLINE int32_t taosGetTbHashVal(const char *tbname, int32_t tblen,
}
if (suffix < 0) {
strncpy(tbName + offset, tbname + tblen + suffix, -1 * suffix);
offset += -1 *suffix;
offset += -1 * suffix;
}
return MurmurHash3_32(tbName, offset);
}
}
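A few worked calls to pin down the prefix/suffix semantics of taosGetTbHashVal; the table name is illustrative:

// positive values exclude that many leading/trailing chars from the hashed span:
//   taosGetTbHashVal("d1001", 5, method, 1, 0)  -> MurmurHash3_32("1001", 4)
// negative values hash ONLY the kept leading and/or trailing chars:
//   taosGetTbHashVal("d1001", 5, method, -1, 0) -> MurmurHash3_32("d", 1)
// zero for both hashes the whole name:
//   taosGetTbHashVal("d1001", 5, method, 0, 0)  -> MurmurHash3_32("d1001", 5)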
#define TSDB_CHECK_CODE(CODE, LINO, LABEL) \
if (CODE) { \
LINO = __LINE__; \
goto LABEL; \
do { \
if ((CODE)) { \
LINO = __LINE__; \
goto LABEL; \
} \
} while (0)
#define TSDB_CHECK_NULL(ptr, CODE, LINO, LABEL, ERRNO) \
if ((ptr) == NULL) { \
(CODE) = (ERRNO); \
(LINO) = __LINE__; \
goto LABEL; \
}
#define ARRAY_SIZE(a) (sizeof(a) / sizeof(a[0]))
#define VND_CHECK_CODE(CODE, LINO, LABEL) TSDB_CHECK_CODE(CODE, LINO, LABEL)
#define TCONTAINER_OF(ptr, type, member) ((type *)((char *)(ptr)-offsetof(type, member)))
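The do { } while (0) wrapper added to TSDB_CHECK_CODE above is a macro-hygiene fix: with the old bare-if expansion, using the macro as the body of an if/else broke, because the trailing semicolon ended the statement and left the else without a matching if. A minimal illustration (handleNullBuf is a hypothetical helper):

static void handleNullBuf(void);  // hypothetical, only to expose the dangling-else case

static int32_t checkDemo(void *pBuf, int32_t code) {
  int32_t lino = 0;
  // compiles and behaves as intended with the do/while(0) form;
  // with the old bare-if form the else below failed to parse
  if (pBuf != NULL)
    TSDB_CHECK_CODE(code, lino, _exit);
  else
    handleNullBuf();
_exit:
  return code;
}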
#ifdef __cplusplus
}
#endif

View File

@ -108,6 +108,9 @@
# time period of keeping log files, in days
# logKeepDays 0
# latency of data migration, in hours
# keepTimeOffset 0
############ 3. Debug Flag and levels #############################################

View File

@ -87,7 +87,7 @@ os.system("rm -rf /tmp/dumpdata/*")
# dump data out
print("taosdump dump out data")
os.system("taosdump -o /tmp/dumpdata -D test -y -h %s "%serverHost)
os.system("taosdump -o /tmp/dumpdata -D test -h %s "%serverHost)
# drop database of test
print("drop database test")
@ -95,7 +95,7 @@ os.system(" taos -s ' drop database test ;' -h %s "%serverHost)
# dump data in
print("taosdump dump data in")
os.system("taosdump -i /tmp/dumpdata -y -h %s "%serverHost)
os.system("taosdump -i /tmp/dumpdata -h %s "%serverHost)
result = conn.query("SELECT count(*) from test.meters")

View File

@ -613,12 +613,6 @@ function install_examples() {
fi
}
function install_web() {
if [ -d "${script_dir}/share" ]; then
${csudo}cp -rf ${script_dir}/share/* ${install_main_dir}/share > /dev/null 2>&1 ||:
fi
}
function clean_service_on_sysvinit() {
if ps aux | grep -v grep | grep ${serverName2} &>/dev/null; then
@ -894,7 +888,6 @@ function updateProduct() {
fi
install_examples
install_web
if [ -z $1 ]; then
install_bin
install_service
@ -907,20 +900,22 @@ function updateProduct() {
echo
echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile2}"
[ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To configure ${clientName2} Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
echo -e "${GREEN_DARK}To configure ${clientName2}Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
if ((${service_mod} == 0)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName2}${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
elif ((${service_mod} == 1)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName2} start${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
else
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ./${serverName2}${NC}"
[ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${clientName2}adapter &${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${clientName2}adapter &${NC}"
fi
echo -e "${GREEN_DARK}To enable ${clientName2}keeper ${NC}: sudo systemctl enable ${clientName2}keeper &${NC}"
if [ ${openresty_work} = 'true' ]; then
echo -e "${GREEN_DARK}To access ${productName2} ${NC}: use ${GREEN_UNDERLINE}${clientName2} -h $serverFqdn${NC} in shell OR from ${GREEN_UNDERLINE}http://127.0.0.1:${web_port}${NC}"
@ -934,6 +929,7 @@ function updateProduct() {
fi
echo
echo -e "\033[44;32;1m${productName2} is updated successfully!${NC}"
echo -e "\033[44;32;1mTo manage ${productName2} instance, view documentation and explorer features, you need to install ${clientName2}Explorer ${NC}"
else
install_bin
install_config
@ -971,8 +967,7 @@ function installProduct() {
if [ "$verMode" == "cluster" ]; then
install_connector
fi
install_examples
install_web
install_examples
if [ -z $1 ]; then # install service and client
# For installing new
@ -989,21 +984,23 @@ function installProduct() {
echo
echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile2}"
[ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To configure ${clientName2} Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
echo -e "${GREEN_DARK}To configure ${clientName2}Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
if ((${service_mod} == 0)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName2}${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
elif ((${service_mod} == 1)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName2} start${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
else
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${serverName2}${NC}"
[ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${clientName2}adapter &${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${clientName2}adapter &${NC}"
fi
echo -e "${GREEN_DARK}To enable ${clientName2}keeper ${NC}: sudo systemctl enable ${clientName2}keeper &${NC}"
if [ ! -z "$firstEp" ]; then
tmpFqdn=${firstEp%%:*}
substr=":"
@ -1025,6 +1022,7 @@ function installProduct() {
fi
echo -e "\033[44;32;1m${productName2} is installed successfully!${NC}"
echo -e "\033[44;32;1mTo manage ${productName2} instance, view documentation and explorer features, you need to install ${clientName2}Explorer ${NC}"
echo
else # Only install client
install_bin

View File

@ -267,7 +267,9 @@ function install_log() {
}
function install_connector() {
${csudo}cp -rf ${script_dir}/connector/ ${install_main_dir}/
if [ -d ${script_dir}/connector ]; then
${csudo}cp -rf ${script_dir}/connector/ ${install_main_dir}/
fi
}
function install_examples() {

View File

@ -56,8 +56,8 @@ copy %binary_dir%\\build\\bin\\taos.exe %target_dir% > nul
if exist %binary_dir%\\build\\bin\\taosBenchmark.exe (
copy %binary_dir%\\build\\bin\\taosBenchmark.exe %target_dir% > nul
)
if exist %binary_dir%\\build\\lib\\taosws.dll.lib (
copy %binary_dir%\\build\\lib\\taosws.dll.lib %target_dir%\\driver > nul
if exist %binary_dir%\\build\\lib\\taosws.lib (
copy %binary_dir%\\build\\lib\\taosws.lib %target_dir%\\driver > nul
)
if exist %binary_dir%\\build\\lib\\taosws.dll (
copy %binary_dir%\\build\\lib\\taosws.dll %target_dir%\\driver > nul

View File

@ -432,12 +432,6 @@ function install_examples() {
${csudo}cp -rf ${source_dir}/examples/* ${install_main_dir}/examples || :
}
function install_web() {
if [ -d "${binary_dir}/build/share" ]; then
${csudo}cp -rf ${binary_dir}/build/share/* ${install_main_dir}/share || :
fi
}
function clean_service_on_sysvinit() {
if ps aux | grep -v grep | grep ${serverName} &>/dev/null; then
${csudo}service ${serverName} stop || :
@ -592,7 +586,6 @@ function update_TDengine() {
install_lib
# install_connector
install_examples
install_web
install_bin
install_app

View File

@ -126,7 +126,6 @@ else
fi
install_files="${script_dir}/install.sh"
web_dir="${top_dir}/../enterprise/src/plugins/web"
init_file_deb=${script_dir}/../deb/taosd
init_file_rpm=${script_dir}/../rpm/taosd
@ -320,17 +319,6 @@ if [[ $dbName == "taos" ]]; then
mkdir -p ${install_dir}/examples/taosbenchmark-json && cp ${examples_dir}/../tools/taos-tools/example/* ${install_dir}/examples/taosbenchmark-json
fi
# Add web files
if [ "$verMode" == "cluster" ] || [ "$verMode" == "cloud" ]; then
if [ -d "${web_dir}/admin" ] ; then
mkdir -p ${install_dir}/share/
cp -Rfap ${web_dir}/admin ${install_dir}/share/
cp ${web_dir}/png/taos.png ${install_dir}/share/admin/images/taos.png
cp -rf ${build_dir}/share/{etc,srv} ${install_dir}/share ||:
else
echo "directory not found for enterprise release: ${web_dir}/admin"
fi
fi
fi
# Copy driver

View File

@ -46,9 +46,10 @@ enum {
RES_TYPE__TMQ_METADATA,
};
#define SHOW_VARIABLES_RESULT_COLS 2
#define SHOW_VARIABLES_RESULT_COLS 3
#define SHOW_VARIABLES_RESULT_FIELD1_LEN (TSDB_CONFIG_OPTION_LEN + VARSTR_HEADER_SIZE)
#define SHOW_VARIABLES_RESULT_FIELD2_LEN (TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE)
#define SHOW_VARIABLES_RESULT_FIELD3_LEN (TSDB_CONFIG_SCOPE_LEN + VARSTR_HEADER_SIZE)
#define TD_RES_QUERY(res) (*(int8_t*)res == RES_TYPE__QUERY)
#define TD_RES_TMQ(res) (*(int8_t*)res == RES_TYPE__TMQ)
@ -63,7 +64,7 @@ typedef struct {
// statistics
int32_t reportCnt;
int32_t connKeyCnt;
int32_t passKeyCnt; // with passVer call back
int8_t connHbFlag; // 0 init, 1 send req, 2 get resp
int64_t reportBytes; // not implemented
int64_t startTime;
// ctl
@ -83,8 +84,9 @@ typedef struct {
int8_t threadStop;
int8_t quitByKill;
TdThread thread;
TdThreadMutex lock; // used when app init and cleanup
TdThreadMutex lock; // used when app init and cleanup
SHashObj* appSummary;
SHashObj* appHbHash; // key: clusterId
SArray* appHbMgrs; // SArray<SAppHbMgr*> one for each cluster
FHbReqHandle reqHandle[CONN_TYPE__MAX];
FHbRspHandle rspHandle[CONN_TYPE__MAX];
@ -146,6 +148,7 @@ typedef struct STscObj {
int64_t id; // ref ID returned by taosAddRef
TdThreadMutex mutex; // used to protect the operation on db
int32_t numOfReqs; // number of sqlObj bound to this connection
int32_t authVer;
SAppInstInfo* pAppInfo;
SHashObj* pRequests;
SPassInfo passInfo;

View File

@ -29,6 +29,7 @@
#include "trpc.h"
#include "tsched.h"
#include "ttime.h"
#include "tversion.h"
#if defined(CUS_NAME) || defined(CUS_PROMPT) || defined(CUS_EMAIL)
#include "cus_name.h"
@ -111,7 +112,8 @@ static void deregisterRequest(SRequestObj *pRequest) {
atomic_add_fetch_64((int64_t *)&pActivity->numOfSlowQueries, 1);
if (tsSlowLogScope & reqType) {
taosPrintSlowLog("PID:%d, Conn:%u, QID:0x%" PRIx64 ", Start:%" PRId64 ", Duration:%" PRId64 "us, SQL:%s",
taosGetPId(), pTscObj->connId, pRequest->requestId, pRequest->metric.start, duration, pRequest->sqlstr);
taosGetPId(), pTscObj->connId, pRequest->requestId, pRequest->metric.start, duration,
pRequest->sqlstr);
}
}
@ -175,6 +177,8 @@ void *openTransporter(const char *user, const char *auth, int32_t numOfThread) {
rpcInit.connLimitNum = connLimitNum;
rpcInit.timeToGetConn = tsTimeToGetAvailableConn;
taosVersionStrToInt(version, &(rpcInit.compatibilityVer));
void *pDnodeConn = rpcOpen(&rpcInit);
if (pDnodeConn == NULL) {
tscError("failed to init connection to server");
@ -358,17 +362,16 @@ int32_t releaseRequest(int64_t rid) { return taosReleaseRef(clientReqRefPool, ri
int32_t removeRequest(int64_t rid) { return taosRemoveRef(clientReqRefPool, rid); }
void destroySubRequests(SRequestObj *pRequest) {
int32_t reqIdx = -1;
int32_t reqIdx = -1;
SRequestObj *pReqList[16] = {NULL};
uint64_t tmpRefId = 0;
uint64_t tmpRefId = 0;
if (pRequest->relation.userRefId && pRequest->relation.userRefId != pRequest->self) {
return;
}
SRequestObj* pTmp = pRequest;
SRequestObj *pTmp = pRequest;
while (pTmp->relation.prevRefId) {
tmpRefId = pTmp->relation.prevRefId;
pTmp = acquireRequest(tmpRefId);
@ -376,9 +379,9 @@ void destroySubRequests(SRequestObj *pRequest) {
pReqList[++reqIdx] = pTmp;
releaseRequest(tmpRefId);
} else {
tscError("0x%" PRIx64 ", prev req ref 0x%" PRIx64 " is not there, reqId:0x%" PRIx64, pTmp->self,
tmpRefId, pTmp->requestId);
break;
tscError("0x%" PRIx64 ", prev req ref 0x%" PRIx64 " is not there, reqId:0x%" PRIx64, pTmp->self, tmpRefId,
pTmp->requestId);
break;
}
}
@ -391,16 +394,15 @@ void destroySubRequests(SRequestObj *pRequest) {
pTmp = acquireRequest(tmpRefId);
if (pTmp) {
tmpRefId = pTmp->relation.nextRefId;
removeRequest(pTmp->self);
removeRequest(pTmp->self);
releaseRequest(pTmp->self);
} else {
tscError("0x%" PRIx64 " is not there", tmpRefId);
break;
break;
}
}
}
void doDestroyRequest(void *p) {
if (NULL == p) {
return;
@ -412,7 +414,7 @@ void doDestroyRequest(void *p) {
tscTrace("begin to destroy request %" PRIx64 " p:%p", reqId, pRequest);
destroySubRequests(pRequest);
taosHashRemove(pRequest->pTscObj->pRequests, &pRequest->self, sizeof(pRequest->self));
schedulerFreeJob(&pRequest->body.queryJob, 0);
@ -473,15 +475,15 @@ void taosStopQueryImpl(SRequestObj *pRequest) {
}
void stopAllQueries(SRequestObj *pRequest) {
int32_t reqIdx = -1;
int32_t reqIdx = -1;
SRequestObj *pReqList[16] = {NULL};
uint64_t tmpRefId = 0;
uint64_t tmpRefId = 0;
if (pRequest->relation.userRefId && pRequest->relation.userRefId != pRequest->self) {
return;
}
SRequestObj* pTmp = pRequest;
SRequestObj *pTmp = pRequest;
while (pTmp->relation.prevRefId) {
tmpRefId = pTmp->relation.prevRefId;
pTmp = acquireRequest(tmpRefId);
@ -489,9 +491,9 @@ void stopAllQueries(SRequestObj *pRequest) {
pReqList[++reqIdx] = pTmp;
releaseRequest(tmpRefId);
} else {
tscError("0x%" PRIx64 ", prev req ref 0x%" PRIx64 " is not there, reqId:0x%" PRIx64, pTmp->self,
tmpRefId, pTmp->requestId);
break;
tscError("0x%" PRIx64 ", prev req ref 0x%" PRIx64 " is not there, reqId:0x%" PRIx64, pTmp->self, tmpRefId,
pTmp->requestId);
break;
}
}
@ -510,12 +512,11 @@ void stopAllQueries(SRequestObj *pRequest) {
releaseRequest(pTmp->self);
} else {
tscError("0x%" PRIx64 " is not there", tmpRefId);
break;
break;
}
}
}
void crashReportThreadFuncUnexpectedStopped(void) { atomic_store_32(&clientStop, -1); }
static void *tscCrashReportThreadFp(void *param) {

View File

@ -22,10 +22,10 @@
typedef struct {
union {
struct {
int64_t clusterId;
int32_t passKeyCnt;
int32_t passVer;
int32_t reqCnt;
SAppHbMgr *pAppHbMgr;
int64_t clusterId;
int32_t reqCnt;
int8_t connHbFlag;
};
};
} SHbParam;
@ -34,12 +34,14 @@ static SClientHbMgr clientHbMgr = {0};
static int32_t hbCreateThread();
static void hbStopThread();
static int32_t hbUpdateUserAuthInfo(SAppHbMgr *pAppHbMgr, SUserAuthBatchRsp *batchRsp);
static int32_t hbMqHbReqHandle(SClientHbKey *connKey, void *param, SClientHbReq *req) { return 0; }
static int32_t hbMqHbRspHandle(SAppHbMgr *pAppHbMgr, SClientHbRsp *pRsp) { return 0; }
static int32_t hbProcessUserAuthInfoRsp(void *value, int32_t valueLen, struct SCatalog *pCatalog) {
static int32_t hbProcessUserAuthInfoRsp(void *value, int32_t valueLen, struct SCatalog *pCatalog,
SAppHbMgr *pAppHbMgr) {
int32_t code = 0;
SUserAuthBatchRsp batchRsp = {0};
@ -56,54 +58,68 @@ static int32_t hbProcessUserAuthInfoRsp(void *value, int32_t valueLen, struct SC
catalogUpdateUserAuthInfo(pCatalog, rsp);
}
if (numOfBatchs > 0) hbUpdateUserAuthInfo(pAppHbMgr, &batchRsp);
atomic_val_compare_exchange_8(&pAppHbMgr->connHbFlag, 1, 2);
taosArrayDestroy(batchRsp.pArray);
return TSDB_CODE_SUCCESS;
}
static int32_t hbProcessUserPassInfoRsp(void *value, int32_t valueLen, SClientHbKey *connKey, SAppHbMgr *pAppHbMgr) {
int32_t code = 0;
int32_t numOfBatchs = 0;
SUserPassBatchRsp batchRsp = {0};
if (tDeserializeSUserPassBatchRsp(value, valueLen, &batchRsp) != 0) {
code = TSDB_CODE_INVALID_MSG;
return code;
}
numOfBatchs = taosArrayGetSize(batchRsp.pArray);
SClientHbReq *pReq = NULL;
while ((pReq = taosHashIterate(pAppHbMgr->activeInfo, pReq))) {
STscObj *pTscObj = (STscObj *)acquireTscObj(pReq->connKey.tscRid);
if (!pTscObj) {
continue;
}
SPassInfo *passInfo = &pTscObj->passInfo;
if (!passInfo->fp) {
releaseTscObj(pReq->connKey.tscRid);
static int32_t hbUpdateUserAuthInfo(SAppHbMgr *pAppHbMgr, SUserAuthBatchRsp *batchRsp) {
uint64_t clusterId = pAppHbMgr->pAppInstInfo->clusterId;
for (int i = 0; i < TARRAY_SIZE(clientHbMgr.appHbMgrs); ++i) {
SAppHbMgr *hbMgr = taosArrayGetP(clientHbMgr.appHbMgrs, i);
if (!hbMgr || hbMgr->pAppInstInfo->clusterId != clusterId) {
continue;
}
for (int32_t i = 0; i < numOfBatchs; ++i) {
SGetUserPassRsp *rsp = taosArrayGet(batchRsp.pArray, i);
if (0 == strncmp(rsp->user, pTscObj->user, TSDB_USER_LEN)) {
int32_t oldVer = atomic_load_32(&passInfo->ver);
if (oldVer < rsp->version) {
atomic_store_32(&passInfo->ver, rsp->version);
if (passInfo->fp) {
(*passInfo->fp)(passInfo->param, &passInfo->ver, TAOS_NOTIFY_PASSVER);
SClientHbReq *pReq = NULL;
SGetUserAuthRsp *pRsp = NULL;
while ((pReq = taosHashIterate(hbMgr->activeInfo, pReq))) {
STscObj *pTscObj = (STscObj *)acquireTscObj(pReq->connKey.tscRid);
if (!pTscObj) {
continue;
}
if (!pRsp) {
for (int32_t j = 0; j < TARRAY_SIZE(batchRsp->pArray); ++j) {
SGetUserAuthRsp *rsp = TARRAY_GET_ELEM(batchRsp->pArray, j);
if (0 == strncmp(rsp->user, pTscObj->user, TSDB_USER_LEN)) {
pRsp = rsp;
break;
}
tscDebug("update passVer of user %s from %d to %d, tscRid:%" PRIi64, rsp->user, oldVer,
}
if (!pRsp) {
releaseTscObj(pReq->connKey.tscRid);
break;
}
}
pTscObj->authVer = pRsp->version;
if (pTscObj->sysInfo != pRsp->sysInfo) {
tscDebug("update sysInfo of user %s from %" PRIi8 " to %" PRIi8 ", tscRid:%" PRIi64, pRsp->user,
pTscObj->sysInfo, pRsp->sysInfo, pTscObj->id);
pTscObj->sysInfo = pRsp->sysInfo;
}
if (pTscObj->passInfo.fp) {
SPassInfo *passInfo = &pTscObj->passInfo;
int32_t oldVer = atomic_load_32(&passInfo->ver);
if (oldVer < pRsp->passVer) {
atomic_store_32(&passInfo->ver, pRsp->passVer);
if (passInfo->fp) {
(*passInfo->fp)(passInfo->param, &pRsp->passVer, TAOS_NOTIFY_PASSVER);
}
tscDebug("update passVer of user %s from %d to %d, tscRid:%" PRIi64, pRsp->user, oldVer,
atomic_load_32(&passInfo->ver), pTscObj->id);
}
break;
}
releaseTscObj(pReq->connKey.tscRid);
}
releaseTscObj(pReq->connKey.tscRid);
}
taosArrayDestroy(batchRsp.pArray);
return code;
return 0;
}
static int32_t hbGenerateVgInfoFromRsp(SDBVgInfo **pInfo, SUseDbRsp *rsp) {
@ -121,7 +137,6 @@ static int32_t hbGenerateVgInfoFromRsp(SDBVgInfo **pInfo, SUseDbRsp *rsp) {
vgInfo->hashSuffix = rsp->hashSuffix;
vgInfo->vgHash = taosHashInit(rsp->vgNum, taosGetDefaultHashFunction(TSDB_DATA_TYPE_INT), true, HASH_ENTRY_LOCK);
if (NULL == vgInfo->vgHash) {
taosMemoryFree(vgInfo);
tscError("hash init[%d] failed", rsp->vgNum);
code = TSDB_CODE_OUT_OF_MEMORY;
goto _return;
@ -131,8 +146,6 @@ static int32_t hbGenerateVgInfoFromRsp(SDBVgInfo **pInfo, SUseDbRsp *rsp) {
SVgroupInfo *pInfo = taosArrayGet(rsp->pVgroupInfos, j);
if (taosHashPut(vgInfo->vgHash, &pInfo->vgId, sizeof(int32_t), pInfo, sizeof(SVgroupInfo)) != 0) {
tscError("hash push failed, errno:%d", errno);
taosHashCleanup(vgInfo->vgHash);
taosMemoryFree(vgInfo);
code = TSDB_CODE_OUT_OF_MEMORY;
goto _return;
}
@ -316,7 +329,7 @@ static int32_t hbQueryHbRspHandle(SAppHbMgr *pAppHbMgr, SClientHbRsp *pRsp) {
break;
}
hbProcessUserAuthInfoRsp(kv->value, kv->valueLen, pCatalog);
hbProcessUserAuthInfoRsp(kv->value, kv->valueLen, pCatalog, pAppHbMgr);
break;
}
case HEARTBEAT_KEY_DBINFO: {
@ -353,15 +366,6 @@ static int32_t hbQueryHbRspHandle(SAppHbMgr *pAppHbMgr, SClientHbRsp *pRsp) {
hbProcessStbInfoRsp(kv->value, kv->valueLen, pCatalog);
break;
}
case HEARTBEAT_KEY_USER_PASSINFO: {
if (kv->valueLen <= 0 || NULL == kv->value) {
tscError("invalid hb user pass info, len:%d, value:%p", kv->valueLen, kv->value);
break;
}
hbProcessUserPassInfoRsp(kv->value, kv->valueLen, &pRsp->connKey, pAppHbMgr);
break;
}
default:
tscError("invalid hb key type:%d", kv->key);
break;
@ -479,7 +483,6 @@ int32_t hbBuildQueryDesc(SQueryHbReqBasic *hbBasic, STscObj *pObj) {
if (code) {
taosArrayDestroy(desc.subDesc);
desc.subDesc = NULL;
desc.subPlanNum = 0;
}
desc.subPlanNum = taosArrayGetSize(desc.subDesc);
} else {
@ -543,7 +546,7 @@ int32_t hbGetQueryBasicInfo(SClientHbKey *connKey, SClientHbReq *req) {
return TSDB_CODE_SUCCESS;
}
static int32_t hbGetUserBasicInfo(SClientHbKey *connKey, SHbParam *param, SClientHbReq *req) {
static int32_t hbGetUserAuthInfo(SClientHbKey *connKey, SHbParam *param, SClientHbReq *req) {
STscObj *pTscObj = (STscObj *)acquireTscObj(connKey->tscRid);
if (!pTscObj) {
tscWarn("tscObj rid %" PRIx64 " not exist", connKey->tscRid);
@ -552,46 +555,61 @@ static int32_t hbGetUserBasicInfo(SClientHbKey *connKey, SHbParam *param, SClien
int32_t code = 0;
if (param && (param->passVer != INT32_MIN) && (param->passVer <= pTscObj->passInfo.ver)) {
tscDebug("hb got user basic info, no need since passVer %d <= %d", param->passVer, pTscObj->passInfo.ver);
SKv kv = {.key = HEARTBEAT_KEY_USER_AUTHINFO};
SKv *pKv = NULL;
if ((pKv = taosHashGet(req->info, &kv.key, sizeof(kv.key)))) {
int32_t userNum = pKv->valueLen / sizeof(SUserAuthVersion);
SUserAuthVersion *userAuths = (SUserAuthVersion *)pKv->value;
for (int32_t i = 0; i < userNum; ++i) {
SUserAuthVersion *pUserAuth = userAuths + i;
// both key and user exist, update version
if (strncmp(pUserAuth->user, pTscObj->user, TSDB_USER_LEN) == 0) {
pUserAuth->version = htonl(-1); // force get userAuthInfo
goto _return;
}
}
// key exists but user does not; append the user
SUserAuthVersion *qUserAuth =
(SUserAuthVersion *)taosMemoryRealloc(pKv->value, (userNum + 1) * sizeof(SUserAuthVersion));
if (qUserAuth) {
strncpy((qUserAuth + userNum)->user, pTscObj->user, TSDB_USER_LEN);
(qUserAuth + userNum)->version = htonl(-1); // force get userAuthInfo
pKv->value = qUserAuth;
pKv->valueLen += sizeof(SUserAuthVersion);
} else {
code = TSDB_CODE_OUT_OF_MEMORY;
}
goto _return;
}
SUserPassVersion *user = taosMemoryMalloc(sizeof(SUserPassVersion));
// neither key nor user exists; add the user
SUserAuthVersion *user = taosMemoryMalloc(sizeof(SUserAuthVersion));
if (!user) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto _return;
}
strncpy(user->user, pTscObj->user, TSDB_USER_LEN);
user->version = htonl(pTscObj->passInfo.ver);
tstrncpy(user->user, pTscObj->user, TSDB_USER_LEN);
user->version = htonl(-1); // force get userAuthInfo
kv.valueLen = sizeof(SUserAuthVersion);
kv.value = user;
SKv kv = {
.key = HEARTBEAT_KEY_USER_PASSINFO,
.valueLen = sizeof(SUserPassVersion),
.value = user,
};
tscDebug("hb got user basic info, valueLen:%d, user:%s, passVer:%d, tscRid:%" PRIi64, kv.valueLen, user->user,
pTscObj->passInfo.ver, connKey->tscRid);
tscDebug("hb got user auth info, valueLen:%d, user:%s, authVer:%d, tscRid:%" PRIi64, kv.valueLen, user->user,
pTscObj->authVer, connKey->tscRid);
if (!req->info) {
req->info = taosHashInit(64, hbKeyHashFunc, 1, HASH_ENTRY_LOCK);
}
if (taosHashPut(req->info, &kv.key, sizeof(kv.key), &kv, sizeof(kv)) < 0) {
taosMemoryFree(user);
code = terrno ? terrno : TSDB_CODE_APP_ERROR;
goto _return;
}
// assign the passVer
if (param) {
param->passVer = pTscObj->passInfo.ver;
}
_return:
releaseTscObj(connKey->tscRid);
if (code) {
tscError("hb got user basic info failed since %s", terrstr(code));
tscError("hb got user auth info failed since %s", terrstr(code));
}
return code;
@ -749,14 +767,21 @@ int32_t hbQueryHbReqHandle(SClientHbKey *connKey, void *param, SClientHbReq *req
hbGetQueryBasicInfo(connKey, req);
if (hbParam->passKeyCnt > 0) {
hbGetUserBasicInfo(connKey, hbParam, req);
}
if (hbParam->reqCnt == 0) {
code = hbGetExpiredUserInfo(connKey, pCatalog, req);
if (TSDB_CODE_SUCCESS != code) {
return code;
if (!taosHashGet(clientHbMgr.appHbHash, &hbParam->clusterId, sizeof(hbParam->clusterId))) {
code = hbGetExpiredUserInfo(connKey, pCatalog, req);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
}
// invoke after hbGetExpiredUserInfo
if (2 != atomic_load_8(&hbParam->pAppHbMgr->connHbFlag)) {
code = hbGetUserAuthInfo(connKey, hbParam, req);
if (TSDB_CODE_SUCCESS != code) {
return code;
}
atomic_store_8(&hbParam->pAppHbMgr->connHbFlag, 1);
}
code = hbGetExpiredDBInfo(connKey, pCatalog, req);
@ -770,7 +795,7 @@ int32_t hbQueryHbReqHandle(SClientHbKey *connKey, void *param, SClientHbReq *req
}
}
++hbParam->reqCnt; // success to get catalog info
++hbParam->reqCnt; // success to get catalog info
return TSDB_CODE_SUCCESS;
}
@ -815,9 +840,9 @@ SClientHbBatchReq *hbGatherAllInfo(SAppHbMgr *pAppHbMgr) {
if (param.clusterId == 0) {
// init
param.clusterId = pOneReq->clusterId;
param.passVer = INT32_MIN;
param.pAppHbMgr = pAppHbMgr;
param.connHbFlag = atomic_load_8(&pAppHbMgr->connHbFlag);
}
param.passKeyCnt = atomic_load_32(&pAppHbMgr->passKeyCnt);
break;
}
default:
@ -901,6 +926,10 @@ static void *hbThreadFunc(void *param) {
int sz = taosArrayGetSize(clientHbMgr.appHbMgrs);
if (sz > 0) {
hbGatherAppInfo();
if (sz > 1 && !clientHbMgr.appHbHash) {
clientHbMgr.appHbHash = taosHashInit(0, taosGetDefaultHashFunction(TSDB_DATA_TYPE_UBIGINT), false, HASH_NO_LOCK);
}
taosHashClear(clientHbMgr.appHbHash);
}
for (int i = 0; i < sz; i++) {
@ -953,7 +982,7 @@ static void *hbThreadFunc(void *param) {
asyncSendMsgToServer(pAppInstInfo->pTransporter, &epSet, &transporterId, pInfo);
tFreeClientHbBatchReq(pReq);
// hbClearReqInfo(pAppHbMgr);
taosHashPut(clientHbMgr.appHbHash, &pAppHbMgr->pAppInstInfo->clusterId, sizeof(uint64_t), NULL, 0);
atomic_add_fetch_32(&pAppHbMgr->reportCnt, 1);
}
@ -961,6 +990,7 @@ static void *hbThreadFunc(void *param) {
taosMsleep(HEARTBEAT_INTERVAL);
}
taosHashCleanup(clientHbMgr.appHbHash);
return NULL;
}
@ -1009,7 +1039,7 @@ SAppHbMgr *appHbMgrInit(SAppInstInfo *pAppInstInfo, char *key) {
// init stat
pAppHbMgr->startTime = taosGetTimestampMs();
pAppHbMgr->connKeyCnt = 0;
pAppHbMgr->passKeyCnt = 0;
pAppHbMgr->connHbFlag = 0;
pAppHbMgr->reportCnt = 0;
pAppHbMgr->reportBytes = 0;
pAppHbMgr->key = taosStrdup(key);
@ -1127,7 +1157,6 @@ void hbMgrCleanUp() {
appHbMgrCleanup();
taosArrayDestroy(clientHbMgr.appHbMgrs);
taosThreadMutexUnlock(&clientHbMgr.lock);
clientHbMgr.appHbMgrs = NULL;
}
@ -1180,12 +1209,6 @@ void hbDeregisterConn(STscObj *pTscObj, SClientHbKey connKey) {
}
atomic_sub_fetch_32(&pAppHbMgr->connKeyCnt, 1);
taosThreadMutexLock(&pTscObj->mutex);
if (pTscObj->passInfo.fp) {
atomic_sub_fetch_32(&pAppHbMgr->passKeyCnt, 1);
}
taosThreadMutexUnlock(&pTscObj->mutex);
}
// set heartbeat thread quit mode: if quitByKill is 1, kill the thread; otherwise quit from inside

View File

@ -26,7 +26,7 @@
#include "tpagedbuf.h"
#include "tref.h"
#include "tsched.h"
#include "tversion.h"
static int32_t initEpSetFromCfg(const char* firstEp, const char* secondEp, SCorEpSet* pEpSet);
static SMsgSendInfo* buildConnectMsg(SRequestObj* pRequest);
@ -237,8 +237,9 @@ int32_t buildRequest(uint64_t connId, const char* sql, int sqlLen, void* param,
return TSDB_CODE_SUCCESS;
}
int32_t buildPreviousRequest(SRequestObj *pRequest, const char* sql, SRequestObj** pNewRequest) {
int32_t code = buildRequest(pRequest->pTscObj->id, sql, strlen(sql), pRequest, pRequest->validateOnly, pNewRequest, 0);
int32_t buildPreviousRequest(SRequestObj* pRequest, const char* sql, SRequestObj** pNewRequest) {
int32_t code =
buildRequest(pRequest->pTscObj->id, sql, strlen(sql), pRequest, pRequest->validateOnly, pNewRequest, 0);
if (TSDB_CODE_SUCCESS == code) {
pRequest->relation.prevRefId = (*pNewRequest)->self;
(*pNewRequest)->relation.nextRefId = pRequest->self;
@ -502,8 +503,7 @@ void setResSchemaInfo(SReqResultInfo* pResInfo, const SSchema* pSchema, int32_t
pResInfo->userFields[i].bytes = pSchema[i].bytes;
pResInfo->userFields[i].type = pSchema[i].type;
if (pSchema[i].type == TSDB_DATA_TYPE_VARCHAR ||
pSchema[i].type == TSDB_DATA_TYPE_GEOMETRY) {
if (pSchema[i].type == TSDB_DATA_TYPE_VARCHAR || pSchema[i].type == TSDB_DATA_TYPE_GEOMETRY) {
pResInfo->userFields[i].bytes -= VARSTR_HEADER_SIZE;
} else if (pSchema[i].type == TSDB_DATA_TYPE_NCHAR || pSchema[i].type == TSDB_DATA_TYPE_JSON) {
pResInfo->userFields[i].bytes = (pResInfo->userFields[i].bytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE;
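
The reformatted block above converts schema byte widths into user-visible field widths: VARCHAR/GEOMETRY drop the var-string header, while NCHAR/JSON additionally divide by the per-character size. A tiny worked sketch of that arithmetic (the constants 2 and 4 are the conventional TDengine values for VARSTR_HEADER_SIZE and TSDB_NCHAR_SIZE, assumed here):

#include <stdio.h>

enum { VARSTR_HEADER_SIZE = 2, TSDB_NCHAR_SIZE = 4 };  /* assumed conventional values */

int main(void) {
  int varcharBytes = 66;  /* schema bytes for a VARCHAR(64) column */
  int ncharBytes   = 82;  /* schema bytes for an NCHAR(20) column  */
  printf("varchar user width: %d\n", varcharBytes - VARSTR_HEADER_SIZE);                    /* 64 */
  printf("nchar user width:   %d\n", (ncharBytes - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE);  /* 20 */
  return 0;
}
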
@ -891,7 +891,7 @@ static bool incompletaFileParsing(SNode* pStmt) {
void continuePostSubQuery(SRequestObj* pRequest, TAOS_ROW row) {
SSqlCallbackWrapper* pWrapper = pRequest->pWrapper;
int32_t code = nodesAcquireAllocator(pWrapper->pParseCtx->allocatorId);
int32_t code = nodesAcquireAllocator(pWrapper->pParseCtx->allocatorId);
if (TSDB_CODE_SUCCESS == code) {
int64_t analyseStart = taosGetTimestampUs();
code = qContinueParsePostQuery(pWrapper->pParseCtx, pRequest->pQuery, (void**)row);
@ -934,7 +934,7 @@ void postSubQueryFetchCb(void* param, TAOS_RES* res, int32_t rowNum) {
TAOS_ROW row = NULL;
if (rowNum > 0) {
row = taos_fetch_row(res); // for single row only now
row = taos_fetch_row(res); // for single row only now
}
SRequestObj* pNextReq = acquireRequest(pRequest->relation.nextRefId);
@ -1297,13 +1297,19 @@ int initEpSetFromCfg(const char* firstEp, const char* secondEp, SCorEpSet* pEpSe
return -1;
}
int32_t code = taosGetFqdnPortFromEp(firstEp, &mgmtEpSet->eps[0]);
int32_t code = taosGetFqdnPortFromEp(firstEp, &mgmtEpSet->eps[mgmtEpSet->numOfEps]);
if (code != TSDB_CODE_SUCCESS) {
terrno = TSDB_CODE_TSC_INVALID_FQDN;
return terrno;
}
mgmtEpSet->numOfEps++;
uint32_t addr = taosGetIpv4FromFqdn(mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn);
if (addr == 0xffffffff) {
tscError("failed to resolve firstEp fqdn: %s, code:%s", mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn,
tstrerror(TSDB_CODE_TSC_INVALID_FQDN));
memset(&(mgmtEpSet->eps[mgmtEpSet->numOfEps]), 0, sizeof(mgmtEpSet->eps[mgmtEpSet->numOfEps]));
} else {
mgmtEpSet->numOfEps++;
}
}
if (secondEp && secondEp[0] != 0) {
@ -1313,12 +1319,19 @@ int initEpSetFromCfg(const char* firstEp, const char* secondEp, SCorEpSet* pEpSe
}
taosGetFqdnPortFromEp(secondEp, &mgmtEpSet->eps[mgmtEpSet->numOfEps]);
mgmtEpSet->numOfEps++;
uint32_t addr = taosGetIpv4FromFqdn(mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn);
if (addr == 0xffffffff) {
tscError("failed to resolve secondEp fqdn: %s, code:%s", mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn,
tstrerror(TSDB_CODE_TSC_INVALID_FQDN));
memset(&(mgmtEpSet->eps[mgmtEpSet->numOfEps]), 0, sizeof(mgmtEpSet->eps[mgmtEpSet->numOfEps]));
} else {
mgmtEpSet->numOfEps++;
}
}
if (mgmtEpSet->numOfEps == 0) {
terrno = TSDB_CODE_TSC_INVALID_FQDN;
return -1;
terrno = TSDB_CODE_RPC_NETWORK_UNAVAIL;
return TSDB_CODE_RPC_NETWORK_UNAVAIL;
}
return 0;
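
The rewritten initEpSetFromCfg now resolves each configured endpoint up front, zeroes out and skips any FQDN that fails to resolve (taosGetIpv4FromFqdn returning 0xffffffff), and only fails, with TSDB_CODE_RPC_NETWORK_UNAVAIL, when no endpoint survived. A standalone POSIX sketch of the same keep-only-resolvable-endpoints strategy, with getaddrinfo standing in for taosGetIpv4FromFqdn (hostnames and the default port are illustrative):

#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_EPS 2

typedef struct { char fqdn[128]; unsigned short port; } Ep;

/* Returns the number of endpoints that resolved; 0 means "network unavailable". */
static int init_ep_set(const char *const *candidates, int n, Ep *eps) {
  int numOfEps = 0;
  for (int i = 0; i < n && numOfEps < MAX_EPS; i++) {
    char host[128];
    unsigned short port = 6030;                       /* default server port */
    const char *colon = strchr(candidates[i], ':');   /* split "host:port" */
    if (colon) {
      snprintf(host, sizeof(host), "%.*s", (int)(colon - candidates[i]), candidates[i]);
      port = (unsigned short)atoi(colon + 1);
    } else {
      snprintf(host, sizeof(host), "%s", candidates[i]);
    }
    struct addrinfo *res = NULL;
    if (getaddrinfo(host, NULL, NULL, &res) != 0) {   /* analogous to 0xffffffff */
      fprintf(stderr, "failed to resolve ep fqdn: %s, skipping\n", host);
      continue;                                       /* do not count the bad ep */
    }
    freeaddrinfo(res);
    snprintf(eps[numOfEps].fqdn, sizeof(eps[numOfEps].fqdn), "%s", host);
    eps[numOfEps].port = port;
    numOfEps++;
  }
  return numOfEps;
}

int main(void) {
  const char *cands[] = {"no-such-host.invalid:6030", "localhost:6030"};
  Ep eps[MAX_EPS];
  int n = init_ep_set(cands, 2, eps);
  if (n == 0) { fprintf(stderr, "no usable endpoint (RPC_NETWORK_UNAVAIL)\n"); return 1; }
  printf("using %s:%u\n", eps[0].fqdn, eps[0].port);
  return 0;
}
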
@ -2135,6 +2148,7 @@ TSDB_SERVER_STATUS taos_check_server_status(const char* fqdn, int port, char* de
connLimitNum = TMIN(connLimitNum, 500);
rpcInit.connLimitNum = connLimitNum;
rpcInit.timeToGetConn = tsTimeToGetAvailableConn;
taosVersionStrToInt(version, &(rpcInit.compatibilityVer));
clientRpc = rpcOpen(&rpcInit);
if (clientRpc == NULL) {
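
taos_check_server_status now stamps the RPC handle with a compatibility version derived from the client's version string via taosVersionStrToInt. The actual layout produced by that function is not visible in this diff; the 8-bit-per-component packing below is only a plausible illustration of the idea:

#include <stdint.h>
#include <stdio.h>

/* Pack "major.minor.patch" into 0x00MMmmpp; unparsed components stay 0. */
static int32_t version_str_to_int(const char *ver) {
  unsigned major = 0, minor = 0, patch = 0;
  sscanf(ver, "%u.%u.%u", &major, &minor, &patch);
  return (int32_t)((major << 16) | (minor << 8) | patch);
}

int main(void) {
  printf("0x%06x\n", version_str_to_int("3.1.0"));  /* prints 0x030100 */
  return 0;
}
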
@ -2494,11 +2508,10 @@ TAOS_RES* taosQueryImplWithReqid(TAOS* taos, const char* sql, bool validateOnly,
return pRequest;
}
static void fetchCallback(void* pResult, void* param, int32_t code) {
SRequestObj* pRequest = (SRequestObj*)param;
static void fetchCallback(void *pResult, void *param, int32_t code) {
SRequestObj *pRequest = (SRequestObj *)param;
SReqResultInfo *pResultInfo = &pRequest->body.resInfo;
SReqResultInfo* pResultInfo = &pRequest->body.resInfo;
tscDebug("0x%" PRIx64 " enter scheduler fetch cb, code:%d - %s, reqId:0x%" PRIx64, pRequest->self, code,
tstrerror(code), pRequest->requestId);
@ -2520,7 +2533,7 @@ static void fetchCallback(void *pResult, void *param, int32_t code) {
}
pRequest->code =
setQueryResultFromRsp(pResultInfo, (const SRetrieveTableRsp *)pResultInfo->pData, pResultInfo->convertUcs4, true);
setQueryResultFromRsp(pResultInfo, (const SRetrieveTableRsp*)pResultInfo->pData, pResultInfo->convertUcs4, true);
if (pRequest->code != TSDB_CODE_SUCCESS) {
pResultInfo->numOfRows = 0;
pRequest->code = code;
@ -2531,19 +2544,19 @@ static void fetchCallback(void *pResult, void *param, int32_t code) {
pRequest->self, pResultInfo->numOfRows, pResultInfo->totalRows, pResultInfo->completed,
pRequest->requestId);
STscObj *pTscObj = pRequest->pTscObj;
SAppClusterSummary *pActivity = &pTscObj->pAppInfo->summary;
atomic_add_fetch_64((int64_t *)&pActivity->fetchBytes, pRequest->body.resInfo.payloadLen);
STscObj* pTscObj = pRequest->pTscObj;
SAppClusterSummary* pActivity = &pTscObj->pAppInfo->summary;
atomic_add_fetch_64((int64_t*)&pActivity->fetchBytes, pRequest->body.resInfo.payloadLen);
}
pRequest->body.fetchFp(pRequest->body.param, pRequest, pResultInfo->numOfRows);
}
void taosAsyncFetchImpl(SRequestObj *pRequest, __taos_async_fn_t fp, void *param) {
void taosAsyncFetchImpl(SRequestObj* pRequest, __taos_async_fn_t fp, void* param) {
pRequest->body.fetchFp = fp;
pRequest->body.param = param;
SReqResultInfo *pResultInfo = &pRequest->body.resInfo;
SReqResultInfo* pResultInfo = &pRequest->body.resInfo;
// this query has no results or an error occurred, return directly
if (taos_num_fields(pRequest) == 0 || pRequest->code != TSDB_CODE_SUCCESS) {
@ -2578,5 +2591,3 @@ void taosAsyncFetchImpl(SRequestObj *pRequest, __taos_async_fn_t fp, void *param
schedulerFetchRows(pRequest->body.queryJob, &req);
}
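
taosAsyncFetchImpl is the engine behind the public asynchronous fetch path. A hedged usage sketch of the public C API it serves, using taos_query_a/taos_fetch_rows_a as documented for TDengine 3.x (connection parameters and the power.meters table are placeholders, and the getchar() wait is a crude stand-in for real synchronization):

#include <stdio.h>
#include <taos.h>

static void fetch_cb(void *param, TAOS_RES *res, int numOfRows) {
  if (numOfRows > 0) {
    for (int i = 0; i < numOfRows; i++) (void)taos_fetch_row(res);  /* consume rows */
    taos_fetch_rows_a(res, fetch_cb, param);   /* request the next batch */
  } else {
    taos_free_result(res);                     /* 0 rows: done; numOfRows < 0: error */
  }
}

static void query_cb(void *param, TAOS_RES *res, int code) {
  if (code != 0) {
    fprintf(stderr, "query failed: %s\n", taos_errstr(res));
    taos_free_result(res);
    return;
  }
  taos_fetch_rows_a(res, fetch_cb, param);     /* drives the fetchCallback path above */
}

int main(void) {
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 6030);
  if (conn == NULL) return 1;
  taos_query_a(conn, "SELECT ts, current FROM power.meters LIMIT 10", query_cb, NULL);
  getchar();                                   /* crude wait for the async callbacks */
  taos_close(conn);
  return 0;
}
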

View File

@ -135,11 +135,6 @@ int taos_set_notify_cb(TAOS *taos, __taos_notify_fn_t fp, void *param, int type)
switch (type) {
case TAOS_NOTIFY_PASSVER: {
taosThreadMutexLock(&pObj->mutex);
if (fp && !pObj->passInfo.fp) {
atomic_add_fetch_32(&pObj->pAppInfo->pAppHbMgr->passKeyCnt, 1);
} else if (!fp && pObj->passInfo.fp) {
atomic_sub_fetch_32(&pObj->pAppInfo->pAppHbMgr->passKeyCnt, 1);
}
pObj->passInfo.fp = fp;
pObj->passInfo.param = param;
taosThreadMutexUnlock(&pObj->mutex);
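
With the passKeyCnt bookkeeping removed, registering a TAOS_NOTIFY_PASSVER callback reduces to a mutex-guarded assignment. A hedged usage sketch of the public notify API (the callback signature follows TDengine 3.x taos.h; reading ext as the new int32_t version number, and passing a NULL fp to deregister, are assumptions suggested by the surrounding code):

#include <stdint.h>
#include <stdio.h>
#include <taos.h>

/* Invoked by the client when the server reports a new password version. */
static void on_passver(void *param, void *ext, int type) {
  (void)param;
  if (type == TAOS_NOTIFY_PASSVER) {
    printf("password version changed to %d, re-authenticate soon\n", *(int32_t *)ext);
  }
}

int main(void) {
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 6030);
  if (conn == NULL) return 1;
  if (taos_set_notify_cb(conn, on_passver, NULL, TAOS_NOTIFY_PASSVER) != 0) {
    fprintf(stderr, "failed to register notify callback\n");
  }
  /* ... run queries ... */
  taos_set_notify_cb(conn, NULL, NULL, TAOS_NOTIFY_PASSVER);  /* clear the callback */
  taos_close(conn);
  return 0;
}
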
@ -563,13 +558,12 @@ int taos_select_db(TAOS *taos, const char *db) {
return code;
}
void taos_stop_query(TAOS_RES *res) {
if (res == NULL || TD_RES_TMQ(res) || TD_RES_TMQ_META(res) || TD_RES_TMQ_METADATA(res)) {
return;
}
stopAllQueries((SRequestObj*)res);
stopAllQueries((SRequestObj *)res);
}
bool taos_is_null(TAOS_RES *res, int32_t row, int32_t col) {
@ -790,7 +784,7 @@ void destorySqlCallbackWrapper(SSqlCallbackWrapper *pWrapper) {
taosMemoryFree(pWrapper);
}
void destroyCtxInRequest(SRequestObj* pRequest) {
void destroyCtxInRequest(SRequestObj *pRequest) {
schedulerFreeJob(&pRequest->body.queryJob, 0);
qDestroyQuery(pRequest->pQuery);
pRequest->pQuery = NULL;
@ -798,7 +792,6 @@ void destroyCtxInRequest(SRequestObj* pRequest) {
pRequest->pWrapper = NULL;
}
static void doAsyncQueryFromAnalyse(SMetaData *pResultMeta, void *param, int32_t code) {
SSqlCallbackWrapper *pWrapper = (SSqlCallbackWrapper *)param;
SRequestObj *pRequest = pWrapper->pRequest;
@ -812,15 +805,15 @@ static void doAsyncQueryFromAnalyse(SMetaData *pResultMeta, void *param, int32_t
if (TSDB_CODE_SUCCESS == code) {
code = qAnalyseSqlSemantic(pWrapper->pParseCtx, pWrapper->pCatalogReq, pResultMeta, pQuery);
}
pRequest->metric.analyseCostUs += taosGetTimestampUs() - analyseStart;
handleQueryAnslyseRes(pWrapper, pResultMeta, code);
}
int32_t cloneCatalogReq(SCatalogReq* * ppTarget, SCatalogReq* pSrc) {
int32_t code = TSDB_CODE_SUCCESS;
SCatalogReq* pTarget = taosMemoryCalloc(1, sizeof(SCatalogReq));
int32_t cloneCatalogReq(SCatalogReq **ppTarget, SCatalogReq *pSrc) {
int32_t code = TSDB_CODE_SUCCESS;
SCatalogReq *pTarget = taosMemoryCalloc(1, sizeof(SCatalogReq));
if (pTarget == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
} else {
@ -847,17 +840,16 @@ int32_t cloneCatalogReq(SCatalogReq* * ppTarget, SCatalogReq* pSrc) {
return code;
}
void handleSubQueryFromAnalyse(SSqlCallbackWrapper *pWrapper, SMetaData *pResultMeta, SNode* pRoot) {
SRequestObj* pNewRequest = NULL;
SSqlCallbackWrapper* pNewWrapper = NULL;
int32_t code = buildPreviousRequest(pWrapper->pRequest, pWrapper->pRequest->sqlstr, &pNewRequest);
void handleSubQueryFromAnalyse(SSqlCallbackWrapper *pWrapper, SMetaData *pResultMeta, SNode *pRoot) {
SRequestObj *pNewRequest = NULL;
SSqlCallbackWrapper *pNewWrapper = NULL;
int32_t code = buildPreviousRequest(pWrapper->pRequest, pWrapper->pRequest->sqlstr, &pNewRequest);
if (code) {
handleQueryAnslyseRes(pWrapper, pResultMeta, code);
return;
}
pNewRequest->pQuery = (SQuery*)nodesMakeNode(QUERY_NODE_QUERY);
pNewRequest->pQuery = (SQuery *)nodesMakeNode(QUERY_NODE_QUERY);
if (NULL == pNewRequest->pQuery) {
code = TSDB_CODE_OUT_OF_MEMORY;
} else {
@ -876,16 +868,16 @@ void handleSubQueryFromAnalyse(SSqlCallbackWrapper *pWrapper, SMetaData *pResult
}
void handleQueryAnslyseRes(SSqlCallbackWrapper *pWrapper, SMetaData *pResultMeta, int32_t code) {
SRequestObj *pRequest = pWrapper->pRequest;
SQuery *pQuery = pRequest->pQuery;
SRequestObj *pRequest = pWrapper->pRequest;
SQuery *pQuery = pRequest->pQuery;
if (code == TSDB_CODE_SUCCESS && pQuery->pPrevRoot) {
SNode* prevRoot = pQuery->pPrevRoot;
SNode *prevRoot = pQuery->pPrevRoot;
pQuery->pPrevRoot = NULL;
handleSubQueryFromAnalyse(pWrapper, pResultMeta, prevRoot);
return;
}
if (code == TSDB_CODE_SUCCESS) {
pRequest->stableQuery = pQuery->stableQuery;
if (pQuery->pRoot) {
@ -1048,7 +1040,7 @@ int32_t createParseContext(const SRequestObj *pRequest, SParseContext **pCxt) {
}
int32_t prepareAndParseSqlSyntax(SSqlCallbackWrapper **ppWrapper, SRequestObj *pRequest, bool updateMetaForce) {
int32_t code = TSDB_CODE_SUCCESS;
int32_t code = TSDB_CODE_SUCCESS;
STscObj *pTscObj = pRequest->pTscObj;
SSqlCallbackWrapper *pWrapper = taosMemoryCalloc(1, sizeof(SSqlCallbackWrapper));
if (pWrapper == NULL) {
@ -1086,7 +1078,6 @@ int32_t prepareAndParseSqlSyntax(SSqlCallbackWrapper **ppWrapper, SRequestObj *p
return code;
}
void doAsyncQuery(SRequestObj *pRequest, bool updateMetaForce) {
SSqlCallbackWrapper *pWrapper = NULL;
int32_t code = TSDB_CODE_SUCCESS;
@ -1133,12 +1124,12 @@ void doAsyncQuery(SRequestObj *pRequest, bool updateMetaForce) {
}
void restartAsyncQuery(SRequestObj *pRequest, int32_t code) {
int32_t reqIdx = 0;
int32_t reqIdx = 0;
SRequestObj *pReqList[16] = {NULL};
SRequestObj *pUserReq = NULL;
pReqList[0] = pRequest;
uint64_t tmpRefId = 0;
SRequestObj* pTmp = pRequest;
uint64_t tmpRefId = 0;
SRequestObj *pTmp = pRequest;
while (pTmp->relation.prevRefId) {
tmpRefId = pTmp->relation.prevRefId;
pTmp = acquireRequest(tmpRefId);
@ -1146,9 +1137,9 @@ void restartAsyncQuery(SRequestObj *pRequest, int32_t code) {
pReqList[++reqIdx] = pTmp;
releaseRequest(tmpRefId);
} else {
tscError("0x%" PRIx64 ", prev req ref 0x%" PRIx64 " is not there, reqId:0x%" PRIx64, pTmp->self,
tmpRefId, pTmp->requestId);
break;
tscError("0x%" PRIx64 ", prev req ref 0x%" PRIx64 " is not there, reqId:0x%" PRIx64, pTmp->self, tmpRefId,
pTmp->requestId);
break;
}
}
@ -1157,11 +1148,11 @@ void restartAsyncQuery(SRequestObj *pRequest, int32_t code) {
pTmp = acquireRequest(tmpRefId);
if (pTmp) {
tmpRefId = pTmp->relation.nextRefId;
removeRequest(pTmp->self);
removeRequest(pTmp->self);
releaseRequest(pTmp->self);
} else {
tscError("0x%" PRIx64 " is not there", tmpRefId);
break;
break;
}
}
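
restartAsyncQuery first walks the prevRefId links to collect the whole request chain (bounded at 16 entries), then walks nextRefId forward removing each node, always pairing acquireRequest with releaseRequest. A standalone sketch of the same acquire/walk/release discipline over an id-keyed registry (the toy registry below stands in for TDengine's ref management):

#include <stdint.h>
#include <stdio.h>

#define MAX_REQS 16

typedef struct Req {
  uint64_t self, prevRefId, nextRefId;
  int refs;   /* toy refcount; real code uses a ref-managed registry */
  int live;
} Req;

static Req g_reqs[4];  /* toy registry indexed by id; id 0 means "no link" */

static Req *acquire_req(uint64_t id) {
  if (id == 0 || id >= 4 || !g_reqs[id].live) return NULL;
  g_reqs[id].refs++;
  return &g_reqs[id];
}
static void release_req(uint64_t id) { if (id && id < 4) g_reqs[id].refs--; }
static void remove_req(uint64_t id)  { if (id && id < 4) g_reqs[id].live = 0; }

int main(void) {
  /* chain: 1 <-> 2 <-> 3, restart starting from the tail (3) */
  for (uint64_t i = 1; i < 4; i++) g_reqs[i] = (Req){i, i - 1, (i < 3) ? i + 1 : 0, 0, 1};

  Req *chain[MAX_REQS] = {&g_reqs[3]};
  int  idx = 0;
  for (uint64_t ref = chain[0]->prevRefId; ref;) {  /* walk backwards, collecting */
    Req *p = acquire_req(ref);
    if (!p) { fprintf(stderr, "prev req 0x%llx is gone\n", (unsigned long long)ref); break; }
    chain[++idx] = p;
    uint64_t next = p->prevRefId;
    release_req(ref);                               /* release as soon as it is recorded */
    ref = next;
  }
  for (uint64_t ref = chain[idx]->self; ref;) {     /* walk forward, removing */
    Req *p = acquire_req(ref);
    if (!p) break;
    uint64_t next = p->nextRefId;
    remove_req(p->self);
    release_req(p->self);
    ref = next;
  }
  printf("head of chain was request %llu\n", (unsigned long long)chain[idx]->self);
  return 0;
}
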

View File

@ -99,13 +99,20 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) {
goto End;
}
int updateEpSet = 1;
if (connectRsp.dnodeNum == 1) {
SEpSet srcEpSet = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);
SEpSet dstEpSet = connectRsp.epSet;
rpcSetDefaultAddr(pTscObj->pAppInfo->pTransporter, srcEpSet.eps[srcEpSet.inUse].fqdn,
dstEpSet.eps[dstEpSet.inUse].fqdn);
} else if (connectRsp.dnodeNum > 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &connectRsp.epSet)) {
SEpSet* pOrig = &pTscObj->pAppInfo->mgmtEp.epSet;
if (srcEpSet.numOfEps == 1) {
rpcSetDefaultAddr(pTscObj->pAppInfo->pTransporter, srcEpSet.eps[srcEpSet.inUse].fqdn,
dstEpSet.eps[dstEpSet.inUse].fqdn);
updateEpSet = 0;
}
}
if (updateEpSet == 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &connectRsp.epSet)) {
SEpSet corEpSet = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);
SEpSet* pOrig = &corEpSet;
SEp* pOrigEp = &pOrig->eps[pOrig->inUse];
SEp* pNewEp = &connectRsp.epSet.eps[connectRsp.epSet.inUse];
tscDebug("mnode epset updated from %d/%d=>%s:%d to %d/%d=>%s:%d in connRsp", pOrig->inUse, pOrig->numOfEps,
@ -131,6 +138,7 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) {
pTscObj->connType = connectRsp.connType;
pTscObj->passInfo.ver = connectRsp.passVer;
pTscObj->authVer = connectRsp.authVer;
hbRegisterConn(pTscObj->pAppInfo->pAppHbMgr, pTscObj->id, connectRsp.clusterId, connectRsp.connType);
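
The reworked processConnectRsp handles the single-endpoint case first: when the server reports one dnode and the locally configured epset also has exactly one endpoint, it only repoints the transporter's default address and skips the epset rewrite; otherwise it rewrites the stored epset whenever the server's copy differs. A standalone sketch of that decision (SEpSet and its fields are modeled loosely here, not TDengine's actual types):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct { char fqdn[64]; unsigned short port; } Ep;
typedef struct { int inUse, numOfEps; Ep eps[3]; } EpSet;

static bool epset_equal(const EpSet *a, const EpSet *b) {
  if (a->numOfEps != b->numOfEps) return false;
  for (int i = 0; i < a->numOfEps; i++)
    if (a->eps[i].port != b->eps[i].port || strcmp(a->eps[i].fqdn, b->eps[i].fqdn) != 0)
      return false;
  return true;
}

static void on_connect_rsp(EpSet *local, const EpSet *fromServer, int dnodeNum) {
  bool updateEpSet = true;
  if (dnodeNum == 1 && local->numOfEps == 1) {
    /* single dnode, single configured ep: just repoint the default address */
    printf("set default addr %s -> %s, keep local epset\n",
           local->eps[local->inUse].fqdn, fromServer->eps[fromServer->inUse].fqdn);
    updateEpSet = false;
  }
  if (updateEpSet && !epset_equal(local, fromServer)) {
    printf("mnode epset updated from %d/%d=>%s:%u to %d/%d=>%s:%u\n",
           local->inUse, local->numOfEps,
           local->eps[local->inUse].fqdn, local->eps[local->inUse].port,
           fromServer->inUse, fromServer->numOfEps,
           fromServer->eps[fromServer->inUse].fqdn, fromServer->eps[fromServer->inUse].port);
    *local = *fromServer;
  }
}

int main(void) {
  EpSet local  = {0, 2, {{"h1", 6030}, {"h2", 6030}}};
  EpSet server = {1, 2, {{"h1", 6030}, {"h3", 6030}}};
  on_connect_rsp(&local, &server, 2);
  return 0;
}
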
@ -427,13 +435,16 @@ static int32_t buildShowVariablesBlock(SArray* pVars, SSDataBlock** block) {
SColumnInfoData infoData = {0};
infoData.info.type = TSDB_DATA_TYPE_VARCHAR;
infoData.info.bytes = SHOW_VARIABLES_RESULT_FIELD1_LEN;
taosArrayPush(pBlock->pDataBlock, &infoData);
infoData.info.type = TSDB_DATA_TYPE_VARCHAR;
infoData.info.bytes = SHOW_VARIABLES_RESULT_FIELD2_LEN;
taosArrayPush(pBlock->pDataBlock, &infoData);
infoData.info.type = TSDB_DATA_TYPE_VARCHAR;
infoData.info.bytes = SHOW_VARIABLES_RESULT_FIELD3_LEN;
taosArrayPush(pBlock->pDataBlock, &infoData);
int32_t numOfCfg = taosArrayGetSize(pVars);
blockDataEnsureCapacity(pBlock, numOfCfg);
@ -449,6 +460,11 @@ static int32_t buildShowVariablesBlock(SArray* pVars, SSDataBlock** block) {
STR_WITH_MAXSIZE_TO_VARSTR(value, pInfo->value, TSDB_CONFIG_VALUE_LEN + VARSTR_HEADER_SIZE);
pColInfo = taosArrayGet(pBlock->pDataBlock, c++);
colDataSetVal(pColInfo, i, value, false);
char scope[TSDB_CONFIG_SCOPE_LEN + VARSTR_HEADER_SIZE] = {0};
STR_WITH_MAXSIZE_TO_VARSTR(scope, pInfo->scope, TSDB_CONFIG_SCOPE_LEN + VARSTR_HEADER_SIZE);
pColInfo = taosArrayGet(pBlock->pDataBlock, c++);
colDataSetVal(pColInfo, i, scope, false);
}
pBlock->info.rows = numOfCfg;
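
The new scope column is materialized with STR_WITH_MAXSIZE_TO_VARSTR, i.e. as a var-string cell: a small length header followed by the payload bytes. A standalone sketch of that layout (the 2-byte length prefix mirrors TDengine's conventional VARSTR_HEADER_SIZE; treat the exact width as an assumption):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define VARSTR_HEADER_SIZE 2  /* assumed: uint16_t length prefix */

/* Write src into buf as [len:u16][bytes], truncating to maxPayload. */
static void to_varstr(char *buf, const char *src, uint16_t maxPayload) {
  uint16_t len = (uint16_t)strlen(src);
  if (len > maxPayload) len = maxPayload;
  memcpy(buf, &len, VARSTR_HEADER_SIZE);
  memcpy(buf + VARSTR_HEADER_SIZE, src, len);
}

int main(void) {
  char cell[VARSTR_HEADER_SIZE + 16];
  to_varstr(cell, "server", 16);
  uint16_t len;
  memcpy(&len, cell, VARSTR_HEADER_SIZE);
  printf("varstr len=%u payload=%.*s\n", len, (int)len, cell + VARSTR_HEADER_SIZE);
  return 0;
}
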

View File

@ -1286,6 +1286,10 @@ static int32_t taosAlterTable(TAOS* taos, void* meta, int32_t metaLen) {
taosArrayPush(pArray, &pVgData);
pQuery = (SQuery*)nodesMakeNode(QUERY_NODE_QUERY);
if (NULL == pQuery) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto end;
}
pQuery->execMode = QUERY_EXEC_MODE_SCHEDULE;
pQuery->msgType = TDMT_VND_ALTER_TABLE;
pQuery->stableQuery = false;
@ -1323,6 +1327,9 @@ end:
int taos_write_raw_block_with_fields(TAOS* taos, int rows, char* pData, const char* tbname, TAOS_FIELD* fields,
int numFields) {
if (!taos || !pData || !tbname) {
return TSDB_CODE_INVALID_PARA;
}
int32_t code = TSDB_CODE_SUCCESS;
STableMeta* pTableMeta = NULL;
SQuery* pQuery = NULL;
@ -1409,6 +1416,9 @@ end:
}
int taos_write_raw_block(TAOS* taos, int rows, char* pData, const char* tbname) {
if (!taos || !pData || !tbname) {
return TSDB_CODE_INVALID_PARA;
}
int32_t code = TSDB_CODE_SUCCESS;
STableMeta* pTableMeta = NULL;
SQuery* pQuery = NULL;
@ -1808,6 +1818,7 @@ end:
}
char* tmq_get_json_meta(TAOS_RES* res) {
if (res == NULL) return NULL;
uDebug("tmq_get_json_meta called");
if (!TD_RES_TMQ_META(res) && !TD_RES_TMQ_METADATA(res)) {
return NULL;
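
tmq_get_json_meta only yields JSON for meta (or metadata) results, so callers should gate on the message type first and free the returned string themselves. A hedged usage sketch against the public TMQ API, using tmq_get_res_type/tmq_free_json_meta as found in TDengine 3.x taos.h (the consumer setup and polling loop are elided):

#include <stdio.h>
#include <taos.h>

/* Assumes `msg` came from tmq_consumer_poll() on a consumer subscribed
   with msg.with.meta enabled. */
static void handle_message(TAOS_RES *msg) {
  tmq_res_t t = tmq_get_res_type(msg);
  if (t == TMQ_RES_TABLE_META || t == TMQ_RES_METADATA) {
    char *json = tmq_get_json_meta(msg);  /* NULL for non-meta results */
    if (json != NULL) {
      printf("meta event: %s\n", json);
      tmq_free_json_meta(json);           /* caller owns the string */
    }
  }
}
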

View File

@ -456,7 +456,7 @@ int smlJsonParseObj(char **start, SSmlLineInfo *element, int8_t *offset) {
static inline int32_t smlParseMetricFromJSON(SSmlHandle *info, cJSON *metric, SSmlLineInfo *elements) {
elements->measureLen = strlen(metric->valuestring);
if (IS_INVALID_TABLE_LEN(elements->measureLen)) {
uError("OTD:0x%" PRIx64 " Metric lenght is 0 or large than 192", info->id);
uError("OTD:0x%" PRIx64 " Metric length is 0 or large than 192", info->id);
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
}

Some files were not shown because too many files have changed in this diff.