other: merge 3.0

Haojun Liao 2023-07-29 01:32:02 +08:00
commit f6e07d1fdb
184 changed files with 5282 additions and 1778 deletions

View File

@ -39,7 +39,7 @@ TDengine is an open-source, high-performance, cloud-native time-series database (Time-Series
# Build
TDengine can currently be installed and run on Linux, Windows, macOS, and other platforms. Applications on any OS can also use taosAdapter's RESTful interface to connect to the taosd server. TDengine supports X64/ARM64 CPUs, and will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future.
TDengine can currently be installed and run on Linux, Windows, macOS, and other platforms. Applications on any OS can also use taosAdapter's RESTful interface to connect to the taosd server. TDengine supports X64/ARM64 CPUs, and will support MIPS64, Alpha64, ARM32, RISC-V, and other CPU architectures in the future. Building with a cross-compiler is not currently supported.
Users can choose to install from source code, a [container](https://docs.taosdata.com/get-started/docker/), an [installation package](https://docs.taosdata.com/get-started/package/), or [Kubernetes](https://docs.taosdata.com/deployment/k8s/) according to their needs. This quick guide applies only to installation from source.
@ -352,4 +352,4 @@ TDengine provides rich application development interfaces, including C/C++, Java
# Join the Technical Discussion Group
The official TDengine community, the "IoT Big Data Group", is open to everyone; you are welcome to join the discussion. Search for the WeChat ID "tdengine1", add Little T as a friend, and you will be invited into the group.
The official TDengine community, the "IoT Big Data Group", is open to everyone; you are welcome to join the discussion. Search for the WeChat ID "tdengine", add Little T as a friend, and you will be invited into the group.

View File

@ -47,7 +47,7 @@ For user manual, system design and architecture, please refer to [TDengine Docum
# Building
At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future.
At the moment, TDengine server supports running on Linux/Windows/macOS systems. Any application can also choose the RESTful interface provided by taosAdapter to connect the taosd service. TDengine supports X64/ARM64 CPU, and it will support MIPS64, Alpha64, ARM32, RISC-V and other CPU architectures in the future. Building in a cross-compiling environment is not currently supported.
You can choose to install through source code, [container](https://docs.tdengine.com/get-started/docker/), [installation package](https://docs.tdengine.com/get-started/package/) or [Kubernetes](https://docs.tdengine.com/deployment/k8s/). This quick guide only applies to installing from source.

View File

@ -18,7 +18,7 @@ The full package of TDengine includes the TDengine Server (`taosd`), TDengine Cl
The standard server installation package includes `taos`, `taosd`, `taosAdapter`, `taosBenchmark`, and sample code. You can also download the Lite package that includes only `taosd` and the C/C++ connector.
The TDengine Community Edition is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.tz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/m1 macOS.
TDengine OSS is released as Deb and RPM packages. The Deb package can be installed on Debian, Ubuntu, and derivative systems. The RPM package can be installed on CentOS, RHEL, SUSE, and derivative systems. A .tar.gz package is also provided for enterprise customers, and you can install TDengine over `apt-get` as well. The .tar.gz package includes `taosdump` and the TDinsight installation script. If you want to use these utilities with the Deb or RPM package, download and install taosTools separately. TDengine can also be installed on x64 Windows and x64/m1 macOS.
## Operating environment requirements
In the Linux system, the minimum requirements for the operating environment are as follows:
@ -201,7 +201,7 @@ You can use the TDengine CLI to monitor your TDengine deployment and execute ad
<TabItem label="Windows" value="windows">
After the installation is complete, please run `sc start taosd` or run `C:\TDengine\taosd.exe` with administrator privilege to start TDengine Server.
After the installation is complete, please run `sc start taosd` or run `C:\TDengine\taosd.exe` with administrator privilege to start TDengine Server. Then run `sc start taosadapter` or `C:\TDengine\taosadapter.exe` with administrator privilege to start taosAdapter, which provides the HTTP/REST service.
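As a quick sketch, both services can be started from an elevated command prompt (assuming the service names registered by the default installer):
```bash
# Start the TDengine server service, then the taosAdapter service (HTTP/REST)
sc start taosd
sc start taosadapter
```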
## Command Line Interface (CLI)

View File

@ -21,17 +21,6 @@ import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
<DocCardList items={useCurrentSidebarCategory().items}/>
```
## Study TDengine Knowledge Map
The TDengine Knowledge Map covers the various knowledge points of TDengine, revealing the invocation relationships and data flow between various conceptual entities. Learning and understanding the TDengine Knowledge Map will help you quickly master the TDengine knowledge system.
<figure>
<center>
<a href="pathname:///img/tdengine-map.svg" target="_blank"><img src="/img/tdengine-map.svg" width="80%" /></a>
<figcaption>Diagram 1. TDengine Knowledge Map</figcaption>
</center>
</figure>
## Join TDengine Community
<table width="100%">

View File

@ -298,7 +298,7 @@ You configure the following parameters when creating a consumer:
| `td.connect.port` | string | Port of the server side | |
| `group.id` | string | Consumer group ID; consumers with the same ID are in the same group | **Required**. Maximum length: 192. Each topic can create up to 100 consumer groups. |
| `client.id` | string | Client ID | Maximum length: 192. |
| `auto.offset.reset` | enum | Initial offset for the consumer group | Specify `earliest`, `latest`, or `none`(default) |
| `auto.offset.reset` | enum | Initial offset for the consumer group | `earliest`: subscribe from the earliest data (default); `latest`: subscribe from the latest data; `none`: subscription fails if there is no committed offset |
| `enable.auto.commit` | boolean | Commit automatically; true: user application doesn't need to commit explicitly; false: user application needs to handle commits itself | Default value is true |
| `auto.commit.interval.ms` | integer | Interval for automatic commits, in milliseconds |
| `msg.with.table.name` | boolean | Specify whether to deserialize table names from messages | default value: false

View File

@ -403,7 +403,7 @@ In this section we will demonstrate 5 examples of developing UDFs in Python
This guide also explains some debugging techniques for Python UDFs.
We assume you are using Linux system and already have TDengine 3.0.4.0+ and Python 3.x.
We assume you are using Linux system and already have TDengine 3.0.4.0+ and Python 3.7+.
Note: **You can't use the print() function to output logs inside a UDF; you must write logs to a specific file or use Python's logging module.**

View File

@ -4,23 +4,31 @@ sidebar_label: Kubernetes
description: This document describes how to deploy TDengine on Kubernetes.
---
TDengine is a cloud-native time-series database that can be deployed on Kubernetes. This document gives a step-by-step description of how you can use YAML files to create a TDengine cluster and introduces common operations for TDengine in a Kubernetes environment.
## Overview
As a time-series database designed for cloud-native architectures, TDengine supports Kubernetes deployment. Here we introduce, step by step, how to use YAML files to create a highly available TDengine cluster from scratch for production use, and highlight the common operations for TDengine in a Kubernetes environment.
To meet [high availability](https://docs.taosdata.com/tdinternal/high-availability/) requirements, a cluster must satisfy the following conditions:
- 3 or more dnodes: TDengine does not allow multiple vnodes of the same vgroup to reside on the same dnode, so if you create a database with 3 replicas, the number of dnodes must be greater than or equal to 3.
- 3 mnodes: the mnode is responsible for managing the entire TDengine cluster. By default a TDengine cluster has only one mnode; if the dnode hosting that mnode goes offline, the entire cluster becomes unavailable.
- 3 database replicas: TDengine replication is configured at the database level, so a database with 3 replicas requires a cluster with at least 3 dnodes. If any single dnode goes offline, the cluster can still be used normally. **If 2 dnodes are offline, the cluster becomes unavailable, because the RAFT-based election can no longer be completed.** (Enterprise edition: in disaster recovery scenarios, if the data files of any node are damaged, the node can be restored by relaunching the dnode.) A minimal example of such a database is sketched after this list.
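The three-replica requirement translates directly into the database definition; a minimal sketch (the database name is illustrative):
```Bash
# Requires a cluster with at least 3 dnodes; "mydb" is a placeholder name
taos -s "create database if not exists mydb replica 3;"
```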
## Prerequisites
Before deploying TDengine on Kubernetes, perform the following:
* Current steps are compatible with Kubernetes v1.5 and later version.
* Install and configure minikube, kubectl, and helm.
* Install and deploy Kubernetes and ensure that it can be accessed and used normally. Update any container registries or other services as necessary.
- This document applies to Kubernetes 1.19 and above.
- This document uses the **kubectl** tool for installation and deployment; install it in advance.
- Kubernetes has already been installed and deployed and can access or update the necessary container registries or other services.
You can download the configuration files in this document from [GitHub](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).
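If you prefer to work from a local copy, the manifests can be fetched with git (a sketch; the branch name follows the `tree/3.0` path in the URL above):
```Bash
# Assumption: the 3.0 branch holds the example manifests
git clone --branch 3.0 https://github.com/taosdata/TDengine-Operator.git
cd TDengine-Operator/src/tdengine
```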
## Configure the service
Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taos`) for use in the next step. Add the ports required by TDengine:
Create a service configuration file named `taosd-service.yaml`. Record the value of `metadata.name` (in this example, `taos`) for use in the next step. Then add the ports required by TDengine and record the value of the selector label `app` (in this example, `tdengine`) for later use:
```YAML
---
apiVersion: v1
kind: Service
@ -31,10 +39,10 @@ metadata:
spec:
ports:
- name: tcp6030
- protocol: "TCP"
protocol: "TCP"
port: 6030
- name: tcp6041
- protocol: "TCP"
protocol: "TCP"
port: 6041
selector:
app: "tdengine"
```
@ -42,10 +50,11 @@ spec:
## Configure the service as StatefulSet
Configure the TDengine service as a StatefulSet.
Create the `tdengine.yaml` file and set `replicas` to 3. In this example, the region is set to Asia/Shanghai and 10 GB of standard storage are allocated per node. You can change the configuration based on your environment and business requirements.
Following Kubernetes guidance on the various deployment types, we will use a StatefulSet as the deployment resource type for TDengine. Create the file `tdengine.yaml`, in which `replicas` sets the number of cluster nodes to 3. The node time zone is China (Asia/Shanghai), and each node is allocated 5 GB of standard storage (refer to the [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) documentation to configure a storage class). You can adjust these values to your actual situation.
Pay special attention to the startupProbe configuration. If a dnode's Pod goes down for a period of time and then restarts, the newly launched dnode will be temporarily unavailable. If the startupProbe window is too small, Kubernetes will conclude that the Pod is in an abnormal state and keep restarting it, so the dnode's Pod will restart frequently and never return to a normal state. Refer to [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).
```YAML
---
apiVersion: apps/v1
kind: StatefulSet
@ -69,14 +78,14 @@ spec:
spec:
containers:
- name: "tdengine"
image: "tdengine/tdengine:3.0.0.0"
image: "tdengine/tdengine:3.0.7.1"
imagePullPolicy: "IfNotPresent"
ports:
- name: tcp6030
- protocol: "TCP"
protocol: "TCP"
containerPort: 6030
- name: tcp6041
- protocol: "TCP"
protocol: "TCP"
containerPort: 6041
env:
# POD_NAME for FQDN config
@ -102,12 +111,18 @@ spec:
# Must set if you want a cluster.
- name: TAOS_FIRST_EP
value: "$(STS_NAME)-0.$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local:$(TAOS_SERVER_PORT)"
# TAOS_FQDN should always be set in k8s env.
# TAOS_FQDN should always be set in k8s env.
- name: TAOS_FQDN
value: "$(POD_NAME).$(SERVICE_NAME).$(STS_NAMESPACE).svc.cluster.local"
volumeMounts:
- name: taosdata
mountPath: /var/lib/taos
startupProbe:
exec:
command:
- taos-check
failureThreshold: 360
periodSeconds: 10
readinessProbe:
exec:
command:
@ -129,266 +144,401 @@ spec:
storageClassName: "standard"
resources:
requests:
storage: "10Gi"
storage: "5Gi"
```
## Use kubectl to deploy TDengine
Run the following commands:
First create the corresponding namespace, and then execute the following commands in sequence:
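The namespace itself is not created by these manifests; a sketch of creating it first (the name `tdengine-test` matches the commands below):
```Bash
kubectl create namespace tdengine-test
```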
```bash
kubectl apply -f taosd-service.yaml
kubectl apply -f tdengine.yaml
```
```Bash
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
```
The preceding configuration generates a TDengine cluster with three nodes in which dnodes are automatically configured. You can run the `show dnodes` command to query the nodes in the cluster:
The above configuration generates a three-node TDengine cluster in which dnodes are configured automatically. You can use the **show dnodes** command to view the nodes of the current cluster:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-2 -- taos -s "show dnodes"
```
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
```
The output is as follows:
```Bash
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
Query OK, 3 rows in database (0.003655s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001853s)
```
View the current mnode:
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-19 17:54:19.520
Query OK, 1 row(s) in set (0.001282s)
```
## Create mnode
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
```
View the mnodes:
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-20 09:19:36.060
*************************** 2.row ***************************
id: 2
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:22:12.838
*************************** 3.row ***************************
id: 3
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:22:23.271
Query OK, 3 row(s) in set (0.003108s)
```
## Enable port forwarding
The kubectl port forwarding feature allows applications to access the TDengine cluster running on Kubernetes.
Kubectl port forwarding enables applications to access TDengine clusters running in Kubernetes environments.
```
kubectl port-forward tdengine-0 6041:6041 &
```
```bash
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
```
Use curl to verify that the TDengine REST API is working on port 6041:
Use **curl** to verify that the TDengine REST API is working on port 6041:
```
$ curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
Handling connection for 6041
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["comp","TINYINT",1],["precision","VARCHAR",2],["status","VARCHAR",10],["retention","VARCHAR",60],["single_stable","BOOL",1],["cachemodel","VARCHAR",11],["cachesize","INT",4],["wal_level","TINYINT",1],["wal_fsync_period","INT",4],["wal_retention_period","INT",4],["wal_retention_size","BIGINT",8]],"data":[["information_schema",null,null,16,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null],["performance_schema",null,null,10,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null]],"rows":2}
```
```bash
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
```
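Beyond `show databases`, any SQL can be sent the same way; a sketch of writing and reading a row over REST (the database and table names are illustrative):
```Bash
curl -u root:taosdata -d "create database if not exists demo" 127.0.0.1:6041/rest/sql
curl -u root:taosdata -d "create table if not exists demo.t1(ts timestamp, n int)" 127.0.0.1:6041/rest/sql
curl -u root:taosdata -d "insert into demo.t1 values(now, 1)" 127.0.0.1:6041/rest/sql
curl -u root:taosdata -d "select * from demo.t1" 127.0.0.1:6041/rest/sql
```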
## Enable the dashboard for visualization
The minikube dashboard command enables visualized cluster management.
```
$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
```
In some public clouds, minikube cannot be remotely accessed if it is bound to 127.0.0.1. In this case, use the kubectl proxy command to map the port to 0.0.0.0. Then, you can access the dashboard by using a web browser to open the dashboard URL above on the public IP address and port of the virtual machine.
```
$ kubectl proxy --accept-hosts='^.*$' --address='0.0.0.0'
```
## Test cluster
### Data preparation
#### taosBenchmark
Create a database with 3 replicas using taosBenchmark, write 100 million rows of data, and then query the result:
```Bash
# -I stmt: parameter-binding interface; -d test: database name; -n 10000: rows per table;
# -t 10000: number of tables; -a 3: replica count
kubectl exec -it tdengine-0 -n tdengine-test -- taosBenchmark -I stmt -d test -n 10000 -t 10000 -a 3
# query data
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select count(*) from test.meters;"
taos> select count(*) from test.meters;
count(*) |
========================
100000000 |
Query OK, 1 row(s) in set (0.103537s)
```
View the vnode distribution with `show dnodes`:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 8 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
2 | tdengine-1.ta... | 8 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
3 | tdengine-2.ta... | 8 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001357s)
```
View the vgroup distribution with `show vgroups`:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test.vgroups"
taos> show test.vgroups
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
==============================================================================================================================================================================================
2 | test | 1267 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
3 | test | 1215 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
4 | test | 1215 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
5 | test | 1307 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
6 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
7 | test | 1275 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
8 | test | 1231 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
9 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
Query OK, 8 row(s) in set (0.001488s)
```
#### Manual creation
Create a database `test1` with 3 replicas, create a table, and write 2 rows of data:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- \
taos -s \
"create database if not exists test1 replica 3;
use test1;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
```
View the vgroup distribution of `test1` with `show test1.vgroups`:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test1.vgroups"
taos> show test1.vgroups
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
==============================================================================================================================================================================================
10 | test1 | 1 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
11 | test1 | 0 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
Query OK, 2 row(s) in set (0.001489s)
```
### Test fault tolerance
Take the dnode hosting the mnode leader (dnode1) offline:
```Bash
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 0/1 ErrImagePull 2 (2s ago) 20m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (6m48s ago) 20m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 21m 10.244.1.223 node85 <none> <none>
```
At this point, the cluster's mnodes hold a new election, and the mnode on dnode2 becomes the leader.
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
Welcome to the TDengine Command Line Interface, Client Version:3.0.7.1.202307190706
Copyright (c) 2022 by TDengine, all rights reserved.
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: offline
status: offline
create_time: 2023-07-19 17:54:18.559
reboot_time: 1970-01-01 08:00:00.000
*************************** 2.row ***************************
id: 2
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:32:00.227
*************************** 3.row ***************************
id: 3
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:32:00.026
Query OK, 3 row(s) in set (0.001513s)
```
The cluster can still be read from and written to normally:
```Bash
# insert
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "insert into test1.t1 values(now, 1)(now+1s, 2);"
taos> insert into test1.t1 values(now, 1)(now+1s, 2);
Insert OK, 2 row(s) affected (0.002098s)
# select
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select *from test1.t1"
taos> select *from test1.t1
ts | n |
========================================
2023-07-19 18:04:58.104 | 1 |
2023-07-19 18:04:59.104 | 2 |
2023-07-19 18:06:00.303 | 1 |
2023-07-19 18:06:01.303 | 2 |
Query OK, 4 row(s) in set (0.001994s)
```
Similarly, if a non-leader mnode is dropped, reads and writes of course continue to work normally, so this is not demonstrated further here.
## Scaling Out Your Cluster
TDengine clusters can scale automatically:
A TDengine cluster supports automatic scale-out:
```Bash
kubectl scale statefulsets tdengine --replicas=4
```
The preceding command increases the number of replicas to 4. After running this command, query the pod status:
The parameter `--replicas=4` in the above command expands the TDengine cluster to 4 nodes. After execution, first check the status of the Pods:
```bash
kubectl get pods -l app=tdengine
```
```Bash
kubectl get pod -l app=tdengine -n tdengine-test -o wide
```
The output is as follows:
```
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 161m
tdengine-1 1/1 Running 0 161m
tdengine-2 1/1 Running 0 32m
tdengine-3 1/1 Running 0 32m
```
```Plain
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h26m ago) 6h53m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (6h39m ago) 6h53m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h16m 10.244.1.224 node85 <none> <none>
tdengine-3 1/1 Running 0 3m24s 10.244.2.76 node86 <none> <none>
```
The status of all pods is Running. Once the pod status changes to Ready, you can check the dnode status:
At this point, the Pod status is still Running; the new dnode appears in the TDengine cluster only after the Pod status changes to `ready`:
```bash
kubectl exec -i -t tdengine-3 -- taos -s "show dnodes"
```
```Bash
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
```
The following output shows that the TDengine cluster has been expanded to 4 replicas:
The dnode list of the expanded four-node TDengine cluster is as follows:
```Plain
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
4 | tdengine-3.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:33:16.039 | |
Query OK, 4 rows in database (0.008377s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
Query OK, 4 row(s) in set (0.003628s)
```
## Scaling In Your Cluster
When you scale in a TDengine cluster, your data is migrated to different nodes. You must run the drop dnodes command in TDengine to remove dnodes before scaling in your Kubernetes environment.
Since TDengine migrates data between nodes during scale-in and scale-out, before using **kubectl** to scale in you must first remove the dnode with the `drop dnode` command (**if the cluster contains a database with 3 replicas, the number of dnodes after scale-in must still be greater than or equal to 3; otherwise the drop dnode operation is aborted**); only after the node removal is complete should you scale in the Kubernetes cluster.
Note: In a Kubernetes StatefulSet service, the newest pods are always removed first. For this reason, when you scale in your TDengine cluster, ensure that you drop the newest dnodes.
Note: In a StatefulSet, Kubernetes Pods can only be removed in reverse order of creation, so TDengine dnodes must also be dropped in reverse order of creation; otherwise the Pod will end up in an error state.
```
$ kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 4"
```
```bash
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
```
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "drop dnode 4"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
Query OK, 3 rows in database (0.004861s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
Query OK, 3 row(s) in set (0.003324s)
```
Verify that the dnode have been successfully removed by running the `kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"` command. Then run the following command to remove the pod:
After confirming that the removal was successful (run `kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"` to check the dnode list), use the kubectl command to remove the Pod:
```
kubectl scale statefulsets tdengine --replicas=3
```
```Bash
kubectl scale statefulsets tdengine --replicas=3 -n tdengine-test
```
The newest pod in the deployment is removed. Run the `kubectl get pods -l app=tdengine` command to query the pod status:
The last Pod will be deleted. Use the command `kubectl get pods -l app=tdengine` to check the Pod status:
```
$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 4m7s
tdengine-1 1/1 Running 0 3m55s
tdengine-2 1/1 Running 0 2m28s
```
```Plain
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h55m ago) 7h22m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (7h9m ago) 7h23m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h45m 10.244.1.224 node85 <none> <none>
```
After the pod has been removed, manually delete the PersistentVolumeClaim (PVC). Otherwise, future scale-outs will attempt to use existing data.
After the Pod is deleted, the PVC needs to be deleted manually; otherwise the previous data will be reused at the next scale-out, preventing the node from joining the cluster normally.
```bash
$ kubectl delete pvc taosdata-tdengine-3
```
```Bash
kubectl delete pvc taosdata-tdengine-3 -n tdengine-test
```
Your cluster has now been safely scaled in, and you can scale it out again as necessary.
The cluster state at this time is safe and can be scaled up again if needed.
```bash
$ kubectl scale statefulsets tdengine --replicas=4
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME         READY   STATUS              RESTARTS   AGE
tdengine-0   1/1     Running             0          35m
tdengine-1   1/1     Running             0          34m
tdengine-2   1/1     Running             0          12m
tdengine-3   0/1     ContainerCreating   0          4s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME         READY   STATUS    RESTARTS   AGE
tdengine-0   1/1     Running   0          35m
tdengine-1   1/1     Running   0          34m
tdengine-2   1/1     Running   0          12m
tdengine-3   0/1     Running   0          7s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
5 | tdengine-2.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:01:36.479 | |
6 | tdengine-3.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:13:54.411 | |
Query OK, 4 row(s) in set (0.001348s)
```
```Bash
kubectl scale statefulsets tdengine --replicas=4 -n tdengine-test
statefulset.apps/tdengine scaled
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h59m ago) 7h27m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (7h13m ago) 7h27m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h49m 10.244.1.224 node85 <none> <none>
tdengine-3 1/1 Running 0 20s 10.244.2.77 node86 <none> <none>
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
5 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | | | |
Query OK, 4 row(s) in set (0.003881s)
```
## Remove a TDengine Cluster
To fully remove a TDengine cluster, you must delete its statefulset, svc, configmap, and pvc entries:
```bash
kubectl delete statefulset -l app=tdengine
kubectl delete svc -l app=tdengine
kubectl delete pvc -l app=tdengine
kubectl delete configmap taoscfg
```
To completely remove the TDengine cluster, you need to clean up the statefulset, svc, configmap, and pvc respectively:
> **When deleting the PVC, pay attention to the PV's persistentVolumeReclaimPolicy. It is recommended to change it to Delete, so that when the PVC is deleted the PV is cleaned up automatically, together with the underlying CSI storage resources. If no policy is configured to clean up the PV automatically when the PVC is deleted, then after deleting the PVC and cleaning up the PV manually, the CSI storage resources corresponding to the PV may not be released.**
```Bash
kubectl delete statefulset -l app=tdengine -n tdengine-test
kubectl delete svc -l app=tdengine -n tdengine-test
kubectl delete pvc -l app=tdengine -n tdengine-test
kubectl delete configmap taoscfg -n tdengine-test
```
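As the note above suggests, you may want the bound PVs' reclaim policy set to Delete before removing the PVCs; a sketch (the PV name is a placeholder you must look up first):
```Bash
# List PVs bound to the cluster's PVCs, then patch one; "pvc-xxxx" is hypothetical
kubectl get pv | grep tdengine-test
kubectl patch pv pvc-xxxx -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```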
## Troubleshooting
### Error 1
If you remove a pod without first running `drop dnode`, some TDengine nodes will go offline.
No "drop dnode" is directly reduced. Since the TDengine has not deleted the node, the reduced pod causes some nodes in the TDengine cluster to be offline.
```
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
5 | tdengine-2.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:01:36.479 | status msg timeout |
6 | tdengine-3.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:13:54.411 | status msg timeout |
Query OK, 4 row(s) in set (0.001323s)
```
```Plain
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
5 | tdengine-3.ta... | 0 | 16 | offline | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | status msg timeout | | |
Query OK, 4 row(s) in set (0.003862s)
```
### Error 2
If the number of nodes after a scale-in is less than the value of the replica parameter, the cluster will go down:
Create a database with replica set to 2 and add data.
```bash
kubectl exec -i -t tdengine-0 -- \
taos -s \
"create database if not exists test replica 2;
use test;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
```
Scale in to one node:
```bash
kubectl scale statefulsets tdengine --replicas=1
```
In the TDengine CLI, you can see that no database operations succeed:
```
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000845s)
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000837s)
taos> use test;
Database changed.
taos> insert into t1 values(now, 3);
DB error: Unable to resolve FQDN (0.013874s)
```
## Finally
For high availability and high reliability of TDengine in a Kubernetes environment, hardware failure and disaster recovery are addressed at two levels:
1. The disaster recovery capability of the underlying distributed block storage, namely its multi-replica capability: popular distributed block storage systems such as Ceph support multiple replicas and can extend storage replicas to different racks, cabinets, equipment rooms, and data centers (or you can directly use the block storage services provided by public cloud vendors).
2. TDengine's own disaster recovery: in TDengine Enterprise, when a dnode goes permanently offline (for example, disk damage on a physical machine causing data loss), a blank dnode can be relaunched to take over the work of the original dnode.
Finally, welcome to [TDengine Cloud](https://cloud.tdengine.com/) to experience the one-stop, fully managed TDengine Cloud as a Service.
> TDengine Cloud is a minimalist, fully managed time-series data processing Cloud as a Service platform developed based on the open-source time-series database TDengine. In addition to a high-performance time-series database, it offers system functions such as caching, subscription, and stream computing, and provides convenient and secure data sharing along with numerous enterprise-grade features. It allows enterprises in the fields of IoT, Industrial Internet, Finance, and IT operation monitoring to significantly reduce labor and operating costs for managing time-series data.

View File

@ -56,7 +56,7 @@ database_option: {
- WAL_FSYNC_PERIOD: specifies the interval (in milliseconds) at which data is written from the WAL to disk. This parameter takes effect only when the WAL parameter is set to 2. The default value is 3000. Enter a value between 0 and 180000. The value 0 indicates that incoming data is immediately written to disk.
- MAXROWS: specifies the maximum number of rows recorded in a block. The default value is 4096.
- MINROWS: specifies the minimum number of rows recorded in a block. The default value is 100.
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default. The Enterprise Edition supports [Tiered Storage](https://docs.tdengine.com/tdinternal/arch/#tiered-storage) function, thus multiple KEEP values (comma separated and up to 3 values supported, and meet keep 0 <= keep 1 <= keep 2, e.g. KEEP 100h,100d,3650d) are supported; the Community Edition does not support Tiered Storage function (although multiple keep values are configured, they do not take effect, only the maximum keep value is used as KEEP).
- KEEP: specifies the time for which data is retained. Enter a value between 1 and 365000. The default value is 3650. The value of the KEEP parameter must be greater than or equal to the value of the DURATION parameter. TDengine automatically deletes data that is older than the value of the KEEP parameter. You can use m (minutes), h (hours), and d (days) as the unit, for example KEEP 100h or KEEP 10d. If you do not include a unit, d is used by default. TDengine Enterprise supports [Tiered Storage](https://docs.tdengine.com/tdinternal/arch/#tiered-storage) function, thus multiple KEEP values (comma separated and up to 3 values supported, and meet keep 0 <= keep 1 <= keep 2, e.g. KEEP 100h,100d,3650d) are supported; TDengine OSS does not support Tiered Storage function (although multiple keep values are configured, they do not take effect, only the maximum keep value is used as KEEP).
- PAGES: specifies the number of pages in the metadata storage engine cache on each vnode. Enter a value greater than or equal to 64. The default value is 256. The space occupied by metadata storage on each vnode is equal to the product of the values of the PAGESIZE and PAGES parameters. The space occupied by default is 1 MB.
- PAGESIZE: specifies the size (in KB) of each page in the metadata storage engine cache on each vnode. The default value is 4. Enter a value between 1 and 16384.
- PRECISION: specifies the precision at which a database records timestamps. Enter ms for milliseconds, us for microseconds, or ns for nanoseconds. The default value is ms.
@ -73,7 +73,7 @@ database_option: {
- TABLE_PREFIX: When positive, the prefix of this length in the table name is ignored when distributing a table to a vgroup; when negative, only the prefix of this length is used. The default value is 0. For example, with table name v30001, "0001" is used for distribution if TSDB_PREFIX is set to 2, but "v3" is used if TSDB_PREFIX is set to -2. This helps you control the distribution of tables.
- TABLE_SUFFIX: When positive, the suffix of this length in the table name is ignored when distributing a table to a vgroup; when negative, only the suffix of this length is used. The default value is 0. For example, with table name v30001, "v300" is used for distribution if TSDB_SUFFIX is set to 2, but "01" is used if TSDB_SUFFIX is set to -2. This helps you control the distribution of tables.
- TSDB_PAGESIZE: The page size of the data storage engine in a vnode. The unit is KB. The default is 4 KB. The range is 1 to 16384, that is, 1 KB to 16 MB.
- WAL_RETENTION_PERIOD: specifies the maximum time of which WAL files are to be kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value 0. A value of 0 indicates that WAL files are not required to keep for consumption. Alter it with a proper value at first to create topics.
- WAL_RETENTION_PERIOD: specifies the maximum time for which WAL files are kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value is 3600, which means the data in the latest 3600 seconds is kept in the WAL for data subscription. Adjust this parameter to a value appropriate for your data subscription needs; a sketch follows this list.
- WAL_RETENTION_SIZE: specifies the maximum total size of which WAL files are to be kept for consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that the total size of WAL files to keep for consumption has no upper limit.
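A minimal sketch of setting WAL retention for subscription (the database name and values are illustrative):
```Bash
# Assumption: WAL_RETENTION_PERIOD can be set at create time and adjusted later with ALTER DATABASE
taos -s "create database if not exists db_sub wal_retention_period 21600;"
taos -s "alter database db_sub wal_retention_period 86400;"
```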
### Example Statement

View File

@ -28,47 +28,47 @@ This document introduces the tables of INFORMATION_SCHEMA and their structure.
Provides information about dnodes. Similar to SHOW DNODES.
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | ------------------------- |
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes on the dnode |
| 3 | status | BINARY(10) | Current status |
| 4 | note | BINARY(256) | Reason for going offline or other information |
| 5 | id | SMALLINT | Dnode ID |
| 6 | endpoint | BINARY(134) | Dnode endpoint |
| 7 | create | TIMESTAMP | Creation time |
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | vnodes | SMALLINT | Current number of vnodes on the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 2 | support_vnodes | SMALLINT | Maximum number of vnodes on the dnode |
| 3 | status | BINARY(10) | Current status |
| 4 | note | BINARY(256) | Reason for going offline or other information |
| 5 | id | SMALLINT | Dnode ID |
| 6 | endpoint | BINARY(134) | Dnode endpoint |
| 7 | create | TIMESTAMP | Creation time |
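As the note in the table says, `vnodes` is a TDengine keyword and must be escaped with backticks when queried; a sketch (single quotes keep the backticks away from the shell):
```Bash
taos -s 'select `vnodes`, support_vnodes, status, endpoint from information_schema.ins_dnodes;'
```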
## INS_MNODES
Provides information about mnodes. Similar to SHOW MNODES.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------------ |
| 1 | id | SMALLINT | Mnode ID |
| 2 | endpoint | BINARY(134) | Mnode endpoint |
| 3 | role | BINARY(10) | Current role |
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
| 5 | create_time | TIMESTAMP | Creation time |
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | ------------------------------------------ |
| 1 | id | SMALLINT | Mnode ID |
| 2 | endpoint | BINARY(134) | Mnode endpoint |
| 3 | role | BINARY(10) | Current role |
| 4 | role_time | TIMESTAMP | Time at which the current role was assumed |
| 5 | create_time | TIMESTAMP | Creation time |
## INS_QNODES
Provides information about qnodes. Similar to SHOW QNODES.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------ |
| 1 | id | SMALLINT | Qnode ID |
| 2 | endpoint | BINARY(134) | Qnode endpoint |
| 3 | create_time | TIMESTAMP | Creation time |
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --------------- |
| 1 | id | SMALLINT | Qnode ID |
| 2 | endpoint | BINARY(134) | Qnode endpoint |
| 3 | create_time | TIMESTAMP | Creation time |
## INS_CLUSTER
Provides information about the cluster.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ---------- |
| 1 | id | BIGINT | Cluster ID |
| 2 | name | BINARY(134) | Cluster name |
| 3 | create_time | TIMESTAMP | Creation time |
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | --------------- |
| 1 | id | BIGINT | Cluster ID |
| 2 | name | BINARY(134) | Cluster name |
| 3 | create_time | TIMESTAMP | Creation time |
## INS_DATABASES
@ -109,191 +109,191 @@ Provides information about user-created databases. Similar to SHOW DATABASES.
Provides information about user-defined functions.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | -------------- |
| 1 | name | BINARY(64) | Function name |
| 2 | comment | BINARY(255) | Function description. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 3 | aggregate | INT | Whether the UDF is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | output_type | BINARY(31) | Output data type |
| 5 | create_time | TIMESTAMP | Creation time |
| 6 | code_len | INT | Length of the source code |
| 7 | bufsize | INT | Buffer size |
| 8 | func_language | BINARY(31) | UDF programming language |
| 9 | func_body | BINARY(16384) | UDF function body |
| 10 | func_version | INT | UDF function version. starting from 0. Increasing by 1 each time it is updated|
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | name | BINARY(64) | Function name |
| 2 | comment | BINARY(255) | Function description. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 3 | aggregate | INT | Whether the UDF is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | output_type | BINARY(31) | Output data type |
| 5 | create_time | TIMESTAMP | Creation time |
| 6 | code_len | INT | Length of the source code |
| 7 | bufsize | INT | Buffer size |
| 8 | func_language | BINARY(31) | UDF programming language |
| 9 | func_body | BINARY(16384) | UDF function body |
| 10 | func_version | INT | UDF function version, starting from 0; increased by 1 each time the function is updated |
## INS_INDEXES
Provides information about user-created indices. Similar to SHOW INDEX.
| # | **Column** | **Data Type** | **Description** |
| --- | :--------------: | ------------ | ---------------------------------------------------------------------------------- |
| 1 | db_name | BINARY(32) | Database containing the table with the specified index |
| 2 | table_name | BINARY(192) | Table containing the specified index |
| 3 | index_name | BINARY(192) | Index name |
| 4 | db_name | BINARY(64) | Index column |
| 5 | index_type | BINARY(10) | SMA or FULLTEXT index |
| 6 | index_extensions | BINARY(256) | Other information For SMA indices, this shows a list of functions. For FULLTEXT indices, this is null. |
| # | **Column** | **Data Type** | **Description** |
| --- | :--------------: | ------------- | --------------------------------------------------------------------- |
| 1 | db_name | BINARY(32) | Database containing the table with the specified index |
| 2 | table_name | BINARY(192) | Table containing the specified index |
| 3 | index_name | BINARY(192) | Index name |
| 4 | db_name | BINARY(64) | Index column |
| 5 | index_type | BINARY(10) | SMA or tag index |
| 6 | index_extensions | BINARY(256) | Other information. For SMA/tag indices, this shows a list of functions |
## INS_STABLES
Provides information about supertables.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------ | ------------------------ |
| 1 | stable_name | BINARY(192) | Supertable name |
| 2 | db_name | BINARY(64) | All databases in the supertable |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | tags | INT | Number of tags. It should be noted that `tags` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 6 | last_update | TIMESTAMP | Last updated time |
| 7 | table_comment | BINARY(1024) | Table description |
| 8 | watermark | BINARY(64) | Window closing time. It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results. It should be noted that `max_delay` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 10 | rollup | BINARY(128) | Rollup aggregate function. It should be noted that `rollup` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | stable_name | BINARY(192) | Supertable name |
| 2 | db_name | BINARY(64) | All databases in the supertable |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | tags | INT | Number of tags. It should be noted that `tags` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 6 | last_update | TIMESTAMP | Last updated time |
| 7 | table_comment | BINARY(1024) | Table description |
| 8 | watermark | BINARY(64) | Window closing time. It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results. It should be noted that `max_delay` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 10 | rollup | BINARY(128) | Rollup aggregate function. It should be noted that `rollup` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
## INS_TABLES
Provides information about standard tables and subtables.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------ | ---------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | stable_name | BINARY(192) | Supertable name |
| 6 | uid | BIGINT | Table ID |
| 7 | vgroup_id | INT | Vgroup ID |
| 8 | ttl | INT | Table time-to-live. It should be noted that `ttl` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | table_comment | BINARY(1024) | Table description |
| 10 | type | BINARY(20) | Table type |
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | stable_name | BINARY(192) | Supertable name |
| 6 | uid | BIGINT | Table ID |
| 7 | vgroup_id | INT | Vgroup ID |
| 8 | ttl | INT | Table time-to-live. It should be noted that `ttl` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | table_comment | BINARY(1024) | Table description |
| 10 | type | BINARY(20) | Table type |
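A sketch of using INS_TABLES to inspect table placement (the database and supertable names are illustrative):
```Bash
taos -s "select table_name, vgroup_id from information_schema.ins_tables where db_name='test' and stable_name='meters' limit 5;"
```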
## INS_TAGS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | ---------------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | stable_name | BINARY(192) | Supertable name |
| 4 | tag_name | BINARY(64) | Tag name |
| 5 | tag_type | BINARY(64) | Tag type |
| 6 | tag_value | BINARY(16384) | Tag value |
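The tag values of every subtable of a supertable can be read from this view. A minimal sketch, assuming a supertable named `meters` in database `test`:

```sql
-- Read the tag schema and current tag values of all subtables of `meters`.
SELECT table_name, tag_name, tag_type, tag_value
FROM information_schema.ins_tags
WHERE db_name = 'test' AND stable_name = 'meters';
```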
## INS_COLUMNS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | ---------------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | table_type | BINARY(21) | Table type |
| 4 | col_name | BINARY(64) | Column name |
| 5 | col_type | BINARY(32) | Column type |
| 6 | col_length | INT | Column length |
| 7 | col_precision | INT | Column precision |
| 8 | col_scale | INT | Column scale |
| 9 | col_nullable | INT | Column nullable |
## INS_USERS
Provides information about TDengine users.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | -------- |
| 1 | user_name | BINARY(23) | User name |
| 2 | privilege | BINARY(256) | User permissions |
| 3 | create_time | TIMESTAMP | Creation time |
## INS_GRANTS
Provides information about TDengine Enterprise Edition permissions.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | -------------------------------------------------- |
| 1 | version | BINARY(9) | Whether the deployment is a licensed or trial version |
| 2 | cpu_cores | BINARY(9) | CPU cores included in license |
| 3 | dnodes | BINARY(10) | Dnodes included in license. It should be noted that `dnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | streams | BINARY(10) | Streams included in license. It should be noted that `streams` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 5 | users | BINARY(10) | Users included in license. It should be noted that `users` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 6 | accounts | BINARY(10) | Accounts included in license. It should be noted that `accounts` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 7 | storage | BINARY(21) | Storage space included in license. It should be noted that `storage` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 8 | connections | BINARY(21) | Client connections included in license. It should be noted that `connections` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | databases | BINARY(11) | Databases included in license. It should be noted that `databases` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 10 | speed | BINARY(9) | Write speed specified in license (data points per second) |
| 11 | querytime | BINARY(9) | Total query time specified in license |
| 12 | timeseries | BINARY(21) | Number of metrics included in license |
| 13 | expired | BINARY(5) | Whether the license has expired |
| 14 | expire_time | BINARY(19) | When the trial period expires |
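License status can be checked by selecting the expiry columns from this view. A minimal sketch:

```sql
-- Check the license type, whether it has expired, and when the trial ends.
SELECT version, expired, expire_time
FROM information_schema.ins_grants;
```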
## INS_VGROUPS
Provides information about vgroups.
| # | **Column** | **Data Type** | **Description** |
| --- | :-------: | ------------ | ------------------------------------------------------ |
| 1 | vgroup_id | INT | Vgroup ID |
| 2 | db_name | BINARY(32) | Database name |
| 3 | tables | INT | Tables in vgroup. It should be noted that `tables` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | status | BINARY(10) | Vgroup status |
| 5 | v1_dnode | INT | Dnode ID of first vgroup member |
| 6 | v1_status | BINARY(10) | Status of first vgroup member |
| 7 | v2_dnode | INT | Dnode ID of second vgroup member |
| 8 | v2_status | BINARY(10) | Status of second vgroup member |
| 9 | v3_dnode | INT | Dnode ID of third vgroup member |
| 10 | v3_status | BINARY(10) | Status of third vgroup member |
| 11 | nfiles | INT | Number of data and metadata files in the vgroup |
| 12 | file_size | INT | Size of the data and metadata files in the vgroup |
| 13 | tsma | TINYINT | Whether time-range-wise SMA is enabled. 1 means enabled; 0 means disabled. |
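Because `tables` is a TDengine keyword, it must be backtick-escaped when selected explicitly. A minimal sketch, assuming a database named `test`:

```sql
-- Show the table count and the first two member dnodes of each vgroup of `test`;
-- the keyword column `tables` is escaped with backticks.
SELECT vgroup_id, `tables`, v1_dnode, v1_status, v2_dnode, v2_status
FROM information_schema.ins_vgroups
WHERE db_name = 'test';
```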
## INS_CONFIGS
Provides system configuration information.
| # | **Column** | **Data Type** | **Description** |
| --- | :------: | ------------ | ------------ |
| 1 | name | BINARY(32) | Parameter |
| 2 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
## INS_DNODE_VARIABLES
Provides dnode configuration information.
| # | **Column** | **Data Type** | **Description** |
| --- | :------: | ------------ | ------------ |
| 1 | dnode_id | INT | Dnode ID |
| 2 | name | BINARY(32) | Parameter |
| 3 | value | BINARY(64) | Value. It should be noted that `value` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
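Both views above share the keyword column `value`, which must be backtick-escaped in the select list. A minimal sketch; the parameter name `timezone` is an assumed example:

```sql
-- Compare one configuration parameter across all dnodes;
-- the keyword column `value` is escaped with backticks.
SELECT dnode_id, name, `value`
FROM information_schema.ins_dnode_variables
WHERE name = 'timezone';  -- assumed parameter name, for illustration
```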
## INS_TOPICS
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------ | ------------------------------ |
| 1 | topic_name | BINARY(192) | Topic name |
| 2 | db_name | BINARY(64) | Database for the topic |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | sql | BINARY(1024) | SQL statement used to create the topic |
## INS_SUBSCRIPTIONS
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------ | ------------------------ |
| 1 | topic_name | BINARY(204) | Subscribed topic |
| 2 | consumer_group | BINARY(193) | Subscribed consumer group |
| 3 | vgroup_id | INT | Vgroup ID for the consumer |
| 4 | consumer_id | BIGINT | Consumer ID |
| 5 | offset | BINARY(64) | Consumption progress |
| 6 | rows | BIGINT | Number of consumption items |
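Consumption progress can be inspected per vgroup through this view. A minimal sketch, assuming a topic named `topic_meters`:

```sql
-- Inspect the progress of every consumer assigned to the assumed topic.
SELECT consumer_group, vgroup_id, consumer_id, offset
FROM information_schema.ins_subscriptions
WHERE topic_name = 'topic_meters';
```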
## INS_STREAMS
| # | **Column** | **Data Type** | **Description** |
| --- | :----------: | ------------ | --------------------------------------- |
| 1 | stream_name | BINARY(64) | Stream name |
| 2 | create_time | TIMESTAMP | Creation time |
| 3 | sql | BINARY(1024) | SQL statement used to create the stream |
| 4 | status | BINARY(20) | Current status |
| 5 | source_db | BINARY(64) | Source database |
| 6 | target_db | BINARY(64) | Target database |
| 7 | target_table | BINARY(192) | Target table |
| 8 | watermark | BIGINT | Watermark (see stream processing documentation). It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | trigger | INT | Method of triggering the result push (see stream processing documentation). It should be noted that `trigger` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
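Because `watermark` and `trigger` are TDengine keywords, a stream health check escapes both. A minimal sketch:

```sql
-- List all streams with their status, watermark, and trigger mode;
-- the keyword columns `watermark` and `trigger` are escaped with backticks.
SELECT stream_name, status, `watermark`, `trigger`
FROM information_schema.ins_streams;
```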

View File

@ -4,12 +4,12 @@ sidebar_label: Indexing
description: This document describes the SQL statements related to indexing in TDengine.
---
TDengine supports SMA and FULLTEXT indexing.
TDengine supports SMA and tag indexing.
## Create an Index
```sql
CREATE FULLTEXT INDEX index_name ON tb_name (col_name [, col_name] ...)
CREATE INDEX index_name ON tb_name (col_name [, col_name] ...)
CREATE SMA INDEX index_name ON tb_name index_option
@ -46,10 +46,6 @@ SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDIN
ALTER LOCAL 'querySmaOptimize' '0';
```
### FULLTEXT Indexing
Creates a text index for the specified column. FULLTEXT indexing improves performance for queries with text filtering. The index_option syntax is not supported for FULLTEXT indexing. FULLTEXT indexing is supported for JSON tag columns only. Multiple columns cannot be indexed together. However, separate indices can be created for each column.
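With FULLTEXT indexing replaced by tag indexing, a minimal sketch of the revised `CREATE INDEX` form; the supertable `meters` and its tag column `location` are assumed for illustration:

```sql
-- Create a named index on the `location` tag column of supertable `meters`.
CREATE INDEX idx_location ON meters (location);
```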
## Delete an Index
```sql

View File

@ -31,7 +31,8 @@ Websocket connections are supported on all platforms that can run Go.
| connector-rust version | TDengine version | major features |
| :----------------: | :--------------: | :--------------------------------------------------: |
| v0.8.12 | 3.0.5.0 or later | TMQ: Get consuming progress and seek offset to consume. |
| v0.9.2 | 3.0.7.0 or later | STMT: Get tag_fields and col_fields via WebSocket. |
| v0.8.12 | 3.0.5.0 | TMQ: Get consuming progress and seek offset to consume. |
| v0.8.0 | 3.0.4.0 | Support schemaless insert. |
| v0.7.6 | 3.0.3.0 | Support req_id in query. |
| v0.6.0 | 3.0.0.0 | Base features. |
@ -648,12 +649,12 @@ stmt.execute()?;
//stmt.execute()?;
```
For a working example, see [GitHub](https://github.com/taosdata/taos-connector-rust/blob/main/examples/bind.rs).
For a working example, see [GitHub](https://github.com/taosdata/taos-connector-rust/blob/main/taos/examples/bind.rs).
For information about other structure APIs, see the [Rust documentation](https://docs.rs/taos).
[taos]: https://github.com/taosdata/rust-connector-taos
[taos]: https://github.com/taosdata/taos-connector-rust
[r2d2]: https://crates.io/crates/r2d2
[TaosBuilder]: https://docs.rs/taos/latest/taos/struct.TaosBuilder.html
[TaosCfg]: https://docs.rs/taos/latest/taos/struct.TaosCfg.html

View File

@ -59,9 +59,9 @@ The different database framework specifications for various programming language
| -------------------------------------- | ------------- | --------------- | ------------- | ------------- | ------------- | ------------- |
| **Connection Management** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Regular Query** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Parameter Binding** | Supported | Not Supported | Supported | Supported | Not Supported | Supported |
| **Parameter Binding** | Supported | Supported | Supported | Supported | Not Supported | Supported |
| **Subscription (TMQ)** | Supported | Supported | Supported | Not Supported | Not Supported | Supported |
| **Schemaless** | Supported | Not Supported | Supported | Not Supported | Not Supported | Not Supported |
| **Schemaless** | Supported | Supported | Supported | Not Supported | Not Supported | Not Supported |
| **Bulk Pulling (based on WebSocket)** | Supported | Supported | Supported | Supported | Supported | Supported |
:::warning

View File

@ -364,6 +364,7 @@ The configuration parameters for specifying super table tag columns and data col
- **min**: The minimum value for a column/tag of this data type. Generated values are greater than or equal to the minimum value.
- **max**: The maximum value for a column/tag of this data type. Generated values are less than the maximum value.
- **fun**: Fills this column with a function. Currently only the sin and cos functions are supported. The input parameter is the timestamp converted to an angle value; the conversion formula is: angle x = input time column ts value % 360. Coefficient adjustment and a random fluctuation factor are also supported, expressed in a fixed-format expression such as fun="10\*sin(x)+100\*random(5)", where x is the angle ranging from 0 to 360 degrees and its step size matches that of the time column, 10 is the multiplication coefficient, 100 is the addition/subtraction coefficient, and 5 means a random fluctuation within a range of 5%. The currently supported data types are int, bigint, float, and double. Note: the expression format is fixed and its parts cannot be reordered.
- **values**: The value field of the nchar/binary column/label, which will be chosen randomly from the values.

View File

@ -5,7 +5,7 @@ description: This document describes the supported platforms for the TDengine se
## List of supported platforms for TDengine server
| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **macOS** |
| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18 or later** | **macOS** |
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | --------- |
| X64 | ● | ● | ● | ● | ● |
| ARM64 | | | ● | | ● |

View File

@ -732,6 +732,15 @@ The charset that takes effect is UTF-8.
| Value Range | 0-23 |
| Default Value | 0 |
### tmqMaxTopicNum
| Attribute | Description |
| -------- | ------------------ |
| Applicable | Server Only |
| Meaning | The maximum number of topics |
| Value Range | 1-10000 |
| Default Value | 20 |
## 3.0 Parameters
| # | **Parameter** | **Applicable to 2.x** | **Applicable to 3.0** | Current behavior in 3.0 |

View File

@ -6,7 +6,7 @@ description: This document provides download links for all released versions of
TDengine 3.x installation packages can be downloaded at the following links:
For TDengine 2.x installation packages by version, please visit [here](https://www.taosdata.com/all-downloads).
For TDengine 2.x installation packages by version, please visit [here](https://tdengine.com/downloads/historical/).
import Release from "/components/ReleaseV3";

View File

@ -201,7 +201,7 @@ Active: inactive (dead)
<TabItem label="Windows System" value="windows">
After installation, you can run `sc start taosd` in a cmd window with administrator privileges, or run `taosd.exe` in the `C:\TDengine` directory, to start the TDengine service process.
After installation, you can run `sc start taosd` in a cmd window with administrator privileges, or run `taosd.exe` in the `C:\TDengine` directory, to start the TDengine service process. To use the http/REST service, run `sc start taosadapter` or run `taosadapter.exe` to start the taosAdapter service process.
**TDengine Command Line Interface (CLI)**

View File

@ -398,7 +398,7 @@ def finish(buf: bytes) -> output_type:
3. Define a scalar function that takes a timestamp as input and outputs the next Sunday closest to that time. Implementing this function requires the third-party library moment; this example explains the points to note when using third-party libraries.
4. Define an aggregate function that computes the difference between the maximum and minimum values of a column, i.e., an implementation of TDengine's built-in spread function.
It also covers a number of practical debugging techniques.
This document assumes you are using a Linux system and have already installed TDengine 3.0.4.0+ and Python 3.x.
This document assumes you are using a Linux system and have already installed TDengine 3.0.4.0+ and Python 3.7+.
Note: **Logs cannot be written with the print function inside a UDF; write to a file yourself or use Python's built-in logging library.**

View File

@ -30,7 +30,8 @@ Websocket connections are supported on all platforms that can run Rust.
| Rust Connector Version | TDengine Version | Major Features |
| :----------------: | :--------------: | :--------------------------------------------------: |
| v0.8.12 | 3.0.5.0 or later | TMQ: Get consumption progress and start consuming from a specified offset. |
| v0.9.2 | 3.0.7.0 or later | STMT: Get tag_fields and col_fields via WebSocket. |
| v0.8.12 | 3.0.5.0 | TMQ: Get consumption progress and start consuming from a specified offset. |
| v0.8.0 | 3.0.4.0 | Support schemaless insert. |
| v0.7.6 | 3.0.3.0 | Support req_id in requests. |
| v0.6.0 | 3.0.0.0 | Base features. |

View File

@ -58,9 +58,9 @@ TDengine version updates often add new features; the connectors in the list
| ------------------------------ | -------- | ---------- | -------- | -------- | ----------- | -------- |
| **Connection Management** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Regular Query** | Supported | Supported | Supported | Supported | Supported | Supported |
| **Parameter Binding** | Supported | Not Supported | Supported | Supported | Not Supported | Supported |
| **Parameter Binding** | Supported | Supported | Supported | Supported | Not Supported | Supported |
| **Data Subscription (TMQ)** | Supported | Supported | Supported | Not Supported | Not Supported | Supported |
| **Schemaless** | Supported | Not Supported | Supported | Not Supported | Not Supported | Supported |
| **Schemaless** | Supported | Supported | Supported | Not Supported | Not Supported | Supported |
| **Bulk Pulling (based on WebSocket)** | Supported | Supported | Supported | Supported | Supported | Supported |
:::warning

View File

@ -4,23 +4,31 @@ title: Deploying a TDengine Cluster on Kubernetes
description: A detailed guide to deploying a TDengine cluster on Kubernetes
---
As a time-series database designed for cloud-native architectures, TDengine supports Kubernetes deployment. This document describes how to create a TDengine cluster from scratch, step by step, using YAML files, and focuses on common TDengine operations in a Kubernetes environment.
## Overview
As a time-series database designed for cloud-native architectures, TDengine natively supports Kubernetes deployment. This document describes how to use YAML files to create, step by step from scratch, a highly available TDengine cluster fit for production use, and focuses on common TDengine operations in a Kubernetes environment.
To meet the [high availability](https://docs.taosdata.com/tdinternal/high-availability/) requirements, the cluster must satisfy the following:
- 3 or more dnodes: multiple vnodes in the same TDengine vgroup may not reside on the same dnode, so a database with 3 replicas requires at least 3 dnodes.
- 3 mnodes: the mnode manages the whole cluster. TDengine defaults to a single mnode; if the dnode hosting that mnode goes offline, the whole cluster becomes unavailable.
- 3 database replicas: TDengine replication is configured per database, so with 3 replicas a 3-dnode cluster can lose any single dnode without affecting normal operation. **If 2 dnodes go offline, the cluster becomes unavailable, because RAFT cannot complete the leader election.** (Enterprise Edition: in disaster recovery scenarios, a dnode whose data files are damaged can be recovered by relaunching a blank dnode.)
## Prerequisites
To deploy and manage a TDengine cluster with Kubernetes, the following preparations are needed.
* This document applies to Kubernetes v1.5 and above.
* This chapter and the next use tools such as minikube, kubectl, and helm for installation and deployment; please install the corresponding software in advance.
* Kubernetes is already installed, deployed, and reachable, and the required container registries or other services can be used or updated.
- This document applies to Kubernetes v1.19 and above.
- This document uses the kubectl tool for installation and deployment; please install it in advance.
- Kubernetes is already installed, deployed, and reachable, and the required container registries or other services can be used or updated.
The following configuration files can also be downloaded from the [GitHub repository](https://github.com/taosdata/TDengine-Operator/tree/3.0/src/tdengine).
## Configure the Service
Create a Service configuration file `taosd-service.yaml`; the service name `metadata.name` (here "taosd") will be used in the next step. Add the ports used by TDengine:
Create a Service configuration file `taosd-service.yaml`; the service name `metadata.name` (here "taosd") will be used in the next step. First add the ports used by TDengine, then set a definite label app (here "tdengine") in the selector.
```yaml
```YAML
---
apiVersion: v1
kind: Service
@ -42,10 +50,11 @@ spec:
## Stateful Services (StatefulSet)
According to the Kubernetes documentation on the various deployment types, we will use a StatefulSet as the service type for TDengine.
Create the file `tdengine.yaml`, where replicas sets the number of cluster nodes to 3. The node time zone is China (Asia/Shanghai), and each node is allocated 10G of standard storage. You can also adjust these values to your actual situation.
According to the Kubernetes documentation on the various deployment types, we will use a StatefulSet as the deployment resource type for TDengine. Create the file `tdengine.yaml`, where replicas sets the number of cluster nodes to 3. The node time zone is China (Asia/Shanghai), and each node is allocated 5G of standard storage (refer to [Storage Classes](https://kubernetes.io/docs/concepts/storage/storage-classes/) to configure a storage class). You can also adjust these values to your actual situation.
```yaml
Pay special attention to the startupProbe configuration. After a dnode's Pod has been offline for a while and is restarted, the newly launched dnode is briefly unavailable. If startupProbe is set too small, Kubernetes will consider the Pod abnormal and keep restarting it; the dnode's Pod will then restart repeatedly and never return to the normal state. See [Configure Liveness, Readiness and Startup Probes](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/).
```YAML
---
apiVersion: apps/v1
kind: StatefulSet
@ -69,7 +78,7 @@ spec:
spec:
containers:
- name: "tdengine"
image: "tdengine/tdengine:3.0.0.0"
image: "tdengine/tdengine:3.0.7.1"
imagePullPolicy: "IfNotPresent"
ports:
- name: tcp6030
@ -108,6 +117,12 @@ spec:
volumeMounts:
- name: taosdata
mountPath: /var/lib/taos
startupProbe:
exec:
command:
- taos-check
failureThreshold: 360
periodSeconds: 10
readinessProbe:
exec:
command:
@ -129,199 +144,373 @@ spec:
storageClassName: "standard"
resources:
requests:
storage: "10Gi"
storage: "5Gi"
```
## Use kubectl to Deploy the TDengine Cluster
Execute the following commands in order.
First create the corresponding namespace, then execute the following commands in order:
```bash
kubectl apply -f taosd-service.yaml
kubectl apply -f tdengine.yaml
```Bash
kubectl apply -f taosd-service.yaml -n tdengine-test
kubectl apply -f tdengine.yaml -n tdengine-test
```
The above configuration generates a three-node TDengine cluster. The dnodes are configured automatically; you can use the show dnodes command to view the nodes of the current cluster:
```bash
kubectl exec -i -t tdengine-0 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-1 -- taos -s "show dnodes"
kubectl exec -i -t tdengine-2 -- taos -s "show dnodes"
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show dnodes"
kubectl exec -it tdengine-2 -n tdengine-test -- taos -s "show dnodes"
```
The output is as follows:
```
```Bash
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
Query OK, 3 rows in database (0.003655s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 0 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
2 | tdengine-1.ta... | 0 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
3 | tdengine-2.ta... | 0 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001853s)
```
View the current mnode:
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-19 17:54:19.520
Query OK, 1 row(s) in set (0.001282s)
```
## Create mnodes
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 2"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "create mnode on dnode 3"
```
View the mnodes:
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-19 17:54:18.559
reboot_time: 2023-07-20 09:19:36.060
*************************** 2.row ***************************
id: 2
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:22:12.838
*************************** 3.row ***************************
id: 3
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:22:23.271
Query OK, 3 row(s) in set (0.003108s)
```
## Enable Port Forwarding
With kubectl port forwarding, applications can access the TDengine cluster running in the Kubernetes environment.
```
kubectl port-forward tdengine-0 6041:6041 &
```Plain
kubectl port-forward -n tdengine-test tdengine-0 6041:6041 &
```
Use the curl command to verify the TDengine REST API on port 6041.
```
$ curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
Handling connection for 6041
{"code":0,"column_meta":[["name","VARCHAR",64],["create_time","TIMESTAMP",8],["vgroups","SMALLINT",2],["ntables","BIGINT",8],["replica","TINYINT",1],["strict","VARCHAR",4],["duration","VARCHAR",10],["keep","VARCHAR",32],["buffer","INT",4],["pagesize","INT",4],["pages","INT",4],["minrows","INT",4],["maxrows","INT",4],["comp","TINYINT",1],["precision","VARCHAR",2],["status","VARCHAR",10],["retention","VARCHAR",60],["single_stable","BOOL",1],["cachemodel","VARCHAR",11],["cachesize","INT",4],["wal_level","TINYINT",1],["wal_fsync_period","INT",4],["wal_retention_period","INT",4],["wal_retention_size","BIGINT",8]],"data":[["information_schema",null,null,16,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null],["performance_schema",null,null,10,null,null,null,null,null,null,null,null,null,null,null,"ready",null,null,null,null,null,null,null,null,null,null]],"rows":2}
```Plain
curl -u root:taosdata -d "show databases" 127.0.0.1:6041/rest/sql
{"code":0,"column_meta":[["name","VARCHAR",64]],"data":[["information_schema"],["performance_schema"],["test"],["test1"]],"rows":4}
```
## Graphical Management with the Dashboard
## Test the Cluster
minikube provides the dashboard command for a graphical management interface.
### Data Preparation
```
$ minikube dashboard
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
* Opening http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...
http://127.0.0.1:46617/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
#### taosBenchmark
Use taosBenchmark to create a database with 3 replicas and write 100 million rows into it, then check the data:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taosBenchmark -I stmt -d test -n 10000 -t 10000 -a 3
# query data
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select count(*) from test.meters;"
taos> select count(*) from test.meters;
count(*) |
========================
100000000 |
Query OK, 1 row(s) in set (0.103537s)
```
In some public cloud environments, minikube binds to the 127.0.0.1 IP address and cannot be accessed remotely. In that case, use the kubectl proxy command to map the port to the 0.0.0.0 IP address, and then access the dashboard remotely through a browser via the virtual machine's public IP and port plus the same dashboard URL path.
View the vnode distribution with show dnodes:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 8 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-19 17:54:18.469 | | | |
2 | tdengine-1.ta... | 8 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-19 17:54:38.698 | | | |
3 | tdengine-2.ta... | 8 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-19 17:55:02.039 | | | |
Query OK, 3 row(s) in set (0.001357s)
```
$ kubectl proxy --accept-hosts='^.*$' --address='0.0.0.0'
View the vnode distribution with show vgroups:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test.vgroups"
taos> show test.vgroups
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
==============================================================================================================================================================================================
2 | test | 1267 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
3 | test | 1215 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
4 | test | 1215 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
5 | test | 1307 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
6 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
7 | test | 1275 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
8 | test | 1231 | 1 | leader | 2 | follower | 3 | follower | NULL | NULL | 0 | 0 | 0 |
9 | test | 1245 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
Query OK, 8 row(s) in set (0.001488s)
```
#### Manual Creation
Create a database test1 with three replicas, then create a table and write 2 rows of data:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- \
taos -s \
"create database if not exists test1 replica 3;
use test1;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
```
View the vnode distribution with show test1.vgroups:
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show test1.vgroups"
taos> show test1.vgroups
vgroup_id | db_name | tables | v1_dnode | v1_status | v2_dnode | v2_status | v3_dnode | v3_status | v4_dnode | v4_status | cacheload | cacheelements | tsma |
==============================================================================================================================================================================================
10 | test1 | 1 | 1 | follower | 2 | follower | 3 | leader | NULL | NULL | 0 | 0 | 0 |
11 | test1 | 0 | 1 | follower | 2 | leader | 3 | follower | NULL | NULL | 0 | 0 | 0 |
Query OK, 2 row(s) in set (0.001489s)
```
### Fault Tolerance Test
The dnode hosting the mnode leader goes offline (dnode 1):
```Bash
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 0/1 ErrImagePull 2 (2s ago) 20m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (6m48s ago) 20m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 21m 10.244.1.223 node85 <none> <none>
```
At this point the cluster mnodes hold a new election, and the mnode on tdengine-1 becomes the leader:
```Bash
kubectl exec -it tdengine-1 -n tdengine-test -- taos -s "show mnodes\G"
Welcome to the TDengine Command Line Interface, Client Version:3.0.7.1.202307190706
Copyright (c) 2022 by TDengine, all rights reserved.
taos> show mnodes\G
*************************** 1.row ***************************
id: 1
endpoint: tdengine-0.taosd.tdengine-test.svc.cluster.local:6030
role: offline
status: offline
create_time: 2023-07-19 17:54:18.559
reboot_time: 1970-01-01 08:00:00.000
*************************** 2.row ***************************
id: 2
endpoint: tdengine-1.taosd.tdengine-test.svc.cluster.local:6030
role: leader
status: ready
create_time: 2023-07-20 09:22:05.600
reboot_time: 2023-07-20 09:32:00.227
*************************** 3.row ***************************
id: 3
endpoint: tdengine-2.taosd.tdengine-test.svc.cluster.local:6030
role: follower
status: ready
create_time: 2023-07-20 09:22:20.042
reboot_time: 2023-07-20 09:32:00.026
Query OK, 3 row(s) in set (0.001513s)
```
The cluster can still read and write normally:
```Bash
# insert
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "insert into test1.t1 values(now, 1)(now+1s, 2);"
taos> insert into test1.t1 values(now, 1)(now+1s, 2);
Insert OK, 2 row(s) affected (0.002098s)
# select
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "select *from test1.t1"
taos> select *from test1.t1
ts | n |
========================================
2023-07-19 18:04:58.104 | 1 |
2023-07-19 18:04:59.104 | 2 |
2023-07-19 18:06:00.303 | 1 |
2023-07-19 18:06:01.303 | 2 |
Query OK, 4 row(s) in set (0.001994s)
```
Likewise, when a non-leader mnode goes offline, reads and writes of course continue to work normally, so that case is not shown in detail here.
## Scale Out the Cluster
A TDengine cluster supports automatic scale-out:
```bash
```Bash
kubectl scale statefulsets tdengine --replicas=4
```
The parameter `--replicas=4` in the command above scales the TDengine cluster out to 4 nodes. After running it, first check the Pod status:
```bash
kubectl get pods -l app=tdengine
```Bash
kubectl get pod -l app=tdengine -n tdengine-test -o wide
```
The output is as follows:
```
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 161m
tdengine-1 1/1 Running 0 161m
tdengine-2 1/1 Running 0 32m
tdengine-3 1/1 Running 0 32m
```Plain
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h26m ago) 6h53m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (6h39m ago) 6h53m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h16m 10.244.1.224 node85 <none> <none>
tdengine-3 1/1 Running 0 3m24s 10.244.2.76 node86 <none> <none>
```
At this point the POD status is still Running; the dnode status in the TDengine cluster can only be seen after the POD status is `ready`:
At this point the Pod status is still Running; the dnode status in the TDengine cluster can only be seen after the Pod status is `ready`:
```bash
kubectl exec -i -t tdengine-3 -- taos -s "show dnodes"
```Bash
kubectl exec -it tdengine-3 -n tdengine-test -- taos -s "show dnodes"
```
The dnode list of the scaled-out four-node TDengine cluster:
```
```Plain
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
4 | tdengine-3.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:33:16.039 | |
Query OK, 4 rows in database (0.008377s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
4 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:01:44.007 | 2023-07-20 16:01:44.889 | | | |
Query OK, 4 row(s) in set (0.003628s)
```
## Scale In the Cluster
Since TDengine migrates data between nodes during scale-out and scale-in, scaling in with kubectl requires first using the "drop dnode" command; only after the node has been removed should you scale in the Kubernetes cluster.
Since TDengine migrates data between nodes during scale-out and scale-in, scaling in with kubectl requires first using the "drop dnode" command (**if the cluster contains a database with 3 replicas, the number of dnodes after scale-in must still be greater than or equal to 3, otherwise the drop dnode operation is aborted**); only after the node has been removed should you scale in the Kubernetes cluster.
Note: Since Pods in a Kubernetes StatefulSet can only be removed in reverse order of creation, TDengine drop dnode must also be performed in reverse order of creation; otherwise Pods end up in an error state.
```
$ kubectl exec -i -t tdengine-0 -- taos -s "drop dnode 4"
```
```bash
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
```Bash
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "drop dnode 4"
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | note |
============================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:14:57.285 | |
2 | tdengine-1.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:11.302 | |
3 | tdengine-2.taosd.default.sv... | 0 | 256 | ready | 2022-08-10 13:15:23.290 | |
Query OK, 3 rows in database (0.004861s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
Query OK, 3 row(s) in set (0.003324s)
```
After confirming that the removal succeeded (use kubectl exec -i -t tdengine-0 -- taos -s "show dnodes" to view and confirm the dnode list), use the kubectl command to remove the Pod:
```
kubectl scale statefulsets tdengine --replicas=3
```Plain
kubectl scale statefulsets tdengine --replicas=3 -n tdengine-test
```
The last Pod will be deleted. Use the command kubectl get pods -l app=tdengine to check the Pod status:
```
$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 4m7s
tdengine-1 1/1 Running 0 3m55s
tdengine-2 1/1 Running 0 2m28s
```Plain
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h55m ago) 7h22m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (7h9m ago) 7h23m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h45m 10.244.1.224 node85 <none> <none>
```
After the Pod is deleted, the PVC must be deleted manually; otherwise the next scale-out will reuse the old data, and the node will fail to join the cluster normally.
```bash
$ kubectl delete pvc taosdata-tdengine-3
```Bash
kubectl delete pvc taosdata-tdengine-3 -n tdengine-test
```
At this point the cluster state is safe, and you can scale out again when needed:
```bash
$ kubectl scale statefulsets tdengine --replicas=4
```Bash
kubectl scale statefulsets tdengine --replicas=4 -n tdengine-test
statefulset.apps/tdengine scaled
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 35m
tdengine-1 1/1 Running 0 34m
tdengine-2 1/1 Running 0 12m
tdengine-3 0/1 ContainerCreating 0 4s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl get pods -l app=tdengine
NAME READY STATUS RESTARTS AGE
tdengine-0 1/1 Running 0 35m
tdengine-1 1/1 Running 0 34m
tdengine-2 1/1 Running 0 12m
tdengine-3 0/1 Running 0 7s
it@k8s-2:~/TDengine-Operator/src/tdengine$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
kubectl get pod -l app=tdengine -n tdengine-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
tdengine-0 1/1 Running 4 (6h59m ago) 7h27m 10.244.2.75 node86 <none> <none>
tdengine-1 1/1 Running 1 (7h13m ago) 7h27m 10.244.0.59 node84 <none> <none>
tdengine-2 1/1 Running 0 5h49m 10.244.1.224 node85 <none> <none>
tdengine-3 1/1 Running 0 20s 10.244.2.77 node86 <none> <none>
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
5 | tdengine-2.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:01:36.479 | |
6 | tdengine-3.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 18:13:54.411 | |
Query OK, 4 row(s) in set (0.001348s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
5 | tdengine-3.ta... | 0 | 16 | ready | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | | | |
Query OK, 4 row(s) in set (0.003881s)
```
## Clean Up the TDengine Cluster
> **When deleting PVCs, note the PV's persistentVolumeReclaimPolicy. It is recommended to set it to Delete, so that when a PVC is deleted, the PV is cleaned up automatically along with the underlying CSI storage resources. If the policy of automatically cleaning up PVs on PVC deletion is not configured, then after deleting the PVCs, manually cleaning up the PVs may not release the CSI storage resources backing them.**
To remove the TDengine cluster completely, clean up the statefulset, svc, configmap, and pvc respectively.
```bash
kubectl delete statefulset -l app=tdengine
kubectl delete svc -l app=tdengine
kubectl delete pvc -l app=tdengine
kubectl delete configmap taoscfg
```Bash
kubectl delete statefulset -l app=tdengine -n tdengine-test
kubectl delete svc -l app=tdengine -n tdengine-test
kubectl delete pvc -l app=tdengine -n tdengine-test
kubectl delete configmap taoscfg -n tdengine-test
```
## Common Errors
@ -330,65 +519,26 @@ kubectl delete configmap taoscfg
Scaling in directly without performing "drop dnode": because TDengine has not yet removed the node, scaling in the Pods leaves some nodes of the TDengine cluster in the offline state.
```
$ kubectl exec -it tdengine-0 -- taos -s "show dnodes"
```Plain
kubectl exec -it tdengine-0 -n tdengine-test -- taos -s "show dnodes"
taos> show dnodes
id | endpoint | vnodes | support_vnodes | status | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 0 | 4 | ready | 2022-07-25 17:38:49.012 | |
2 | tdengine-1.taosd.default.sv... | 1 | 4 | ready | 2022-07-25 17:39:01.517 | |
5 | tdengine-2.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:01:36.479 | status msg timeout |
6 | tdengine-3.taosd.default.sv... | 0 | 4 | offline | 2022-07-25 18:13:54.411 | status msg timeout |
Query OK, 4 row(s) in set (0.001323s)
id | endpoint | vnodes | support_vnodes | status | create_time | reboot_time | note | active_code | c_active_code |
=============================================================================================================================================================================================================================================
1 | tdengine-0.ta... | 10 | 16 | ready | 2023-07-19 17:54:18.552 | 2023-07-20 09:39:04.297 | | | |
2 | tdengine-1.ta... | 10 | 16 | ready | 2023-07-19 17:54:37.828 | 2023-07-20 09:28:24.240 | | | |
3 | tdengine-2.ta... | 10 | 16 | ready | 2023-07-19 17:55:01.141 | 2023-07-20 10:48:43.445 | | | |
5 | tdengine-3.ta... | 0 | 16 | offline | 2023-07-20 16:31:34.092 | 2023-07-20 16:38:17.419 | status msg timeout | | |
Query OK, 4 row(s) in set (0.003862s)
```
### Error 2
## Conclusion
The TDengine cluster honors the replica parameter; if the number of nodes after scale-in is smaller than this value, the cluster becomes unusable:
For the high availability and high reliability of TDengine in a Kubernetes environment, hardware failure and disaster recovery are addressed on two levels:
Create a database with the replica parameter set to 2 and insert some data:
1. The disaster recovery capability of the underlying distributed block storage, i.e. multi-replica block storage. Popular distributed block storage such as Ceph provides multi-replica capability, and storage replicas can be spread across different racks, cabinets, server rooms, and data centers; alternatively, you can directly use the block storage services offered by public cloud vendors.
2. TDengine's own disaster recovery: in TDengine Enterprise, when a dnode goes permanently offline (e.g. physical machine disk damage or data loss), the work of the original dnode can be recovered by relaunching a blank dnode.
```bash
kubectl exec -i -t tdengine-0 -- \
taos -s \
"create database if not exists test replica 2;
use test;
create table if not exists t1(ts timestamp, n int);
insert into t1 values(now, 1)(now+1s, 2);"
Finally, you are welcome to try [TDengine Cloud](https://cloud.taosdata.com/), the one-stop, fully managed TDengine cloud service.
```
Scale in to a single node:
```bash
kubectl scale statefulsets tdengine --replicas=1
```
None of the database operations in the TDengine CLI will succeed.
```
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000845s)
taos> show dnodes;
id | end_point | vnodes | cores | status | role | create_time | offline reason |
======================================================================================================================================
1 | tdengine-0.taosd.default.sv... | 2 | 40 | ready | any | 2021-06-01 15:55:52.562 | |
2 | tdengine-1.taosd.default.sv... | 1 | 40 | offline | any | 2021-06-01 15:56:07.212 | status msg timeout |
Query OK, 2 row(s) in set (0.000837s)
taos> use test;
Database changed.
taos> insert into t1 values(now, 3);
DB error: Unable to resolve FQDN (0.013874s)
```
> TDengine Cloud is a minimalist, fully managed cloud service platform for time-series data processing, built on the open-source time-series database TDengine. Besides the high-performance time-series database, it offers system features such as caching, subscription, and stream computing, along with convenient and secure data sharing and many other enterprise-grade capabilities. It lets enterprises in IoT, Industrial Internet, finance, IT operations monitoring, and other fields greatly reduce labor and operating costs for managing time-series data.

View File

@ -73,7 +73,7 @@ database_option: {
- TABLE_PREFIX: when positive, the prefix of the given length in the table name is ignored when deciding which vgroup to assign a table to; when negative, only the prefix of the given length in the table name is used. For example, for the table name "v30001", with TSDB_PREFIX = 2 the vgroup is decided by "0001", and with TSDB_PREFIX = -2 it is decided by "v3".
- TABLE_SUFFIX: when positive, the suffix of the given length in the table name is ignored when deciding which vgroup to assign a table to; when negative, only the suffix of the given length in the table name is used. For example, for the table name "v30001", with TSDB_SUFFIX = 2 the vgroup is decided by "v300", and with TSDB_SUFFIX = -2 it is decided by "01".
- TSDB_PAGESIZE: the page size of the time-series storage engine in a vnode, in KB; the default is 4 KB. The range is 1 to 16384, i.e. 1 KB to 16 MB.
- WAL_RETENTION_PERIOD: the maximum extra retention period of WAL log files, needed for data subscription. WAL log cleanup is not affected by the consumption state of subscribing clients. In seconds. The default is 0, meaning no extra retention for subscriptions; set an appropriate retention policy before creating new subscriptions.
- WAL_RETENTION_PERIOD: the maximum extra retention period of WAL log files, needed for data subscription. WAL log cleanup is not affected by the consumption state of subscribing clients. In seconds. The default is 3600, meaning the WAL retains the most recent 3600 seconds of data; adjust this parameter to a value appropriate for your data subscriptions, as in the sketch after this list.
- WAL_RETENTION_SIZE: the maximum extra cumulative size of WAL log files, needed for data subscription. In KB. The default is 0, meaning no upper limit on the cumulative size.
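A minimal sketch of setting these retention options for a subscription workload; the database name `db` and the concrete values are assumed for illustration:

```sql
-- Keep at least the last 24 hours of WAL for subscribers (86400 s),
-- and cap the extra WAL kept for subscriptions at 1 GB (1048576 KB).
CREATE DATABASE IF NOT EXISTS db WAL_RETENTION_PERIOD 86400 WAL_RETENTION_SIZE 1048576;
```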
### Database Creation Example
@ -82,7 +82,7 @@ create database if not exists db vgroups 10 buffer 10
```
The example above creates a database named db with 10 vgroups, where each vnode is allocated a 10 MB write cache.
### Use the Database

View File

@ -48,7 +48,6 @@ table_option: {
5. When using the BINARY/NCHAR/GEOMETRY data types, the maximum number of bytes must be specified, e.g. BINARY(20) means 20 bytes;
6. To support more forms of table names, TDengine introduces the new escape character "\`", which keeps table names from conflicting with keywords and exempts them from the table name validity checks above, though the length limits still apply. With escape characters, the content inside them is no longer case-normalized.
For example: \`aBc\` and \`abc\` are different table names, while abc and aBc are the same table name.
Note that the content inside the escape characters must consist of printable characters.
**Parameter Description**

View File

@ -10,11 +10,9 @@ description: Legal character sets and naming restrictions
2. Names may start with an English letter or an underscore, but not with a digit
3. Case-insensitive
4. Rules for escaped table (column) names:
To support more forms of table names, TDengine introduces the new escape character "`". It keeps table names from conflicting with keywords and exempts them from the table name validity checks above.
Escaped table (column) names are still subject to the length limits, and the escape characters are not counted toward the length. With escape characters, the content inside them is no longer case-normalized.
To support more forms of table names, TDengine introduces the new escape character "`". With escape characters, the content inside them is no longer case-normalized, i.e. the case of the user-specified table name is preserved.
For example: \`aBc\` and \`abc\` are different table (column) names, while abc and aBc are the same table (column) name.
Note that the content inside the escape characters must consist of printable characters.
## Legal Character Set for Passwords
@ -48,13 +46,13 @@ description: Legal character sets and naming restrictions
### Rules for escaped table (column) names:
To support more forms of table names, TDengine introduces the new escape character "`", which avoids conflicts between table names and keywords, exempts them from the table name validity checks above, and does not count the escape characters toward the table name length.
To support more forms of table names, TDengine introduces the new escape character "`", which avoids conflicts between table names and keywords; the escape characters are not counted toward the table name length.
Escaped table (column) names are still subject to the length limits, and the escape characters are not counted toward the length. With escape characters, the content inside them is no longer case-normalized.
For example:
\`aBc\` and \`abc\` are different table (column) names, while abc and aBc are the same table (column) name, as the sketch after the note below illustrates.
:::note
The content inside the escape characters must consist of printable characters
The content inside the escape characters must comply with the character constraints of the naming rules
:::
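A minimal sketch of the case rule above; the column definitions are assumed for illustration:

```sql
-- With backticks, case is preserved: `aBc` and `abc` are two different tables.
CREATE TABLE `aBc` (ts TIMESTAMP, n INT);
CREATE TABLE `abc` (ts TIMESTAMP, n INT);
-- Without backticks, identifiers are case-normalized,
-- so abc and aBc refer to the same table.
```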

View File

@ -28,15 +28,15 @@ TDengine includes a built-in database named `INFORMATION_SCHEMA` that provides access to database metadata
Provides information about dnodes. You can also use SHOW DNODES to query this information.
| # | **Column** | **Data Type** | **Description** |
| --- | :------------: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------ |
| 1 | vnodes | SMALLINT | Actual number of vnodes in the dnode. It should be noted that `vnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 2 | support_vnodes | SMALLINT | Maximum number of supported vnodes |
| 3 | status | BINARY(10) | Current status |
| 4 | note | BINARY(256) | Information such as the reason for being offline |
| 5 | id | SMALLINT | Dnode ID |
| 6 | endpoint | BINARY(134) | Dnode endpoint |
| 7 | create | TIMESTAMP | Creation time |
## INS_MNODES
@ -109,66 +109,66 @@ TDengine includes a built-in database named `INFORMATION_SCHEMA` that provides access to database metadata
Information about user-created user-defined functions.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | name | BINARY(64) | Function name |
| 2 | comment | BINARY(255) | Additional notes. It should be noted that `comment` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 3 | aggregate | INT | Whether it is an aggregate function. It should be noted that `aggregate` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | output_type | BINARY(31) | Output type |
| 5 | create_time | TIMESTAMP | Creation time |
| 6 | code_len | INT | Code length |
| 7 | bufsize | INT | Buffer size |
| 8 | func_language | BINARY(31) | Programming language of the UDF |
| 9 | func_body | BINARY(16384) | Function body definition |
| 10 | func_version | INT | Function version number. The initial version is 0; the version number increases by 1 with each replacement. |
## INS_INDEXES
Provides information about user-created indexes. You can also use SHOW INDEX to query this information.
| # | **Column** | **Data Type** | **Description** |
| --- | :--------------: | ------------- | -------------------------------------------------------------------------------------------------- |
| 1 | db_name | BINARY(32) | Name of the database containing the indexed table |
| 2 | table_name | BINARY(192) | Name of the table containing the index |
| 3 | index_name | BINARY(192) | Index name |
| 4 | column_name | BINARY(64) | Name of the indexed column |
| 5 | index_type | BINARY(10) | Currently SMA and FULLTEXT |
| 6 | index_extensions | BINARY(256) | Additional index information. For SMA indexes, a list of function names; NULL for FULLTEXT indexes. |
| # | **Column** | **Data Type** | **Description** |
| --- | :--------------: | ------------- | ------------------------------------------------------------------------------ |
| 1 | db_name | BINARY(32) | Name of the database containing the indexed table |
| 2 | table_name | BINARY(192) | Name of the table containing the index |
| 3 | index_name | BINARY(192) | Index name |
| 4 | column_name | BINARY(64) | Name of the indexed column |
| 5 | index_type | BINARY(10) | Currently SMA and tag |
| 6 | index_extensions | BINARY(256) | Additional index information. For SMA/tag indexes, a list of function names. |
## INS_STABLES
Provides information about user-created supertables.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | stable_name | BINARY(192) | Supertable name |
| 2 | db_name | BINARY(64) | Name of the database containing the supertable |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | tags | INT | Number of tags. It should be noted that `tags` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 6 | last_update | TIMESTAMP | Last updated time |
| 7 | table_comment | BINARY(1024) | Table description |
| 8 | watermark | BINARY(64) | Window closing time. It should be noted that `watermark` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | max_delay | BINARY(64) | Maximum delay for pushing stream processing results. It should be noted that `max_delay` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 10 | rollup | BINARY(128) | Rollup aggregate function. It should be noted that `rollup` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
## INS_TABLES
Provides information about user-created standard tables and subtables.
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | ---------------------------------------------------------------------------------------------------------------------------------- |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Database name |
| 3 | create_time | TIMESTAMP | Creation time |
| 4 | columns | INT | Number of columns |
| 5 | stable_name | BINARY(192) | Name of the supertable it belongs to |
| 6 | uid | BIGINT | Table ID |
| 7 | vgroup_id | INT | Vgroup ID |
| 8 | ttl | INT | Table time-to-live. It should be noted that `ttl` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | table_comment | BINARY(1024) | Table description |
| 10 | type | BINARY(21) | Table type |
## INS_TAGS
@ -183,17 +183,17 @@ TDengine includes a built-in database named `INFORMATION_SCHEMA` that provides access to database metadata
## INS_COLUMNS
| # | **Column** | **Data Type** | **Description** |
| --- | :-----------: | ------------- | ------------------------------------------ |
| 1 | table_name | BINARY(192) | Table name |
| 2 | db_name | BINARY(64) | Name of the database containing the table |
| 3 | table_type | BINARY(21) | Table type |
| 4 | col_name | BINARY(64) | Column name |
| 5 | col_type | BINARY(32) | Column type |
| 6 | col_length | INT | Column length |
| 7 | col_precision | INT | Column precision |
| 8 | col_scale | INT | Column scale |
| 9 | col_nullable | INT | Whether the column can be null |
## INS_USERS
@ -209,60 +209,60 @@ TDengine includes a built-in database named `INFORMATION_SCHEMA` that provides access to database metadata
Provides information about Enterprise Edition licenses.
| # | **Column** | **Data Type** | **Description** |
| --- | :---------: | ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| 1 | version | BINARY(9) | License type: official (officially licensed) or trial |
| 2 | cpu_cores | BINARY(9) | Number of CPU cores covered by the license |
| 3 | dnodes | BINARY(10) | Number of dnodes covered by the license. It should be noted that `dnodes` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 4 | streams | BINARY(10) | Number of streams covered by the license. It should be noted that `streams` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 5 | users | BINARY(10) | Number of users covered by the license. It should be noted that `users` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 6 | accounts | BINARY(10) | Number of accounts covered by the license. It should be noted that `accounts` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 7 | storage | BINARY(21) | Storage space covered by the license. It should be noted that `storage` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 8 | connections | BINARY(21) | Number of client connections covered by the license. It should be noted that `connections` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 9 | databases | BINARY(11) | Number of databases covered by the license. It should be noted that `databases` is a TDengine keyword and needs to be escaped with ` when used as a column name. |
| 10 | speed | BINARY(9) | Write speed (data points per second) covered by the license |
| 11 | querytime | BINARY(9) | Total query time covered by the license |
| 12 | timeseries | BINARY(21) | Number of metrics covered by the license |
| 13 | expired | BINARY(5) | Whether the license has expired: true (expired) or false (not expired) |
| 14 | expire_time | BINARY(19) | Trial period expiry time |
## INS_VGROUPS
Information about all vgroups in the system.
| #   | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | ---------------- |
| 1   | vgroup_id  | INT           | vgroup id |
| 2   | db_name    | BINARY(32)    | Database name |
| 3   | tables     | INT           | Number of tables in this vgroup. Note that `tables` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 4   | status     | BINARY(10)    | Status of this vgroup |
| 5   | v1_dnode   | INT           | Id of the dnode hosting the first member |
| 6   | v1_status  | BINARY(10)    | Status of the first member |
| 7   | v2_dnode   | INT           | Id of the dnode hosting the second member |
| 8   | v2_status  | BINARY(10)    | Status of the second member |
| 9   | v3_dnode   | INT           | Id of the dnode hosting the third member |
| 10  | v3_status  | BINARY(10)    | Status of the third member |
| 11  | nfiles     | INT           | Number of data/metadata files in this vgroup |
| 12  | file_size  | INT           | Size of the data/metadata files in this vgroup |
| 13  | tsma       | TINYINT       | Whether this vgroup is dedicated to time-range-wise SMA (1: yes, 0: no) |
## INS_CONFIGS
System configuration parameters.
| #   | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | ---------------- |
| 1   | name       | BINARY(32)    | Name of the configuration item |
| 2   | value      | BINARY(64)    | Value of the configuration item. Note that `value` is a TDengine keyword and must be escaped with ` when used as a column name. |
## INS_DNODE_VARIABLES
Configuration parameters of each dnode in the system.
| #   | **Column** | **Data Type** | **Description** |
| --- | :--------: | ------------- | ---------------- |
| 1   | dnode_id   | INT           | Dnode id |
| 2   | name       | BINARY(32)    | Name of the configuration item |
| 3   | value      | BINARY(64)    | Value of the configuration item. Note that `value` is a TDengine keyword and must be escaped with ` when used as a column name. |
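Several columns in these system tables (`value`, `tables`, `ttl`, `tags`, and others) collide with TDengine keywords and must be backtick-escaped in queries. The following is a minimal C sketch of reading INS_CONFIGS through the native client; the localhost endpoint and root/taosdata credentials are assumptions for illustration.
```c
#include <stdio.h>
#include "taos.h"  // TDengine native client header

int main(void) {
  // Assumed connection parameters; adjust host, user, and password as needed.
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 0);
  if (conn == NULL) return 1;

  // `value` is a TDengine keyword, so it is escaped with backticks.
  TAOS_RES *res = taos_query(conn, "SELECT name, `value` FROM information_schema.ins_configs");
  if (taos_errno(res) != 0) {
    fprintf(stderr, "query failed: %s\n", taos_errstr(res));
  } else {
    TAOS_FIELD *fields = taos_fetch_fields(res);
    int         numOfFields = taos_num_fields(res);
    char        buf[1024];
    TAOS_ROW    row;
    while ((row = taos_fetch_row(res)) != NULL) {
      taos_print_row(buf, row, fields, numOfFields);  // format one row into buf
      printf("%s\n", buf);
    }
  }
  taos_free_result(res);
  taos_close(conn);
  return 0;
}
```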
## INS_TOPICS
@ -282,19 +282,19 @@ TDengine has a built-in database named `INFORMATION_SCHEMA` that provides access to
| 2 | consumer_group | BINARY(193) | Consumer group of the subscriber |
| 3 | vgroup_id      | INT         | vgroup id assigned to the consumer |
| 4 | consumer_id    | BIGINT      | Unique id of the consumer |
| 5 | offset         | BINARY(64)  | Consumption progress of the consumer |
| 6 | rows           | BIGINT      | Number of rows consumed by the consumer |
## INS_STREAMS
| #   |  **Column**  | **Data Type** | **Description** |
| --- | :----------: | ------------- | ---------------- |
| 1   | stream_name  | BINARY(64)    | Stream name |
| 2   | create_time  | TIMESTAMP     | Creation time |
| 3   | sql          | BINARY(1024)  | SQL statement provided when the stream was created |
| 4   | status       | BINARY(20)    | Current status of the stream |
| 5   | source_db    | BINARY(64)    | Source database |
| 6   | target_db    | BINARY(64)    | Target database |
| 7   | target_table | BINARY(192)   | Target table the stream writes to |
| 8   | watermark    | BIGINT        | Watermark; see stream processing in the SQL manual. Note that `watermark` is a TDengine keyword and must be escaped with ` when used as a column name. |
| 9   | trigger      | INT           | Mode for pushing computed results; see stream processing in the SQL manual. Note that `trigger` is a TDengine keyword and must be escaped with ` when used as a column name. |

View File

@ -4,12 +4,13 @@ title: Indexing
description: Usage details of the indexing feature
---
TDengine has supported indexing since version 3.0.0.0, including SMA indexes and FULLTEXT indexes.
TDengine has supported indexing since version 3.0.0.0, including SMA indexes and tag indexes.
## Creating an Index
```sql
CREATE FULLTEXT INDEX index_name ON tb_name (col_name [, col_name] ...)
CREATE INDEX index_name ON tb_name index_option
CREATE SMA INDEX index_name ON tb_name index_option
@ -46,10 +47,6 @@ SELECT _wstart,_wend,_wduration,max(c2),min(c1) FROM st1 INTERVAL(5m,10s) SLIDIN
ALTER LOCAL 'querySmaOptimize' '0';
```
### FULLTEXT Indexes
Creating a text index on the specified columns can improve the performance of queries that filter on text. FULLTEXT indexes do not support the index_option syntax. Currently they can only be created on tag columns of JSON type. Joint indexes over multiple columns are not supported, but a FULLTEXT index can be created separately for each column.
## Dropping an Index
```sql

View File

@ -362,6 +362,8 @@ taosBenchmark -A INT,DOUBLE,NCHAR,BINARY\(16\)
- **max** : Maximum value of the column/tag for this data type. Generated values will be less than the maximum.
- **fun** : Fill this column using a function. Currently only the sin and cos functions are supported. The input parameter is the timestamp converted to an angle: angle x = timestamp value % 360. Coefficient scaling and random fluctuation factors are supported through a fixed-format expression such as fun="10\*sin(x)+100\*random(5)", where x is the angle, ranging from 0 to 360 degrees and increasing at the same step as the timestamp column. 10 is the multiplicative coefficient, 100 is the additive coefficient, and 5 means the fluctuation stays within a random range of 5%. Supported data types are int, bigint, float, and double. Note: the expression follows a fixed pattern and its parts cannot be reordered. A sketch of this formula follows the list.
- **values** : Value domain of an nchar/binary column/tag; generated values are chosen from this list at random.
- **sma** : Add this column to the SMA. Valid values are "yes" and "no"; the default is "no".
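To make the fun expression concrete, here is a small C sketch that evaluates the documented formula fun="10\*sin(x)+100\*random(5)" the way the description reads. It illustrates the formula only, not taosBenchmark's actual implementation; the degree-to-radian conversion and the "±5% of the base value" reading of random(5) are assumptions.
```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

// Evaluate fun="10*sin(x)+100*random(5)" as documented:
// x is the timestamp modulo 360, read as an angle in degrees (assumed),
// and random(5) adds a fluctuation within +/-5% of the base value (assumed).
static double eval_fun(long long ts) {
  double x    = (double)(ts % 360);                              // angle, 0-360 degrees
  double base = 10.0 * sin(x * M_PI / 180.0) + 100.0;            // 10: multiplier, 100: offset
  double pct  = ((double)rand() / RAND_MAX * 2.0 - 1.0) * 0.05;  // random in [-5%, +5%]
  return base + base * pct;
}

int main(void) {
  for (long long ts = 0; ts < 5; ts++) {
    printf("ts=%lld value=%f\n", ts, eval_fun(ts));
  }
  return 0;
}
```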

View File

@ -5,7 +5,7 @@ description: "Platforms supported by TDengine server, client, and connectors"
## Platforms Supported by the TDengine Server
| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18/20** | **UnionTech UOS** | **Kylin/NeoKylin** | **Linx V60/V80** | **macOS** |
| | **Windows Server 2016/2019** | **Windows 10/11** | **CentOS 7.9/8** | **Ubuntu 18 and later** | **UnionTech UOS** | **Kylin/NeoKylin** | **Linx V60/V80** | **macOS** |
| ------------ | ---------------------------- | ----------------- | ---------------- | ---------------- | ------------ | ----------------- | ---------------- | --------- |
| X64 | ● | ● | ● | ● | ● | ● | ● | ● |
| Raspberry Pi ARM64 | | | ● | | | | | |

View File

@ -736,6 +736,15 @@ The valid value of charset is UTF-8.
| Value Range | 0-23 |
| Default | 0 |
### tmqMaxTopicNum
| Attribute | Description |
| -------- | ------------------ |
| Applicable | Server only |
| Meaning | Maximum number of topics that can be created through subscriptions |
| Value Range | 1-10000 |
| Default | 20 |
## Compression Parameters
### compressMsgSize

View File

@ -112,7 +112,7 @@ TDengine 3.0 uses a consistent hashing algorithm to determine the vnode where each table resides
### Data Partitioning
In addition to vnode sharding, TDengine partitions time-series data by time range. Each data file contains the time-series data of only one time range, and the length of that range is determined by the database configuration parameter days. Partitioning by time range also makes it easy to implement retention policies efficiently: once a data file exceeds the configured number of days (system configuration parameter keep), it is deleted automatically. Moreover, different time ranges can be stored in different paths and on different storage media, which facilitates hot/cold management of big data and enables multi-level storage.
In addition to vnode sharding, TDengine partitions time-series data by time range. Each data file contains the time-series data of only one time range, and the length of that range is determined by the database configuration parameter duration. Partitioning by time range also makes it easy to implement retention policies efficiently: once a data file exceeds the configured number of days (system configuration parameter keep), it is deleted automatically. Moreover, different time ranges can be stored in different paths and on different storage media, which facilitates hot/cold management of big data and enables multi-level storage.
Overall, **TDengine partitions big data along two dimensions, vnode and time**, enabling efficient parallel management and horizontal scaling.
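As a concrete illustration of these two parameters, the sketch below creates a database whose data files each cover 10 days and whose data is retained for 3650 days, via the native C client; the connection parameters and the database name are assumptions.
```c
#include <stdio.h>
#include "taos.h"  // TDengine native client header

int main(void) {
  // Assumed connection parameters; adjust as needed.
  TAOS *conn = taos_connect("localhost", "root", "taosdata", NULL, 0);
  if (conn == NULL) return 1;

  // DURATION sets the time range covered by each data file;
  // KEEP sets how long data is retained before files are dropped.
  TAOS_RES *res = taos_query(conn, "CREATE DATABASE IF NOT EXISTS demo DURATION 10 KEEP 3650");
  if (taos_errno(res) != 0) {
    fprintf(stderr, "create database failed: %s\n", taos_errstr(res));
  }
  taos_free_result(res);
  taos_close(conn);
  return 0;
}
```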

View File

@ -47,7 +47,7 @@
<dependency>
<groupId>com.taosdata.jdbc</groupId>
<artifactId>taos-jdbcdriver</artifactId>
<version>3.0.0</version>
<version>3.2.4</version>
</dependency>
<dependency>

View File

@ -287,11 +287,20 @@ DLL_EXPORT TAOS_RES *tmq_consumer_poll(tmq_t *tmq, int64_t timeout);
DLL_EXPORT int32_t tmq_consumer_close(tmq_t *tmq);
DLL_EXPORT int32_t tmq_commit_sync(tmq_t *tmq, const TAOS_RES *msg);
DLL_EXPORT void tmq_commit_async(tmq_t *tmq, const TAOS_RES *msg, tmq_commit_cb *cb, void *param);
DLL_EXPORT int32_t tmq_commit_offset_sync(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset);
DLL_EXPORT void tmq_commit_offset_async(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset, tmq_commit_cb *cb, void *param);
DLL_EXPORT int32_t tmq_get_topic_assignment(tmq_t *tmq, const char *pTopicName, tmq_topic_assignment **assignment,
int32_t *numOfAssignment);
DLL_EXPORT void tmq_free_assignment(tmq_topic_assignment* pAssignment);
DLL_EXPORT int32_t tmq_offset_seek(tmq_t *tmq, const char *pTopicName, int32_t vgId, int64_t offset);
DLL_EXPORT const char *tmq_get_topic_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_db_name(TAOS_RES *res);
DLL_EXPORT int32_t tmq_get_vgroup_id(TAOS_RES *res);
DLL_EXPORT int64_t tmq_get_vgroup_offset(TAOS_RES* res);
DLL_EXPORT int64_t tmq_position(tmq_t *tmq, const char *pTopicName, int32_t vgId);
DLL_EXPORT int64_t tmq_committed(tmq_t *tmq, const char *pTopicName, int32_t vgId);
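A minimal sketch of how the new offset APIs above might be combined, assuming `tmq` is an open consumer already subscribed to a topic named "tp"; error handling is trimmed for brevity.
```c
#include <stdio.h>
#include "taos.h"  // declares the tmq_* APIs shown above

// Inspect and rewind the offsets of every vgroup assigned for topic "tp".
static void rewind_topic(tmq_t *tmq) {
  tmq_topic_assignment *pAssign = NULL;
  int32_t               numOfAssign = 0;
  if (tmq_get_topic_assignment(tmq, "tp", &pAssign, &numOfAssign) != 0) return;
  for (int32_t i = 0; i < numOfAssign; i++) {
    int32_t   vgId = pAssign[i].vgId;
    long long committed = tmq_committed(tmq, "tp", vgId);  // last committed offset
    long long position  = tmq_position(tmq, "tp", vgId);   // next offset to be consumed
    printf("vgId:%d, committed:%lld, position:%lld\n", vgId, committed, position);
    tmq_offset_seek(tmq, "tp", vgId, pAssign[i].begin);         // rewind to the start
    tmq_commit_offset_sync(tmq, "tp", vgId, pAssign[i].begin);  // persist the rewind
  }
  tmq_free_assignment(pAssign);
}
```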
/* ----------------------TMQ CONFIGURATION INTERFACE---------------------- */
enum tmq_conf_res_t {
@ -309,11 +318,6 @@ DLL_EXPORT void tmq_conf_set_auto_commit_cb(tmq_conf_t *conf, tmq_comm
/* -------------------------TMQ MSG HANDLE INTERFACE---------------------- */
DLL_EXPORT const char *tmq_get_topic_name(TAOS_RES *res);
DLL_EXPORT const char *tmq_get_db_name(TAOS_RES *res);
DLL_EXPORT int32_t tmq_get_vgroup_id(TAOS_RES *res);
DLL_EXPORT int64_t tmq_get_vgroup_offset(TAOS_RES* res);
/* ------------------------------ TAOSX -----------------------------------*/
// note: following apis are unstable
enum tmq_res_t {

View File

@ -1144,6 +1144,7 @@ typedef struct {
char timezone[TD_TIMEZONE_LEN]; // tsTimezone
char locale[TD_LOCALE_LEN]; // tsLocale
char charset[TD_LOCALE_LEN]; // tsCharset
int8_t ttlChangeOnWrite;
} SClusterCfg;
typedef struct {
@ -1180,6 +1181,8 @@ typedef struct {
typedef struct {
int8_t syncState;
int8_t syncRestore;
int64_t syncTerm;
int64_t roleTimeMs;
} SMnodeLoad;
typedef struct {
@ -3379,6 +3382,12 @@ typedef struct {
int8_t reserved;
} SMqHbRsp;
typedef struct {
SMsgHead head;
int64_t consumerId;
char subKey[TSDB_SUBSCRIBE_KEY_LEN];
} SMqSeekReq;
#define TD_AUTO_CREATE_TABLE 0x1
typedef struct {
int64_t suid;
@ -3508,6 +3517,8 @@ int32_t tSerializeSMqHbReq(void* buf, int32_t bufLen, SMqHbReq* pReq);
int32_t tDeserializeSMqHbReq(void* buf, int32_t bufLen, SMqHbReq* pReq);
int32_t tDeatroySMqHbReq(SMqHbReq* pReq);
int32_t tSerializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq);
int32_t tDeserializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq);
#define SUBMIT_REQ_AUTO_CREATE_TABLE 0x1
#define SUBMIT_REQ_COLUMN_DATA_FORMAT 0x2

View File

@ -307,12 +307,13 @@ enum {
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SUBSCRIBE, "vnode-tmq-subscribe", SMqRebVgReq, SMqRebVgRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_DELETE_SUB, "vnode-tmq-delete-sub", SMqVDeleteReq, SMqVDeleteRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_COMMIT_OFFSET, "vnode-tmq-commit-offset", STqOffset, STqOffset)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SEEK_TO_OFFSET, "vnode-tmq-seekto-offset", STqOffset, STqOffset)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_SEEK, "vnode-tmq-seek", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_ADD_CHECKINFO, "vnode-tmq-add-checkinfo", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_DEL_CHECKINFO, "vnode-del-checkinfo", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_CONSUME, "vnode-tmq-consume", SMqPollReq, SMqDataBlkRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_CONSUME_PUSH, "vnode-tmq-consume-push", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_VG_WALINFO, "vnode-tmq-vg-walinfo", SMqPollReq, SMqDataBlkRsp)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_VG_COMMITTEDINFO, "vnode-tmq-committedinfo", NULL, NULL)
TD_DEF_MSG_TYPE(TDMT_VND_TMQ_MAX_MSG, "vnd-tmq-max", NULL, NULL)

View File

@ -228,7 +228,7 @@ typedef struct SStoreTqReader {
} SStoreTqReader;
typedef struct SStoreSnapshotFn {
int32_t (*createSnapshot)(SSnapContext* ctx, int64_t uid);
int32_t (*setForSnapShot)(SSnapContext* ctx, int64_t uid);
int32_t (*destroySnapshot)(SSnapContext* ctx);
SMetaTableInfo (*getMetaTableInfoFromSnapshot)(SSnapContext* ctx);
int32_t (*getTableInfoFromSnapshot)(SSnapContext* ctx, void** pBuf, int32_t* contLen, int16_t* type, int64_t* uid);

View File

@ -362,7 +362,7 @@ typedef struct SRestoreComponentNodeStmt {
typedef struct SCreateTopicStmt {
ENodeType type;
char topicName[TSDB_TABLE_NAME_LEN];
char topicName[TSDB_TOPIC_NAME_LEN];
char subDbName[TSDB_DB_NAME_LEN];
char subSTbName[TSDB_TABLE_NAME_LEN];
bool ignoreExists;
@ -373,13 +373,13 @@ typedef struct SCreateTopicStmt {
typedef struct SDropTopicStmt {
ENodeType type;
char topicName[TSDB_TABLE_NAME_LEN];
char topicName[TSDB_TOPIC_NAME_LEN];
bool ignoreNotExists;
} SDropTopicStmt;
typedef struct SDropCGroupStmt {
ENodeType type;
char topicName[TSDB_TABLE_NAME_LEN];
char topicName[TSDB_TOPIC_NAME_LEN];
char cgroup[TSDB_CGROUP_LEN];
bool ignoreNotExists;
} SDropCGroupStmt;

View File

@ -248,6 +248,8 @@ typedef struct SSortLogicNode {
SLogicNode node;
SNodeList* pSortKeys;
bool groupSort;
int64_t maxRows;
bool skipPKSortOpt;
} SSortLogicNode;
typedef struct SPartitionLogicNode {

View File

@ -239,29 +239,31 @@ typedef struct SSyncState {
ESyncState state;
bool restored;
bool canRead;
SyncTerm term;
int64_t roleTimeMs;
} SSyncState;
int32_t syncInit();
void syncCleanUp();
int64_t syncOpen(SSyncInfo* pSyncInfo);
int32_t syncStart(int64_t rid);
void syncStop(int64_t rid);
void syncPreStop(int64_t rid);
void syncPostStop(int64_t rid);
int32_t syncPropose(int64_t rid, SRpcMsg* pMsg, bool isWeak, int64_t* seq);
int32_t syncIsCatchUp(int64_t rid);
int32_t syncInit();
void syncCleanUp();
int64_t syncOpen(SSyncInfo* pSyncInfo);
int32_t syncStart(int64_t rid);
void syncStop(int64_t rid);
void syncPreStop(int64_t rid);
void syncPostStop(int64_t rid);
int32_t syncPropose(int64_t rid, SRpcMsg* pMsg, bool isWeak, int64_t* seq);
int32_t syncIsCatchUp(int64_t rid);
ESyncRole syncGetRole(int64_t rid);
int32_t syncProcessMsg(int64_t rid, SRpcMsg* pMsg);
int32_t syncReconfig(int64_t rid, SSyncCfg* pCfg);
int32_t syncBeginSnapshot(int64_t rid, int64_t lastApplyIndex);
int32_t syncEndSnapshot(int64_t rid);
int32_t syncLeaderTransfer(int64_t rid);
int32_t syncStepDown(int64_t rid, SyncTerm newTerm);
bool syncIsReadyForRead(int64_t rid);
bool syncSnapshotSending(int64_t rid);
bool syncSnapshotRecving(int64_t rid);
int32_t syncSendTimeoutRsp(int64_t rid, int64_t seq);
int32_t syncForceBecomeFollower(SSyncNode* ths, const SRpcMsg* pRpcMsg);
int32_t syncProcessMsg(int64_t rid, SRpcMsg* pMsg);
int32_t syncReconfig(int64_t rid, SSyncCfg* pCfg);
int32_t syncBeginSnapshot(int64_t rid, int64_t lastApplyIndex);
int32_t syncEndSnapshot(int64_t rid);
int32_t syncLeaderTransfer(int64_t rid);
int32_t syncStepDown(int64_t rid, SyncTerm newTerm);
bool syncIsReadyForRead(int64_t rid);
bool syncSnapshotSending(int64_t rid);
bool syncSnapshotRecving(int64_t rid);
int32_t syncSendTimeoutRsp(int64_t rid, int64_t seq);
int32_t syncForceBecomeFollower(SSyncNode* ths, const SRpcMsg* pRpcMsg);
SSyncState syncGetState(int64_t rid);
void syncGetRetryEpSet(int64_t rid, SEpSet* pEpSet);

View File

@ -775,6 +775,11 @@ int32_t* taosGetErrno();
#define TSDB_CODE_TMQ_TOPIC_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4004)
#define TSDB_CODE_TMQ_GROUP_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4005)
#define TSDB_CODE_TMQ_SNAPSHOT_ERROR TAOS_DEF_ERROR_CODE(0, 0x4006)
#define TSDB_CODE_TMQ_VERSION_OUT_OF_RANGE TAOS_DEF_ERROR_CODE(0, 0x4007)
#define TSDB_CODE_TMQ_INVALID_VGID TAOS_DEF_ERROR_CODE(0, 0x4008)
#define TSDB_CODE_TMQ_INVALID_TOPIC TAOS_DEF_ERROR_CODE(0, 0x4009)
#define TSDB_CODE_TMQ_NEED_INITIALIZED TAOS_DEF_ERROR_CODE(0, 0x4010)
#define TSDB_CODE_TMQ_NO_COMMITTED TAOS_DEF_ERROR_CODE(0, 0x4011)
// stream
#define TSDB_CODE_STREAM_TASK_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x4100)

View File

@ -379,8 +379,8 @@ typedef enum ELogicConditionType {
#define TSDB_DEFAULT_HASH_SUFFIX 0
#define TSDB_DB_MIN_WAL_RETENTION_PERIOD -1
#define TSDB_REP_DEF_DB_WAL_RET_PERIOD 0
#define TSDB_REPS_DEF_DB_WAL_RET_PERIOD 0
#define TSDB_REP_DEF_DB_WAL_RET_PERIOD 3600
#define TSDB_REPS_DEF_DB_WAL_RET_PERIOD 3600
#define TSDB_DB_MIN_WAL_RETENTION_SIZE -1
#define TSDB_REP_DEF_DB_WAL_RET_SIZE 0
#define TSDB_REPS_DEF_DB_WAL_RET_SIZE 0

View File

@ -613,12 +613,6 @@ function install_examples() {
fi
}
function install_web() {
if [ -d "${script_dir}/share" ]; then
${csudo}cp -rf ${script_dir}/share/* ${install_main_dir}/share > /dev/null 2>&1 ||:
fi
}
function clean_service_on_sysvinit() {
if ps aux | grep -v grep | grep ${serverName2} &>/dev/null; then
@ -894,7 +888,6 @@ function updateProduct() {
fi
install_examples
install_web
if [ -z $1 ]; then
install_bin
install_service
@ -907,20 +900,22 @@ function updateProduct() {
echo
echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile2}"
[ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To configure ${clientName2} Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
echo -e "${GREEN_DARK}To configure ${clientName2}Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
if ((${service_mod} == 0)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName2}${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
elif ((${service_mod} == 1)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName2} start${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
else
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ./${serverName2}${NC}"
[ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${clientName2}adapter &${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${clientName2}adapter &${NC}"
fi
echo -e "${GREEN_DARK}To enable ${clientName2}keeper ${NC}: sudo systemctl enable ${clientName2}keeper &${NC}"
if [ ${openresty_work} = 'true' ]; then
echo -e "${GREEN_DARK}To access ${productName2} ${NC}: use ${GREEN_UNDERLINE}${clientName2} -h $serverFqdn${NC} in shell OR from ${GREEN_UNDERLINE}http://127.0.0.1:${web_port}${NC}"
@ -934,6 +929,7 @@ function updateProduct() {
fi
echo
echo -e "\033[44;32;1m${productName2} is updated successfully!${NC}"
echo -e "\033[44;32;1mTo manage ${productName2} instance, view documentation and explorer features, you need to install ${clientName2}Explorer ${NC}"
else
install_bin
install_config
@ -971,8 +967,7 @@ function installProduct() {
if [ "$verMode" == "cluster" ]; then
install_connector
fi
install_examples
install_web
install_examples
if [ -z $1 ]; then # install service and client
# For installing new
@ -989,21 +984,23 @@ function installProduct() {
echo
echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile2}"
[ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To configure ${clientName2} Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
echo -e "${GREEN_DARK}To configure ${clientName2}Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
if ((${service_mod} == 0)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName2}${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
elif ((${service_mod} == 1)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName2} start${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
else
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${serverName2}${NC}"
[ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${clientName2}adapter &${NC}"
echo -e "${GREEN_DARK}To start ${clientName2}Adapter ${NC}: ${clientName2}adapter &${NC}"
fi
echo -e "${GREEN_DARK}To enable ${clientName2}keeper ${NC}: sudo systemctl enable ${clientName2}keeper &${NC}"
if [ ! -z "$firstEp" ]; then
tmpFqdn=${firstEp%%:*}
substr=":"
@ -1025,6 +1022,7 @@ function installProduct() {
fi
echo -e "\033[44;32;1m${productName2} is installed successfully!${NC}"
echo -e "\033[44;32;1mTo manage ${productName2} instance, view documentation and explorer features, you need to install ${clientName2}Explorer ${NC}"
echo
else # Only install client
install_bin

View File

@ -267,7 +267,9 @@ function install_log() {
}
function install_connector() {
${csudo}cp -rf ${script_dir}/connector/ ${install_main_dir}/
if [ -d ${script_dir}/connector ]; then
${csudo}cp -rf ${script_dir}/connector/ ${install_main_dir}/
fi
}
function install_examples() {

View File

@ -56,8 +56,8 @@ copy %binary_dir%\\build\\bin\\taos.exe %target_dir% > nul
if exist %binary_dir%\\build\\bin\\taosBenchmark.exe (
copy %binary_dir%\\build\\bin\\taosBenchmark.exe %target_dir% > nul
)
if exist %binary_dir%\\build\\lib\\taosws.dll.lib (
copy %binary_dir%\\build\\lib\\taosws.dll.lib %target_dir%\\driver > nul
if exist %binary_dir%\\build\\lib\\taosws.lib (
copy %binary_dir%\\build\\lib\\taosws.lib %target_dir%\\driver > nul
)
if exist %binary_dir%\\build\\lib\\taosws.dll (
copy %binary_dir%\\build\\lib\\taosws.dll %target_dir%\\driver > nul

View File

@ -432,12 +432,6 @@ function install_examples() {
${csudo}cp -rf ${source_dir}/examples/* ${install_main_dir}/examples || :
}
function install_web() {
if [ -d "${binary_dir}/build/share" ]; then
${csudo}cp -rf ${binary_dir}/build/share/* ${install_main_dir}/share || :
fi
}
function clean_service_on_sysvinit() {
if ps aux | grep -v grep | grep ${serverName} &>/dev/null; then
${csudo}service ${serverName} stop || :
@ -592,7 +586,6 @@ function update_TDengine() {
install_lib
# install_connector
install_examples
install_web
install_bin
install_app

View File

@ -126,7 +126,6 @@ else
fi
install_files="${script_dir}/install.sh"
web_dir="${top_dir}/../enterprise/src/plugins/web"
init_file_deb=${script_dir}/../deb/taosd
init_file_rpm=${script_dir}/../rpm/taosd
@ -320,17 +319,6 @@ if [[ $dbName == "taos" ]]; then
mkdir -p ${install_dir}/examples/taosbenchmark-json && cp ${examples_dir}/../tools/taos-tools/example/* ${install_dir}/examples/taosbenchmark-json
fi
# Add web files
if [ "$verMode" == "cluster" ] || [ "$verMode" == "cloud" ]; then
if [ -d "${web_dir}/admin" ] ; then
mkdir -p ${install_dir}/share/
cp -Rfap ${web_dir}/admin ${install_dir}/share/
cp ${web_dir}/png/taos.png ${install_dir}/share/admin/images/taos.png
cp -rf ${build_dir}/share/{etc,srv} ${install_dir}/share ||:
else
echo "directory not found for enterprise release: ${web_dir}/admin"
fi
fi
fi
# Copy driver

View File

@ -1297,13 +1297,19 @@ int initEpSetFromCfg(const char* firstEp, const char* secondEp, SCorEpSet* pEpSe
return -1;
}
int32_t code = taosGetFqdnPortFromEp(firstEp, &mgmtEpSet->eps[0]);
int32_t code = taosGetFqdnPortFromEp(firstEp, &mgmtEpSet->eps[mgmtEpSet->numOfEps]);
if (code != TSDB_CODE_SUCCESS) {
terrno = TSDB_CODE_TSC_INVALID_FQDN;
return terrno;
}
mgmtEpSet->numOfEps++;
uint32_t addr = taosGetIpv4FromFqdn(mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn);
if (addr == 0xffffffff) {
tscError("failed to resolve firstEp fqdn: %s, code:%s", mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn,
tstrerror(TSDB_CODE_TSC_INVALID_FQDN));
memset(&(mgmtEpSet->eps[mgmtEpSet->numOfEps]), 0, sizeof(mgmtEpSet->eps[mgmtEpSet->numOfEps]));
} else {
mgmtEpSet->numOfEps++;
}
}
if (secondEp && secondEp[0] != 0) {
@ -1313,12 +1319,19 @@ int initEpSetFromCfg(const char* firstEp, const char* secondEp, SCorEpSet* pEpSe
}
taosGetFqdnPortFromEp(secondEp, &mgmtEpSet->eps[mgmtEpSet->numOfEps]);
mgmtEpSet->numOfEps++;
uint32_t addr = taosGetIpv4FromFqdn(mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn);
if (addr == 0xffffffff) {
tscError("failed to resolve secondEp fqdn: %s, code:%s", mgmtEpSet->eps[mgmtEpSet->numOfEps].fqdn,
tstrerror(TSDB_CODE_TSC_INVALID_FQDN));
memset(&(mgmtEpSet->eps[mgmtEpSet->numOfEps]), 0, sizeof(mgmtEpSet->eps[mgmtEpSet->numOfEps]));
} else {
mgmtEpSet->numOfEps++;
}
}
if (mgmtEpSet->numOfEps == 0) {
terrno = TSDB_CODE_TSC_INVALID_FQDN;
return -1;
terrno = TSDB_CODE_RPC_NETWORK_UNAVAIL;
return TSDB_CODE_RPC_NETWORK_UNAVAIL;
}
return 0;
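The change above validates each configured endpoint's FQDN when the endpoint set is initialized, skipping entries that fail to resolve and returning TSDB_CODE_RPC_NETWORK_UNAVAIL only when nothing resolves. Below is a standalone sketch of the same resolve-or-skip pattern using POSIX getaddrinfo, since the in-tree taosGetIpv4FromFqdn helper is internal.
```c
#include <netdb.h>
#include <stdio.h>

// Keep only the endpoints whose hostnames resolve; return how many survive.
static int filterResolvable(const char *fqdns[], int n, const char *kept[]) {
  int numKept = 0;
  for (int i = 0; i < n; i++) {
    struct addrinfo hints = {0}, *res = NULL;
    hints.ai_family = AF_INET;  // IPv4 only, matching the patched client code
    if (getaddrinfo(fqdns[i], NULL, &hints, &res) == 0) {
      kept[numKept++] = fqdns[i];  // resolvable: keep it
      freeaddrinfo(res);
    } else {
      fprintf(stderr, "failed to resolve fqdn: %s, skipping\n", fqdns[i]);
    }
  }
  return numKept;  // the caller treats 0 as "network unavailable"
}

int main(void) {
  const char *eps[] = {"localhost", "no-such-host.invalid"};
  const char *kept[2];
  printf("%d endpoint(s) kept\n", filterResolvable(eps, 2, kept));
  return 0;
}
```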

View File

@ -99,13 +99,20 @@ int32_t processConnectRsp(void* param, SDataBuf* pMsg, int32_t code) {
goto End;
}
int updateEpSet = 1;
if (connectRsp.dnodeNum == 1) {
SEpSet srcEpSet = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);
SEpSet dstEpSet = connectRsp.epSet;
rpcSetDefaultAddr(pTscObj->pAppInfo->pTransporter, srcEpSet.eps[srcEpSet.inUse].fqdn,
dstEpSet.eps[dstEpSet.inUse].fqdn);
} else if (connectRsp.dnodeNum > 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &connectRsp.epSet)) {
SEpSet* pOrig = &pTscObj->pAppInfo->mgmtEp.epSet;
if (srcEpSet.numOfEps == 1) {
rpcSetDefaultAddr(pTscObj->pAppInfo->pTransporter, srcEpSet.eps[srcEpSet.inUse].fqdn,
dstEpSet.eps[dstEpSet.inUse].fqdn);
updateEpSet = 0;
}
}
if (updateEpSet == 1 && !isEpsetEqual(&pTscObj->pAppInfo->mgmtEp.epSet, &connectRsp.epSet)) {
SEpSet corEpSet = getEpSet_s(&pTscObj->pAppInfo->mgmtEp);
SEpSet* pOrig = &corEpSet;
SEp* pOrigEp = &pOrig->eps[pOrig->inUse];
SEp* pNewEp = &connectRsp.epSet.eps[connectRsp.epSet.inUse];
tscDebug("mnode epset updated from %d/%d=>%s:%d to %d/%d=>%s:%d in connRsp", pOrig->inUse, pOrig->numOfEps,

View File

@ -1327,6 +1327,9 @@ end:
int taos_write_raw_block_with_fields(TAOS* taos, int rows, char* pData, const char* tbname, TAOS_FIELD* fields,
int numFields) {
if (!taos || !pData || !tbname) {
return TSDB_CODE_INVALID_PARA;
}
int32_t code = TSDB_CODE_SUCCESS;
STableMeta* pTableMeta = NULL;
SQuery* pQuery = NULL;
@ -1413,6 +1416,9 @@ end:
}
int taos_write_raw_block(TAOS* taos, int rows, char* pData, const char* tbname) {
if (!taos || !pData || !tbname) {
return TSDB_CODE_INVALID_PARA;
}
int32_t code = TSDB_CODE_SUCCESS;
STableMeta* pTableMeta = NULL;
SQuery* pQuery = NULL;
@ -1812,6 +1818,7 @@ end:
}
char* tmq_get_json_meta(TAOS_RES* res) {
if (res == NULL) return NULL;
uDebug("tmq_get_json_meta called");
if (!TD_RES_TMQ_META(res) && !TD_RES_TMQ_METADATA(res)) {
return NULL;

View File

@ -456,7 +456,7 @@ int smlJsonParseObj(char **start, SSmlLineInfo *element, int8_t *offset) {
static inline int32_t smlParseMetricFromJSON(SSmlHandle *info, cJSON *metric, SSmlLineInfo *elements) {
elements->measureLen = strlen(metric->valuestring);
if (IS_INVALID_TABLE_LEN(elements->measureLen)) {
uError("OTD:0x%" PRIx64 " Metric lenght is 0 or large than 192", info->id);
uError("OTD:0x%" PRIx64 " Metric length is 0 or large than 192", info->id);
return TSDB_CODE_TSC_INVALID_TABLE_ID_LENGTH;
}

File diff suppressed because it is too large

View File

@ -34,6 +34,8 @@ namespace {
void printSubResults(void* pRes, int32_t* totalRows) {
char buf[1024];
int32_t vgId = tmq_get_vgroup_id(pRes);
int64_t offset = tmq_get_vgroup_offset(pRes);
while (1) {
TAOS_ROW row = taos_fetch_row(pRes);
if (row == NULL) {
@ -45,7 +47,7 @@ void printSubResults(void* pRes, int32_t* totalRows) {
int32_t precision = taos_result_precision(pRes);
taos_print_row(buf, row, fields, numOfFields);
*totalRows += 1;
printf("precision: %d, row content: %s\n", precision, buf);
printf("vgId: %d, offset: %lld, precision: %d, row content: %s\n", vgId, offset, precision, buf);
}
// taos_free_result(pRes);
@ -1073,6 +1075,98 @@ TEST(clientCase, sub_db_test) {
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
}
TEST(clientCase, tmq_commit) {
// taos_options(TSDB_OPTION_CONFIGDIR, "~/first/cfg");
TAOS* pConn = taos_connect("localhost", "root", "taosdata", NULL, 0);
ASSERT_NE(pConn, nullptr);
tmq_conf_t* conf = tmq_conf_new();
tmq_conf_set(conf, "enable.auto.commit", "false");
tmq_conf_set(conf, "auto.commit.interval.ms", "2000");
tmq_conf_set(conf, "group.id", "group_id_2");
tmq_conf_set(conf, "td.connect.user", "root");
tmq_conf_set(conf, "td.connect.pass", "taosdata");
tmq_conf_set(conf, "auto.offset.reset", "earliest");
tmq_conf_set(conf, "msg.with.table.name", "true");
tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
tmq_conf_destroy(conf);
char topicName[128] = "tp";
// build the list of topics to subscribe to
tmq_list_t* topicList = tmq_list_new();
tmq_list_append(topicList, topicName);
// start the subscription
tmq_subscribe(tmq, topicList);
tmq_list_destroy(topicList);
int32_t totalRows = 0;
int32_t msgCnt = 0;
int32_t timeout = 2000;
tmq_topic_assignment* pAssign = NULL;
int32_t numOfAssign = 0;
int32_t code = tmq_get_topic_assignment(tmq, topicName, &pAssign, &numOfAssign);
if (code != 0) {
printf("error occurs:%s\n", tmq_err2str(code));
tmq_free_assignment(pAssign);
tmq_consumer_close(tmq);
taos_close(pConn);
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
return;
}
for(int i = 0; i < numOfAssign; i++){
printf("assign i:%d, vgId:%d, offset:%lld, start:%lld, end:%lld\n", i, pAssign[i].vgId, pAssign[i].currentOffset, pAssign[i].begin, pAssign[i].end);
int64_t committed = tmq_committed(tmq, topicName, pAssign[i].vgId);
printf("committed vgId:%d, committed:%lld\n", pAssign[i].vgId, committed);
int64_t position = tmq_position(tmq, topicName, pAssign[i].vgId);
printf("position vgId:%d, position:%lld\n", pAssign[i].vgId, position);
tmq_offset_seek(tmq, topicName, pAssign[i].vgId, 1);
position = tmq_position(tmq, topicName, pAssign[i].vgId);
printf("after seek 1, position vgId:%d, position:%lld\n", pAssign[i].vgId, position);
}
while (1) {
printf("start to poll\n");
TAOS_RES* pRes = tmq_consumer_poll(tmq, timeout);
if (pRes) {
printSubResults(pRes, &totalRows);
} else {
break;
}
tmq_commit_sync(tmq, pRes);
for(int i = 0; i < numOfAssign; i++) {
int64_t committed = tmq_committed(tmq, topicName, pAssign[i].vgId);
printf("committed vgId:%d, committed:%lld\n", pAssign[i].vgId, committed);
if(committed > 0){
int32_t code = tmq_commit_offset_sync(tmq, topicName, pAssign[i].vgId, 4);
printf("tmq_commit_offset_sync vgId:%d, offset:4, code:%d\n", pAssign[i].vgId, code);
int64_t committed = tmq_committed(tmq, topicName, pAssign[i].vgId);
printf("after tmq_commit_offset_sync, committed vgId:%d, committed:%lld\n", pAssign[i].vgId, committed);
}
}
if (pRes != NULL) {
taos_free_result(pRes);
}
// tmq_offset_seek(tmq, "tp", pAssign[0].vgId, pAssign[0].begin);
}
tmq_free_assignment(pAssign);
tmq_consumer_close(tmq);
taos_close(pConn);
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
}
TEST(clientCase, td_25129) {
// taos_options(TSDB_OPTION_CONFIGDIR, "~/first/cfg");
@ -1092,9 +1186,10 @@ TEST(clientCase, td_25129) {
tmq_t* tmq = tmq_consumer_new(conf, NULL, 0);
tmq_conf_destroy(conf);
char topicName[128] = "tp";
// build the list of topics to subscribe to
tmq_list_t* topicList = tmq_list_new();
tmq_list_append(topicList, "tp");
tmq_list_append(topicList, topicName);
// start the subscription
tmq_subscribe(tmq, topicList);
@ -1112,7 +1207,7 @@ TEST(clientCase, td_25129) {
tmq_topic_assignment* pAssign = NULL;
int32_t numOfAssign = 0;
int32_t code = tmq_get_topic_assignment(tmq, "tp", &pAssign, &numOfAssign);
int32_t code = tmq_get_topic_assignment(tmq, topicName, &pAssign, &numOfAssign);
if (code != 0) {
printf("error occurs:%s\n", tmq_err2str(code));
tmq_free_assignment(pAssign);
@ -1129,7 +1224,7 @@ TEST(clientCase, td_25129) {
// tmq_offset_seek(tmq, "tp", pAssign[0].vgId, 4);
tmq_free_assignment(pAssign);
code = tmq_get_topic_assignment(tmq, "tp", &pAssign, &numOfAssign);
code = tmq_get_topic_assignment(tmq, topicName, &pAssign, &numOfAssign);
if (code != 0) {
printf("error occurs:%s\n", tmq_err2str(code));
tmq_free_assignment(pAssign);
@ -1145,7 +1240,7 @@ TEST(clientCase, td_25129) {
tmq_free_assignment(pAssign);
code = tmq_get_topic_assignment(tmq, "tp", &pAssign, &numOfAssign);
code = tmq_get_topic_assignment(tmq, topicName, &pAssign, &numOfAssign);
if (code != 0) {
printf("error occurs:%s\n", tmq_err2str(code));
tmq_free_assignment(pAssign);
@ -1160,6 +1255,7 @@ TEST(clientCase, td_25129) {
}
while (1) {
printf("start to poll\n");
TAOS_RES* pRes = tmq_consumer_poll(tmq, timeout);
if (pRes) {
char buf[128];
@ -1173,10 +1269,26 @@ TEST(clientCase, td_25129) {
// printf("vgroup id: %d\n", vgroupId);
printSubResults(pRes, &totalRows);
code = tmq_get_topic_assignment(tmq, topicName, &pAssign, &numOfAssign);
if (code != 0) {
printf("error occurs:%s\n", tmq_err2str(code));
tmq_free_assignment(pAssign);
tmq_consumer_close(tmq);
taos_close(pConn);
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);
return;
}
for(int i = 0; i < numOfAssign; i++){
printf("assign i:%d, vgId:%d, offset:%lld, start:%lld, end:%lld\n", i, pAssign[i].vgId, pAssign[i].currentOffset, pAssign[i].begin, pAssign[i].end);
}
} else {
tmq_offset_seek(tmq, "tp", pAssign[0].vgId, pAssign[0].currentOffset);
tmq_offset_seek(tmq, "tp", pAssign[1].vgId, pAssign[1].currentOffset);
continue;
for(int i = 0; i < numOfAssign; i++) {
tmq_offset_seek(tmq, topicName, pAssign[i].vgId, pAssign[i].currentOffset);
}
tmq_commit_sync(tmq, pRes);
break;
}
// tmq_commit_sync(tmq, pRes);
@ -1208,6 +1320,7 @@ TEST(clientCase, td_25129) {
printf("assign i:%d, vgId:%d, offset:%lld, start:%lld, end:%lld\n", i, pAssign[i].vgId, pAssign[i].currentOffset, pAssign[i].begin, pAssign[i].end);
}
tmq_free_assignment(pAssign);
tmq_consumer_close(tmq);
taos_close(pConn);
fprintf(stderr, "%d msg consumed, include %d rows\n", msgCnt, totalRows);

View File

@ -33,7 +33,7 @@ static const SSysDbTableSchema dnodesSchema[] = {
{.name = "support_vnodes", .bytes = 2, .type = TSDB_DATA_TYPE_SMALLINT, .sysInfo = true},
{.name = "status", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
{.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true},
{.name = "reboot_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true},
{.name = "reboot_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true},
{.name = "note", .bytes = 256 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
#ifdef TD_ENTERPRISE
{.name = "active_code", .bytes = TSDB_ACTIVE_KEY_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
@ -47,7 +47,7 @@ static const SSysDbTableSchema mnodesSchema[] = {
{.name = "role", .bytes = 12 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
{.name = "status", .bytes = 9 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
{.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true},
{.name = "reboot_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true},
{.name = "role_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true},
};
static const SSysDbTableSchema modulesSchema[] = {
@ -73,7 +73,7 @@ static const SSysDbTableSchema clusterSchema[] = {
{.name = "name", .bytes = TSDB_CLUSTER_ID_LEN + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
{.name = "uptime", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = true},
{.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true},
{.name = "version", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
{.name = "version", .bytes = 10 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = true},
{.name = "expire_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = true},
};

View File

@ -1101,6 +1101,9 @@ int32_t tSerializeSStatusReq(void *buf, int32_t bufLen, SStatusReq *pReq) {
if (tEncodeI64(&encoder, pReq->qload.timeInFetchQueue) < 0) return -1;
if (tEncodeI32(&encoder, pReq->statusSeq) < 0) return -1;
if (tEncodeI64(&encoder, pReq->mload.syncTerm) < 0) return -1;
if (tEncodeI64(&encoder, pReq->mload.roleTimeMs) < 0) return -1;
if (tEncodeI8(&encoder, pReq->clusterCfg.ttlChangeOnWrite) < 0) return -1;
tEndEncode(&encoder);
int32_t tlen = encoder.pos;
@ -1145,7 +1148,8 @@ int32_t tDeserializeSStatusReq(void *buf, int32_t bufLen, SStatusReq *pReq) {
for (int32_t i = 0; i < vlen; ++i) {
SVnodeLoad vload = {0};
int64_t reserved = 0;
int64_t reserved64 = 0;
int32_t reserved32 = 0;
if (tDecodeI32(&decoder, &vload.vgId) < 0) return -1;
if (tDecodeI8(&decoder, &vload.syncState) < 0) return -1;
if (tDecodeI8(&decoder, &vload.syncRestore) < 0) return -1;
@ -1157,9 +1161,9 @@ int32_t tDeserializeSStatusReq(void *buf, int32_t bufLen, SStatusReq *pReq) {
if (tDecodeI64(&decoder, &vload.compStorage) < 0) return -1;
if (tDecodeI64(&decoder, &vload.pointsWritten) < 0) return -1;
if (tDecodeI32(&decoder, &vload.numOfCachedTables) < 0) return -1;
if (tDecodeI32(&decoder, (int32_t *)&reserved) < 0) return -1;
if (tDecodeI64(&decoder, &reserved) < 0) return -1;
if (tDecodeI64(&decoder, &reserved) < 0) return -1;
if (tDecodeI32(&decoder, (int32_t *)&reserved32) < 0) return -1;
if (tDecodeI64(&decoder, &reserved64) < 0) return -1;
if (tDecodeI64(&decoder, &reserved64) < 0) return -1;
if (taosArrayPush(pReq->pVloads, &vload) == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
return -1;
@ -1183,6 +1187,19 @@ int32_t tDeserializeSStatusReq(void *buf, int32_t bufLen, SStatusReq *pReq) {
if (tDecodeI64(&decoder, &pReq->qload.timeInFetchQueue) < 0) return -1;
if (tDecodeI32(&decoder, &pReq->statusSeq) < 0) return -1;
pReq->mload.syncTerm = -1;
pReq->mload.roleTimeMs = 0;
if (!tDecodeIsEnd(&decoder)) {
if (tDecodeI64(&decoder, &pReq->mload.syncTerm) < 0) return -1;
if (tDecodeI64(&decoder, &pReq->mload.roleTimeMs) < 0) return -1;
}
pReq->clusterCfg.ttlChangeOnWrite = false;
if (!tDecodeIsEnd(&decoder)) {
if (tDecodeI8(&decoder, &pReq->clusterCfg.ttlChangeOnWrite) < 0) return -1;
}
tEndDecode(&decoder);
tDecoderClear(&decoder);
return 0;
@ -1547,6 +1564,7 @@ int32_t tSerializeSGetUserAuthRsp(void *buf, int32_t bufLen, SGetUserAuthRsp *pR
}
int32_t tDeserializeSGetUserAuthRspImpl(SDecoder *pDecoder, SGetUserAuthRsp *pRsp) {
char *key = NULL, *value = NULL;
pRsp->createdDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
pRsp->readDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
pRsp->writeDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
@ -1555,40 +1573,40 @@ int32_t tDeserializeSGetUserAuthRspImpl(SDecoder *pDecoder, SGetUserAuthRsp *pRs
pRsp->useDbs = taosHashInit(4, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BINARY), true, HASH_ENTRY_LOCK);
if (pRsp->createdDbs == NULL || pRsp->readDbs == NULL || pRsp->writeDbs == NULL || pRsp->readTbs == NULL ||
pRsp->writeTbs == NULL || pRsp->useDbs == NULL) {
return -1;
goto _err;
}
if (tDecodeCStrTo(pDecoder, pRsp->user) < 0) return -1;
if (tDecodeI8(pDecoder, &pRsp->superAuth) < 0) return -1;
if (tDecodeI8(pDecoder, &pRsp->sysInfo) < 0) return -1;
if (tDecodeI8(pDecoder, &pRsp->enable) < 0) return -1;
if (tDecodeI8(pDecoder, &pRsp->reserve) < 0) return -1;
if (tDecodeI32(pDecoder, &pRsp->version) < 0) return -1;
if (tDecodeCStrTo(pDecoder, pRsp->user) < 0) goto _err;
if (tDecodeI8(pDecoder, &pRsp->superAuth) < 0) goto _err;
if (tDecodeI8(pDecoder, &pRsp->sysInfo) < 0) goto _err;
if (tDecodeI8(pDecoder, &pRsp->enable) < 0) goto _err;
if (tDecodeI8(pDecoder, &pRsp->reserve) < 0) goto _err;
if (tDecodeI32(pDecoder, &pRsp->version) < 0) goto _err;
int32_t numOfCreatedDbs = 0;
int32_t numOfReadDbs = 0;
int32_t numOfWriteDbs = 0;
if (tDecodeI32(pDecoder, &numOfCreatedDbs) < 0) return -1;
if (tDecodeI32(pDecoder, &numOfReadDbs) < 0) return -1;
if (tDecodeI32(pDecoder, &numOfWriteDbs) < 0) return -1;
if (tDecodeI32(pDecoder, &numOfCreatedDbs) < 0) goto _err;
if (tDecodeI32(pDecoder, &numOfReadDbs) < 0) goto _err;
if (tDecodeI32(pDecoder, &numOfWriteDbs) < 0) goto _err;
for (int32_t i = 0; i < numOfCreatedDbs; ++i) {
char db[TSDB_DB_FNAME_LEN] = {0};
if (tDecodeCStrTo(pDecoder, db) < 0) return -1;
if (tDecodeCStrTo(pDecoder, db) < 0) goto _err;
int32_t len = strlen(db);
taosHashPut(pRsp->createdDbs, db, len, db, len + 1);
}
for (int32_t i = 0; i < numOfReadDbs; ++i) {
char db[TSDB_DB_FNAME_LEN] = {0};
if (tDecodeCStrTo(pDecoder, db) < 0) return -1;
if (tDecodeCStrTo(pDecoder, db) < 0) goto _err;
int32_t len = strlen(db);
taosHashPut(pRsp->readDbs, db, len, db, len + 1);
}
for (int32_t i = 0; i < numOfWriteDbs; ++i) {
char db[TSDB_DB_FNAME_LEN] = {0};
if (tDecodeCStrTo(pDecoder, db) < 0) return -1;
if (tDecodeCStrTo(pDecoder, db) < 0) goto _err;
int32_t len = strlen(db);
taosHashPut(pRsp->writeDbs, db, len, db, len + 1);
}
@ -1597,67 +1615,80 @@ int32_t tDeserializeSGetUserAuthRspImpl(SDecoder *pDecoder, SGetUserAuthRsp *pRs
int32_t numOfReadTbs = 0;
int32_t numOfWriteTbs = 0;
int32_t numOfUseDbs = 0;
if (tDecodeI32(pDecoder, &numOfReadTbs) < 0) return -1;
if (tDecodeI32(pDecoder, &numOfWriteTbs) < 0) return -1;
if (tDecodeI32(pDecoder, &numOfUseDbs) < 0) return -1;
if (tDecodeI32(pDecoder, &numOfReadTbs) < 0) goto _err;
if (tDecodeI32(pDecoder, &numOfWriteTbs) < 0) goto _err;
if (tDecodeI32(pDecoder, &numOfUseDbs) < 0) goto _err;
for (int32_t i = 0; i < numOfReadTbs; ++i) {
int32_t keyLen = 0;
if (tDecodeI32(pDecoder, &keyLen) < 0) return -1;
if (tDecodeI32(pDecoder, &keyLen) < 0) goto _err;
char *key = taosMemoryCalloc(keyLen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, key) < 0) return -1;
key = taosMemoryCalloc(keyLen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, key) < 0) goto _err;
int32_t valuelen = 0;
if (tDecodeI32(pDecoder, &valuelen) < 0) return -1;
char *value = taosMemoryCalloc(valuelen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, value) < 0) return -1;
if (tDecodeI32(pDecoder, &valuelen) < 0) goto _err;
value = taosMemoryCalloc(valuelen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, value) < 0) goto _err;
taosHashPut(pRsp->readTbs, key, strlen(key), value, valuelen + 1);
taosMemoryFree(key);
taosMemoryFree(value);
taosMemoryFreeClear(key);
taosMemoryFreeClear(value);
}
for (int32_t i = 0; i < numOfWriteTbs; ++i) {
int32_t keyLen = 0;
if (tDecodeI32(pDecoder, &keyLen) < 0) return -1;
if (tDecodeI32(pDecoder, &keyLen) < 0) goto _err;
char *key = taosMemoryCalloc(keyLen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, key) < 0) return -1;
key = taosMemoryCalloc(keyLen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, key) < 0) goto _err;
int32_t valuelen = 0;
if (tDecodeI32(pDecoder, &valuelen) < 0) return -1;
char *value = taosMemoryCalloc(valuelen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, value) < 0) return -1;
if (tDecodeI32(pDecoder, &valuelen) < 0) goto _err;
value = taosMemoryCalloc(valuelen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, value) < 0) goto _err;
taosHashPut(pRsp->writeTbs, key, strlen(key), value, valuelen + 1);
taosMemoryFree(key);
taosMemoryFree(value);
taosMemoryFreeClear(key);
taosMemoryFreeClear(value);
}
for (int32_t i = 0; i < numOfUseDbs; ++i) {
int32_t keyLen = 0;
if (tDecodeI32(pDecoder, &keyLen) < 0) return -1;
if (tDecodeI32(pDecoder, &keyLen) < 0) goto _err;
char *key = taosMemoryCalloc(keyLen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, key) < 0) return -1;
key = taosMemoryCalloc(keyLen + 1, sizeof(char));
if (tDecodeCStrTo(pDecoder, key) < 0) goto _err;
int32_t ref = 0;
if (tDecodeI32(pDecoder, &ref) < 0) return -1;
if (tDecodeI32(pDecoder, &ref) < 0) goto _err;
taosHashPut(pRsp->useDbs, key, strlen(key), &ref, sizeof(ref));
taosMemoryFree(key);
taosMemoryFreeClear(key);
}
// since 3.0.7.0
if (!tDecodeIsEnd(pDecoder)) {
if (tDecodeI32(pDecoder, &pRsp->passVer) < 0) return -1;
if (tDecodeI32(pDecoder, &pRsp->passVer) < 0) goto _err;
} else {
pRsp->passVer = 0;
}
}
return 0;
_err:
taosHashCleanup(pRsp->createdDbs);
taosHashCleanup(pRsp->readDbs);
taosHashCleanup(pRsp->writeDbs);
taosHashCleanup(pRsp->writeTbs);
taosHashCleanup(pRsp->readTbs);
taosHashCleanup(pRsp->useDbs);
taosMemoryFreeClear(key);
taosMemoryFreeClear(value);
return -1;
}
int32_t tDeserializeSGetUserAuthRsp(void *buf, int32_t bufLen, SGetUserAuthRsp *pRsp) {
@ -2846,7 +2877,6 @@ int32_t tSerializeSDbHbRspImp(SEncoder *pEncoder, const SDbHbRsp *pRsp) {
return 0;
}
int32_t tSerializeSDbHbBatchRsp(void *buf, int32_t bufLen, SDbHbBatchRsp *pRsp) {
SEncoder encoder = {0};
tEncoderInit(&encoder, buf, bufLen);
@ -2910,7 +2940,7 @@ int32_t tDeserializeSUseDbRsp(void *buf, int32_t bufLen, SUseDbRsp *pRsp) {
return 0;
}
int32_t tDeserializeSDbHbRspImp(SDecoder* decoder, SDbHbRsp* pRsp) {
int32_t tDeserializeSDbHbRspImp(SDecoder *decoder, SDbHbRsp *pRsp) {
int8_t flag = 0;
if (tDecodeI8(decoder, &flag) < 0) return -1;
if (flag) {
@ -3198,7 +3228,7 @@ int32_t tSerializeSDbCfgRsp(void *buf, int32_t bufLen, const SDbCfgRsp *pRsp) {
return tlen;
}
int32_t tDeserializeSDbCfgRspImpl(SDecoder* decoder, SDbCfgRsp *pRsp) {
int32_t tDeserializeSDbCfgRspImpl(SDecoder *decoder, SDbCfgRsp *pRsp) {
if (tDecodeCStrTo(decoder, pRsp->db) < 0) return -1;
if (tDecodeI64(decoder, &pRsp->dbId) < 0) return -1;
if (tDecodeI32(decoder, &pRsp->cfgVersion) < 0) return -1;
@ -5310,10 +5340,10 @@ int32_t tDeserializeSMqAskEpReq(void *buf, int32_t bufLen, SMqAskEpReq *pReq) {
return 0;
}
int32_t tDeatroySMqHbReq(SMqHbReq* pReq){
for(int i = 0; i < taosArrayGetSize(pReq->topics); i++){
TopicOffsetRows* vgs = taosArrayGet(pReq->topics, i);
if(vgs) taosArrayDestroy(vgs->offsetRows);
int32_t tDeatroySMqHbReq(SMqHbReq *pReq) {
for (int i = 0; i < taosArrayGetSize(pReq->topics); i++) {
TopicOffsetRows *vgs = taosArrayGet(pReq->topics, i);
if (vgs) taosArrayDestroy(vgs->offsetRows);
}
taosArrayDestroy(pReq->topics);
return 0;
@ -5330,7 +5360,7 @@ int32_t tSerializeSMqHbReq(void *buf, int32_t bufLen, SMqHbReq *pReq) {
int32_t sz = taosArrayGetSize(pReq->topics);
if (tEncodeI32(&encoder, sz) < 0) return -1;
for (int32_t i = 0; i < sz; ++i) {
TopicOffsetRows* vgs = (TopicOffsetRows*)taosArrayGet(pReq->topics, i);
TopicOffsetRows *vgs = (TopicOffsetRows *)taosArrayGet(pReq->topics, i);
if (tEncodeCStr(&encoder, vgs->topicName) < 0) return -1;
int32_t szVgs = taosArrayGetSize(vgs->offsetRows);
if (tEncodeI32(&encoder, szVgs) < 0) return -1;
@ -5360,19 +5390,19 @@ int32_t tDeserializeSMqHbReq(void *buf, int32_t bufLen, SMqHbReq *pReq) {
if (tDecodeI32(&decoder, &pReq->epoch) < 0) return -1;
int32_t sz = 0;
if (tDecodeI32(&decoder, &sz) < 0) return -1;
if(sz > 0){
if (sz > 0) {
pReq->topics = taosArrayInit(sz, sizeof(TopicOffsetRows));
if (NULL == pReq->topics) return -1;
for (int32_t i = 0; i < sz; ++i) {
TopicOffsetRows* data = taosArrayReserve(pReq->topics, 1);
TopicOffsetRows *data = taosArrayReserve(pReq->topics, 1);
tDecodeCStrTo(&decoder, data->topicName);
int32_t szVgs = 0;
if (tDecodeI32(&decoder, &szVgs) < 0) return -1;
if(szVgs > 0){
if (szVgs > 0) {
data->offsetRows = taosArrayInit(szVgs, sizeof(OffsetRows));
if (NULL == data->offsetRows) return -1;
for (int32_t j= 0; j < szVgs; ++j) {
OffsetRows* offRows = taosArrayReserve(data->offsetRows, 1);
for (int32_t j = 0; j < szVgs; ++j) {
OffsetRows *offRows = taosArrayReserve(data->offsetRows, 1);
if (tDecodeI32(&decoder, &offRows->vgId) < 0) return -1;
if (tDecodeI64(&decoder, &offRows->rows) < 0) return -1;
if (tDecodeSTqOffsetVal(&decoder, &offRows->offset) < 0) return -1;
@ -5386,6 +5416,47 @@ int32_t tDeserializeSMqHbReq(void *buf, int32_t bufLen, SMqHbReq *pReq) {
return 0;
}
int32_t tSerializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq) {
int32_t headLen = sizeof(SMsgHead);
if (buf != NULL) {
buf = (char *)buf + headLen;
bufLen -= headLen;
}
SEncoder encoder = {0};
tEncoderInit(&encoder, buf, bufLen);
if (tStartEncode(&encoder) < 0) return -1;
if (tEncodeI64(&encoder, pReq->consumerId) < 0) return -1;
if (tEncodeCStr(&encoder, pReq->subKey) < 0) return -1;
tEndEncode(&encoder);
int32_t tlen = encoder.pos;
tEncoderClear(&encoder);
if (buf != NULL) {
SMsgHead *pHead = (SMsgHead *)((char *)buf - headLen);
pHead->vgId = htonl(pReq->head.vgId);
pHead->contLen = htonl(tlen + headLen);
}
return tlen + headLen;
}
int32_t tDeserializeSMqSeekReq(void *buf, int32_t bufLen, SMqSeekReq *pReq) {
int32_t headLen = sizeof(SMsgHead);
SDecoder decoder = {0};
tDecoderInit(&decoder, (char *)buf + headLen, bufLen - headLen);
if (tStartDecode(&decoder) < 0) return -1;
if (tDecodeI64(&decoder, &pReq->consumerId) < 0) return -1;
tDecodeCStrTo(&decoder, pReq->subKey);
tEndDecode(&decoder);
tDecoderClear(&decoder);
return 0;
}
int32_t tSerializeSSubQueryMsg(void *buf, int32_t bufLen, SSubQueryMsg *pReq) {
int32_t headLen = sizeof(SMsgHead);
if (buf != NULL) {
@ -5572,9 +5643,9 @@ int32_t tSerializeSMqPollReq(void *buf, int32_t bufLen, SMqPollReq *pReq) {
int32_t tDeserializeSMqPollReq(void *buf, int32_t bufLen, SMqPollReq *pReq) {
int32_t headLen = sizeof(SMsgHead);
// SMsgHead *pHead = buf;
// pHead->vgId = pReq->head.vgId;
// pHead->contLen = pReq->head.contLen;
// SMsgHead *pHead = buf;
// pHead->vgId = pReq->head.vgId;
// pHead->contLen = pReq->head.contLen;
SDecoder decoder = {0};
tDecoderInit(&decoder, (char *)buf + headLen, bufLen - headLen);
@ -6945,7 +7016,7 @@ int32_t tDecodeSVAlterTbReq(SDecoder *pDecoder, SVAlterTbReq *pReq) {
return 0;
}
int32_t tDecodeSVAlterTbReqSetCtime(SDecoder* pDecoder, SVAlterTbReq* pReq, int64_t ctimeMs) {
int32_t tDecodeSVAlterTbReqSetCtime(SDecoder *pDecoder, SVAlterTbReq *pReq, int64_t ctimeMs) {
if (tStartDecode(pDecoder) < 0) return -1;
if (tDecodeSVAlterTbReqCommon(pDecoder, pReq) < 0) return -1;
@ -7156,11 +7227,6 @@ bool tOffsetEqual(const STqOffsetVal *pLeft, const STqOffsetVal *pRight) {
return pLeft->uid == pRight->uid && pLeft->ts == pRight->ts;
} else if (pLeft->type == TMQ_OFFSET__SNAPSHOT_META) {
return pLeft->uid == pRight->uid;
} else {
ASSERT(0);
/*ASSERT(pLeft->type == TMQ_OFFSET__RESET_NONE || pLeft->type == TMQ_OFFSET__RESET_EARLIEST ||*/
/*pLeft->type == TMQ_OFFSET__RESET_LATEST);*/
/*return true;*/
}
}
return false;
@ -7178,13 +7244,13 @@ int32_t tDecodeSTqOffset(SDecoder *pDecoder, STqOffset *pOffset) {
return 0;
}
int32_t tEncodeMqVgOffset(SEncoder* pEncoder, const SMqVgOffset* pOffset) {
int32_t tEncodeMqVgOffset(SEncoder *pEncoder, const SMqVgOffset *pOffset) {
if (tEncodeSTqOffset(pEncoder, &pOffset->offset) < 0) return -1;
if (tEncodeI64(pEncoder, pOffset->consumerId) < 0) return -1;
return 0;
}
int32_t tDecodeMqVgOffset(SDecoder* pDecoder, SMqVgOffset* pOffset) {
int32_t tDecodeMqVgOffset(SDecoder *pDecoder, SMqVgOffset *pOffset) {
if (tDecodeSTqOffset(pDecoder, &pOffset->offset) < 0) return -1;
if (tDecodeI64(pDecoder, &pOffset->consumerId) < 0) return -1;
return 0;
@ -7355,27 +7421,8 @@ void tDeleteMqDataRsp(SMqDataRsp *pRsp) {
}
int32_t tEncodeSTaosxRsp(SEncoder *pEncoder, const STaosxRsp *pRsp) {
if (tEncodeSTqOffsetVal(pEncoder, &pRsp->reqOffset) < 0) return -1;
if (tEncodeSTqOffsetVal(pEncoder, &pRsp->rspOffset) < 0) return -1;
if (tEncodeI32(pEncoder, pRsp->blockNum) < 0) return -1;
if (pRsp->blockNum != 0) {
if (tEncodeI8(pEncoder, pRsp->withTbName) < 0) return -1;
if (tEncodeI8(pEncoder, pRsp->withSchema) < 0) return -1;
if (tEncodeMqDataRsp(pEncoder, (const SMqDataRsp *)pRsp) < 0) return -1;
for (int32_t i = 0; i < pRsp->blockNum; i++) {
int32_t bLen = *(int32_t *)taosArrayGet(pRsp->blockDataLen, i);
void *data = taosArrayGetP(pRsp->blockData, i);
if (tEncodeBinary(pEncoder, (const uint8_t *)data, bLen) < 0) return -1;
if (pRsp->withSchema) {
SSchemaWrapper *pSW = (SSchemaWrapper *)taosArrayGetP(pRsp->blockSchema, i);
if (tEncodeSSchemaWrapper(pEncoder, pSW) < 0) return -1;
}
if (pRsp->withTbName) {
char *tbName = (char *)taosArrayGetP(pRsp->blockTbName, i);
if (tEncodeCStr(pEncoder, tbName) < 0) return -1;
}
}
}
if (tEncodeI32(pEncoder, pRsp->createTableNum) < 0) return -1;
if (pRsp->createTableNum) {
for (int32_t i = 0; i < pRsp->createTableNum; i++) {
@ -7388,46 +7435,8 @@ int32_t tEncodeSTaosxRsp(SEncoder *pEncoder, const STaosxRsp *pRsp) {
}
int32_t tDecodeSTaosxRsp(SDecoder *pDecoder, STaosxRsp *pRsp) {
if (tDecodeSTqOffsetVal(pDecoder, &pRsp->reqOffset) < 0) return -1;
if (tDecodeSTqOffsetVal(pDecoder, &pRsp->rspOffset) < 0) return -1;
if (tDecodeI32(pDecoder, &pRsp->blockNum) < 0) return -1;
if (pRsp->blockNum != 0) {
pRsp->blockData = taosArrayInit(pRsp->blockNum, sizeof(void *));
pRsp->blockDataLen = taosArrayInit(pRsp->blockNum, sizeof(int32_t));
if (tDecodeI8(pDecoder, &pRsp->withTbName) < 0) return -1;
if (tDecodeI8(pDecoder, &pRsp->withSchema) < 0) return -1;
if (pRsp->withTbName) {
pRsp->blockTbName = taosArrayInit(pRsp->blockNum, sizeof(void *));
}
if (pRsp->withSchema) {
pRsp->blockSchema = taosArrayInit(pRsp->blockNum, sizeof(void *));
}
if (tDecodeMqDataRsp(pDecoder, (SMqDataRsp *)pRsp) < 0) return -1;
for (int32_t i = 0; i < pRsp->blockNum; i++) {
void *data;
uint64_t bLen;
if (tDecodeBinaryAlloc(pDecoder, &data, &bLen) < 0) return -1;
taosArrayPush(pRsp->blockData, &data);
int32_t len = bLen;
taosArrayPush(pRsp->blockDataLen, &len);
if (pRsp->withSchema) {
SSchemaWrapper *pSW = (SSchemaWrapper *)taosMemoryCalloc(1, sizeof(SSchemaWrapper));
if (pSW == NULL) return -1;
if (tDecodeSSchemaWrapper(pDecoder, pSW) < 0) {
taosMemoryFree(pSW);
return -1;
}
taosArrayPush(pRsp->blockSchema, &pSW);
}
if (pRsp->withTbName) {
char *tbName;
if (tDecodeCStrAlloc(pDecoder, &tbName) < 0) return -1;
taosArrayPush(pRsp->blockTbName, &tbName);
}
}
}
if (tDecodeI32(pDecoder, &pRsp->createTableNum) < 0) return -1;
if (pRsp->createTableNum) {
pRsp->createTableLen = taosArrayInit(pRsp->createTableNum, sizeof(int32_t));

View File

@ -90,6 +90,7 @@ void dmSendStatusReq(SDnodeMgmt *pMgmt) {
req.clusterCfg.statusInterval = tsStatusInterval;
req.clusterCfg.checkTime = 0;
req.clusterCfg.ttlChangeOnWrite = tsTtlChangeOnWrite;
char timestr[32] = "1970-01-01 00:00:00.00";
(void)taosParseTime(timestr, &req.clusterCfg.checkTime, (int32_t)strlen(timestr), TSDB_TIME_PRECISION_MILLI, 0);
memcpy(req.clusterCfg.timezone, tsTimezoneStr, TD_TIMEZONE_LEN);

View File

@ -719,12 +719,13 @@ SArray *vmGetMsgHandles() {
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_SUBSCRIBE, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_DELETE_SUB, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_COMMIT_OFFSET, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_SEEK_TO_OFFSET, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_SEEK, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_ADD_CHECKINFO, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_DEL_CHECKINFO, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_CONSUME, vmPutMsgToQueryQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_CONSUME_PUSH, vmPutMsgToQueryQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_VG_WALINFO, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_TMQ_VG_COMMITTEDINFO, vmPutMsgToFetchQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_DELETE, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_BATCH_DEL, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;
if (dmSetMgmtHandle(pArray, TDMT_VND_COMMIT, vmPutMsgToWriteQueue, 0) == NULL) goto _OVER;


@ -41,7 +41,7 @@ int32_t dmOpenNode(SMgmtWrapper *pWrapper) {
pWrapper->pMgmt = output.pMgmt;
}
dmReportStartup(pWrapper->name, "openned");
dmReportStartup(pWrapper->name, "opened");
return 0;
}
@ -159,7 +159,7 @@ int32_t dmRunDnode(SDnode *pDnode) {
} else {
count++;
}
taosMsleep(100);
}
}


@ -133,16 +133,16 @@ typedef enum {
DND_REASON_TIME_ZONE_NOT_MATCH,
DND_REASON_LOCALE_NOT_MATCH,
DND_REASON_CHARSET_NOT_MATCH,
DND_REASON_TTL_CHANGE_ON_WRITE_NOT_MATCH,
DND_REASON_OTHERS
} EDndReason;
typedef enum {
CONSUMER_UPDATE_REB_MODIFY_NOTOPIC = 1, // topic do not need modified after rebalance
CONSUMER_UPDATE_REB_MODIFY_TOPIC, // topic need modified after rebalance
CONSUMER_UPDATE_REB_MODIFY_REMOVE, // topic need removed after rebalance
// CONSUMER_UPDATE_TIMER_LOST,
CONSUMER_UPDATE_RECOVER,
CONSUMER_UPDATE_SUB_MODIFY, // modify after subscribe req
CONSUMER_UPDATE_REB = 1, // update after rebalance
CONSUMER_ADD_REB, // add after rebalance
CONSUMER_REMOVE_REB, // remove after rebalance
CONSUMER_UPDATE_REC, // update after recover
CONSUMER_UPDATE_SUB, // update after subscribe req
} ECsmUpdateType;
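For orientation, a sketch collecting where each renamed constant is assigned in the hunks further down (function names taken from the hunk headers; not a compilable unit on its own):

pConsumerNew->updateType = CONSUMER_UPDATE_SUB;  // mndProcessSubscribeReq
pConsumerNew->updateType = CONSUMER_UPDATE_REC;  // mndProcessConsumerRecoverMsg
pConsumerNew->updateType = CONSUMER_UPDATE_REB;  // mndPersistRebResult: topics unchanged
pConsumerNew->updateType = CONSUMER_ADD_REB;     // mndPersistRebResult: topic added
pConsumerNew->updateType = CONSUMER_REMOVE_REB;  // mndPersistRebResult: topic removed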
typedef struct {
@ -216,8 +216,9 @@ typedef struct {
int64_t createdTime;
int64_t updateTime;
ESyncState syncState;
SyncTerm syncTerm;
bool syncRestore;
int64_t stateStartTime;
int64_t roleTimeMs;
SDnodeObj* pDnode;
int32_t role;
SyncIndex lastIndex;


@ -76,7 +76,6 @@ static SClusterObj *mndAcquireCluster(SMnode *pMnode, void **ppIter) {
if (pIter == NULL) break;
*ppIter = pIter;
return pCluster;
}


@ -94,7 +94,7 @@ void mndDropConsumerFromSdb(SMnode *pMnode, int64_t consumerId){
bool mndRebTryStart() {
int32_t old = atomic_val_compare_exchange_32(&mqRebInExecCnt, 0, 1);
mDebug("tq timer, rebalance counter old val:%d", old);
mInfo("tq timer, rebalance counter old val:%d", old);
return old == 0;
}
@ -116,7 +116,7 @@ void mndRebCntDec() {
int32_t newVal = val - 1;
int32_t oldVal = atomic_val_compare_exchange_32(&mqRebInExecCnt, val, newVal);
if (oldVal == val) {
mDebug("rebalance trans end, rebalance counter:%d", newVal);
mInfo("rebalance trans end, rebalance counter:%d", newVal);
break;
}
}
@ -184,7 +184,7 @@ static int32_t mndProcessConsumerRecoverMsg(SRpcMsg *pMsg) {
}
SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(pConsumer->consumerId, pConsumer->cgroup);
pConsumerNew->updateType = CONSUMER_UPDATE_RECOVER;
pConsumerNew->updateType = CONSUMER_UPDATE_REC;
mndReleaseConsumer(pMnode, pConsumer);
@ -281,7 +281,7 @@ static int32_t mndProcessMqTimerMsg(SRpcMsg *pMsg) {
// rebalance cannot be parallel
if (!mndRebTryStart()) {
mDebug("mq rebalance already in progress, do nothing");
mInfo("mq rebalance already in progress, do nothing");
return 0;
}
@ -312,7 +312,7 @@ static int32_t mndProcessMqTimerMsg(SRpcMsg *pMsg) {
int32_t hbStatus = atomic_add_fetch_32(&pConsumer->hbStatus, 1);
int32_t status = atomic_load_32(&pConsumer->status);
mDebug("check for consumer:0x%" PRIx64 " status:%d(%s), sub-time:%" PRId64 ", createTime:%" PRId64 ", hbstatus:%d",
mInfo("check for consumer:0x%" PRIx64 " status:%d(%s), sub-time:%" PRId64 ", createTime:%" PRId64 ", hbstatus:%d",
pConsumer->consumerId, status, mndConsumerStatusName(status), pConsumer->subscribeTime, pConsumer->createTime,
hbStatus);
@ -362,7 +362,7 @@ static int32_t mndProcessMqTimerMsg(SRpcMsg *pMsg) {
}
if (taosHashGetSize(pRebMsg->rebSubHash) != 0) {
mInfo("mq rebalance will be triggered");
mInfo("mq rebalance will be triggered");
SRpcMsg rpcMsg = {
.msgType = TDMT_MND_TMQ_DO_REBALANCE,
.pCont = pRebMsg,
@ -416,7 +416,7 @@ static int32_t mndProcessMqHbReq(SRpcMsg *pMsg) {
for(int i = 0; i < taosArrayGetSize(req.topics); i++){
TopicOffsetRows* data = taosArrayGet(req.topics, i);
mDebug("heartbeat report offset rows.%s:%s", pConsumer->cgroup, data->topicName);
mInfo("heartbeat report offset rows.%s:%s", pConsumer->cgroup, data->topicName);
SMqSubscribeObj *pSub = mndAcquireSubscribe(pMnode, pConsumer->cgroup, data->topicName);
if(pSub == NULL){
@ -515,7 +515,7 @@ static int32_t mndProcessAskEpReq(SRpcMsg *pMsg) {
char *topic = taosArrayGetP(pConsumer->currentTopics, i);
SMqSubscribeObj *pSub = mndAcquireSubscribe(pMnode, pConsumer->cgroup, topic);
// txn guarantees pSub is created
if(pSub == NULL) continue;
taosRLockLatch(&pSub->lock);
SMqSubTopicEp topicEp = {0};
@ -523,6 +523,11 @@ static int32_t mndProcessAskEpReq(SRpcMsg *pMsg) {
// 2.1 fetch topic schema
SMqTopicObj *pTopic = mndAcquireTopic(pMnode, topic);
if(pTopic == NULL) {
taosRUnLockLatch(&pSub->lock);
mndReleaseSubscribe(pMnode, pSub);
continue;
}
taosRLockLatch(&pTopic->lock);
tstrncpy(topicEp.db, pTopic->db, TSDB_DB_FNAME_LEN);
topicEp.schema.nCols = pTopic->schema.nCols;
@ -701,7 +706,7 @@ int32_t mndProcessSubscribeReq(SRpcMsg *pMsg) {
pConsumerNew->autoCommitInterval = subscribe.autoCommitInterval;
pConsumerNew->resetOffsetCfg = subscribe.resetOffsetCfg;
// pConsumerNew->updateType = CONSUMER_UPDATE_SUB_MODIFY; // use insert logic
// pConsumerNew->updateType = CONSUMER_UPDATE_SUB; // use insert logic
taosArrayDestroy(pConsumerNew->assignedTopics);
pConsumerNew->assignedTopics = taosArrayDup(pTopicList, topicNameDup);
@ -731,7 +736,7 @@ int32_t mndProcessSubscribeReq(SRpcMsg *pMsg) {
}
// set the update type
pConsumerNew->updateType = CONSUMER_UPDATE_SUB_MODIFY;
pConsumerNew->updateType = CONSUMER_UPDATE_SUB;
taosArrayDestroy(pConsumerNew->assignedTopics);
pConsumerNew->assignedTopics = taosArrayDup(pTopicList, topicNameDup);
@ -984,7 +989,7 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer,
taosWLockLatch(&pOldConsumer->lock);
if (pNewConsumer->updateType == CONSUMER_UPDATE_SUB_MODIFY) {
if (pNewConsumer->updateType == CONSUMER_UPDATE_SUB) {
TSWAP(pOldConsumer->rebNewTopics, pNewConsumer->rebNewTopics);
TSWAP(pOldConsumer->rebRemovedTopics, pNewConsumer->rebRemovedTopics);
TSWAP(pOldConsumer->assignedTopics, pNewConsumer->assignedTopics);
@ -1004,7 +1009,7 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer,
// mInfo("consumer:0x%" PRIx64 " timer update, timer lost. state %s -> %s, reb-time:%" PRId64 ", reb-removed-topics:%d",
// pOldConsumer->consumerId, mndConsumerStatusName(prevStatus), mndConsumerStatusName(pOldConsumer->status),
// pOldConsumer->rebalanceTime, (int)taosArrayGetSize(pOldConsumer->rebRemovedTopics));
} else if (pNewConsumer->updateType == CONSUMER_UPDATE_RECOVER) {
} else if (pNewConsumer->updateType == CONSUMER_UPDATE_REC) {
int32_t sz = taosArrayGetSize(pOldConsumer->assignedTopics);
for (int32_t i = 0; i < sz; i++) {
char *topic = taosStrdup(taosArrayGetP(pOldConsumer->assignedTopics, i));
@ -1013,12 +1018,12 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer,
pOldConsumer->status = MQ_CONSUMER_STATUS_REBALANCE;
mInfo("consumer:0x%" PRIx64 " timer update, timer recover",pOldConsumer->consumerId);
} else if (pNewConsumer->updateType == CONSUMER_UPDATE_REB_MODIFY_NOTOPIC) {
} else if (pNewConsumer->updateType == CONSUMER_UPDATE_REB) {
atomic_add_fetch_32(&pOldConsumer->epoch, 1);
pOldConsumer->rebalanceTime = taosGetTimestampMs();
mInfo("consumer:0x%" PRIx64 " reb update, only rebalance time", pOldConsumer->consumerId);
} else if (pNewConsumer->updateType == CONSUMER_UPDATE_REB_MODIFY_TOPIC) {
} else if (pNewConsumer->updateType == CONSUMER_ADD_REB) {
char *pNewTopic = taosStrdup(taosArrayGetP(pNewConsumer->rebNewTopics, 0));
// check if exist in current topic
@ -1049,7 +1054,7 @@ static int32_t mndConsumerActionUpdate(SSdb *pSdb, SMqConsumerObj *pOldConsumer,
(int)taosArrayGetSize(pOldConsumer->currentTopics), (int)taosArrayGetSize(pOldConsumer->rebNewTopics),
(int)taosArrayGetSize(pOldConsumer->rebRemovedTopics));
} else if (pNewConsumer->updateType == CONSUMER_UPDATE_REB_MODIFY_REMOVE) {
} else if (pNewConsumer->updateType == CONSUMER_REMOVE_REB) {
char *removedTopic = taosArrayGetP(pNewConsumer->rebRemovedTopics, 0);
// remove from removed topic
@ -1104,13 +1109,13 @@ static int32_t mndRetrieveConsumer(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *
}
if (taosArrayGetSize(pConsumer->assignedTopics) == 0) {
mDebug("showing consumer:0x%" PRIx64 " no assigned topic, skip", pConsumer->consumerId);
mInfo("showing consumer:0x%" PRIx64 " no assigned topic, skip", pConsumer->consumerId);
sdbRelease(pSdb, pConsumer);
continue;
}
taosRLockLatch(&pConsumer->lock);
mDebug("showing consumer:0x%" PRIx64, pConsumer->consumerId);
mInfo("showing consumer:0x%" PRIx64, pConsumer->consumerId);
int32_t topicSz = taosArrayGetSize(pConsumer->assignedTopics);
bool hasTopic = true;


@ -1303,11 +1303,10 @@ static void mndBuildDBVgroupInfo(SDbObj *pDb, SMnode *pMnode, SArray *pVgList) {
sdbRelease(pSdb, pVgroup);
if (pDb && (vindex >= pDb->cfg.numOfVgroups)) {
sdbCancelFetch(pSdb, pIter);
break;
}
}
sdbCancelFetch(pSdb, pIter);
}
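This hunk and many below it enforce a single idiom: every early exit from an sdbFetch() loop must cancel the iterator in addition to releasing the fetched object, otherwise the sdb keeps a read reference alive. A minimal sketch of the pattern (shouldStop is a placeholder for whatever early-exit condition the caller has):

void   *pIter   = NULL;
SVgObj *pVgroup = NULL;
while (1) {
  pIter = sdbFetch(pSdb, SDB_VGROUP, pIter, (void **)&pVgroup);
  if (pIter == NULL) break;        // iteration completed, nothing to cancel
  if (shouldStop(pVgroup)) {
    sdbRelease(pSdb, pVgroup);     // drop the object reference
    sdbCancelFetch(pSdb, pIter);   // drop the iterator reference
    break;
  }
  sdbRelease(pSdb, pVgroup);
}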
int32_t mndExtractDbInfo(SMnode *pMnode, SDbObj *pDb, SUseDbRsp *pRsp, const SUseDbReq *pReq) {


@ -41,6 +41,7 @@ static const char *offlineReason[] = {
"timezone not match",
"locale not match",
"charset not match",
"ttl change on write not match"
"unknown",
};
@ -414,6 +415,12 @@ static int32_t mndCheckClusterCfgPara(SMnode *pMnode, SDnodeObj *pDnode, const S
return DND_REASON_CHARSET_NOT_MATCH;
}
if (pCfg->ttlChangeOnWrite != tsTtlChangeOnWrite) {
mError("dnode:%d, ttlChangeOnWrite:%d inconsistent with cluster:%d", pDnode->id, pCfg->ttlChangeOnWrite,
tsTtlChangeOnWrite);
return DND_REASON_TTL_CHANGE_ON_WRITE_NOT_MATCH;
}
return 0;
}
@ -524,13 +531,23 @@ static int32_t mndProcessStatusReq(SRpcMsg *pReq) {
SMnodeObj *pObj = mndAcquireMnode(pMnode, pDnode->id);
if (pObj != NULL) {
if (pObj->syncState != statusReq.mload.syncState || pObj->syncRestore != statusReq.mload.syncRestore) {
mInfo("dnode:%d, mnode syncState from %s to %s, restoreState from %d to %d", pObj->id, syncStr(pObj->syncState),
syncStr(statusReq.mload.syncState), pObj->syncRestore, statusReq.mload.syncRestore);
bool roleChanged = pObj->syncState != statusReq.mload.syncState ||
(statusReq.mload.syncTerm != -1 && pObj->syncTerm != statusReq.mload.syncTerm);
bool restoreChanged = pObj->syncRestore != statusReq.mload.syncRestore;
if (roleChanged || restoreChanged) {
mInfo("dnode:%d, mnode syncState from %s to %s, restoreState from %d to %d, syncTerm from %" PRId64
" to %" PRId64,
pObj->id, syncStr(pObj->syncState), syncStr(statusReq.mload.syncState), pObj->syncRestore,
statusReq.mload.syncRestore, pObj->syncTerm, statusReq.mload.syncTerm);
pObj->syncState = statusReq.mload.syncState;
pObj->syncRestore = statusReq.mload.syncRestore;
pObj->stateStartTime = taosGetTimestampMs();
pObj->syncTerm = statusReq.mload.syncTerm;
}
if (roleChanged) {
pObj->roleTimeMs = (statusReq.mload.roleTimeMs != 0) ? statusReq.mload.roleTimeMs : taosGetTimestampMs();
}
mndReleaseMnode(pMnode, pObj);
}
@ -710,6 +727,7 @@ _OVER:
} else {
mndReleaseDnode(pMnode, pDnode);
}
sdbCancelFetch(pSdb, pIter);
mndTransDrop(pTrans);
sdbFreeRaw(pRaw);
return terrno;


@ -831,6 +831,7 @@ int32_t mndGetIdxsByTagName(SMnode *pMnode, SStbObj *pStb, char *tagName, SIdxOb
if (pIdx->stbUid == pStb->uid && strcasecmp(pIdx->colName, tagName) == 0) {
memcpy((char *)idx, (char *)pIdx, sizeof(SIdxObj));
sdbRelease(pSdb, pIdx);
sdbCancelFetch(pSdb, pIter);
return 0;
}
@ -851,7 +852,7 @@ int32_t mndDropIdxsByDb(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
if (pIdx->dbUid == pDb->uid) {
if (mndSetDropIdxCommitLogs(pMnode, pTrans, pIdx) != 0) {
sdbRelease(pSdb, pIdx);
sdbCancelFetch(pSdb, pIdx);
sdbCancelFetch(pSdb, pIter);
return -1;
}
}


@ -890,7 +890,10 @@ int32_t mndGetLoad(SMnode *pMnode, SMnodeLoad *pLoad) {
SSyncState state = syncGetState(pMnode->syncMgmt.sync);
pLoad->syncState = state.state;
pLoad->syncRestore = state.restored;
mTrace("mnode current syncState is %s, syncRestore:%d", syncStr(pLoad->syncState), pLoad->syncRestore);
pLoad->syncTerm = state.term;
pLoad->roleTimeMs = state.roleTimeMs;
mTrace("mnode current syncState is %s, syncRestore:%d, syncTerm:%" PRId64 " ,roleTimeMs:%" PRId64,
syncStr(pLoad->syncState), pLoad->syncRestore, pLoad->syncTerm, pLoad->roleTimeMs);
return 0;
}


@ -319,7 +319,7 @@ static int32_t mndBuildCreateMnodeRedoAction(STrans *pTrans, SDCreateMnodeReq *p
return 0;
}
static int32_t mndBuildAlterMnodeTypeRedoAction(STrans *pTrans,
static int32_t mndBuildAlterMnodeTypeRedoAction(STrans *pTrans,
SDAlterMnodeTypeReq *pAlterMnodeTypeReq, SEpSet *pAlterMnodeTypeEpSet) {
int32_t contLen = tSerializeSDCreateMnodeReq(NULL, 0, pAlterMnodeTypeReq);
void *pReq = taosMemoryMalloc(contLen);
@ -803,9 +803,17 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
int32_t numOfRows = 0;
int32_t cols = 0;
SMnodeObj *pObj = NULL;
SMnodeObj *pSelfObj = NULL;
ESdbStatus objStatus = 0;
char *pWrite;
int64_t curMs = taosGetTimestampMs();
int64_t dummyTimeMs = 0;
pSelfObj = sdbAcquire(pSdb, SDB_MNODE, &pMnode->selfDnodeId);
if (pSelfObj == NULL) {
mError("mnode:%d, failed to acquire self %s", pMnode->selfDnodeId, terrstr());
goto _out;
}
while (numOfRows < rows) {
pShow->pIter = sdbFetchAll(pSdb, SDB_MNODE, pShow->pIter, (void **)&pObj, &objStatus, true);
@ -825,7 +833,8 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
if (pObj->id == pMnode->selfDnodeId) {
snprintf(role, sizeof(role), "%s%s", syncStr(TAOS_SYNC_STATE_LEADER), pMnode->restored ? "" : "*");
}
if (mndIsDnodeOnline(pObj->pDnode, curMs)) {
bool isDnodeOnline = mndIsDnodeOnline(pObj->pDnode, curMs);
if (isDnodeOnline) {
tstrncpy(role, syncStr(pObj->syncState), sizeof(role));
if (pObj->syncState == TAOS_SYNC_STATE_LEADER && pObj->id != pMnode->selfDnodeId) {
tstrncpy(role, syncStr(TAOS_SYNC_STATE_ERROR), sizeof(role));
@ -840,7 +849,7 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
const char *status = "ready";
if (objStatus == SDB_STATUS_CREATING) status = "creating";
if (objStatus == SDB_STATUS_DROPPING) status = "dropping";
if (!mndIsDnodeOnline(pObj->pDnode, curMs)) status = "offline";
if (!isDnodeOnline) status = "offline";
char b3[9 + VARSTR_HEADER_SIZE] = {0};
STR_WITH_MAXSIZE_TO_VARSTR(b3, status, pShow->pMeta->pSchemas[cols].bytes);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
@ -850,7 +859,15 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
colDataSetVal(pColInfo, numOfRows, (const char *)&pObj->createdTime, false);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataSetVal(pColInfo, numOfRows, (const char *)&pObj->stateStartTime, false);
if (pObj->syncTerm != pSelfObj->syncTerm || !isDnodeOnline) {
// state of old term / no status report => use dummyTimeMs
if (pObj->syncTerm > pSelfObj->syncTerm) {
mError("mnode:%d has a newer term:%" PRId64 " than me:%" PRId64, pObj->id, pObj->syncTerm, pSelfObj->syncTerm);
}
colDataSetVal(pColInfo, numOfRows, (const char *)&dummyTimeMs, false);
} else {
colDataSetVal(pColInfo, numOfRows, (const char *)&pObj->roleTimeMs, false);
}
numOfRows++;
sdbRelease(pSdb, pObj);
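Condensed, the branch above amounts to the following contract for the role-time column (a sketch with the logging elided, not the literal code): the reported time is shown only for an online peer whose sync term matches the local leader's; anything else gets the zero placeholder.

int64_t shown = dummyTimeMs;  // 0 for an offline peer or a report from another term
if (isDnodeOnline && pObj->syncTerm == pSelfObj->syncTerm) {
  shown = pObj->roleTimeMs;   // fresh report from the current term
}
colDataSetVal(pColInfo, numOfRows, (const char *)&shown, false);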
@ -858,6 +875,8 @@ static int32_t mndRetrieveMnodes(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pB
pShow->numOfRows += numOfRows;
_out:
sdbRelease(pSdb, pSelfObj);
return numOfRows;
}
@ -999,12 +1018,12 @@ static void mndReloadSyncConfig(SMnode *pMnode) {
}
if (pMnode->syncMgmt.sync > 0) {
mInfo("vgId:1, mnode sync reconfig, totalReplica:%d replica:%d myIndex:%d",
mInfo("vgId:1, mnode sync reconfig, totalReplica:%d replica:%d myIndex:%d",
cfg.totalReplicaNum, cfg.replicaNum, cfg.myIndex);
for (int32_t i = 0; i < cfg.totalReplicaNum; ++i) {
SNodeInfo *pNode = &cfg.nodeInfo[i];
mInfo("vgId:1, index:%d, ep:%s:%u dnode:%d cluster:%" PRId64 " role:%d", i, pNode->nodeFqdn, pNode->nodePort,
mInfo("vgId:1, index:%d, ep:%s:%u dnode:%d cluster:%" PRId64 " role:%d", i, pNode->nodeFqdn, pNode->nodePort,
pNode->nodeId, pNode->clusterId, pNode->nodeRole);
}


@ -454,6 +454,7 @@ int32_t mndCreateQnodeList(SMnode *pMnode, SArray **pList, int32_t limit) {
sdbRelease(pSdb, pObj);
if (limit > 0 && numOfRows >= limit) {
sdbCancelFetch(pSdb, pIter);
break;
}
}


@ -168,6 +168,7 @@ SSnodeObj* mndSchedFetchOneSnode(SMnode* pMnode) {
void* pIter = NULL;
// TODO random fetch
pIter = sdbFetch(pMnode->pSdb, SDB_SNODE, pIter, (void**)&pObj);
sdbCancelFetch(pMnode->pSdb, pIter);
return pObj;
}
@ -199,6 +200,7 @@ SVgObj* mndSchedFetchOneVg(SMnode* pMnode, int64_t dbUid) {
sdbRelease(pMnode->pSdb, pVgroup);
continue;
}
sdbCancelFetch(pMnode->pSdb, pIter);
return pVgroup;
}
return pVgroup;


@ -504,6 +504,11 @@ static void mndDestroySmaObj(SSmaObj *pSmaObj) {
static int32_t mndCreateSma(SMnode *pMnode, SRpcMsg *pReq, SMCreateSmaReq *pCreate, SDbObj *pDb, SStbObj *pStb,
const char *streamName) {
if (pDb->cfg.replications > 1) {
terrno = TSDB_CODE_MND_INVALID_SMA_OPTION;
mError("sma:%s, failed to create since not support multiple replicas", pCreate->name);
return -1;
}
SSmaObj smaObj = {0};
memcpy(smaObj.name, pCreate->name, TSDB_TABLE_FNAME_LEN);
memcpy(smaObj.stb, pStb->name, TSDB_TABLE_FNAME_LEN);


@ -900,7 +900,6 @@ static int32_t mndProcessTtlTimer(SRpcMsg *pReq) {
SMsgHead *pHead = rpcMallocCont(contLen);
if (pHead == NULL) {
sdbCancelFetch(pSdb, pVgroup);
sdbRelease(pSdb, pVgroup);
continue;
}
@ -1240,6 +1239,7 @@ static int32_t mndCheckAlterColForTopic(SMnode *pMnode, const char *stbFullName,
terrno = TSDB_CODE_MND_FIELD_CONFLICT_WITH_TOPIC;
mError("topic:%s, create ast error", pTopic->name);
sdbRelease(pSdb, pTopic);
sdbCancelFetch(pSdb, pIter);
return -1;
}
@ -1260,6 +1260,7 @@ static int32_t mndCheckAlterColForTopic(SMnode *pMnode, const char *stbFullName,
mError("topic:%s, check colId:%d conflicted", pTopic->name, pCol->colId);
nodesDestroyNode(pAst);
nodesDestroyList(pNodeList);
sdbCancelFetch(pSdb, pIter);
sdbRelease(pSdb, pTopic);
return -1;
}
@ -1287,6 +1288,7 @@ static int32_t mndCheckAlterColForStream(SMnode *pMnode, const char *stbFullName
terrno = TSDB_CODE_MND_INVALID_STREAM_OPTION;
mError("stream:%s, create ast error", pStream->name);
sdbRelease(pSdb, pStream);
sdbCancelFetch(pSdb, pIter);
return -1;
}
@ -1306,6 +1308,7 @@ static int32_t mndCheckAlterColForStream(SMnode *pMnode, const char *stbFullName
nodesDestroyNode(pAst);
nodesDestroyList(pNodeList);
sdbRelease(pSdb, pStream);
sdbCancelFetch(pSdb, pIter);
return -1;
}
mInfo("stream:%s, check colId:%d passed", pStream->name, pCol->colId);
@ -1335,6 +1338,7 @@ static int32_t mndCheckAlterColForTSma(SMnode *pMnode, const char *stbFullName,
terrno = TSDB_CODE_SDB_INVALID_DATA_CONTENT;
mError("tsma:%s, check tag and column modifiable, stb:%s suid:%" PRId64 " colId:%d failed since parse AST err",
pSma->name, stbFullName, suid, colId);
sdbCancelFetch(pSdb, pIter);
return -1;
}
@ -1355,6 +1359,7 @@ static int32_t mndCheckAlterColForTSma(SMnode *pMnode, const char *stbFullName,
nodesDestroyNode(pAst);
nodesDestroyList(pNodeList);
sdbRelease(pSdb, pSma);
sdbCancelFetch(pSdb, pIter);
return -1;
}
mInfo("tsma:%s, check colId:%d passed", pSma->name, pCol->colId);
@ -2271,6 +2276,7 @@ static int32_t mndCheckDropStbForTopic(SMnode *pMnode, const char *stbFullName,
if (pTopic->subType == TOPIC_SUB_TYPE__TABLE) {
if (pTopic->stbUid == suid) {
sdbRelease(pSdb, pTopic);
sdbCancelFetch(pSdb, pIter);
return -1;
}
}
@ -2285,6 +2291,7 @@ static int32_t mndCheckDropStbForTopic(SMnode *pMnode, const char *stbFullName,
terrno = TSDB_CODE_MND_INVALID_TOPIC_OPTION;
mError("topic:%s, create ast error", pTopic->name);
sdbRelease(pSdb, pTopic);
sdbCancelFetch(pSdb, pIter);
return -1;
}
@ -2298,6 +2305,7 @@ static int32_t mndCheckDropStbForTopic(SMnode *pMnode, const char *stbFullName,
sdbRelease(pSdb, pTopic);
nodesDestroyNode(pAst);
nodesDestroyList(pNodeList);
sdbCancelFetch(pSdb, pIter);
return -1;
} else {
goto NEXT;
@ -2325,6 +2333,7 @@ static int32_t mndCheckDropStbForStream(SMnode *pMnode, const char *stbFullName,
}
if (pStream->targetStbUid == suid) {
sdbCancelFetch(pSdb, pIter);
sdbRelease(pSdb, pStream);
return -1;
}
@ -2333,6 +2342,7 @@ static int32_t mndCheckDropStbForStream(SMnode *pMnode, const char *stbFullName,
if (nodesStringToNode(pStream->ast, &pAst) != 0) {
terrno = TSDB_CODE_MND_INVALID_STREAM_OPTION;
mError("stream:%s, create ast error", pStream->name);
sdbCancelFetch(pSdb, pIter);
sdbRelease(pSdb, pStream);
return -1;
}
@ -2344,6 +2354,7 @@ static int32_t mndCheckDropStbForStream(SMnode *pMnode, const char *stbFullName,
SColumnNode *pCol = (SColumnNode *)pNode;
if (pCol->tableId == suid) {
sdbCancelFetch(pSdb, pIter);
sdbRelease(pSdb, pStream);
nodesDestroyNode(pAst);
nodesDestroyList(pNodeList);


@ -740,12 +740,14 @@ static int32_t mndProcessCreateStreamReq(SRpcMsg *pReq) {
if (numOfStream > MND_STREAM_MAX_NUM) {
mError("too many streams, no more than %d for each database", MND_STREAM_MAX_NUM);
terrno = TSDB_CODE_MND_TOO_MANY_STREAMS;
sdbCancelFetch(pMnode->pSdb, pIter);
goto _OVER;
}
if (pStream->targetStbUid == streamObj.targetStbUid) {
mError("Cannot write the same stable as other stream:%s", pStream->name);
terrno = TSDB_CODE_MND_INVALID_TARGET_TABLE;
sdbCancelFetch(pMnode->pSdb, pIter);
goto _OVER;
}
}


@ -597,7 +597,7 @@ static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOu
for (int32_t i = 0; i < consumerNum; i++) {
int64_t consumerId = *(int64_t *)taosArrayGet(pOutput->modifyConsumers, i);
SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(consumerId, cgroup);
pConsumerNew->updateType = CONSUMER_UPDATE_REB_MODIFY_NOTOPIC;
pConsumerNew->updateType = CONSUMER_UPDATE_REB;
if (mndSetConsumerCommitLogs(pMnode, pTrans, pConsumerNew) != 0) {
tDeleteSMqConsumerObj(pConsumerNew, true);
@ -613,7 +613,7 @@ static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOu
for (int32_t i = 0; i < consumerNum; i++) {
int64_t consumerId = *(int64_t *)taosArrayGet(pOutput->newConsumers, i);
SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(consumerId, cgroup);
pConsumerNew->updateType = CONSUMER_UPDATE_REB_MODIFY_TOPIC;
pConsumerNew->updateType = CONSUMER_ADD_REB;
char* topicTmp = taosStrdup(topic);
taosArrayPush(pConsumerNew->rebNewTopics, &topicTmp);
@ -633,7 +633,7 @@ static int32_t mndPersistRebResult(SMnode *pMnode, SRpcMsg *pMsg, const SMqRebOu
int64_t consumerId = *(int64_t *)taosArrayGet(pOutput->removedConsumers, i);
SMqConsumerObj *pConsumerNew = tNewSMqConsumerObj(consumerId, cgroup);
pConsumerNew->updateType = CONSUMER_UPDATE_REB_MODIFY_REMOVE;
pConsumerNew->updateType = CONSUMER_REMOVE_REB;
char* topicTmp = taosStrdup(topic);
taosArrayPush(pConsumerNew->rebRemovedTopics, &topicTmp);
@ -1104,6 +1104,7 @@ int32_t mndDropSubByTopic(SMnode *pMnode, STrans *pTrans, const char *topicName)
if (taosHashGetSize(pSub->consumerHash) != 0) {
sdbRelease(pSdb, pSub);
terrno = TSDB_CODE_MND_IN_REBALANCE;
sdbCancelFetch(pSdb, pIter);
return -1;
}
int32_t sz = taosArrayGetSize(pSub->unassignedVgs);
@ -1122,12 +1123,14 @@ int32_t mndDropSubByTopic(SMnode *pMnode, STrans *pTrans, const char *topicName)
if (mndTransAppendRedoAction(pTrans, &action) != 0) {
taosMemoryFree(pReq);
sdbRelease(pSdb, pSub);
sdbCancelFetch(pSdb, pIter);
return -1;
}
}
if (mndSetDropSubRedoLogs(pMnode, pTrans, pSub) < 0) {
sdbRelease(pSdb, pSub);
sdbCancelFetch(pSdb, pIter);
goto END;
}
@ -1204,7 +1207,7 @@ int32_t mndRetrieveSubscribe(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlock
int32_t numOfRows = 0;
SMqSubscribeObj *pSub = NULL;
mDebug("mnd show subscriptions begin");
mInfo("mnd show subscriptions begin");
while (numOfRows < rowsCapacity) {
pShow->pIter = sdbFetch(pSdb, SDB_SUBSCRIBE, pShow->pIter, (void **)&pSub);
@ -1244,7 +1247,7 @@ int32_t mndRetrieveSubscribe(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBlock
sdbRelease(pSdb, pSub);
}
mDebug("mnd end show subscriptions");
mInfo("mnd end show subscriptions");
pShow->numOfRows += numOfRows;
return numOfRows;


@ -513,6 +513,7 @@ static int32_t mndCreateTopic(SMnode *pMnode, SRpcMsg *pReq, SCMCreateTopicReq *
tEncodeSize(tEncodeSTqCheckInfo, &info, len, code);
if (code < 0) {
sdbRelease(pSdb, pVgroup);
sdbCancelFetch(pSdb, pIter);
goto _OUT;
}
void *buf = taosMemoryCalloc(1, sizeof(SMsgHead) + len);
@ -522,6 +523,7 @@ static int32_t mndCreateTopic(SMnode *pMnode, SRpcMsg *pReq, SCMCreateTopicReq *
if (tEncodeSTqCheckInfo(&encoder, &info) < 0) {
taosMemoryFree(buf);
sdbRelease(pSdb, pVgroup);
sdbCancelFetch(pSdb, pIter);
goto _OUT;
}
tEncoderClear(&encoder);
@ -535,6 +537,7 @@ static int32_t mndCreateTopic(SMnode *pMnode, SRpcMsg *pReq, SCMCreateTopicReq *
if (mndTransAppendRedoAction(pTrans, &action) != 0) {
taosMemoryFree(buf);
sdbRelease(pSdb, pVgroup);
sdbCancelFetch(pSdb, pIter);
goto _OUT;
}
buf = NULL;
@ -647,7 +650,6 @@ static int32_t mndDropTopic(SMnode *pMnode, STrans *pTrans, SRpcMsg *pReq, SMqTo
code = 0;
_OVER:
mndTransDrop(pTrans);
return code;
}
@ -698,6 +700,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
if (strcmp(name, pTopic->name) == 0) {
mndReleaseConsumer(pMnode, pConsumer);
mndReleaseTopic(pMnode, pTopic);
sdbCancelFetch(pSdb, pIter);
terrno = TSDB_CODE_MND_TOPIC_SUBSCRIBED;
mError("topic:%s, failed to drop since subscribed by consumer:0x%" PRIx64 ", in consumer group %s",
dropReq.name, pConsumer->consumerId, pConsumer->cgroup);
@ -711,6 +714,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
if (strcmp(name, pTopic->name) == 0) {
mndReleaseConsumer(pMnode, pConsumer);
mndReleaseTopic(pMnode, pTopic);
sdbCancelFetch(pSdb, pIter);
terrno = TSDB_CODE_MND_TOPIC_SUBSCRIBED;
mError("topic:%s, failed to drop since subscribed by consumer:%" PRId64 ", in consumer group %s (reb new)",
dropReq.name, pConsumer->consumerId, pConsumer->cgroup);
@ -724,6 +728,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
if (strcmp(name, pTopic->name) == 0) {
mndReleaseConsumer(pMnode, pConsumer);
mndReleaseTopic(pMnode, pTopic);
sdbCancelFetch(pSdb, pIter);
terrno = TSDB_CODE_MND_TOPIC_SUBSCRIBED;
mError("topic:%s, failed to drop since subscribed by consumer:%" PRId64 ", in consumer group %s (reb remove)",
dropReq.name, pConsumer->consumerId, pConsumer->cgroup);
@ -735,6 +740,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
}
if (mndCheckDbPrivilegeByName(pMnode, pReq->info.conn.user, MND_OPER_READ_DB, pTopic->db) != 0) {
mndReleaseTopic(pMnode, pTopic);
return -1;
}
@ -788,6 +794,8 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
if (mndTransAppendRedoAction(pTrans, &action) != 0) {
taosMemoryFree(buf);
sdbRelease(pSdb, pVgroup);
mndReleaseTopic(pMnode, pTopic);
sdbCancelFetch(pSdb, pIter);
mndTransDrop(pTrans);
return -1;
}
@ -796,6 +804,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
int32_t code = mndDropTopic(pMnode, pTrans, pReq, pTopic);
mndReleaseTopic(pMnode, pTopic);
mndTransDrop(pTrans);
if (code != 0) {
mError("topic:%s, failed to drop since %s", dropReq.name, terrstr());
@ -999,6 +1008,7 @@ bool mndTopicExistsForDb(SMnode *pMnode, SDbObj *pDb) {
if (pTopic->dbUid == pDb->uid) {
sdbRelease(pSdb, pTopic);
sdbCancelFetch(pSdb, pIter);
return true;
}


@ -630,6 +630,11 @@ static int32_t mndProcessCreateUserReq(SRpcMsg *pReq) {
goto _OVER;
}
if (strlen(createReq.pass) >= TSDB_PASSWORD_LEN){
terrno = TSDB_CODE_PAR_NAME_OR_PASSWD_TOO_LONG;
goto _OVER;
}
pUser = mndAcquireUser(pMnode, createReq.user);
if (pUser != NULL) {
terrno = TSDB_CODE_MND_USER_ALREADY_EXIST;
@ -922,19 +927,19 @@ static int32_t mndProcessAlterUserReq(SRpcMsg *pReq) {
}
}
if (alterReq.alterType == TSDB_ALTER_USER_ADD_READ_TABLE) {
if (alterReq.alterType == TSDB_ALTER_USER_ADD_READ_TABLE || alterReq.alterType == TSDB_ALTER_USER_ADD_ALL_TABLE) {
if (mndTablePriviledge(pMnode, newUser.readTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER;
}
if (alterReq.alterType == TSDB_ALTER_USER_ADD_WRITE_TABLE) {
if (alterReq.alterType == TSDB_ALTER_USER_ADD_WRITE_TABLE || alterReq.alterType == TSDB_ALTER_USER_ADD_ALL_TABLE) {
if (mndTablePriviledge(pMnode, newUser.writeTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER;
}
if (alterReq.alterType == TSDB_ALTER_USER_REMOVE_READ_TABLE) {
if (alterReq.alterType == TSDB_ALTER_USER_REMOVE_READ_TABLE || alterReq.alterType == TSDB_ALTER_USER_REMOVE_ALL_TABLE) {
if (mndRemoveTablePriviledge(pMnode, newUser.readTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER;
}
if (alterReq.alterType == TSDB_ALTER_USER_REMOVE_WRITE_TABLE) {
if (alterReq.alterType == TSDB_ALTER_USER_REMOVE_WRITE_TABLE || alterReq.alterType == TSDB_ALTER_USER_REMOVE_ALL_TABLE) {
if (mndRemoveTablePriviledge(pMnode, newUser.writeTbs, newUser.useDbs, &alterReq, pSdb) != 0) goto _OVER;
}
@ -1444,7 +1449,9 @@ int32_t mndUserRemoveDb(SMnode *pMnode, STrans *pTrans, char *db) {
if (pIter == NULL) break;
code = -1;
if (mndUserDupObj(pUser, &newUser) != 0) break;
if (mndUserDupObj(pUser, &newUser) != 0) {
break;
}
bool inRead = (taosHashGet(newUser.readDbs, db, len) != NULL);
bool inWrite = (taosHashGet(newUser.writeDbs, db, len) != NULL);
@ -1453,7 +1460,9 @@ int32_t mndUserRemoveDb(SMnode *pMnode, STrans *pTrans, char *db) {
(void)taosHashRemove(newUser.writeDbs, db, len);
SSdbRaw *pCommitRaw = mndUserActionEncode(&newUser);
if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) break;
if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) {
break;
}
(void)sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY);
}
@ -1491,7 +1500,9 @@ int32_t mndUserRemoveTopic(SMnode *pMnode, STrans *pTrans, char *topic) {
if (inTopic) {
(void)taosHashRemove(newUser.topics, topic, len);
SSdbRaw *pCommitRaw = mndUserActionEncode(&newUser);
if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) break;
if (pCommitRaw == NULL || mndTransAppendCommitlog(pTrans, pCommitRaw) != 0) {
break;
}
(void)sdbSetRawStatus(pCommitRaw, SDB_STATUS_READY);
}


@ -2591,6 +2591,7 @@ static int32_t mndProcessBalanceVgroupMsg(SRpcMsg *pReq) {
pIter = sdbFetch(pMnode->pSdb, SDB_DNODE, pIter, (void **)&pDnode);
if (pIter == NULL) break;
if (!mndIsDnodeOnline(pDnode, curMs)) {
sdbCancelFetch(pMnode->pSdb, pIter);
terrno = TSDB_CODE_MND_HAS_OFFLINE_DNODE;
mError("failed to balance vgroup since %s, dnode:%d", terrstr(), pDnode->id);
sdbRelease(pMnode->pSdb, pDnode);


@ -231,10 +231,11 @@ int32_t tqProcessDelCheckInfoReq(STQ* pTq, int64_t version, char* msg, int32_t m
int32_t tqProcessSubscribeReq(STQ* pTq, int64_t version, char* msg, int32_t msgLen);
int32_t tqProcessDeleteSubReq(STQ* pTq, int64_t version, char* msg, int32_t msgLen);
int32_t tqProcessOffsetCommitReq(STQ* pTq, int64_t version, char* msg, int32_t msgLen);
int32_t tqProcessSeekReq(STQ* pTq, int64_t sversion, char* msg, int32_t msgLen);
int32_t tqProcessSeekReq(STQ* pTq, SRpcMsg* pMsg);
int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg);
int32_t tqProcessPollPush(STQ* pTq, SRpcMsg* pMsg);
int32_t tqProcessVgWalInfoReq(STQ* pTq, SRpcMsg* pMsg);
int32_t tqProcessVgCommittedInfoReq(STQ* pTq, SRpcMsg* pMsg);
// tq-stream
int32_t tqProcessTaskDeployReq(STQ* pTq, int64_t version, char* msg, int32_t msgLen);


@ -455,7 +455,7 @@ int metaAddIndexToSTable(SMeta *pMeta, int64_t version, SVCreateStbReq *pReq) {
}
}
if (diffIdx == -1 && diffIdx == 0) {
if (diffIdx == -1 || diffIdx == 0) {
goto _err;
}
@ -1662,10 +1662,11 @@ static int metaAddTagIndex(SMeta *pMeta, int64_t version, SVAlterTbReq *pAlterTb
if (ret < 0) {
terrno = TSDB_CODE_TDB_TABLE_NOT_EXIST;
return -1;
} else {
uid = *(tb_uid_t *)pVal;
tdbFree(pVal);
pVal = NULL;
}
uid = *(tb_uid_t *)pVal;
tdbFree(pVal);
pVal = NULL;
if (tdbTbGet(pMeta->pUidIdx, &uid, sizeof(tb_uid_t), &pVal, &nVal) == -1) {
ret = -1;
@ -1744,12 +1745,16 @@ static int metaAddTagIndex(SMeta *pMeta, int64_t version, SVAlterTbReq *pAlterTb
nTagData = tDataTypes[pCol->type].bytes;
}
if (metaCreateTagIdxKey(suid, pCol->colId, pTagData, nTagData, pCol->type, uid, &pTagIdxKey, &nTagIdxKey) < 0) {
tdbFree(pKey);
tdbFree(pVal);
metaDestroyTagIdxKey(pTagIdxKey);
tdbTbcClose(pCtbIdxc);
goto _err;
}
tdbTbUpsert(pMeta->pTagIdx, pTagIdxKey, nTagIdxKey, NULL, 0, pMeta->txn);
metaDestroyTagIdxKey(pTagIdxKey);
}
tdbTbcClose(pCtbIdxc);
return 0;
_err:


@ -83,9 +83,9 @@ void tqDestroyTqHandle(void* data) {
}
}
static bool tqOffsetLessOrEqual(const STqOffset* pLeft, const STqOffset* pRight) {
static bool tqOffsetEqual(const STqOffset* pLeft, const STqOffset* pRight) {
return pLeft->val.type == TMQ_OFFSET__LOG && pRight->val.type == TMQ_OFFSET__LOG &&
pLeft->val.version <= pRight->val.version;
pLeft->val.version == pRight->val.version;
}
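The renamed predicate narrows when a commit is dropped: previously any offset at or below the saved one was ignored, which made it impossible to persist a rewind after a tmq seek; now only an exact duplicate is skipped. Illustrative values (made up for the example):

STqOffset saved = {.val = {.type = TMQ_OFFSET__LOG, .version = 100}};
STqOffset req   = {.val = {.type = TMQ_OFFSET__LOG, .version = 80}};
// before: tqOffsetLessOrEqual(&req, &saved) == true  -> commit ignored
// after : tqOffsetEqual(&req, &saved)       == false -> commit persisted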
STQ* tqOpen(const char* path, SVnode* pVnode) {
@ -336,22 +336,19 @@ int32_t tqProcessOffsetCommitReq(STQ* pTq, int64_t sversion, char* msg, int32_t
STqOffset* pOffset = &vgOffset.offset;
if (pOffset->val.type == TMQ_OFFSET__SNAPSHOT_DATA || pOffset->val.type == TMQ_OFFSET__SNAPSHOT_META) {
tqDebug("receive offset commit msg to %s on vgId:%d, offset(type:snapshot) uid:%" PRId64 ", ts:%" PRId64,
tqInfo("receive offset commit msg to %s on vgId:%d, offset(type:snapshot) uid:%" PRId64 ", ts:%" PRId64,
pOffset->subKey, vgId, pOffset->val.uid, pOffset->val.ts);
} else if (pOffset->val.type == TMQ_OFFSET__LOG) {
tqDebug("receive offset commit msg to %s on vgId:%d, offset(type:log) version:%" PRId64, pOffset->subKey, vgId,
tqInfo("receive offset commit msg to %s on vgId:%d, offset(type:log) version:%" PRId64, pOffset->subKey, vgId,
pOffset->val.version);
if (pOffset->val.version + 1 == sversion) {
pOffset->val.version += 1;
}
} else {
tqError("invalid commit offset type:%d", pOffset->val.type);
return -1;
}
STqOffset* pSavedOffset = tqOffsetRead(pTq->pOffsetStore, pOffset->subKey);
if (pSavedOffset != NULL && tqOffsetLessOrEqual(pOffset, pSavedOffset)) {
tqDebug("not update the offset, vgId:%d sub:%s since committed:%" PRId64 " less than/equal to existed:%" PRId64,
if (pSavedOffset != NULL && tqOffsetEqual(pOffset, pSavedOffset)) {
tqInfo("not update the offset, vgId:%d sub:%s since committed:%" PRId64 " less than/equal to existed:%" PRId64,
vgId, pOffset->subKey, pOffset->val.version, pSavedOffset->val.version);
return 0; // no need to update the offset value
}
@ -364,86 +361,124 @@ int32_t tqProcessOffsetCommitReq(STQ* pTq, int64_t sversion, char* msg, int32_t
return 0;
}
int32_t tqProcessSeekReq(STQ* pTq, int64_t sversion, char* msg, int32_t msgLen) {
SMqVgOffset vgOffset = {0};
int32_t tqProcessSeekReq(STQ* pTq, SRpcMsg* pMsg) {
SMqSeekReq req = {0};
int32_t vgId = TD_VID(pTq->pVnode);
SRpcMsg rsp = {.info = pMsg->info};
int code = 0;
SDecoder decoder;
tDecoderInit(&decoder, (uint8_t*)msg, msgLen);
if (tDecodeMqVgOffset(&decoder, &vgOffset) < 0) {
tqError("vgId:%d failed to decode seek msg", vgId);
return -1;
tqDebug("tmq seek: consumer:0x%" PRIx64 " vgId:%d, subkey %s", req.consumerId, vgId, req.subKey);
if (tDeserializeSMqSeekReq(pMsg->pCont, pMsg->contLen, &req) < 0) {
code = TSDB_CODE_OUT_OF_MEMORY;
goto end;
}
tDecoderClear(&decoder);
tqDebug("topic:%s, vgId:%d process offset seek by consumer:0x%" PRIx64 ", req offset:%" PRId64,
vgOffset.offset.subKey, vgId, vgOffset.consumerId, vgOffset.offset.val.version);
STqOffset* pOffset = &vgOffset.offset;
if (pOffset->val.type != TMQ_OFFSET__LOG) {
tqError("vgId:%d, subKey:%s invalid seek offset type:%d", vgId, pOffset->subKey, pOffset->val.type);
return -1;
}
STqHandle* pHandle = taosHashGet(pTq->pHandle, pOffset->subKey, strlen(pOffset->subKey));
STqHandle* pHandle = taosHashGet(pTq->pHandle, req.subKey, strlen(req.subKey));
if (pHandle == NULL) {
tqError("tmq seek: consumer:0x%" PRIx64 " vgId:%d subkey %s not found", vgOffset.consumerId, vgId, pOffset->subKey);
terrno = TSDB_CODE_INVALID_MSG;
return -1;
tqWarn("tmq seek: consumer:0x%" PRIx64 " vgId:%d subkey %s not found", req.consumerId, vgId, req.subKey);
code = 0;
goto end;
}
// 2. check consumer-vg assignment status
taosRLockLatch(&pTq->lock);
if (pHandle->consumerId != vgOffset.consumerId) {
tqDebug("ERROR tmq seek: consumer:0x%" PRIx64 " vgId:%d, subkey %s, mismatch for saved handle consumer:0x%" PRIx64,
vgOffset.consumerId, vgId, pOffset->subKey, pHandle->consumerId);
terrno = TSDB_CODE_TMQ_CONSUMER_MISMATCH;
if (pHandle->consumerId != req.consumerId) {
tqError("ERROR tmq seek: consumer:0x%" PRIx64 " vgId:%d, subkey %s, mismatch for saved handle consumer:0x%" PRIx64,
req.consumerId, vgId, req.subKey, pHandle->consumerId);
taosRUnLockLatch(&pTq->lock);
return -1;
code = TSDB_CODE_TMQ_CONSUMER_MISMATCH;
goto end;
}
//if the consumer is registered in the push manager, push an empty response to the consumer to change the vg status from TMQ_VG_STATUS__WAIT to TMQ_VG_STATUS__IDLE,
//otherwise polling data fails after the seek.
tqUnregisterPushHandle(pTq, pHandle);
taosRUnLockLatch(&pTq->lock);
// 3. check the offset info
STqOffset* pSavedOffset = tqOffsetRead(pTq->pOffsetStore, pOffset->subKey);
if (pSavedOffset != NULL) {
if (pSavedOffset->val.type != TMQ_OFFSET__LOG) {
tqError("invalid saved offset type, vgId:%d sub:%s", vgId, pOffset->subKey);
return 0; // no need to update the offset value
}
if (pSavedOffset->val.version == pOffset->val.version) {
tqDebug("vgId:%d subKey:%s no need to seek to %" PRId64 " prev offset:%" PRId64, vgId, pOffset->subKey,
pOffset->val.version, pSavedOffset->val.version);
return 0;
}
}
int64_t sver = 0, ever = 0;
walReaderValidVersionRange(pHandle->execHandle.pTqReader->pWalReader, &sver, &ever);
if (pOffset->val.version < sver) {
pOffset->val.version = sver;
} else if (pOffset->val.version > ever) {
pOffset->val.version = ever;
}
// save the new offset value
if (pSavedOffset != NULL) {
tqDebug("vgId:%d sub:%s seek to:%" PRId64 " prev offset:%" PRId64, vgId, pOffset->subKey, pOffset->val.version,
pSavedOffset->val.version);
} else {
tqDebug("vgId:%d sub:%s seek to:%" PRId64 " not saved yet", vgId, pOffset->subKey, pOffset->val.version);
}
if (tqOffsetWrite(pTq->pOffsetStore, pOffset) < 0) {
tqError("failed to save offset, vgId:%d sub:%s seek to %" PRId64, vgId, pOffset->subKey, pOffset->val.version);
return -1;
}
tqDebug("topic:%s, vgId:%d consumer:0x%" PRIx64 " offset is update to:%" PRId64, vgOffset.offset.subKey, vgId,
vgOffset.consumerId, vgOffset.offset.val.version);
end:
rsp.code = code;
tmsgSendRsp(&rsp);
return 0;
// SMqVgOffset vgOffset = {0};
// int32_t vgId = TD_VID(pTq->pVnode);
//
// SDecoder decoder;
// tDecoderInit(&decoder, (uint8_t*)msg, msgLen);
// if (tDecodeMqVgOffset(&decoder, &vgOffset) < 0) {
// tqError("vgId:%d failed to decode seek msg", vgId);
// return -1;
// }
//
// tDecoderClear(&decoder);
//
// tqDebug("topic:%s, vgId:%d process offset seek by consumer:0x%" PRIx64 ", req offset:%" PRId64,
// vgOffset.offset.subKey, vgId, vgOffset.consumerId, vgOffset.offset.val.version);
//
// STqOffset* pOffset = &vgOffset.offset;
// if (pOffset->val.type != TMQ_OFFSET__LOG) {
// tqError("vgId:%d, subKey:%s invalid seek offset type:%d", vgId, pOffset->subKey, pOffset->val.type);
// return -1;
// }
//
// STqHandle* pHandle = taosHashGet(pTq->pHandle, pOffset->subKey, strlen(pOffset->subKey));
// if (pHandle == NULL) {
// tqError("tmq seek: consumer:0x%" PRIx64 " vgId:%d subkey %s not found", vgOffset.consumerId, vgId, pOffset->subKey);
// terrno = TSDB_CODE_INVALID_MSG;
// return -1;
// }
//
// // 2. check consumer-vg assignment status
// taosRLockLatch(&pTq->lock);
// if (pHandle->consumerId != vgOffset.consumerId) {
// tqDebug("ERROR tmq seek: consumer:0x%" PRIx64 " vgId:%d, subkey %s, mismatch for saved handle consumer:0x%" PRIx64,
// vgOffset.consumerId, vgId, pOffset->subKey, pHandle->consumerId);
// terrno = TSDB_CODE_TMQ_CONSUMER_MISMATCH;
// taosRUnLockLatch(&pTq->lock);
// return -1;
// }
// taosRUnLockLatch(&pTq->lock);
//
// // 3. check the offset info
// STqOffset* pSavedOffset = tqOffsetRead(pTq->pOffsetStore, pOffset->subKey);
// if (pSavedOffset != NULL) {
// if (pSavedOffset->val.type != TMQ_OFFSET__LOG) {
// tqError("invalid saved offset type, vgId:%d sub:%s", vgId, pOffset->subKey);
// return 0; // no need to update the offset value
// }
//
// if (pSavedOffset->val.version == pOffset->val.version) {
// tqDebug("vgId:%d subKey:%s no need to seek to %" PRId64 " prev offset:%" PRId64, vgId, pOffset->subKey,
// pOffset->val.version, pSavedOffset->val.version);
// return 0;
// }
// }
//
// int64_t sver = 0, ever = 0;
// walReaderValidVersionRange(pHandle->execHandle.pTqReader->pWalReader, &sver, &ever);
// if (pOffset->val.version < sver) {
// pOffset->val.version = sver;
// } else if (pOffset->val.version > ever) {
// pOffset->val.version = ever;
// }
//
// // save the new offset value
// if (pSavedOffset != NULL) {
// tqDebug("vgId:%d sub:%s seek to:%" PRId64 " prev offset:%" PRId64, vgId, pOffset->subKey, pOffset->val.version,
// pSavedOffset->val.version);
// } else {
// tqDebug("vgId:%d sub:%s seek to:%" PRId64 " not saved yet", vgId, pOffset->subKey, pOffset->val.version);
// }
//
// if (tqOffsetWrite(pTq->pOffsetStore, pOffset) < 0) {
// tqError("failed to save offset, vgId:%d sub:%s seek to %" PRId64, vgId, pOffset->subKey, pOffset->val.version);
// return -1;
// }
//
// tqDebug("topic:%s, vgId:%d consumer:0x%" PRIx64 " offset is update to:%" PRId64, vgOffset.offset.subKey, vgId,
// vgOffset.consumerId, vgOffset.offset.val.version);
//
// return 0;
}
int32_t tqCheckColModifiable(STQ* pTq, int64_t tbUid, int32_t colId) {
@ -574,6 +609,49 @@ int32_t tqProcessPollReq(STQ* pTq, SRpcMsg* pMsg) {
return code;
}
int32_t tqProcessVgCommittedInfoReq(STQ* pTq, SRpcMsg* pMsg) {
void* data = POINTER_SHIFT(pMsg->pCont, sizeof(SMsgHead));
int32_t len = pMsg->contLen - sizeof(SMsgHead);
SMqVgOffset vgOffset = {0};
SDecoder decoder;
tDecoderInit(&decoder, (uint8_t*)data, len);
if (tDecodeMqVgOffset(&decoder, &vgOffset) < 0) {
return TSDB_CODE_OUT_OF_MEMORY;
}
tDecoderClear(&decoder);
STqOffset* pOffset = &vgOffset.offset;
STqOffset* pSavedOffset = tqOffsetRead(pTq->pOffsetStore, pOffset->subKey);
if (pSavedOffset == NULL) {
return TSDB_CODE_TMQ_NO_COMMITTED;
}
vgOffset.offset = *pSavedOffset;
int32_t code = 0;
tEncodeSize(tEncodeMqVgOffset, &vgOffset, len, code);
if (code < 0) {
return TSDB_CODE_INVALID_PARA;
}
void* buf = rpcMallocCont(len);
if (buf == NULL) {
return TSDB_CODE_OUT_OF_MEMORY;
}
SEncoder encoder;
tEncoderInit(&encoder, buf, len);
tEncodeMqVgOffset(&encoder, &vgOffset);
tEncoderClear(&encoder);
SRpcMsg rsp = {.info = pMsg->info, .pCont = buf, .contLen = len, .code = 0};
tmsgSendRsp(&rsp);
return 0;
}
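The new handler answers with the saved offset re-encoded by tEncodeMqVgOffset, or TSDB_CODE_TMQ_NO_COMMITTED when nothing has been committed yet. Assuming the codec is symmetric, a receiver would decode the payload roughly like this (sketch; error handling elided):

SMqVgOffset out = {0};
SDecoder    dc;
tDecoderInit(&dc, rsp.pCont, rsp.contLen);
if (tDecodeMqVgOffset(&dc, &out) >= 0) {
  // out.offset.val now holds the committed position for out.offset.subKey
}
tDecoderClear(&dc);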
int32_t tqProcessVgWalInfoReq(STQ* pTq, SRpcMsg* pMsg) {
SMqPollReq req = {0};
if (tDeserializeSMqPollReq(pMsg->pCont, pMsg->contLen, &req) < 0) {
@ -659,7 +737,7 @@ int32_t tqProcessDeleteSubReq(STQ* pTq, int64_t sversion, char* msg, int32_t msg
SMqVDeleteReq* pReq = (SMqVDeleteReq*)msg;
int32_t vgId = TD_VID(pTq->pVnode);
tqDebug("vgId:%d, tq process delete sub req %s", vgId, pReq->subKey);
tqInfo("vgId:%d, tq process delete sub req %s", vgId, pReq->subKey);
int32_t code = 0;
taosWLockLatch(&pTq->lock);
@ -740,7 +818,7 @@ int32_t tqProcessSubscribeReq(STQ* pTq, int64_t sversion, char* msg, int32_t msg
return -1;
}
tqDebug("vgId:%d, tq process sub req:%s, Id:0x%" PRIx64 " -> Id:0x%" PRIx64, pTq->pVnode->config.vgId, req.subKey,
tqInfo("vgId:%d, tq process sub req:%s, Id:0x%" PRIx64 " -> Id:0x%" PRIx64, pTq->pVnode->config.vgId, req.subKey,
req.oldConsumerId, req.newConsumerId);
STqHandle* pHandle = NULL;


@ -196,7 +196,7 @@ int32_t tqFetchLog(STQ* pTq, STqHandle* pHandle, int64_t* fetchOffset, SWalCkHea
tqDebug("tmq poll: consumer:0x%" PRIx64 ", (epoch %d) vgId:%d offset %" PRId64
", no more log to return, reqId:0x%" PRIx64,
pHandle->consumerId, pHandle->epoch, vgId, offset, reqId);
*fetchOffset = offset - 1;
*fetchOffset = offset;
code = -1;
goto END;
}


@ -98,14 +98,14 @@ static int32_t extractResetOffsetVal(STqOffsetVal* pOffsetVal, STQ* pTq, STqHand
}
} else {
walRefFirstVer(pTq->pVnode->pWal, pHandle->pRef);
tqOffsetResetToLog(pOffsetVal, pHandle->pRef->refVer - 1);
tqOffsetResetToLog(pOffsetVal, pHandle->pRef->refVer);
}
} else if (reqOffset.type == TMQ_OFFSET__RESET_LATEST) {
walRefLastVer(pTq->pVnode->pWal, pHandle->pRef);
SMqDataRsp dataRsp = {0};
tqInitDataRsp(&dataRsp, pRequest);
tqOffsetResetToLog(&dataRsp.rspOffset, pHandle->pRef->refVer);
tqOffsetResetToLog(&dataRsp.rspOffset, pHandle->pRef->refVer + 1);
tqDebug("tmq poll: consumer:0x%" PRIx64 ", subkey %s, vgId:%d, (latest) offset reset to %" PRId64, consumerId,
pHandle->subKey, vgId, dataRsp.rspOffset.version);
int32_t code = tqSendDataRsp(pHandle, pMsg, pRequest, &dataRsp, TMQ_MSG_TYPE__POLL_DATA_RSP, vgId);
@ -125,18 +125,24 @@ static int32_t extractResetOffsetVal(STqOffsetVal* pOffsetVal, STQ* pTq, STqHand
return 0;
}
static void setRequestVersion(STqOffsetVal* offset, int64_t ver){
if(offset->type == TMQ_OFFSET__LOG){
offset->version = ver + 1;
}
}
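Read together with the fetchVer hunks below, the convention for log offsets appears to shift from "last consumed version" to "next version to read": tqFetchLog no longer rewinds by one, the scan starts at offset->version instead of offset->version + 1, and setRequestVersion adds the one back only when the request offset is echoed in a response. A worked example under that reading (values illustrative):

STqOffsetVal req = {.type = TMQ_OFFSET__LOG, .version = 100};
int64_t fetchVer = req.version;  // scanning starts at 100 (was 101 before)
setRequestVersion(&req, 100);    // the response reports version 101 back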
static int32_t extractDataAndRspForNormalSubscribe(STQ* pTq, STqHandle* pHandle, const SMqPollReq* pRequest,
SRpcMsg* pMsg, STqOffsetVal* pOffset) {
uint64_t consumerId = pRequest->consumerId;
int32_t vgId = TD_VID(pTq->pVnode);
int code = 0;
terrno = 0;
SMqDataRsp dataRsp = {0};
tqInitDataRsp(&dataRsp, pRequest);
dataRsp.reqOffset.type = pOffset->type; // store origin type for getting offset in tmq_get_vgroup_offset
qSetTaskId(pHandle->execHandle.task, consumerId, pRequest->reqId);
code = tqScanData(pTq, pHandle, &dataRsp, pOffset);
int code = tqScanData(pTq, pHandle, &dataRsp, pOffset);
if (code != 0 && terrno != TSDB_CODE_WAL_LOG_NOT_EXIST) {
goto end;
}
@ -151,11 +157,11 @@ static int32_t extractDataAndRspForNormalSubscribe(STQ* pTq, STqHandle* pHandle,
code = tqRegisterPushHandle(pTq, pHandle, pMsg);
taosWUnLockLatch(&pTq->lock);
goto end;
} else {
taosWUnLockLatch(&pTq->lock);
}
taosWUnLockLatch(&pTq->lock);
}
setRequestVersion(&dataRsp.reqOffset, pOffset->version);
code = tqSendDataRsp(pHandle, pMsg, pRequest, (SMqDataRsp*)&dataRsp, TMQ_MSG_TYPE__POLL_DATA_RSP, vgId);
end : {
@ -177,6 +183,7 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
SMqMetaRsp metaRsp = {0};
STaosxRsp taosxRsp = {0};
tqInitTaosxRsp(&taosxRsp, pRequest);
taosxRsp.reqOffset.type = offset->type; // store origin type for getting offset in tmq_get_vgroup_offset
if (offset->type != TMQ_OFFSET__LOG) {
if (tqScanTaosx(pTq, pHandle, &taosxRsp, &metaRsp, offset) < 0) {
@ -208,7 +215,7 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
if (offset->type == TMQ_OFFSET__LOG) {
walReaderVerifyOffset(pHandle->pWalReader, offset);
int64_t fetchVer = offset->version + 1;
int64_t fetchVer = offset->version;
pCkHead = taosMemoryMalloc(sizeof(SWalCkHead) + 2048);
if (pCkHead == NULL) {
terrno = TSDB_CODE_OUT_OF_MEMORY;
@ -229,6 +236,7 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
if (tqFetchLog(pTq, pHandle, &fetchVer, &pCkHead, pRequest->reqId) < 0) {
tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer);
setRequestVersion(&taosxRsp.reqOffset, offset->version);
code = tqSendDataRsp(pHandle, pMsg, pRequest, (SMqDataRsp*)&taosxRsp, TMQ_MSG_TYPE__POLL_DATA_RSP, vgId);
goto end;
}
@ -240,13 +248,14 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
// process meta
if (pHead->msgType != TDMT_VND_SUBMIT) {
if (totalRows > 0) {
tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer - 1);
tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer);
setRequestVersion(&taosxRsp.reqOffset, offset->version);
code = tqSendDataRsp(pHandle, pMsg, pRequest, (SMqDataRsp*)&taosxRsp, TMQ_MSG_TYPE__POLL_DATA_RSP, vgId);
goto end;
}
tqDebug("fetch meta msg, ver:%" PRId64 ", type:%s", pHead->version, TMSG_INFO(pHead->msgType));
tqOffsetResetToLog(&metaRsp.rspOffset, fetchVer);
tqOffsetResetToLog(&metaRsp.rspOffset, fetchVer + 1);
metaRsp.resMsgType = pHead->msgType;
metaRsp.metaRspLen = pHead->bodyLen;
metaRsp.metaRsp = pHead->body;
@ -269,7 +278,8 @@ static int32_t extractDataAndRspForDbStbSubscribe(STQ* pTq, STqHandle* pHandle,
}
if (totalRows >= 4096 || taosxRsp.createTableNum > 0) {
tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer);
tqOffsetResetToLog(&taosxRsp.rspOffset, fetchVer + 1);
setRequestVersion(&taosxRsp.reqOffset, offset->version);
code = tqSendDataRsp(pHandle, pMsg, pRequest, (SMqDataRsp*)&taosxRsp, taosxRsp.createTableNum > 0 ? TMQ_MSG_TYPE__POLL_DATA_META_RSP : TMQ_MSG_TYPE__POLL_DATA_RSP, vgId);
goto end;
} else {
@ -303,9 +313,11 @@ int32_t tqExtractDataForMq(STQ* pTq, STqHandle* pHandle, const SMqPollReq* pRequ
if (blockReturned) {
return 0;
}
} else { // use the consumer specified offset
} else if(reqOffset.type != 0){ // use the consumer specified offset
// the offset value may not increase monotonically??
offset = reqOffset;
} else {
return TSDB_CODE_TMQ_INVALID_MSG;
}
// this is a normal subscribe requirement
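With this change the poll offset is resolved in three tiers; a condensed sketch (isOffsetResetType stands in for the reset branch handled by extractResetOffsetVal above):

if (isOffsetResetType(reqOffset.type)) {   // stand-in predicate, see above
  // 1. reset request: derive a start position from the WAL first/last
  //    version or from a snapshot
} else if (reqOffset.type != 0) {
  offset = reqOffset;                      // 2. honor the consumer's offset
} else {
  return TSDB_CODE_TMQ_INVALID_MSG;        // 3. uninitialized offset: reject
}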


@ -1028,55 +1028,7 @@ int32_t tsdbCacheGetBatch(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCache
return code;
}
/*
int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, int8_t ltype) {
int32_t code = 0;
SLRUCache *pCache = pTsdb->lruCache;
SArray *pCidList = pr->pCidList;
int num_keys = TARRAY_SIZE(pCidList);
for (int i = 0; i < num_keys; ++i) {
SLastCol *pLastCol = NULL;
int16_t cid = *(int16_t *)taosArrayGet(pCidList, i);
SLastKey *key = &(SLastKey){.ltype = ltype, .uid = uid, .cid = cid};
LRUHandle *h = taosLRUCacheLookup(pCache, key, ROCKS_KEY_LEN);
if (!h) {
taosThreadMutexLock(&pTsdb->lruMutex);
h = taosLRUCacheLookup(pCache, key, ROCKS_KEY_LEN);
if (!h) {
pLastCol = tsdbCacheLoadCol(pTsdb, pr, pr->pSlotIds[i], uid, cid, ltype);
size_t charge = sizeof(*pLastCol);
if (IS_VAR_DATA_TYPE(pLastCol->colVal.type)) {
charge += pLastCol->colVal.value.nData;
}
LRUStatus status = taosLRUCacheInsert(pCache, key, ROCKS_KEY_LEN, pLastCol, charge, tsdbCacheDeleter, &h,
TAOS_LRU_PRIORITY_LOW, &pTsdb->flushState);
if (status != TAOS_LRU_STATUS_OK) {
code = -1;
}
}
taosThreadMutexUnlock(&pTsdb->lruMutex);
}
pLastCol = (SLastCol *)taosLRUCacheValue(pCache, h);
SLastCol lastCol = *pLastCol;
reallocVarData(&lastCol.colVal);
if (h) {
taosLRUCacheRelease(pCache, h, false);
}
taosArrayPush(pLastArray, &lastCol);
}
return code;
}
*/
int32_t tsdbCacheDel(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSKEY sKey, TSKEY eKey) {
int32_t code = 0;
// fetch schema
@ -1108,6 +1060,7 @@ int32_t tsdbCacheDel(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSKEY sKey, TSKE
char **values_list = taosMemoryCalloc(num_keys * 2, sizeof(char *));
size_t *values_list_sizes = taosMemoryCalloc(num_keys * 2, sizeof(size_t));
char **errs = taosMemoryCalloc(num_keys * 2, sizeof(char *));
taosThreadMutexLock(&pTsdb->lruMutex);
taosThreadMutexLock(&pTsdb->rCache.rMutex);
rocksMayWrite(pTsdb, true, false, false);
rocksdb_multi_get(pTsdb->rCache.db, pTsdb->rCache.readoptions, num_keys * 2, (const char *const *)keys_list,
@ -1137,7 +1090,7 @@ int32_t tsdbCacheDel(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSKEY sKey, TSKE
rocksdb_free(values_list[i]);
rocksdb_free(values_list[i + num_keys]);
taosThreadMutexLock(&pTsdb->lruMutex);
// taosThreadMutexLock(&pTsdb->lruMutex);
LRUHandle *h = taosLRUCacheLookup(pTsdb->lruCache, keys_list[i], klen);
if (h) {
@ -1159,7 +1112,7 @@ int32_t tsdbCacheDel(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSKEY sKey, TSKE
}
taosLRUCacheErase(pTsdb->lruCache, keys_list[num_keys + i], klen);
taosThreadMutexUnlock(&pTsdb->lruMutex);
// taosThreadMutexUnlock(&pTsdb->lruMutex);
}
for (int i = 0; i < num_keys; ++i) {
taosMemoryFree(keys_list[i]);
@ -1171,6 +1124,8 @@ int32_t tsdbCacheDel(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSKEY sKey, TSKE
rocksMayWrite(pTsdb, true, false, true);
taosThreadMutexUnlock(&pTsdb->lruMutex);
_exit:
taosMemoryFree(pTSchema);
@ -1311,62 +1266,7 @@ int32_t tsdbCacheDeleteLast(SLRUCache *pCache, tb_uid_t uid, TSKEY eKey) {
return code;
}
/*
int32_t tsdbCacheDelete(SLRUCache *pCache, tb_uid_t uid, TSKEY eKey) {
int32_t code = 0;
char key[32] = {0};
int keyLen = 0;
// getTableCacheKey(uid, "lr", key, &keyLen);
getTableCacheKey(uid, 0, key, &keyLen);
LRUHandle *h = taosLRUCacheLookup(pCache, key, keyLen);
if (h) {
SArray *pLast = (SArray *)taosLRUCacheValue(pCache, h);
bool invalidate = false;
int16_t nCol = taosArrayGetSize(pLast);
for (int16_t iCol = 0; iCol < nCol; ++iCol) {
SLastCol *tTsVal = (SLastCol *)taosArrayGet(pLast, iCol);
if (eKey >= tTsVal->ts) {
invalidate = true;
break;
}
}
if (invalidate) {
taosLRUCacheRelease(pCache, h, true);
} else {
taosLRUCacheRelease(pCache, h, false);
}
}
// getTableCacheKey(uid, "l", key, &keyLen);
getTableCacheKey(uid, 1, key, &keyLen);
h = taosLRUCacheLookup(pCache, key, keyLen);
if (h) {
SArray *pLast = (SArray *)taosLRUCacheValue(pCache, h);
bool invalidate = false;
int16_t nCol = taosArrayGetSize(pLast);
for (int16_t iCol = 0; iCol < nCol; ++iCol) {
SLastCol *tTsVal = (SLastCol *)taosArrayGet(pLast, iCol);
if (eKey >= tTsVal->ts) {
invalidate = true;
break;
}
}
if (invalidate) {
taosLRUCacheRelease(pCache, h, true);
} else {
taosLRUCacheRelease(pCache, h, false);
}
// void taosLRUCacheErase(SLRUCache * cache, const void *key, size_t keyLen);
}
return code;
}
*/
int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, TSDBROW *row, bool dup) {
int32_t code = 0;
STSRow *cacheRow = NULL;
@ -1767,6 +1667,10 @@ static int32_t loadTombFromBlk(const TTombBlkArray *pTombBlkArray, SCacheRowsRea
}
if (record.version <= pReader->info.verRange.maxVer) {
/*
tsdbError("tomb xx load/cache: vgId:%d fid:%d commit %" PRId64 "~%" PRId64 "~%" PRId64 " tomb records",
TD_VID(pReader->pTsdb->pVnode), pReader->pCurFileSet->fid, record.skey, record.ekey, uid);
*/
SDelData delData = {.version = record.version, .sKey = record.skey, .eKey = record.ekey};
taosArrayPush(pInfo->pTombData, &delData);
}
@ -1977,9 +1881,9 @@ static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow, bool *pIgnoreEarlie
goto _err;
}
loadDataTomb(state->pr, state->pr->pFileReader);
state->pr->pCurFileSet = state->pFileSet;
loadDataTomb(state->pr, state->pr->pFileReader);
}
if (!state->pIndexList) {
@ -2017,6 +1921,10 @@ static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow, bool *pIgnoreEarlie
state->iBrinIndex = indexSize;
}
if (state->pFileSet != state->pr->pCurFileSet) {
state->pr->pCurFileSet = state->pFileSet;
}
code = lastIterOpen(&state->lastIter, state->pFileSet, state->pTsdb, state->pTSchema, state->suid, state->uid,
state->pr, state->lastTs, aCols, nCols);
if (code != TSDB_CODE_SUCCESS) {
@ -2074,7 +1982,14 @@ static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow, bool *pIgnoreEarlie
if (SFSNEXTROW_INDEXLIST == state->state) {
SBrinBlk *pBrinBlk = NULL;
_next_brinindex:
if (--state->iBrinIndex < 0) { // no index left, goto next fileset
if (--state->iBrinIndex < 0) {
if (state->pLastRow) {
state->state = SFSNEXTROW_NEXTSTTROW;
*ppRow = state->pLastRow;
state->pLastRow = NULL;
return code;
}
clearLastFileSet(state);
goto _next_fileset;
} else {


@ -37,6 +37,7 @@ typedef struct {
int64_t cid;
int64_t now;
TSKEY nextKey;
TSKEY maxDelKey;
int32_t fid;
int32_t expLevel;
SDiskID did;
@ -161,6 +162,11 @@ static int32_t tsdbCommitTombData(SCommitter2 *committer) {
SMetaInfo info;
if (committer->ctx->fset == NULL && !committer->ctx->hasTSData) {
if (committer->ctx->maxKey < committer->ctx->maxDelKey) {
committer->ctx->nextKey = committer->ctx->maxKey + 1;
} else {
committer->ctx->nextKey = TSKEY_MAX;
}
return 0;
}
@ -185,16 +191,17 @@ static int32_t tsdbCommitTombData(SCommitter2 *committer) {
goto _next;
}
TSKEY maxKey = committer->ctx->maxKey;
if (record->ekey > committer->ctx->maxKey) {
committer->ctx->maxKey = committer->ctx->maxKey + 1;
maxKey = committer->ctx->maxKey + 1;
}
if (record->ekey > committer->ctx->maxKey) {
committer->ctx->nextKey = record->ekey;
if (record->ekey > committer->ctx->maxKey && committer->ctx->nextKey > maxKey) {
committer->ctx->nextKey = maxKey;
}
record->skey = TMAX(record->skey, committer->ctx->minKey);
record->ekey = TMIN(record->ekey, committer->ctx->maxKey);
record->ekey = TMIN(record->ekey, maxKey);
numRecord++;
code = tsdbFSetWriteTombRecord(committer->writer, record);
@ -477,6 +484,21 @@ static int32_t tsdbOpenCommitter(STsdb *tsdb, SCommitInfo *info, SCommitter2 *co
}
}
committer->ctx->maxDelKey = TSKEY_MIN;
TSKEY minKey = TSKEY_MAX;
TSKEY maxKey = TSKEY_MIN;
if (TARRAY2_SIZE(committer->fsetArr) > 0) {
STFileSet *fset = TARRAY2_LAST(committer->fsetArr);
tsdbFidKeyRange(fset->fid, committer->minutes, committer->precision, &minKey, &committer->ctx->maxDelKey);
fset = TARRAY2_FIRST(committer->fsetArr);
tsdbFidKeyRange(fset->fid, committer->minutes, committer->precision, &minKey, &maxKey);
}
if (committer->ctx->nextKey < TMIN(tsdb->imem->minKey, minKey)) {
committer->ctx->nextKey = TMIN(tsdb->imem->minKey, minKey);
}
_exit:
if (code) {
TSDB_ERROR_LOG(TD_VID(tsdb->pVnode), lino, code);
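The clamping above splits a tombstone that crosses the commit window: the part inside [minKey, maxKey] is written now, while nextKey records where the next commit round must resume. A standalone sketch of that bookkeeping, using hypothetical names rather than the real SCommitter2 fields:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct { int64_t skey, ekey; } SDelRange;

/* Clamp a delete range to the commit window [minKey, maxKey]. A range that
   spills past maxKey is cut at maxKey + 1, and nextKey is pulled back so the
   following round starts exactly at the cut. */
static void clampTomb(SDelRange *r, int64_t minKey, int64_t maxKey, int64_t *nextKey) {
  int64_t endKey = maxKey;
  if (r->ekey > maxKey) {
    endKey = maxKey + 1;
    if (*nextKey > endKey) *nextKey = endKey;
  }
  if (r->skey < minKey) r->skey = minKey;
  if (r->ekey > endKey) r->ekey = endKey;
}

int main(void) {
  SDelRange r = {5, 120};
  int64_t   nextKey = INT64_MAX;
  clampTomb(&r, 0, 100, &nextKey);
  printf("write [%" PRId64 ", %" PRId64 "], resume at %" PRId64 "\n", r.skey, r.ekey, nextKey);
  return 0;
}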

View File

@@ -1729,41 +1729,45 @@ static int32_t mergeFileBlockAndLastBlock(STsdbReader* pReader, SLastBlockReader
// row in last file block
TSDBROW fRow = tsdbRowFromBlockData(pBlockData, pDumpInfo->rowIndex);
int64_t ts = getCurrentKeyInLastBlock(pLastBlockReader);
int64_t tsLast = getCurrentKeyInLastBlock(pLastBlockReader);
if (ASCENDING_TRAVERSE(pReader->info.order)) {
if (key < ts) { // imem, mem are all empty, file blocks (data blocks and last block) exist
if (key < tsLast) {
return mergeRowsInFileBlocks(pBlockData, pBlockScanInfo, key, pReader);
} else if (key == ts) {
SRow* pTSRow = NULL;
int32_t code = tsdbRowMergerAdd(pMerger, &fRow, pReader->info.pSchema);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader);
TSDBROW* pRow1 = tMergeTreeGetRow(&pLastBlockReader->mergeTree);
tsdbRowMergerAdd(pMerger, pRow1, NULL);
doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, ts, pMerger, &pReader->info.verRange, pReader->idStr);
code = tsdbRowMergerGetRow(pMerger, &pTSRow);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
code = doAppendRowFromTSRow(pReader->resBlockInfo.pResBlock, pReader, pTSRow, pBlockScanInfo);
taosMemoryFree(pTSRow);
tsdbRowMergerClear(pMerger);
return code;
} else { // key > ts
} else if (key > tsLast) {
return doMergeFileBlockAndLastBlock(pLastBlockReader, pReader, pBlockScanInfo, NULL, false);
}
} else {
if (key > tsLast) {
return mergeRowsInFileBlocks(pBlockData, pBlockScanInfo, key, pReader);
} else if (key < tsLast) {
return doMergeFileBlockAndLastBlock(pLastBlockReader, pReader, pBlockScanInfo, NULL, false);
}
} else { // desc order
return doMergeFileBlockAndLastBlock(pLastBlockReader, pReader, pBlockScanInfo, pBlockData, true);
}
// the code below handles the key == tsLast case
SRow* pTSRow = NULL;
int32_t code = tsdbRowMergerAdd(pMerger, &fRow, pReader->info.pSchema);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
doMergeRowsInFileBlocks(pBlockData, pBlockScanInfo, pReader);
TSDBROW* pRow1 = tMergeTreeGetRow(&pLastBlockReader->mergeTree);
tsdbRowMergerAdd(pMerger, pRow1, NULL);
doMergeRowsInLastBlock(pLastBlockReader, pBlockScanInfo, tsLast, pMerger, &pReader->info.verRange, pReader->idStr);
code = tsdbRowMergerGetRow(pMerger, &pTSRow);
if (code != TSDB_CODE_SUCCESS) {
return code;
}
code = doAppendRowFromTSRow(pReader->resBlockInfo.pResBlock, pReader, pTSRow, pBlockScanInfo);
taosMemoryFree(pTSRow);
tsdbRowMergerClear(pMerger);
return code;
} else { // only last block exists
return doMergeFileBlockAndLastBlock(pLastBlockReader, pReader, pBlockScanInfo, NULL, false);
}
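The restructuring above folds the ascending and descending branches together so that only the key == tsLast case falls through to the row-merge path; every other case is a straight pick from one side. The resulting decision table as a self-contained sketch (illustrative names, not the reader's actual types):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { TAKE_FILE_ROW, TAKE_LAST_ROW, MERGE_BOTH } EMergeChoice;

/* key: ts of the current data-block row; tsLast: ts of the current row in the
   last/stt file. Equal keys must be merged regardless of scan direction. */
static EMergeChoice chooseRow(int64_t key, int64_t tsLast, bool asc) {
  if (key == tsLast) return MERGE_BOTH;
  if (asc) return (key < tsLast) ? TAKE_FILE_ROW : TAKE_LAST_ROW;
  return (key > tsLast) ? TAKE_FILE_ROW : TAKE_LAST_ROW;
}

int main(void) {
  printf("%d %d %d\n", chooseRow(1, 2, true), chooseRow(3, 2, true), chooseRow(2, 2, false));
  return 0;
}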
@@ -2190,7 +2194,8 @@ static int32_t buildComposedDataBlockImpl(STsdbReader* pReader, STableBlockScanI
SFileBlockDumpInfo* pDumpInfo = &pReader->status.fBlockDumpInfo;
TSDBROW *pRow = NULL, *piRow = NULL;
int64_t key = (pBlockData->nRow > 0 && (!pDumpInfo->allDumped)) ? pBlockData->aTSKEY[pDumpInfo->rowIndex] : INT64_MIN;
int64_t key = (pBlockData->nRow > 0 && (!pDumpInfo->allDumped)) ? pBlockData->aTSKEY[pDumpInfo->rowIndex] :
(ASCENDING_TRAVERSE(pReader->info.order) ? INT64_MAX : INT64_MIN);
if (pBlockScanInfo->iter.hasVal) {
pRow = getValidMemRow(&pBlockScanInfo->iter, pBlockScanInfo->delSkyline, pReader);
}
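The sentinel fix above matters because key then competes against memory rows in order-dependent comparisons: once the block is exhausted it must lose every one of them, which means INT64_MAX for ascending scans and INT64_MIN for descending ones. In sketch form (assumed semantics, not the exact reader internals):

#include <stdbool.h>
#include <stdint.h>

/* With no block row left, return the value that can never win the
   "next row" comparison for the given scan direction. */
static inline int64_t exhaustedBlockKey(bool asc) {
  return asc ? INT64_MAX : INT64_MIN;
}

int main(void) { return exhaustedBlockKey(true) == INT64_MAX ? 0 : 1; }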
@@ -2673,16 +2678,32 @@ static int32_t doBuildDataBlock(STsdbReader* pReader) {
(ASCENDING_TRAVERSE(pReader->info.order)) ? pBlockInfo->record.firstKey : pBlockInfo->record.lastKey;
code = buildDataBlockFromBuf(pReader, pScanInfo, endKey);
} else {
if (hasDataInLastBlock(pLastBlockReader) && !ASCENDING_TRAVERSE(pReader->info.order)) {
// only return the rows in last block
int64_t tsLast = getCurrentKeyInLastBlock(pLastBlockReader);
ASSERT(tsLast >= pBlockInfo->record.lastKey);
bool bHasDataInLastBlock = hasDataInLastBlock(pLastBlockReader);
int64_t tsLast = bHasDataInLastBlock ? getCurrentKeyInLastBlock(pLastBlockReader) : INT64_MIN;
if (!bHasDataInLastBlock || ((ASCENDING_TRAVERSE(pReader->info.order) && pBlockInfo->record.lastKey < tsLast) ||
(!ASCENDING_TRAVERSE(pReader->info.order) && pBlockInfo->record.firstKey > tsLast))) {
// whole block is required, return it directly
SDataBlockInfo* pInfo = &pReader->resBlockInfo.pResBlock->info;
pInfo->rows = pBlockInfo->record.numRow;
pInfo->id.uid = pScanInfo->uid;
pInfo->dataLoad = 0;
pInfo->window = (STimeWindow){.skey = pBlockInfo->record.firstKey, .ekey = pBlockInfo->record.lastKey};
setComposedBlockFlag(pReader, false);
setBlockAllDumped(&pStatus->fBlockDumpInfo, pBlockInfo->record.lastKey, pReader->info.order);
// update the last key for the corresponding table
pScanInfo->lastKey = ASCENDING_TRAVERSE(pReader->info.order) ? pInfo->window.ekey : pInfo->window.skey;
tsdbDebug("%p uid:%" PRIu64
" clean file block retrieved from file, global index:%d, "
"table index:%d, rows:%d, brange:%" PRId64 "-%" PRId64 ", %s",
pReader, pScanInfo->uid, pBlockIter->index, pBlockInfo->tbBlockIdx, pBlockInfo->record.numRow,
pBlockInfo->record.firstKey, pBlockInfo->record.lastKey, pReader->idStr);
} else {
SBlockData* pBData = &pReader->status.fileBlockData;
tBlockDataReset(pBData);
SSDataBlock* pResBlock = pReader->resBlockInfo.pResBlock;
tsdbDebug("load data in last block firstly, due to desc scan data, %s", pReader->idStr);
tsdbDebug("load data in last block firstly %s", pReader->idStr);
int64_t st = taosGetTimestampUs();
@@ -2713,23 +2734,8 @@ static int32_t doBuildDataBlock(STsdbReader* pReader) {
pReader, pResBlock->info.id.uid, pResBlock->info.window.skey, pResBlock->info.window.ekey,
pResBlock->info.rows, el, pReader->idStr);
}
} else { // whole block is required, return it directly
SDataBlockInfo* pInfo = &pReader->resBlockInfo.pResBlock->info;
pInfo->rows = pBlockInfo->record.numRow;
pInfo->id.uid = pScanInfo->uid;
pInfo->dataLoad = 0;
pInfo->window = (STimeWindow){.skey = pBlockInfo->record.firstKey, .ekey = pBlockInfo->record.lastKey};
setComposedBlockFlag(pReader, false);
setBlockAllDumped(&pStatus->fBlockDumpInfo, pBlockInfo->record.lastKey, pReader->info.order);
// update the last key for the corresponding table
pScanInfo->lastKey = ASCENDING_TRAVERSE(pReader->info.order) ? pInfo->window.ekey : pInfo->window.skey;
tsdbDebug("%p uid:%" PRIu64
" clean file block retrieved from file, global index:%d, "
"table index:%d, rows:%d, brange:%" PRId64 "-%" PRId64 ", %s",
pReader, pScanInfo->uid, pBlockIter->index, pBlockInfo->tbBlockIdx, pBlockInfo->record.numRow,
pBlockInfo->record.firstKey, pBlockInfo->record.lastKey, pReader->idStr);
}
}
return (pReader->code != TSDB_CODE_SUCCESS) ? pReader->code : code;
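The reordered branches above make the clean-block fast path explicit: a file block is returned without decoding only when no stt/last row can interleave with it. A compact sketch of that overlap test (illustrative names):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* For ascending scans every block key must precede tsLast; for descending
   scans every block key must follow it. Otherwise rows are merged one by one. */
static bool blockIsClean(int64_t firstKey, int64_t lastKey, int64_t tsLast,
                         bool hasLastRows, bool asc) {
  if (!hasLastRows) return true;
  return asc ? (lastKey < tsLast) : (firstKey > tsLast);
}

int main(void) {
  printf("%d\n", blockIsClean(10, 20, 25, true, true)); /* 1: return whole block */
  printf("%d\n", blockIsClean(10, 20, 15, true, true)); /* 0: merge row by row   */
  return 0;
}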

View File

@@ -238,7 +238,7 @@ void initCacheFn(SStoreCacheReader* pCache) {
}
void initSnapshotFn(SStoreSnapshotFn* pSnapshot) {
pSnapshot->createSnapshot = setForSnapShot;
pSnapshot->setForSnapShot = setForSnapShot;
pSnapshot->destroySnapshot = destroySnapContext;
pSnapshot->getMetaTableInfoFromSnapshot = getMetaTableInfoFromSnapshot;
pSnapshot->getTableInfoFromSnapshot = getTableInfoFromSnapshot;

View File

@@ -458,12 +458,7 @@ int32_t vnodeProcessWriteMsg(SVnode *pVnode, SRpcMsg *pMsg, int64_t ver, SRpcMsg
}
break;
case TDMT_VND_TMQ_COMMIT_OFFSET:
if (tqProcessOffsetCommitReq(pVnode->pTq, ver, pReq, pMsg->contLen - sizeof(SMsgHead)) < 0) {
goto _err;
}
break;
case TDMT_VND_TMQ_SEEK_TO_OFFSET:
if (tqProcessSeekReq(pVnode->pTq, ver, pReq, pMsg->contLen - sizeof(SMsgHead)) < 0) {
if (tqProcessOffsetCommitReq(pVnode->pTq, ver, pReq, len) < 0) {
goto _err;
}
break;

View File

@@ -1172,8 +1172,8 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT
SStoreTqReader* pReaderAPI = &pTaskInfo->storageAPI.tqReaderFn;
SWalReader* pWalReader = pReaderAPI->tqReaderGetWalReader(pInfo->tqReader);
walReaderVerifyOffset(pWalReader, pOffset);
if (pReaderAPI->tqReaderSeek(pInfo->tqReader, pOffset->version + 1, id) < 0) {
qError("tqReaderSeek failed ver:%" PRId64 ", %s", pOffset->version + 1, id);
if (pReaderAPI->tqReaderSeek(pInfo->tqReader, pOffset->version, id) < 0) {
qError("tqReaderSeek failed ver:%" PRId64 ", %s", pOffset->version, id);
return -1;
}
} else if (pOffset->type == TMQ_OFFSET__SNAPSHOT_DATA) {
@@ -1262,7 +1262,7 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT
SOperatorInfo* p = extractOperatorInTree(pOperator, QUERY_NODE_PHYSICAL_PLAN_TABLE_SCAN, id);
STableListInfo* pTableListInfo = ((SStreamRawScanInfo*)(p->info))->pTableListInfo;
if (pAPI->snapshotFn.createSnapshot(sContext, pOffset->uid) != 0) {
if (pAPI->snapshotFn.setForSnapShot(sContext, pOffset->uid) != 0) {
qError("setDataForSnapShot error. uid:%" PRId64 " , %s", pOffset->uid, id);
terrno = TSDB_CODE_PAR_INTERNAL_ERROR;
return -1;
@@ -1299,7 +1299,7 @@ int32_t qStreamPrepareScan(qTaskInfo_t tinfo, STqOffsetVal* pOffset, int8_t subT
} else if (pOffset->type == TMQ_OFFSET__SNAPSHOT_META) {
SStreamRawScanInfo* pInfo = pOperator->info;
SSnapContext* sContext = pInfo->sContext;
if (pTaskInfo->storageAPI.snapshotFn.createSnapshot(sContext, pOffset->uid) != 0) {
if (pTaskInfo->storageAPI.snapshotFn.setForSnapShot(sContext, pOffset->uid) != 0) {
qError("setForSnapShot error. uid:%" PRIu64 " ,version:%" PRId64, pOffset->uid, pOffset->version);
terrno = TSDB_CODE_PAR_INTERNAL_ERROR;
return -1;

View File

@@ -502,9 +502,13 @@ void* destroyStreamFillSupporter(SStreamFillSupporter* pFillSup) {
pFillSup->pAllColInfo = destroyFillColumnInfo(pFillSup->pAllColInfo, pFillSup->numOfFillCols, pFillSup->numOfAllCols);
tSimpleHashCleanup(pFillSup->pResMap);
pFillSup->pResMap = NULL;
releaseOutputBuf(NULL, NULL, (SResultRow*)pFillSup->cur.pRowVal, &pFillSup->pAPI->stateStore); //?????
pFillSup->cur.pRowVal = NULL;
cleanupExprSupp(&pFillSup->notFillExprSup);
if (pFillSup->cur.pRowVal != pFillSup->prev.pRowVal && pFillSup->cur.pRowVal != pFillSup->next.pRowVal) {
taosMemoryFree(pFillSup->cur.pRowVal);
}
taosMemoryFree(pFillSup->prev.pRowVal);
taosMemoryFree(pFillSup->next.pRowVal);
taosMemoryFree(pFillSup->nextNext.pRowVal);
taosMemoryFree(pFillSup);
return NULL;
@@ -546,13 +550,17 @@ static void destroyStreamFillOperatorInfo(void* param) {
static void resetFillWindow(SResultRowData* pRowData) {
pRowData->key = INT64_MIN;
pRowData->pRowVal = NULL;
taosMemoryFreeClear(pRowData->pRowVal);
}
void resetPrevAndNextWindow(SStreamFillSupporter* pFillSup, void* pState, SStorageAPI* pAPI) {
if (pFillSup->cur.pRowVal != pFillSup->prev.pRowVal && pFillSup->cur.pRowVal != pFillSup->next.pRowVal) {
resetFillWindow(&pFillSup->cur);
} else {
pFillSup->cur.key = INT64_MIN;
pFillSup->cur.pRowVal = NULL;
}
resetFillWindow(&pFillSup->prev);
releaseOutputBuf(NULL, NULL, (SResultRow*)pFillSup->cur.pRowVal, &pAPI->stateStore); //???
resetFillWindow(&pFillSup->cur);
resetFillWindow(&pFillSup->next);
resetFillWindow(&pFillSup->nextNext);
}
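Both the destroy path and the reset path above hinge on one ownership rule: cur.pRowVal may alias prev.pRowVal or next.pRowVal, so that buffer must be freed exactly once. The rule in isolation, as a minimal sketch:

#include <stdlib.h>

typedef struct { long key; void *pRowVal; } SRowSlot;

/* Free cur only when it owns a distinct buffer; an aliased buffer is released
   through its owner (prev or next), never twice. */
static void destroyRowSlots(SRowSlot *cur, SRowSlot *prev, SRowSlot *next) {
  if (cur->pRowVal != prev->pRowVal && cur->pRowVal != next->pRowVal) {
    free(cur->pRowVal);
  }
  free(prev->pRowVal);
  free(next->pRowVal);
  cur->pRowVal = prev->pRowVal = next->pRowVal = NULL;
}

int main(void) {
  SRowSlot prev = {0, malloc(8)}, next = {0, malloc(8)};
  SRowSlot cur = {0, prev.pRowVal};    /* cur aliases prev */
  destroyRowSlots(&cur, &prev, &next); /* no double free   */
  return 0;
}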
@@ -1513,11 +1521,11 @@ SOperatorInfo* createStreamFillOperatorInfo(SOperatorInfo* downstream, SStreamFi
float v = 0;
GET_TYPED_DATA(v, float, pVar->nType, &pVar->i);
SET_TYPED_DATA(pCell->pData, pCell->type, v);
} else if (pCell->type == TSDB_DATA_TYPE_DOUBLE) {
} else if (IS_FLOAT_TYPE(pCell->type)) {
double v = 0;
GET_TYPED_DATA(v, double, pVar->nType, &pVar->i);
SET_TYPED_DATA(pCell->pData, pCell->type, v);
} else if (IS_SIGNED_NUMERIC_TYPE(pCell->type)) {
} else if (IS_INTEGER_TYPE(pCell->type)) {
int64_t v = 0;
GET_TYPED_DATA(v, int64_t, pVar->nType, &pVar->i);
SET_TYPED_DATA(pCell->pData, pCell->type, v);

View File

@@ -1642,12 +1642,13 @@ static SSDataBlock* doQueueScan(SOperatorInfo* pOperator) {
pAPI->tsdReader.tsdReaderClose(pTSInfo->base.dataReader);
pTSInfo->base.dataReader = NULL;
qDebug("queue scan tsdb over, switch to wal ver %" PRId64 "", pTaskInfo->streamInfo.snapshotVer + 1);
if (pAPI->tqReaderFn.tqReaderSeek(pInfo->tqReader, pTaskInfo->streamInfo.snapshotVer + 1, pTaskInfo->id.str) < 0) {
int64_t validVer = pTaskInfo->streamInfo.snapshotVer + 1;
qDebug("queue scan tsdb over, switch to wal ver %" PRId64 "", validVer);
if (pAPI->tqReaderFn.tqReaderSeek(pInfo->tqReader, validVer, pTaskInfo->id.str) < 0) {
return NULL;
}
tqOffsetResetToLog(&pTaskInfo->streamInfo.currentOffset, pTaskInfo->streamInfo.snapshotVer);
tqOffsetResetToLog(&pTaskInfo->streamInfo.currentOffset, validVer);
}
if (pTaskInfo->streamInfo.currentOffset.type == TMQ_OFFSET__LOG) {
@@ -1658,8 +1659,8 @@ static SSDataBlock* doQueueScan(SOperatorInfo* pOperator) {
SSDataBlock* pRes = pAPI->tqReaderFn.tqGetResultBlock(pInfo->tqReader);
struct SWalReader* pWalReader = pAPI->tqReaderFn.tqReaderGetWalReader(pInfo->tqReader);
// curVersion move to next, so currentOffset = curVersion - 1
tqOffsetResetToLog(&pTaskInfo->streamInfo.currentOffset, pWalReader->curVersion - 1);
// curVersion already points to the next entry to read, so store it as-is
tqOffsetResetToLog(&pTaskInfo->streamInfo.currentOffset, pWalReader->curVersion);
if (hasResult) {
qDebug("doQueueScan get data from log %" PRId64 " rows, version:%" PRId64, pRes->info.rows,
@@ -2262,7 +2263,7 @@ static SSDataBlock* doRawScan(SOperatorInfo* pOperator) {
STqOffsetVal offset = {0};
if (mtInfo.uid == 0 || pInfo->sContext->withMeta == ONLY_META) { // read snapshot done, change to get data from wal
qDebug("tmqsnap read snapshot done, change to get data from wal");
tqOffsetResetToLog(&offset, pInfo->sContext->snapVersion);
tqOffsetResetToLog(&offset, pInfo->sContext->snapVersion + 1);
} else {
tqOffsetResetToData(&offset, mtInfo.uid, INT64_MIN);
qDebug("tmqsnap change get data uid:%" PRId64 "", mtInfo.uid);
@@ -2778,7 +2779,7 @@ _error:
return NULL;
}
static SSDataBlock* getTableDataBlockImpl(void* param) {
static SSDataBlock* getBlockForTableMergeScan(void* param) {
STableMergeScanSortSourceParam* source = param;
SOperatorInfo* pOperator = source->pOperator;
STableMergeScanInfo* pInfo = pOperator->info;
@@ -2796,6 +2797,7 @@ static SSDataBlock* getTableDataBlockImpl(void* param) {
code = pAPI->tsdReader.tsdNextDataBlock(reader, &hasNext);
if (code != 0) {
pAPI->tsdReader.tsdReaderReleaseDataBlock(reader);
qError("table merge scan fetch next data block error code: %d, %s", code, GET_TASKID(pTaskInfo));
T_LONG_JMP(pTaskInfo->env, code);
}
@@ -2804,8 +2806,9 @@ }
}
if (isTaskKilled(pTaskInfo)) {
qInfo("table merge scan fetch next data block found task killed. %s", GET_TASKID(pTaskInfo));
pAPI->tsdReader.tsdReaderReleaseDataBlock(reader);
T_LONG_JMP(pTaskInfo->env, pTaskInfo->code);
break;
}
// process this data block based on the probabilities
@@ -2818,6 +2821,7 @@ static SSDataBlock* getTableDataBlockImpl(void* param) {
code = loadDataBlock(pOperator, &pInfo->base, pBlock, &status);
// code = loadDataBlockFromOneTable(pOperator, pTableScanInfo, pBlock, &status);
if (code != TSDB_CODE_SUCCESS) {
qInfo("table merge scan load datablock code %d, %s", code, GET_TASKID(pTaskInfo));
T_LONG_JMP(pTaskInfo->env, code);
}
@@ -2908,7 +2912,8 @@ int32_t startGroupTableMergeScan(SOperatorInfo* pOperator) {
tsortSetMergeLimit(pInfo->pSortHandle, mergeLimit);
}
tsortSetFetchRawDataFp(pInfo->pSortHandle, getTableDataBlockImpl, NULL, NULL);
tsortSetFetchRawDataFp(pInfo->pSortHandle, getBlockForTableMergeScan, NULL, NULL);
// one table has one data block
int32_t numOfTable = tableEndIdx - tableStartIdx + 1;

View File

@@ -3192,11 +3192,12 @@ SStreamStateCur* getNextSessionWinInfo(SStreamAggSupporter* pAggSup, SSHashObj*
return pCur;
}
static void compactSessionWindow(SOperatorInfo* pOperator, SResultWindowInfo* pCurWin, SSHashObj* pStUpdated,
static int32_t compactSessionWindow(SOperatorInfo* pOperator, SResultWindowInfo* pCurWin, SSHashObj* pStUpdated,
SSHashObj* pStDeleted, bool addGap) {
SExprSupp* pSup = &pOperator->exprSupp;
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SStorageAPI* pAPI = &pOperator->pTaskInfo->storageAPI;
SExprSupp* pSup = &pOperator->exprSupp;
SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo;
SStorageAPI* pAPI = &pOperator->pTaskInfo->storageAPI;
int32_t winNum = 0;
SStreamSessionAggOperatorInfo* pInfo = pOperator->info;
SResultRow* pCurResult = NULL;
@@ -3230,7 +3231,9 @@ static void compactSessionWindow(SOperatorInfo* pOperator, SResultWindowInfo* pC
doDeleteSessionWindow(pAggSup, &winInfo.sessionWin);
pAPI->stateStore.streamStateFreeCur(pCur);
taosMemoryFree(winInfo.pOutputBuf);
winNum++;
}
return winNum;
}
int32_t saveSessionOutputBuf(SStreamAggSupporter* pAggSup, SResultWindowInfo* pWinInfo) {
@@ -3731,9 +3734,22 @@ void streamSessionReloadState(SOperatorInfo* pOperator) {
for (int32_t i = 0; i < num; i++) {
SResultWindowInfo winInfo = {0};
setSessionOutputBuf(pAggSup, pSeKeyBuf[i].win.skey, pSeKeyBuf[i].win.ekey, pSeKeyBuf[i].groupId, &winInfo);
compactSessionWindow(pOperator, &winInfo, pInfo->pStUpdated, pInfo->pStDeleted, true);
saveSessionOutputBuf(pAggSup, &winInfo);
saveResult(winInfo, pInfo->pStUpdated);
int32_t winNum = compactSessionWindow(pOperator, &winInfo, pInfo->pStUpdated, pInfo->pStDeleted, true);
if (winNum > 0) {
saveSessionOutputBuf(pAggSup, &winInfo);
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE) {
saveResult(winInfo, pInfo->pStUpdated);
} else if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_WINDOW_CLOSE) {
if (!isCloseWindow(&winInfo.sessionWin.win, &pInfo->twAggSup)) {
saveDeleteRes(pInfo->pStDeleted, winInfo.sessionWin);
}
SSessionKey key = {0};
getSessionHashKey(&winInfo.sessionWin, &key);
tSimpleHashPut(pAggSup->pResultRows, &key, sizeof(SSessionKey), &winInfo, sizeof(SResultWindowInfo));
}
} else {
releaseOutputBuf(pAggSup->pState, NULL, (SResultRow*)winInfo.pOutputBuf, &pAggSup->stateStore);
}
}
taosMemoryFree(pBuf);
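The reload path above now saves a restored session window only when compaction actually merged neighbours (winNum > 0), and then routes the result according to the trigger mode. The routing on its own, under assumed trigger semantics:

#include <stdbool.h>

typedef enum { TRIGGER_AT_ONCE, TRIGGER_WINDOW_CLOSE } ETrigger;

typedef struct {
  bool emitNow;     /* push into the updated set for immediate output      */
  bool markDeleted; /* retract a result already emitted for the old window */
  bool reRegister;  /* keep the window in the result-row table until close */
} SReloadAction;

static SReloadAction routeReloadedWindow(ETrigger trigger, bool windowClosed) {
  SReloadAction a = {false, false, false};
  if (trigger == TRIGGER_AT_ONCE) {
    a.emitNow = true;
  } else { /* TRIGGER_WINDOW_CLOSE */
    a.markDeleted = !windowClosed;
    a.reRegister = true;
  }
  return a;
}

int main(void) { return routeReloadedWindow(TRIGGER_AT_ONCE, false).emitNow ? 0 : 1; }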
@@ -4383,7 +4399,16 @@ void streamStateReloadState(SOperatorInfo* pOperator) {
if (compareStateKey(curInfo.pStateKey,nextInfo.pStateKey)) {
compactStateWindow(pOperator, &curInfo.winInfo, &nextInfo.winInfo, pInfo->pSeUpdated, pInfo->pSeUpdated);
saveSessionOutputBuf(pAggSup, &curInfo.winInfo);
saveResult(curInfo.winInfo, pInfo->pSeUpdated);
if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_AT_ONCE) {
saveResult(curInfo.winInfo, pInfo->pSeUpdated);
} else if (pInfo->twAggSup.calTrigger == STREAM_TRIGGER_WINDOW_CLOSE) {
if (!isCloseWindow(&curInfo.winInfo.sessionWin.win, &pInfo->twAggSup)) {
saveDeleteRes(pInfo->pSeDeleted, curInfo.winInfo.sessionWin);
}
SSessionKey key = {0};
getSessionHashKey(&curInfo.winInfo.sessionWin, &key);
tSimpleHashPut(pAggSup->pResultRows, &key, sizeof(SSessionKey), &curInfo.winInfo, sizeof(SResultWindowInfo));
}
}
if (IS_VALID_SESSION_WIN(curInfo.winInfo)) {
@@ -4622,6 +4647,7 @@ static void doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
finalizeResultRows(pIaInfo->aggSup.pResultBuf, &pResultRowInfo->cur, pSup, pRes, pTaskInfo);
resetResultRow(pMiaInfo->pResultRow, pIaInfo->aggSup.resultRowSize - sizeof(SResultRow));
cleanupAfterGroupResultGen(pMiaInfo, pRes);
doFilter(pRes, pOperator->exprSupp.pFilterInfo, NULL);
}
setOperatorCompleted(pOperator);
@@ -4642,6 +4668,7 @@ static void doMergeAlignedIntervalAgg(SOperatorInfo* pOperator) {
pMiaInfo->prefetchedBlock = pBlock;
cleanupAfterGroupResultGen(pMiaInfo, pRes);
doFilter(pRes, pOperator->exprSupp.pFilterInfo, NULL);
break;
} else {
// continue

View File

@@ -1049,12 +1049,24 @@ static int32_t createBlocksMergeSortInitialSources(SSortHandle* pHandle) {
}
if (pBlk == NULL) {
break;
};
}
if (tsortIsClosed(pHandle)) {
tSimpleHashClear(mUidBlk);
for (int i = 0; i < taosArrayGetSize(aBlkSort); ++i) {
blockDataDestroy(taosArrayGetP(aBlkSort, i));
}
taosArrayClear(aBlkSort);
break;
}
}
tSimpleHashCleanup(mUidBlk);
taosArrayDestroy(aBlkSort);
tsortClearOrderdSource(pHandle->pOrderedSource, NULL, NULL);
taosArrayAddAll(pHandle->pOrderedSource, aExtSrc);
if (!tsortIsClosed(pHandle)) {
taosArrayAddAll(pHandle->pOrderedSource, aExtSrc);
}
taosArrayDestroy(aExtSrc);
pHandle->type = SORT_SINGLESOURCE_SORT;

View File

@@ -3413,7 +3413,8 @@ static int32_t checkIntervalWindow(STranslateContext* pCxt, SIntervalWindowNode*
if (IS_CALENDAR_TIME_DURATION(pSliding->unit)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INTER_SLIDING_UNIT);
}
if ((pSliding->datum.i < convertTimePrecision(tsMinSlidingTime, TSDB_TIME_PRECISION_MILLI, precision)) ||
if ((pSliding->datum.i <
convertTimeFromPrecisionToUnit(tsMinSlidingTime, TSDB_TIME_PRECISION_MILLI, pSliding->unit)) ||
(pInter->datum.i / pSliding->datum.i > INTERVAL_SLIDING_FACTOR)) {
return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INTER_SLIDING_TOO_SMALL);
}
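The corrected check above converts the configured minimum into the unit the user wrote in the SLIDING clause, instead of into the database precision, before comparing. A sketch of the two constraints, with INTERVAL_SLIDING_FACTOR assumed to be 100 purely for illustration:

#include <stdbool.h>
#include <stdint.h>

#define INTERVAL_SLIDING_FACTOR 100 /* assumed value, for illustration only */

/* sliding and minSlidingInUnit are expressed in the SLIDING clause's own unit;
   interval is taken in the same unit for this sketch. */
static bool slidingIsValid(int64_t sliding, int64_t minSlidingInUnit, int64_t interval) {
  if (sliding < minSlidingInUnit) return false;                   /* too small */
  if (interval / sliding > INTERVAL_SLIDING_FACTOR) return false; /* too dense */
  return true;
}

int main(void) { return slidingIsValid(10, 1, 500) ? 0 : 1; }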
@@ -6196,9 +6197,7 @@ static int32_t translateCreateTopic(STranslateContext* pCxt, SCreateTopicStmt* p
static int32_t translateDropTopic(STranslateContext* pCxt, SDropTopicStmt* pStmt) {
SMDropTopicReq dropReq = {0};
SName name;
tNameSetDbName(&name, pCxt->pParseCxt->acctId, pStmt->topicName, strlen(pStmt->topicName));
tNameGetFullDbName(&name, dropReq.name);
snprintf(dropReq.name, sizeof(dropReq.name), "%d.%s", pCxt->pParseCxt->acctId, pStmt->topicName);
dropReq.igNotExists = pStmt->ignoreNotExists;
return buildCmdMsg(pCxt, TDMT_MND_TMQ_DROP_TOPIC, (FSerializeFunc)tSerializeSMDropTopicReq, &dropReq);
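The snprintf above formats the fully qualified topic name "<acctId>.<topic>" directly, sidestepping the db-name helpers and whatever quoting they apply. In isolation:

#include <stdio.h>

int main(void) {
  char name[192];
  int  acctId = 1; /* hypothetical account id */
  snprintf(name, sizeof(name), "%d.%s", acctId, "topic_meters");
  printf("%s\n", name); /* -> "1.topic_meters" */
  return 0;
}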

View File

@@ -1033,7 +1033,6 @@ static int32_t createSortLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect
pSort->node.resultDataOrder = isPrimaryKeySort(pSelect->pOrderByList)
? (pSort->groupSort ? DATA_ORDER_LEVEL_IN_GROUP : DATA_ORDER_LEVEL_GLOBAL)
: DATA_ORDER_LEVEL_NONE;
int32_t code = nodesCollectColumns(pSelect, SQL_CLAUSE_ORDER_BY, NULL, COLLECT_COL_TYPE_ALL, &pSort->node.pTargets);
if (TSDB_CODE_SUCCESS == code && NULL == pSort->node.pTargets) {
code = nodesListMakeStrictAppend(&pSort->node.pTargets,
@@ -1047,6 +1046,7 @@ static int32_t createSortLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSelect
}
SNode* pNode = NULL;
SOrderByExprNode* firstSortKey = (SOrderByExprNode*)nodesListGetNode(pSort->pSortKeys, 0);
if (isPrimaryKeySort(pSelect->pOrderByList)) pSort->node.outputTsOrder = firstSortKey->order;
if (firstSortKey->pExpr->type == QUERY_NODE_COLUMN) {
SColumnNode* pCol = (SColumnNode*)firstSortKey->pExpr;
int16_t projIdx = 1;

View File

@@ -1181,7 +1181,8 @@ static bool sortPriKeyOptMayBeOptimized(SLogicNode* pNode) {
return false;
}
SSortLogicNode* pSort = (SSortLogicNode*)pNode;
if (!sortPriKeyOptIsPriKeyOrderBy(pSort->pSortKeys) || 1 != LIST_LENGTH(pSort->node.pChildren)) {
if (pSort->skipPKSortOpt || !sortPriKeyOptIsPriKeyOrderBy(pSort->pSortKeys) ||
1 != LIST_LENGTH(pSort->node.pChildren)) {
return false;
}
SNode* pChild;
@@ -1194,8 +1195,8 @@ static bool sortPriKeyOptMayBeOptimized(SLogicNode* pNode) {
return true;
}
static int32_t sortPriKeyOptGetSequencingNodesImpl(SLogicNode* pNode, bool groupSort, bool* pNotOptimize,
SNodeList** pSequencingNodes) {
static int32_t sortPriKeyOptGetSequencingNodesImpl(SLogicNode* pNode, bool groupSort, EOrder sortOrder,
bool* pNotOptimize, SNodeList** pSequencingNodes) {
if (NULL != pNode->pLimit || NULL != pNode->pSlimit) {
*pNotOptimize = false;
return TSDB_CODE_SUCCESS;
@@ -1212,15 +1213,21 @@ static int32_t sortPriKeyOptGetSequencingNodesImpl(SLogicNode* pNode, bool group
}
case QUERY_NODE_LOGIC_PLAN_JOIN: {
int32_t code = sortPriKeyOptGetSequencingNodesImpl((SLogicNode*)nodesListGetNode(pNode->pChildren, 0), groupSort,
pNotOptimize, pSequencingNodes);
sortOrder, pNotOptimize, pSequencingNodes);
if (TSDB_CODE_SUCCESS == code) {
code = sortPriKeyOptGetSequencingNodesImpl((SLogicNode*)nodesListGetNode(pNode->pChildren, 1), groupSort,
pNotOptimize, pSequencingNodes);
sortOrder, pNotOptimize, pSequencingNodes);
}
return code;
}
case QUERY_NODE_LOGIC_PLAN_WINDOW:
return nodesListMakeAppend(pSequencingNodes, (SNode*)pNode);
case QUERY_NODE_LOGIC_PLAN_WINDOW: {
SWindowLogicNode* pWindowLogicNode = (SWindowLogicNode*)pNode;
// For interval window, we always apply sortPriKey optimization.
// For session/event/state window, the output ts order will always be ASC.
// If sort order is also asc, we apply optimization, otherwise we keep sort node to get correct output order.
if (pWindowLogicNode->winType == WINDOW_TYPE_INTERVAL || sortOrder == ORDER_ASC)
return nodesListMakeAppend(pSequencingNodes, (SNode*)pNode);
}
case QUERY_NODE_LOGIC_PLAN_AGG:
case QUERY_NODE_LOGIC_PLAN_PARTITION:
*pNotOptimize = true;
@@ -1234,23 +1241,25 @@ static int32_t sortPriKeyOptGetSequencingNodesImpl(SLogicNode* pNode, bool group
return TSDB_CODE_SUCCESS;
}
return sortPriKeyOptGetSequencingNodesImpl((SLogicNode*)nodesListGetNode(pNode->pChildren, 0), groupSort,
return sortPriKeyOptGetSequencingNodesImpl((SLogicNode*)nodesListGetNode(pNode->pChildren, 0), groupSort, sortOrder,
pNotOptimize, pSequencingNodes);
}
static int32_t sortPriKeyOptGetSequencingNodes(SLogicNode* pNode, bool groupSort, SNodeList** pSequencingNodes) {
bool notOptimize = false;
int32_t code = sortPriKeyOptGetSequencingNodesImpl(pNode, groupSort, &notOptimize, pSequencingNodes);
if (TSDB_CODE_SUCCESS != code || notOptimize) {
NODES_CLEAR_LIST(*pSequencingNodes);
}
return code;
}
static EOrder sortPriKeyOptGetPriKeyOrder(SSortLogicNode* pSort) {
return ((SOrderByExprNode*)nodesListGetNode(pSort->pSortKeys, 0))->order;
}
static int32_t sortPriKeyOptGetSequencingNodes(SSortLogicNode* pSort, bool groupSort, SNodeList** pSequencingNodes) {
bool notOptimize = false;
int32_t code =
sortPriKeyOptGetSequencingNodesImpl((SLogicNode*)nodesListGetNode(pSort->node.pChildren, 0), groupSort,
sortPriKeyOptGetPriKeyOrder(pSort), &notOptimize, pSequencingNodes);
if (TSDB_CODE_SUCCESS != code || notOptimize) {
NODES_CLEAR_LIST(*pSequencingNodes);
}
return code;
}
static int32_t sortPriKeyOptApply(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan, SSortLogicNode* pSort,
SNodeList* pSequencingNodes) {
EOrder order = sortPriKeyOptGetPriKeyOrder(pSort);
@@ -1289,10 +1298,17 @@ static int32_t sortPriKeyOptApply(SOptimizeContext* pCxt, SLogicSubplan* pLogicS
static int32_t sortPrimaryKeyOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan, SSortLogicNode* pSort) {
SNodeList* pSequencingNodes = NULL;
int32_t code = sortPriKeyOptGetSequencingNodes((SLogicNode*)nodesListGetNode(pSort->node.pChildren, 0),
pSort->groupSort, &pSequencingNodes);
if (TSDB_CODE_SUCCESS == code && NULL != pSequencingNodes) {
code = sortPriKeyOptApply(pCxt, pLogicSubplan, pSort, pSequencingNodes);
int32_t code = sortPriKeyOptGetSequencingNodes(pSort, pSort->groupSort, &pSequencingNodes);
if (TSDB_CODE_SUCCESS == code) {
if (pSequencingNodes != NULL) {
code = sortPriKeyOptApply(pCxt, pLogicSubplan, pSort, pSequencingNodes);
} else {
// if we decided not to push down sort info to children, we should propagate output ts order to parents of pSort
optSetParentOrder(pSort->node.pParent, sortPriKeyOptGetPriKeyOrder(pSort), 0);
// we need to prevent this pSort from being chosen to do optimization again
pSort->skipPKSortOpt = true;
pCxt->optimized = true;
}
}
nodesClearList(pSequencingNodes);
return code;
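The window-type gate above compresses the whole rule into one predicate: interval windows can emit either ts order, while session/event/state windows always emit ASC, so the sort may only be folded away when the requested order is ASC as well. As a sketch:

#include <stdbool.h>

typedef enum { WIN_INTERVAL, WIN_SESSION, WIN_STATE, WIN_EVENT } EWinType;

/* True when the sort node can be removed and its order pushed into the scan. */
static bool canFoldPrimaryKeySort(EWinType win, bool orderAsc) {
  return win == WIN_INTERVAL || orderAsc;
}

int main(void) { return canFoldPrimaryKeySort(WIN_SESSION, false) ? 1 : 0; }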

View File

@@ -35,7 +35,7 @@ int32_t schedulerInit() {
schMgmt.cfg.schPolicy = SCHEDULE_DEFAULT_POLICY;
schMgmt.cfg.enableReSchedule = true;
qDebug("schedule init, policy: %d, maxNodeTableNum: %" PRId64", reSchedule:%d",
qDebug("schedule init, policy: %d, maxNodeTableNum: %" PRId64", reSchedule:%d",
schMgmt.cfg.schPolicy, schMgmt.cfg.maxNodeTableNum, schMgmt.cfg.enableReSchedule);
schMgmt.jobRef = taosOpenRef(schMgmt.cfg.maxJobNum, schFreeJobImpl);
@@ -57,11 +57,11 @@ int32_t schedulerInit() {
}
if (taosGetSystemUUID((char *)&schMgmt.sId, sizeof(schMgmt.sId))) {
qError("generate schdulerId failed, errno:%d", errno);
qError("generate schedulerId failed, errno:%d", errno);
SCH_ERR_RET(TSDB_CODE_QRY_SYS_ERROR);
}
qInfo("scheduler 0x%" PRIx64 " initizlized, maxJob:%u", schMgmt.sId, schMgmt.cfg.maxJobNum);
qInfo("scheduler 0x%" PRIx64 " initialized, maxJob:%u", schMgmt.sId, schMgmt.cfg.maxJobNum);
return TSDB_CODE_SUCCESS;
}

View File

@@ -253,8 +253,8 @@ void streamBackendCleanup(void* arg) {
taosThreadMutexDestroy(&pHandle->cfMutex);
taosMemoryFree(pHandle);
qDebug("destroy stream backend backend:%p", pHandle);
taosMemoryFree(pHandle);
return;
}
void streamBackendHandleCleanup(void* arg) {
@@ -796,8 +796,8 @@ int32_t streamStateOpenBackendCf(void* backend, char* name, char** cfs, int32_t
char suffix[64] = {0};
rocksdb_options_t** cfOpts = taosMemoryCalloc(nCf, sizeof(rocksdb_options_t*));
RocksdbCfParam* params = taosMemoryCalloc(nCf, sizeof(RocksdbCfParam*));
rocksdb_comparator_t** pCompare = taosMemoryCalloc(nCf, sizeof(rocksdb_comparator_t**));
RocksdbCfParam* params = taosMemoryCalloc(nCf, sizeof(RocksdbCfParam));
rocksdb_comparator_t** pCompare = taosMemoryCalloc(nCf, sizeof(rocksdb_comparator_t*));
rocksdb_column_family_handle_t** cfHandle = taosMemoryCalloc(nCf, sizeof(rocksdb_column_family_handle_t*));
for (int i = 0; i < nCf; i++) {
@@ -960,7 +960,7 @@ int streamStateOpenBackend(void* backend, SStreamState* pState) {
param[i].tableOpt = tableOpt;
};
rocksdb_comparator_t** pCompare = taosMemoryCalloc(cfLen, sizeof(rocksdb_comparator_t**));
rocksdb_comparator_t** pCompare = taosMemoryCalloc(cfLen, sizeof(rocksdb_comparator_t*));
for (int i = 0; i < cfLen; i++) {
SCfInit* cf = &ginitDict[i];
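The hunks above fix two classic C slips: a qDebug that dereferenced pHandle after it had been freed, and calloc element sizes written one pointer level too deep (sizeof(T*) or sizeof(T**)) for arrays whose slots hold T or T*. Both patterns in miniature:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
  void *pHandle = malloc(16);
  printf("destroy backend:%p\n", pHandle); /* log first...                      */
  free(pHandle);                           /* ...then free, never the other way */

  /* the element size must match what each slot stores */
  int **arr = calloc(4, sizeof(int *)); /* slots hold int* -> sizeof(int *) */
  int  *v   = calloc(4, sizeof(int));   /* slots hold int  -> sizeof(int)   */
  free(arr);
  free(v);
  return 0;
}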
@@ -1066,15 +1066,15 @@ rocksdb_iterator_t* streamStateIterCreate(SStreamState* pState, const char* cfNa
rocksdb_readoptions_t** readOpt) {
int idx = streamStateGetCfIdx(pState, cfName);
SBackendCfWrapper* wrapper = pState->pTdbState->pBackendCfWrapper;
if (snapshot != NULL) {
*snapshot = (rocksdb_snapshot_t*)rocksdb_create_snapshot(wrapper->rocksdb);
}
rocksdb_readoptions_t* rOpt = rocksdb_readoptions_create();
*readOpt = rOpt;
rocksdb_readoptions_set_snapshot(rOpt, *snapshot);
rocksdb_readoptions_set_fill_cache(rOpt, 0);
SBackendCfWrapper* wrapper = pState->pTdbState->pBackendCfWrapper;
if (snapshot != NULL) {
*snapshot = (rocksdb_snapshot_t*)rocksdb_create_snapshot(wrapper->rocksdb);
rocksdb_readoptions_set_snapshot(rOpt, *snapshot);
rocksdb_readoptions_set_fill_cache(rOpt, 0);
}
return rocksdb_create_iterator_cf(wrapper->rocksdb, rOpt, ((rocksdb_column_family_handle_t**)wrapper->pHandle)[idx]);
}
@@ -1101,8 +1101,8 @@ rocksdb_iterator_t* streamStateIterCreate(SStreamState* pState, const char* cfNa
int32_t ttlVLen = ginitDict[i].enValueFunc((char*)value, vLen, 0, &ttlV); \
rocksdb_put_cf(db, opts, pHandle, (const char*)buf, klen, (const char*)ttlV, (size_t)ttlVLen, &err); \
if (err != NULL) { \
taosMemoryFree(err); \
qError("streamState str: %s failed to write to %s, err: %s", toString, funcname, err); \
taosMemoryFree(err); \
code = -1; \
} else { \
qTrace("streamState str:%s succ to write to %s, rowValLen:%d, ttlValLen:%d", toString, funcname, vLen, ttlVLen); \
@@ -1263,6 +1263,8 @@ int32_t streamStateGetGroupKVByCur_rocksdb(SStreamStateCur* pCur, SWinKey* pKey,
if (pKey->groupId == groupId) {
return 0;
}
taosMemoryFree((void*)*pVal);
*pVal = NULL;
}
return -1;
}
@@ -1440,8 +1442,6 @@ int32_t streamStateSessionPut_rocksdb(SStreamState* pState, const SSessionKey* k
int code = 0;
SStateSessionKey sKey = {.key = *key, .opNum = pState->number};
STREAM_STATE_PUT_ROCKSDB(pState, "sess", &sKey, value, vLen);
if (code == -1) {
}
return code;
}
int32_t streamStateSessionGet_rocksdb(SStreamState* pState, SSessionKey* key, void** pVal, int32_t* pVLen) {
@@ -1459,8 +1459,10 @@ int32_t streamStateSessionGet_rocksdb(SStreamState* pState, SSessionKey* key, vo
code = -1;
} else {
*key = resKey;
*pVal = taosMemoryCalloc(1, *pVLen);
memcpy(*pVal, tmp, *pVLen);
if (pVal != NULL && pVLen != NULL) {
*pVal = taosMemoryCalloc(1, *pVLen);
memcpy(*pVal, tmp, *pVLen);
}
}
}
taosMemoryFree(tmp);
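The added guard turns the out-parameters into optional ones: a caller passing NULL is only probing for existence. The pattern detached from RocksDB, with a hypothetical signature:

#include <stdlib.h>
#include <string.h>

/* Copy a found value to the caller only when both out-parameters are given;
   NULL pVal/pVLen means "just tell me whether the key exists". */
static int copyValueOut(const void *src, int len, void **pVal, int *pVLen) {
  if (pVal == NULL || pVLen == NULL) return 0;
  *pVal = malloc((size_t)len);
  if (*pVal == NULL) return -1;
  memcpy(*pVal, src, (size_t)len);
  *pVLen = len;
  return 0;
}

int main(void) { return copyValueOut("abc", 3, NULL, NULL); /* probe only */ }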

View File

@@ -62,7 +62,9 @@ const char* streamGetTaskStatusStr(int32_t status) {
static int32_t doLaunchScanHistoryTask(SStreamTask* pTask) {
SVersionRange* pRange = &pTask->dataRange.range;
streamSetParamForScanHistory(pTask);
if (pTask->info.fillHistory) {
streamSetParamForScanHistory(pTask);
}
streamSetParamForStreamScannerStep1(pTask, pRange, &pTask->dataRange.window);
int32_t code = streamStartRecoverTask(pTask, 0);
@@ -80,8 +82,10 @@ int32_t streamTaskLaunchScanHistory(SStreamTask* pTask) {
walReaderGetCurrentVer(pTask->exec.pWalReader));
}
} else if (pTask->info.taskLevel == TASK_LEVEL__AGG) {
if (pTask->info.fillHistory) {
streamSetParamForScanHistory(pTask);
}
streamTaskEnablePause(pTask);
streamSetParamForScanHistory(pTask);
streamTaskScanHistoryPrepare(pTask);
} else if (pTask->info.taskLevel == TASK_LEVEL__SINK) {
qDebug("s-task:%s sink task do nothing to handle scan-history", pTask->id.idStr);
@@ -435,7 +439,7 @@ int32_t streamTaskScanHistoryPrepare(SStreamTask* pTask) {
int32_t streamAggUpstreamScanHistoryFinish(SStreamTask* pTask) {
void* exec = pTask->exec.pExecutor;
if (qRestoreStreamOperatorOption(exec) < 0) {
if (pTask->info.fillHistory && qRestoreStreamOperatorOption(exec) < 0) {
return -1;
}
@@ -626,9 +630,12 @@ int32_t streamTaskScanHistoryDataComplete(SStreamTask* pTask) {
}
// restore param
int32_t code = streamRestoreParam(pTask);
if (code < 0) {
return -1;
int32_t code = 0;
if (pTask->info.fillHistory) {
code = streamRestoreParam(pTask);
if (code < 0) {
return -1;
}
}
// dispatch recover finish req to all related downstream task

View File

@@ -213,7 +213,7 @@ typedef struct SSyncNode {
int64_t minMatchIndex;
int64_t startTime;
int64_t leaderTime;
int64_t roleTimeMs;
int64_t lastReplicateTime;
int32_t electNum;

View File

@@ -508,12 +508,14 @@ SSyncState syncGetState(int64_t rid) {
SSyncNode* pSyncNode = syncNodeAcquire(rid);
if (pSyncNode != NULL) {
state.state = pSyncNode->state;
state.roleTimeMs = pSyncNode->roleTimeMs;
state.restored = pSyncNode->restoreFinish;
if (pSyncNode->vgId != 1) {
state.canRead = syncNodeIsReadyForRead(pSyncNode);
} else {
state.canRead = state.restored;
}
state.term = raftStoreGetTerm(pSyncNode);
syncNodeRelease(pSyncNode);
}
@@ -898,6 +900,7 @@ SSyncNode* syncNodeOpen(SSyncInfo* pSyncInfo) {
// init TLA+ server vars
pSyncNode->state = TAOS_SYNC_STATE_FOLLOWER;
pSyncNode->roleTimeMs = taosGetTimestampMs();
if (raftStoreOpen(pSyncNode) != 0) {
sError("vgId:%d, failed to open raft store at path %s", pSyncNode->vgId, pSyncNode->raftStorePath);
goto _error;
@@ -1035,7 +1038,6 @@ SSyncNode* syncNodeOpen(SSyncInfo* pSyncInfo) {
int64_t timeNow = taosGetTimestampMs();
pSyncNode->startTime = timeNow;
pSyncNode->leaderTime = timeNow;
pSyncNode->lastReplicateTime = timeNow;
// snapshotting
@@ -1131,6 +1133,7 @@ int32_t syncNodeStartStandBy(SSyncNode* pSyncNode) {
int32_t syncNodeStartStandBy(SSyncNode* pSyncNode) {
// state change
pSyncNode->state = TAOS_SYNC_STATE_FOLLOWER;
pSyncNode->roleTimeMs = taosGetTimestampMs();
syncNodeStopHeartbeatTimer(pSyncNode);
// reset elect timer, long enough
@@ -1667,6 +1670,7 @@ void syncNodeBecomeFollower(SSyncNode* pSyncNode, const char* debugStr) {
// state change
pSyncNode->state = TAOS_SYNC_STATE_FOLLOWER;
pSyncNode->roleTimeMs = taosGetTimestampMs();
syncNodeStopHeartbeatTimer(pSyncNode);
// trace log
@@ -1695,6 +1699,7 @@ void syncNodeBecomeLearner(SSyncNode* pSyncNode, const char* debugStr) {
// state change
pSyncNode->state = TAOS_SYNC_STATE_LEARNER;
pSyncNode->roleTimeMs = taosGetTimestampMs();
// trace log
sNTrace(pSyncNode, "become learner %s", debugStr);
@@ -1730,8 +1735,6 @@ void syncNodeBecomeLearner(SSyncNode* pSyncNode, const char* debugStr) {
// /\ UNCHANGED <<messages, currentTerm, votedFor, candidateVars, logVars>>
//
void syncNodeBecomeLeader(SSyncNode* pSyncNode, const char* debugStr) {
pSyncNode->leaderTime = taosGetTimestampMs();
pSyncNode->becomeLeaderNum++;
pSyncNode->hbrSlowNum = 0;
@@ -1740,6 +1743,7 @@ void syncNodeBecomeLeader(SSyncNode* pSyncNode, const char* debugStr) {
// state change
pSyncNode->state = TAOS_SYNC_STATE_LEADER;
pSyncNode->roleTimeMs = taosGetTimestampMs();
// set leader cache
pSyncNode->leaderCache = pSyncNode->myRaftId;
@@ -1839,6 +1843,7 @@ int32_t syncNodePeerStateInit(SSyncNode* pSyncNode) {
void syncNodeFollower2Candidate(SSyncNode* pSyncNode) {
ASSERT(pSyncNode->state == TAOS_SYNC_STATE_FOLLOWER);
pSyncNode->state = TAOS_SYNC_STATE_CANDIDATE;
pSyncNode->roleTimeMs = taosGetTimestampMs();
SyncIndex lastIndex = pSyncNode->pLogStore->syncLogLastIndex(pSyncNode->pLogStore);
sInfo("vgId:%d, become candidate from follower. term:%" PRId64 ", commit index:%" PRId64 ", last index:%" PRId64,
pSyncNode->vgId, raftStoreGetTerm(pSyncNode), pSyncNode->commitIndex, lastIndex);
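Every role transition above stamps roleTimeMs next to the state assignment. Funneling both through one helper keeps the pair from drifting apart; a sketch of that shape (hypothetical types, millisecond precision assumed):

#include <stdint.h>
#include <time.h>

typedef enum { ROLE_FOLLOWER, ROLE_CANDIDATE, ROLE_LEADER, ROLE_LEARNER } ERole;
typedef struct { ERole role; int64_t roleTimeMs; } SNodeRole;

/* One entry point for all transitions: the timestamp can never be forgotten. */
static void setRole(SNodeRole *n, ERole role) {
  n->role = role;
  n->roleTimeMs = (int64_t)time(NULL) * 1000;
}

int main(void) { SNodeRole n; setRole(&n, ROLE_LEADER); return n.role == ROLE_LEADER ? 0 : 1; }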

View File

@@ -87,22 +87,6 @@ static int32_t syncNodeTimerRoutine(SSyncNode* ths) {
}
}
if (atomic_load_64(&ths->snapshottingIndex) != SYNC_INDEX_INVALID) {
// end timeout wal snapshot
if (timeNow - ths->snapshottingTime > SYNC_DEL_WAL_MS &&
atomic_load_64(&ths->snapshottingIndex) != SYNC_INDEX_INVALID) {
SSyncLogStoreData* pData = ths->pLogStore->data;
int32_t code = walEndSnapshot(pData->pWal);
if (code != 0) {
sNError(ths, "timer wal snapshot end error since:%s", terrstr());
return -1;
} else {
sNTrace(ths, "wal snapshot end, index:%" PRId64, atomic_load_64(&ths->snapshottingIndex));
atomic_store_64(&ths->snapshottingIndex, SYNC_INDEX_INVALID);
}
}
}
if (!syncNodeIsMnode(ths)) {
syncRespClean(ths->pSyncRespMgr);
}

View File

@@ -391,7 +391,13 @@ static void httpHandleReq(SHttpMsg* msg) {
// set up timeout to avoid stuck;
int32_t fd = taosCreateSocketWithTimeout(5);
int ret = uv_tcp_open((uv_tcp_t*)&cli->tcp, fd);
if (fd < 0) {
tError("http-report failed to open socket, dst:%s:%d", cli->addr, cli->port);
taosReleaseRef(httpRefMgt, httpRef);
destroyHttpClient(cli);
return;
}
int ret = uv_tcp_open((uv_tcp_t*)&cli->tcp, fd);
if (ret != 0) {
tError("http-report failed to open socket, reason:%s, dst:%s:%d", uv_strerror(ret), cli->addr, cli->port);
taosReleaseRef(httpRefMgt, httpRef);

View File

@@ -73,7 +73,7 @@ typedef struct SCliConn {
SDelayTask* task;
char* ip;
char* dstAddr;
char src[32];
char dst[32];
@@ -196,6 +196,7 @@ static FORCE_INLINE int32_t cliBuildExceptResp(SCliMsg* pMsg, STransMsg* resp);
static FORCE_INLINE uint32_t cliGetIpFromFqdnCache(SHashObj* cache, char* fqdn);
static FORCE_INLINE void cliUpdateFqdnCache(SHashObj* cache, char* fqdn);
static FORCE_INLINE void cliMayUpdateFqdnCache(SHashObj* cache, char* dst);
// process data read from server, add decompress etc later
static void cliHandleResp(SCliConn* conn);
// handle except about conn
@@ -543,6 +544,7 @@ void cliConnTimeout(uv_timer_t* handle) {
taosArrayPush(pThrd->timerList, &conn->timer);
conn->timer = NULL;
cliMayUpdateFqdnCache(pThrd->fqdn2ipCache, conn->dstAddr);
cliHandleFastFail(conn, UV_ECANCELED);
}
void cliReadTimeoutCb(uv_timer_t* handle) {
@@ -719,7 +721,7 @@ static void addConnToPool(void* pool, SCliConn* conn) {
cliDestroyConnMsgs(conn, false);
if (conn->list == NULL) {
conn->list = taosHashGet((SHashObj*)pool, conn->ip, strlen(conn->ip) + 1);
conn->list = taosHashGet((SHashObj*)pool, conn->dstAddr, strlen(conn->dstAddr) + 1);
}
SConnList* pList = conn->list;
@@ -878,7 +880,7 @@ static void cliDestroyConn(SCliConn* conn, bool clear) {
connList->list->numOfConn--;
connList->size--;
} else {
SConnList* connList = taosHashGet((SHashObj*)pThrd->pool, conn->ip, strlen(conn->ip) + 1);
SConnList* connList = taosHashGet((SHashObj*)pThrd->pool, conn->dstAddr, strlen(conn->dstAddr) + 1);
if (connList != NULL) connList->list->numOfConn--;
}
conn->list = NULL;
@@ -923,7 +925,7 @@ static void cliDestroy(uv_handle_t* handle) {
transReleaseExHandle(transGetRefMgt(), conn->refId);
transRemoveExHandle(transGetRefMgt(), conn->refId);
taosMemoryFree(conn->ip);
taosMemoryFree(conn->dstAddr);
taosMemoryFree(conn->stream);
cliDestroyConnMsgs(conn, true);
@@ -1168,7 +1170,7 @@ static void cliHandleBatchReq(SCliBatch* pBatch, SCliThrd* pThrd) {
if (conn == NULL) {
conn = cliCreateConn(pThrd);
conn->pBatch = pBatch;
conn->ip = taosStrdup(pList->dst);
conn->dstAddr = taosStrdup(pList->dst);
uint32_t ipaddr = cliGetIpFromFqdnCache(pThrd->fqdn2ipCache, pList->ip);
if (ipaddr == 0xffffffff) {
@@ -1213,6 +1215,8 @@ static void cliHandleBatchReq(SCliBatch* pBatch, SCliThrd* pThrd) {
conn->timer->data = NULL;
taosArrayPush(pThrd->timerList, &conn->timer);
conn->timer = NULL;
cliMayUpdateFqdnCache(pThrd->fqdn2ipCache, conn->dstAddr);
cliHandleFastFail(conn, -1);
return;
}
@@ -1271,11 +1275,11 @@ static void cliHandleFastFail(SCliConn* pConn, int status) {
STraceId* trace = &pMsg->msg.info.traceId;
tGError("%s msg %s failed to send, conn %p failed to connect to %s, reason: %s", CONN_GET_INST_LABEL(pConn),
TMSG_INFO(pMsg->msg.msgType), pConn, pConn->ip, uv_strerror(status));
TMSG_INFO(pMsg->msg.msgType), pConn, pConn->dstAddr, uv_strerror(status));
if (pMsg != NULL && REQUEST_NO_RESP(&pMsg->msg) &&
(pTransInst->failFastFp != NULL && pTransInst->failFastFp(pMsg->msg.msgType))) {
SFailFastItem* item = taosHashGet(pThrd->failFastCache, pConn->ip, strlen(pConn->ip) + 1);
SFailFastItem* item = taosHashGet(pThrd->failFastCache, pConn->dstAddr, strlen(pConn->dstAddr) + 1);
int64_t cTimestamp = taosGetTimestampMs();
if (item != NULL) {
int32_t elapse = cTimestamp - item->timestamp;
@@ -1287,12 +1291,12 @@ static void cliHandleFastFail(SCliConn* pConn, int status) {
}
} else {
SFailFastItem item = {.count = 1, .timestamp = cTimestamp};
taosHashPut(pThrd->failFastCache, pConn->ip, strlen(pConn->ip) + 1, &item, sizeof(SFailFastItem));
taosHashPut(pThrd->failFastCache, pConn->dstAddr, strlen(pConn->dstAddr) + 1, &item, sizeof(SFailFastItem));
}
}
} else {
tError("%s batch msg failed to send, conn %p failed to connect to %s, reason: %s", CONN_GET_INST_LABEL(pConn),
pConn, pConn->ip, uv_strerror(status));
pConn, pConn->dstAddr, uv_strerror(status));
cliDestroyBatch(pConn->pBatch);
pConn->pBatch = NULL;
}
@@ -1314,6 +1318,7 @@ void cliConnCb(uv_connect_t* req, int status) {
}
if (status != 0) {
cliMayUpdateFqdnCache(pThrd->fqdn2ipCache, pConn->dstAddr);
if (timeout == false) {
cliHandleFastFail(pConn, status);
} else if (timeout == true) {
@@ -1483,9 +1488,34 @@ static FORCE_INLINE uint32_t cliGetIpFromFqdnCache(SHashObj* cache, char* fqdn)
}
static FORCE_INLINE void cliUpdateFqdnCache(SHashObj* cache, char* fqdn) {
// impl later
uint32_t addr = taosGetIpv4FromFqdn(fqdn);
if (addr != 0xffffffff) {
uint32_t* v = taosHashGet(cache, fqdn, strlen(fqdn) + 1);
if (addr != *v) {
char old[64] = {0}, new[64] = {0};
tinet_ntoa(old, *v);
tinet_ntoa(new, addr);
tWarn("update ip of fqdn:%s, old: %s, new: %s", fqdn, old, new);
taosHashPut(cache, fqdn, strlen(fqdn) + 1, &addr, sizeof(addr));
}
}
return;
}
static void cliMayUpdateFqdnCache(SHashObj* cache, char* dst) {
if (dst == NULL) return;
int16_t i = 0, len = strlen(dst);
for (i = len - 1; i >= 0; i--) {
if (dst[i] == ':') break;
}
if (i > 0) {
char fqdn[TSDB_FQDN_LEN + 1] = {0};
memcpy(fqdn, dst, i);
cliUpdateFqdnCache(cache, fqdn);
}
}
static void doFreeTimeoutMsg(void* param) {
STaskArg* arg = param;
SCliMsg* pMsg = arg->param1;
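cliMayUpdateFqdnCache above peels the ":port" suffix off a "fqdn:port" destination before refreshing the resolver cache. The same parse with strrchr, as a standalone sketch (the host name is made up):

#include <stdio.h>
#include <string.h>

/* Extract the fqdn part of "fqdn:port"; returns 0 on success. */
static int extractFqdn(const char *dst, char *fqdn, size_t cap) {
  const char *colon = strrchr(dst, ':');
  if (colon == NULL || colon == dst) return -1; /* no port, or empty host */
  size_t len = (size_t)(colon - dst);
  if (len >= cap) return -1;
  memcpy(fqdn, dst, len);
  fqdn[len] = '\0';
  return 0;
}

int main(void) {
  char fqdn[128];
  if (extractFqdn("tdengine.example.com:6030", fqdn, sizeof(fqdn)) == 0)
    printf("refresh cache for %s\n", fqdn);
  return 0;
}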
@@ -1560,7 +1590,7 @@ void cliHandleReq(SCliMsg* pMsg, SCliThrd* pThrd) {
transCtxMerge(&conn->ctx, &pMsg->ctx->appCtx);
transQueuePush(&conn->cliMsgs, pMsg);
conn->ip = taosStrdup(addr);
conn->dstAddr = taosStrdup(addr);
uint32_t ipaddr = cliGetIpFromFqdnCache(pThrd->fqdn2ipCache, fqdn);
if (ipaddr == 0xffffffff) {
@@ -1578,7 +1608,7 @@ void cliHandleReq(SCliMsg* pMsg, SCliThrd* pThrd) {
addr.sin_addr.s_addr = ipaddr;
addr.sin_port = (uint16_t)htons(port);
tGTrace("%s conn %p try to connect to %s", pTransInst->label, conn, conn->ip);
tGTrace("%s conn %p try to connect to %s", pTransInst->label, conn, conn->dstAddr);
pThrd->newConnCount++;
int32_t fd = taosCreateSocketWithTimeout(TRANS_CONN_TIMEOUT * 4);
if (fd == -1) {
@@ -1608,6 +1638,7 @@ void cliHandleReq(SCliMsg* pMsg, SCliThrd* pThrd) {
taosArrayPush(pThrd->timerList, &conn->timer);
conn->timer = NULL;
cliMayUpdateFqdnCache(pThrd->fqdn2ipCache, conn->dstAddr);
cliHandleFastFail(conn, ret);
return;
}

View File

@@ -70,17 +70,18 @@ int32_t walNextValidMsg(SWalReader *pReader) {
int64_t fetchVer = pReader->curVersion;
int64_t lastVer = walGetLastVer(pReader->pWal);
int64_t committedVer = walGetCommittedVer(pReader->pWal);
int64_t appliedVer = walGetAppliedVer(pReader->pWal);
// int64_t appliedVer = walGetAppliedVer(pReader->pWal);
if(appliedVer < committedVer){ // wait apply ver equal to commit ver, otherwise may lost data when consume data [TD-24010]
wDebug("vgId:%d, wal apply ver:%"PRId64" smaller than commit ver:%"PRId64, pReader->pWal->cfg.vgId, appliedVer, committedVer);
}
// if(appliedVer < committedVer){ // wait apply ver equal to commit ver, otherwise may lost data when consume data [TD-24010]
// wDebug("vgId:%d, wal apply ver:%"PRId64" smaller than commit ver:%"PRId64, pReader->pWal->cfg.vgId, appliedVer, committedVer);
// }
int64_t endVer = TMIN(appliedVer, committedVer);
// int64_t endVer = TMIN(appliedVer, committedVer);
int64_t endVer = committedVer;
wDebug("vgId:%d, wal start to fetch, index:%" PRId64 ", last index:%" PRId64 " commit index:%" PRId64
", applied index:%" PRId64", end index:%" PRId64,
pReader->pWal->cfg.vgId, fetchVer, lastVer, committedVer, appliedVer, endVer);
", end index:%" PRId64,
pReader->pWal->cfg.vgId, fetchVer, lastVer, committedVer, endVer);
if (fetchVer > endVer){
terrno = TSDB_CODE_WAL_LOG_NOT_EXIST;
@@ -135,8 +136,8 @@ void walReaderVerifyOffset(SWalReader *pWalReader, STqOffsetVal* pOffset){
int64_t firstVer = walGetFirstVer((pWalReader)->pWal);
taosThreadMutexUnlock(&pWalReader->pWal->mutex);
if (pOffset->version + 1 < firstVer){
pOffset->version = firstVer - 1;
if (pOffset->version < firstVer){
pOffset->version = firstVer;
}
}
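With offsets now holding the next version to read rather than the last one delivered, an offset that has fallen behind the oldest retained WAL entry is clamped to firstVer itself instead of firstVer - 1. The clamp in isolation:

#include <stdint.h>

/* Offsets store the next version to consume; never point before the oldest
   entry the WAL still retains. */
static inline int64_t clampWalOffset(int64_t offset, int64_t firstVer) {
  return (offset < firstVer) ? firstVer : offset;
}

int main(void) { return clampWalOffset(3, 10) == 10 ? 0 : 1; }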
@@ -370,9 +371,9 @@ int32_t walFetchHead(SWalReader *pRead, int64_t ver, SWalCkHead *pHead) {
pRead->pWal->vers.appliedVer);
// TODO: valid ver
if (ver > pRead->pWal->vers.appliedVer) {
return -1;
}
// if (ver > pRead->pWal->vers.appliedVer) {
// return -1;
// }
if (pRead->curVersion != ver) {
code = walReaderSeekVer(pRead, ver);

Some files were not shown because too many files have changed in this diff