@@ -284,7 +284,7 @@ SELECT COUNT(*) FROM d1001 WHERE ts >= '2017-7-14 00:00:00' AND ts < '2017-7-14
TDengine creates a separate table for each data collection point, but in practice it is often necessary to aggregate data from different collection points. To perform such aggregation efficiently, TDengine introduces the concept of the supertable (STable). A STable represents a specific type of data collection point: it is a collection of tables in which every table has exactly the same schema, but each table carries its own static tags. There can be multiple tags, and they can be added, deleted, or modified at any time. By specifying tag filter conditions, an application can run aggregation or statistics over all or a subset of the tables under a STable, which greatly simplifies application development. The process is illustrated in the figure below:
-
+
Figure 5. How multi-table aggregation queries work
diff --git a/docs-cn/21-tdinternal/02-replica.md b/docs-cn/21-tdinternal/02-replica.md
deleted file mode 100644
index 6a384b982d..0000000000
--- a/docs-cn/21-tdinternal/02-replica.md
+++ /dev/null
@@ -1,256 +0,0 @@
----
-sidebar_label: Data Replication Module Design
-title: Data Replication Module Design
----
-
-## Overview of Data Replication
-
-Replication means keeping the same data at multiple physical locations. Its purpose is to prevent data loss and improve system availability (High Availability); by letting applications access multiple replicas, it can also improve query performance.
-
-In a highly reliable big data system, data replication is indispensable. Replication can be real-time or non-real-time. With real-time replication, every data update (insert, delete, or modify) is replicated to all replicas in real time, so that if any machine goes down or the network fails, the system can still serve the latest data and keep working. Non-real-time replication refers to traditional backup: on a fixed schedule, a full or incremental copy of the data is made to another location. If the primary node goes down, the replica is quite likely to be missing the latest data, so this approach cannot meet the requirements of some scenarios.
-
-TDengine targets IoT scenarios and needs to support real-time replication to maximize system reliability. Real-time replication can be done in two ways: asynchronously or synchronously. With asynchronous replication, after the master forwards data to a slave it does not wait for the slave's acknowledgement; this is efficient, but there is a very small probability of data loss. With synchronous replication, after forwarding data to a slave, the master waits for the slave's acknowledgement before telling the application the write succeeded; this is less efficient but guarantees that no data is ever lost.
-
-Data replication is closely related to data storage (writes and reads), but the two are relatively independent and can be fully decoupled. TDengine has two kinds of data: time-series data, handled by the TSDB module, and metadata (Meta Data), handled by MNODE. Both kinds of data need synchronization. Through per-instance startup configuration parameters, the data replication module provides synchronization for both.
-
-Before reading this document, please read [TDengine 2.0 Architecture](/tdinternal/arch/) to understand TDengine's cluster design and basic concepts.
-
-Note: in this document, data update operations include inserts, deletes, and modifications.
-
-## Basic Concepts and Definitions
-
-TDengine has vnodes and mnodes: vnodes store time-series data and mnodes store metadata. From the point of view of replication, however, there is no essential difference between the two, so in this document "virtual node" covers mnodes as well as vnodes, and "vgroup" also refers to mnode groups, unless otherwise noted.
-
-**Version:**
-
-Multiple virtual nodes in a virtual node group back each other up, and the validity and reliability of the data are maintained through the group's data version numbers. In the TDengine 2.0 design, the version is defined as follows: when a client issues an insert, delete, or update request, whether for one record or many, as long as they are in a single request, once the request is received by a TDengine virtual node and passes validity checks, it is assigned a version number at the moment it can be written into the system. Within a virtual node, the version number starts at 1 and increases monotonically and contiguously. Time-series records and meta data are treated the same way. When the master forwards a write request to a slave, it must carry the version number, and when a virtual node writes a data update request into the WAL, the version number must be included as well.
-
-The data version numbers of different virtual node groups are completely independent of one another. The version number is essentially the transaction ID of a data update record, but it is also used to identify the version of the data set.
-
-**Role:**
-
-A virtual node can be in the master, slave, unsynced, or offline state.
-
-- master: has the latest data and accepts client writes; a virtual node group has at most one master.
-- slave: in sync with the master, but does not accept client writes; depending on configuration, it may serve client queries.
-- unsynced: the node is not synchronized, for example because the virtual node has just started or its connections to other virtual nodes have failed. In this state the virtual node can serve neither writes nor queries.
-- offline: when a virtual node cannot be reached because of a crash or a network problem, the other virtual nodes mark it offline. Note that the node's own state may be unsynced or something else, but never offline.
-
-**Quorum:**
-
-The number of acknowledgements required for a data write to be considered successful. For asynchronous replication, quorum is set to 1: the confirmation of the virtual node holding the master role is enough by itself. For synchronous replication, it must be at least 2. In general, Quorum >= 1 and Quorum <= replication (the replica count). This parameter must be provided when a replication module instance is started.
-
-**WAL:**
-
-TDengine's WAL (Write Ahead Log) is essentially no different from Cassandra's commit log, MySQL's binlog, or Postgres's WAL. Data that has not yet been written to database files and still resides only in memory is stored in the WAL first. Once the data has been successfully written to the database data files, the corresponding WAL is deleted. A few points specific to TDengine are worth noting:
-
-- Each virtual node has its own independent WAL.
-- The WAL contains all data update operations from clients, and only those; each update operation is tagged with a version number.
-
-**Replication instance:**
-
-The replication module is just executable code; a replication instance is one running instance of the module, and a single node can run multiple instances. In principle, a node can start as many instances as it hosts virtual nodes. For single-replica setups, the application can decide whether a replication instance is needed at all. When starting a replication module instance, the application must provide the configuration of the virtual node group, including (see the sketch after this list):
-
-- the number of virtual nodes, i.e. the replication number
-- information about the node hosting each virtual node, including the node's end point
-- quorum, the number of acknowledgements required for a successful write
-- the initial version number of the virtual node
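-
-As a minimal sketch of this configuration (all names, types, and sizes here are assumptions for illustration, not the actual TDengine source):
-
-```c
-#include <stdint.h>
-
-#define SYNC_MAX_REPLICA 8      /* assumed cap, for illustration only */
-#define SYNC_EP_LEN      128    /* assumed end-point string length */
-
-/* Hypothetical sketch of the per-vgroup configuration listed above;
- * the real TDengine structs differ. */
-typedef struct {
-  int8_t   replica;                                /* number of virtual nodes */
-  int8_t   quorum;                                 /* acks needed for a successful write */
-  uint64_t version;                                /* initial version of this virtual node */
-  char     nodeEp[SYNC_MAX_REPLICA][SYNC_EP_LEN];  /* end point (FQDN:port) of each node */
-} SSyncCfgSketch;
-```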
-
-## Basic Working Principles of the Data Replication Module
-
-TDengine synchronizes data in a Master-Slave fashion, quite similar to the popular RAFT consensus algorithm. In summary:
-
-1. A vgroup contains one or more virtual nodes, each with its own role.
-2. Clients can send data update operations only to the virtual node whose role is master, because the master holds the latest version of the data; an update sent to a non-master node receives an error immediately.
-3. Clients can send queries to the master or to a virtual node whose role is slave, but cannot send any operation to an unsynced virtual node.
-4. If no master exists, the vgroup cannot serve data updates or queries at all.
-5. When the master receives a data update operation from a client, it forwards it to the slave nodes.
-6. When a virtual node's version number is lower than the master's, it starts the data recovery procedure and becomes a slave only after recovery succeeds.
-
-Real-time replication has three main procedures: master election, data forwarding, and data recovery. Each is discussed in detail below.
-
-## Network Connections Between Virtual Nodes
-
-Virtual nodes connect to one another over TCP; state exchange between nodes and the forwarding of data packets all go through this TCP connection (peerFd). To avoid races, the TCP connection between two virtual nodes is always initiated, as the TCP client, by the node whose IP address (as a UINT32) is smaller. Once the TCP connection is broken, the virtual nodes detect it through the TCP socket and mark the peer offline. If any error is detected (for example in the data recovery procedure), a virtual node proactively resets the connection.
-
-If the node acting as the client cannot connect, or the connection is interrupted, it retries periodically, once per second. Because TCP itself provides a keepalive mechanism, virtual nodes do not implement an additional heartbeat.
-
-If an unsynced node needs to start the data recovery procedure, it establishes a dedicated TCP connection (syncFd) with the master. When data recovery completes, the connection is closed. To limit resource usage, the system allows only a certain number (configuration parameter tsMaxSyncNum) of data recovery sockets to exist; beyond that number, new data recovery requests are deferred.
-
-Every node, no matter how many virtual nodes it hosts, starts one and only one TCP server to accept the two kinds of TCP connection requests described above from other virtual nodes. When a TCP socket is established, the message sent by the client side carries a vgId (the globally unique vgroup ID), and the TCP server checks whether that vgId is up and running on the node. If it is, the request is accepted; if not, the connection is simply closed. In the TDengine code, the vgId of the mnode group is set to 1.
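-
-A minimal sketch of that vgId check on the accepting side (function names are assumptions for illustration):
-
-```c
-#include <stddef.h>
-#include <stdint.h>
-
-extern void *vgroupLookup(int32_t vgId);   /* hypothetical: find a running vgroup */
-extern void  closeConn(int fd);            /* hypothetical: close the socket */
-
-/* Hypothetical sketch: the lone TCP server accepts a peer only if the vgId
- * carried in its first message is running on this node; otherwise the
- * connection is closed. The mnode group uses the reserved vgId 1. */
-void onPeerConnect(int fd, int32_t vgId) {
-  if (vgroupLookup(vgId) == NULL) {
-    closeConn(fd);
-    return;
-  }
-  /* ... hand the socket to the vgroup's sync instance as its peerFd ... */
-}
-```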
-
-## Master Election
-
-When two virtual nodes of the same group (vnode A, vnode B) establish a connection, they exchange status messages. A status message contains the locally stored role and version of every virtual node in the same virtual node group.
-
-If a virtual node (vnode A) detects that its connection to another virtual node of the same group (vnode B) has broken, vnode A immediately sets vnode B's role to offline. Whether it receives a status message from another virtual node or detects a broken connection to one, the virtual node enters the state-handling procedure. Its rules are:
-
-1. If fewer than half of the nodes are detected to be online, the node sets its own state to unsynced.
-2. If more than half of the virtual nodes are online, the node checks whether a master exists; if one does, it decides whether to change its own state to slave or to start the data recovery procedure.
-3. If no master exists, the node checks whether the virtual node states it has stored are consistent with those received from the other node. If they are, the group's state has stabilized, and the election procedure is triggered. If they are not, the state has not yet converged, and no election takes place even though there is no master. Because elections require consistent state, every virtual node, working from the same information, selects the same virtual node as master, so no voting is needed.
-4. A node decides and changes its own state by these rules on its own, without needing the agreement of other nodes, including when it becomes master. No node may change another node's state.
-5. If a virtual node detects that its own role or another virtual node's role has changed, it broadcasts the state information (role and version) it has stored for every virtual node.
-
-The detailed flow chart is as follows:
-
-
-
-The rules for selecting the master are as follows (see the sketch after this list):
-
-1. If there is only one replica, that replica is always the master.
-2. When all replicas are online, the one with the highest version is elected master.
-3. If more than half of the virtual nodes are online and one of them is a slave, that virtual node automatically becomes master.
-4. For rules 2 and 3, if multiple virtual nodes qualify to become master, the one that appears first in the virtual node group's node list is chosen.
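-
-A minimal sketch of these rules (all names are assumptions for illustration; nodes[] is assumed to be in the vgroup's node-list order):
-
-```c
-#include <stdint.h>
-
-typedef enum { ROLE_OFFLINE, ROLE_UNSYNCED, ROLE_SLAVE, ROLE_MASTER } ERoleSketch;
-
-typedef struct {
-  int         online;    /* 1 if this replica is reachable */
-  ERoleSketch role;
-  uint64_t    version;
-} SNodeSketch;
-
-/* Hypothetical sketch of rules 1-4: returns the master's index, or -1 if
- * no master can be chosen yet. */
-int selectMaster(const SNodeSketch *nodes, int replica) {
-  if (replica == 1) return 0;                    /* rule 1: a single replica is always master */
-
-  int online = 0, best = -1;
-  for (int i = 0; i < replica; ++i) {
-    if (!nodes[i].online) continue;
-    online++;
-    if (best < 0 && nodes[i].role == ROLE_SLAVE) best = i;   /* rule 3 (+ rule 4 tie-break) */
-  }
-  if (online * 2 <= replica) return -1;          /* no majority online: no master */
-
-  if (online == replica) {                       /* rule 2: all online, highest version wins */
-    best = 0;
-    for (int i = 1; i < replica; ++i)
-      if (nodes[i].version > nodes[best].version) best = i;  /* ties keep the earlier node */
-  }
-  return best;
-}
-```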
-
-By these rules, if all virtual nodes are unsynced (for example after a full restart), a master can be elected only once every virtual node is back online, and only then can the virtual node group start serving requests. When a virtual node's role changes, the sync module notifies the application through the callback function notifyRole.
-
-## Data Forwarding
-
-If vnode A is the master and vnode B is a slave, vnode A can accept client write requests while vnode B cannot. When vnode A receives a write request, it follows the procedure below (see the sketch after this list):
-
-
-
-1. The application performs basic validity checks on the write request; if they pass, the request packet is assigned a version number (version, monotonically increasing).
-2. The application wraps the versioned write request with a WAL head and writes it into the WAL (Write Ahead Log).
-3. The application calls the API syncForwardToPeer; if vnode B is in the slave state, the sync module sends the packet, including the WAL head, to vnode B in a Forward message; otherwise it does not forward.
-4. On receiving the Forward message, vnode B invokes the callback function writeToCache and hands the data to the application.
-5. After the write succeeds, the application on vnode B must call syncConfirmForward to notify the sync module that the write succeeded.
-6. If quorum is greater than 1, vnode B waits for the application's confirmation and, upon receiving it, sends a Forward Response message to vnode A.
-7. If quorum is greater than 1, vnode A waits for confirmations of the Forward message from vnode B or the other replicas.
-8. If quorum is greater than 1, once vnode A has received quorum-1 confirmation messages, it invokes the callback function confirmForward to notify the application that the write succeeded.
-9. If quorum is 1, steps 6, 7 and 8 above do not happen.
-10. If slave confirmations must be waited for, the master starts a 2-second timer (configurable); on timeout the write is considered failed.
-
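-A minimal sketch of the master-side write path (the names syncForwardToPeer and confirmForward come from the steps above; all other types and functions are assumptions for illustration):
-
-```c
-#include <stdint.h>
-
-/* Hypothetical type and function sketches; the real TDengine definitions differ. */
-typedef struct { uint64_t version; } SWalHeadSketch;
-typedef struct { uint64_t version; void *wal; void *sync; } SVnodeSketch;
-extern int  isValidWrite(const SWalHeadSketch *pHead);
-extern int  walWrite(void *wal, SWalHeadSketch *pHead);
-extern void syncForwardToPeer(void *sync, SWalHeadSketch *pHead);
-
-/* Steps 1-3 on the master (vnode A); steps 6-8 complete asynchronously once
-   the sync module has collected quorum-1 Forward Responses and invokes the
-   application's confirmForward callback. */
-int processWriteMsg(SVnodeSketch *pVnode, SWalHeadSketch *pHead) {
-  if (!isValidWrite(pHead)) return -1;              /* step 1: basic validity check */
-  pHead->version = ++pVnode->version;               /* step 1: assign a monotonic version */
-  if (walWrite(pVnode->wal, pHead) < 0) return -1;  /* step 2: write the WAL */
-  syncForwardToPeer(pVnode->sync, pHead);           /* step 3: forward to slave(s) */
-  return 0;                                         /* confirmation arrives asynchronously */
-}
-```
-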
-For acknowledgements, the sync module provides asynchronous callbacks, so after calling syncForwardToPeer the application does not need to wait and can move on to the next operation. In the TCP pipe between master and slave there may be several Forward messages in flight, strictly ordered in the sequence the application provided; the same holds for Forward Responses: several may be in the TCP pipe, but they too are ordered. The SYNC module does nothing special to maintain this ordering; it is guaranteed by the application's single-threaded, sequential writes (in TDengine, the writes of each vnode are single-threaded).
-
-## Data Recovery
-
-If a virtual node (vnode B) is unsynced, a master (vnode A) exists, and vnode B's version number is lower than the master's, vnode B immediately starts the data recovery procedure. To understand recovery, a few concepts and handling rules concerning files need to be clarified first.
-
-1. Every file (whether an archived data file or a WAL) has an index, which the application must maintain (in a vnode, the index is fileId*3 + 0/1/2, corresponding to the data, head, and last files). An index of 0 denotes the oldest data file in the system. For mnode files, the number of files is fixed, corresponding to the acct, user, db, table, etc. files.
-2. Every data file has a name, a size, and a magic number. Two files are judged identical, and need no synchronization, only if their names, sizes, and magic numbers all match. The magic number can be a checksum or simply the file size; how the magic is computed, in other words how a data file is verified to be valid, is entirely up to the application.
-3. File name handling is a little involved, because paths may differ between servers. For example, node A may store its TDengine data files under /etc/taos while node B stores its data under /home/jhtao. The sync module therefore requires the application to provide a path when a replication instance is started, so that the two servers can still compare and synchronize files even though their absolute paths differ.
-4. When the sync module calls the callback function getFileInfo to obtain data file information, the rules are as follows (a signature sketch follows rule 6):
-  * If index is 0, it means fetch the oldest file, and index is updated and returned to the sync module. If index is not 0, it means fetch the file at the given position.
-  * If name is empty, sync wants the information (magic, size) of the file at position index; the master node calls it this way.
-  * If name is not empty, sync wants the information for the given file name and index; the slave node calls it this way.
-  * If no file exists at some index, magic is returned as 0, meaning the previous file was the last one. Consequently, file indexes across the whole system must form a contiguous run of integers.
-5. When the sync module calls the callback function getWalInfo to obtain WAL information, the rules are:
-  * If index is 0, it means fetch the oldest WAL file; on return, index is updated to the concrete number.
-  * A return value of 0 means this is the newest WAL file; a return value of 1 means newer WAL files follow.
-  * An empty returned file name means there is no WAL file at all.
-6. For both getFileInfo and getWalInfo, any error other than "file does not exist" should return -1; the system then reports the error and stops synchronization.
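-
-A minimal sketch of the two callbacks (signatures are assumptions inferred from the rules above, not the actual TDengine prototypes):
-
-```c
-#include <stdint.h>
-
-/* Hypothetical signatures implementing the in/out conventions described above.
- * getFileInfo: *index==0 selects the oldest file and is updated on return;
- *   an empty name asks for (*size, *magic) at *index (master side), while a
- *   non-empty name asks about that specific file (slave side); *magic==0
- *   means the previous file was the last one; return -1 only on a real error.
- * getWalInfo:  *index==0 selects the oldest WAL and is updated on return;
- *   returns 0 for the newest WAL, 1 if newer WALs follow, an empty name if
- *   there is no WAL at all, and -1 on error. */
-typedef int (*FGetFileInfoSketch)(int32_t vgId, char *name, uint32_t *index,
-                                  int64_t *size, uint64_t *magic);
-typedef int (*FGetWalInfoSketch)(int32_t vgId, char *name, uint32_t *index);
-```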
-
-The overall data recovery procedure has two major steps: first the archived data (files) is recovered, then the WAL. The procedure is as follows:
-
-
-
-1. Over the already-established TCP connection, vnode B sends a sync req to the master node.
-2. On receiving the sync req, the master, acting as a client, proactively establishes a new TCP connection (syncFd) to vnode B dedicated to synchronization.
-3. Once the new TCP connection is established, the master starts the retrieve procedure and vnode B correspondingly starts the restore procedure.
-4. In the retrieve/restore procedure, all archived data (the data, head, and last files in the vnode) is processed first, then the WAL data.
-5. For archived data, the master obtains the basic information of each data file, including file name, magic, and file size, through the callback function getFileInfo.
-6. The master sends the obtained file name, magic, and file size to vnode B.
-7. vnode B obtains the local magic and file size through the callback function getFileInfo; if both match, the file needs no synchronization, and if either differs, it does. vnode B sends the result back to the master in a FileAck message.
-8. If the file needs synchronization, the master sends the whole file to vnode B using sendfile.
-9. If the file does not need synchronization, the master (vnode A) repeats steps 5, 6, 7, and 8 until all files have been processed.
-
-For WAL synchronization, the procedure is as follows (see the sketch after this list):
-
-1. The master node calls the callback function getWalInfo to obtain the name of a WAL file.
-2. If getWalInfo returns a value greater than 0, the file is not yet the last WAL, so the master sends the whole file to vnode B in one go with sendfile.
-3. If getWalInfo returns 0, the file is the last WAL. Because the file may still be open for writing, the sync module reads records one by one according to the WAL head definition and sends them to vnode B.
-4. vnode B reads the data arriving on the TCP connection and parses it record by record according to the WAL head. If a record's version number is higher than its current version, it invokes the callback function writeToCache and hands the record to the application; if lower, the record is simply dropped.
-5. The procedure above loops until all WAL files have been processed. After that, the master forwards newly arriving data packets to the slave in Forward messages.
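-
-A minimal sketch of the slave-side WAL replay in step 4 (writeToCache comes from the text above; the types are assumptions for illustration):
-
-```c
-#include <stdint.h>
-
-typedef struct { uint64_t version; uint32_t len; char cont[]; } SWalHeadSketch;
-extern int writeToCache(void *vnode, SWalHeadSketch *pHead);   /* hypothetical prototype */
-
-/* Replay one record received over the sync connection (vnode B side):
- * apply it only if its version is newer than what we already have. */
-void restoreWalRecord(void *vnode, uint64_t *myVersion, SWalHeadSketch *pHead) {
-  if (pHead->version <= *myVersion) return;   /* older or duplicate: drop it */
-  writeToCache(vnode, pHead);                 /* hand the record to the application */
-  *myVersion = pHead->version;                /* track the highest applied version */
-}
-```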
-
-From the moment file synchronization starts, the sync module watches, via inotify, every file and WAL it has processed. If any processed file changes, the synchronization procedure is aborted and restarted, because an ongoing flush (for example a historical data import or in-memory data being flushed to disk) may have modified files that were already processed, and a fresh synchronization is required.
-
-Handling of the last WAL (LastWal) is somewhat involved, because this file is usually open for writing and several scenarios must be considered, for example:
-
-- the LastWal file keeps growing and must be re-read;
-- the LastWal file is open for writing but its content is still empty;
-- the LastWal file has been closed and the application has created a new last WAL file;
-- the LastWal file has not been closed, but because data is being flushed to disk, a complete record could not be read;
-- the LastWal file has not been closed, but because data is being flushed to disk, some records are temporarily unreadable;
-
-The sync module watches update and close operations on the LastWal file via inotify. Once it has confirmed that it has read as much of the LastWal as possible, it sets the peer's synchronization state to SYNC_CACHE. In this state, the master forwards new records to vnode B, but vnode B has not yet finished synchronizing, so it first buffers these forwarded packets in a recv buffer; after the WAL has been fully processed, vnode B hands the packets in the recv buffer to the application through the writeToCache callback.
-
-Only after vnode B has processed these buffered forwards is the synchronization procedure complete, and vnode B formally becomes a slave.
-
-## Even Distribution of Masters
-
-Because the master handles writes and forwarding, it consumes more resources, so it is desirable for masters to be distributed evenly across the cluster.
-
-In TDengine's design, however, if several virtual nodes qualify to become master, the one listed first in the virtual node group's node list is chosen. Does this lead to an uneven distribution of masters across the cluster? That depends on the application's design.
-
-As a concrete example, suppose the system has just three nodes with IP addresses IP1, IP2, and IP3. TDengine creates multiple virtual node groups on these nodes, each with three replicas. If the replica order in every virtual node group is IP1, IP2, IP3, then without question the masters will all concentrate on node IP1, which is not what we want.
-
-But if some randomness is introduced when the virtual node groups are created, the problem disappears. For example, if in vgroup 1 the order is IP1, IP2, IP3; in vgroup 2 it is IP2, IP3, IP1; and in vgroup 3 it is IP3, IP1, IP2, the resulting distribution of masters will be even.
-
-Therefore, when creating a virtual node group, the application should ensure that the node order is round-robin or fully random.
-
-## Writes Acknowledged by a Minority of Virtual Nodes
-
-In certain situations the number of successful write acknowledgements is greater than 0 but smaller than the configured quorum. Although some virtual nodes have applied the update successfully, the master still treats the update as failed and reports a write failure to the client.
-
-At that point the system has a data consistency problem, because some virtual nodes have applied the write while others have not. One way to handle this is for the master to reset its connections to the other virtual nodes, upon which the virtual node group automatically enters the election procedure. By the rules, a virtual node that has already applied the write successfully becomes the new master, and the other virtual nodes in the group recover the data from it.
-
-Because the write failed, the client writes the data again. For TDengine this is fine, because all time-series records carry a timestamp: of two update operations with the same timestamp, the first is executed and the second is automatically discarded. Meta data operations (creating or dropping databases, tables, and so on) are fine as well: once a table or database has been created or dropped, creating or dropping it again is simply not executed.
-
-In TDengine's design, the link between two virtual nodes is a single TCP connection, a pipeline in which data blocks wait to be processed one after another in order. Once the processing of one data block fails, the connection is reset and the processing of all subsequent blocks fails too. It is therefore impossible for one data block in the pipeline to fail while the next one succeeds.
-
-## The Split-Brain Problem
-
-The election procedure has a hard requirement: more than half of the virtual nodes must be online. But if the replication factor happens to be even, a split-brain situation is entirely possible.
-
-To solve this, TDengine provides the Arbitrator. The Arbitrator is a node whose only job is to accept connection requests from any virtual node and keep those connections open.
-
-When starting a replication module instance, the application can provide the Arbitrator's IP address among the configuration parameters. With an odd number of replicas, the replication module does not connect to the arbitrator; with an even number of replicas, it actively establishes the connection.
-
-The Arbitrator program, tarbitrator.c, lives in the same directory as the replication module; when the whole system is built, it is generated in the bin directory. Run it with the command-line argument "-?" to see the configurable parameters, such as the IP address to bind and the port to listen on.
-
-## Similarities and Differences with RAFT
-
-Two data consistency protocols are popular: Paxos and Raft. The implementation of this design has much in common with Raft; a comparison follows.
-
-Similarities:
-
-- The three main procedures match: Raft has leader election, replication, and safety, which correspond exactly to TDengine's master election, data forwarding, and data recovery procedures.
-- The node state definitions match: Raft gives each node the states Leader, Follower, and Candidate; TDengine has Master, Slave, Unsynced, and Offline. Offline is one extra, but essentially they are the same, because offline is a node's state as seen from the outside, while the node itself is in master, slave, or unsynced.
-- The data forwarding procedure is exactly the same: the master (leader) must wait for acknowledgements.
-- The data recovery procedure is almost the same: Raft does not address synchronizing historical data and only considers WAL synchronization.
-
-Differences:
-
-- The election procedure differs. In Raft, any node in the candidate state actively sends vote requests to the other nodes, and if more than half answer yes, the candidate becomes leader and starts a new term. In TDengine's implementation, a node coming online, going offline, or changing role triggers status messages that propagate through the node group, and the election procedure is triggered only after the group's state has stabilized and become consistent. Because the state is consistent, every node makes the same decision from the same information, and once a node meets the conditions to become master, it sets itself as master without needing approval from the other nodes. In TDengine, whenever a node detects a role change on any node, including itself, it broadcasts to the rest of the group; Raft has no such mechanism and therefore needs voting to decide.
-- Raft uniquely identifies a WAL record with term + index, whereas TDengine uses only version (similar to index). In the TDengine implementation, version alone is entirely sufficient, because TDengine's election mechanism has no notion of a term.
-
-If an entire virtual node group goes down and restarts but not all of its virtual nodes come back online, TDengine will not elect a master, because a node still offline may hold the data with the highest version. The RAFT protocol, by contrast, elects a leader as soon as more than half of the nodes are online.
-
-## Replication of Meta Data
-
-TDengine stores time-series data as well as meta data. Meta data has stricter reliability requirements; can TDengine's design satisfy them? A careful analysis follows.
-
-Meta data in TDengine includes the following:
-
-- account information
-- under one account there can be multiple users and multiple DBs
-- under one DB there are multiple vgroups
-- under one DB there are multiple stables
-- under one vgroup there are multiple tables
-- the whole system has multiple mnodes and dnodes
-- one dnode can host multiple vnodes
-
-Each of account, user, DB, vgroup, table, stable, mnode, and dnode above has its own attributes. These attributes are defined by TDengine and are not open for users to modify. Queries over this meta data are all fairly simple, and all of it can be stored using a key-value model. This meta data also has several properties:
-
-1. There is a hierarchy among the meta data above: for example, a DB must be created before tables and stables can be, and only after a dnode is created can vnodes and then vgroups be created. The creation order must therefore never be violated.
-2. Once a client application's data update has been confirmed by the TDengine server side, the update must never be lost; otherwise the client application and the server become inconsistent.
-3. The meta data above tolerates repeated operations: inserting the same record twice, deleting it twice, or updating it twice has no effect on the system and changes no system state.
-
-Regarding property 1: in this design, writes are single-threaded, and each data update is assigned a version number in arrival order. A record with a larger version number is guaranteed to be written after one with a smaller number; the write order is guaranteed 100%, and a record with a larger version number is never written first. During replication, data blocks are also forwarded in strict order, so TDengine's replication design preserves the creation order of meta data.
-
-Regarding property 2: as long as quorum is set equal to replica, a confirmed data update can never be lost on the server side. Even if some node never comes back, as long as more than half of the nodes remain online, query service is unaffected. If a node stays offline beyond a certain period, the system can automatically add a new node so that the number of online nodes is at 100% the vast majority of the time.
-
-Regarding property 3: it can certainly happen that the server did persist a data update but the client application ran into a problem, considered the operation failed, and reissued it. For meta data this does not matter: the client can issue the same operation again without any effect.
-
-In summary, as long as quorum is set greater than one, this replication design satisfies the requirements of meta data. So far, no flaw has been found.
diff --git a/docs-cn/21-tdinternal/03-taosd.md b/docs-cn/21-tdinternal/03-taosd.md
deleted file mode 100644
index 6a5734102c..0000000000
--- a/docs-cn/21-tdinternal/03-taosd.md
+++ /dev/null
@@ -1,119 +0,0 @@
----
-sidebar_label: Design of taosd
-title: Design of taosd
----
-
-Logically, a TDengine system consists of dnodes, taosc, and apps. A dnode is a running instance of the server-side executable taosd, so taosd is the core of TDengine. This document briefly introduces the design of taosd; for implementation details within each module, see the other documents.
-
-## System Module Diagram
-
-taosd contains the rpc, dnode, vnode, tsdb, query, cq, sync, wal, mnode, http, and monitor modules, as shown in the figure below:
-
-
-
-taosd's entry point is the dnode module, which then starts the other modules, including the optionally configured http and monitor modules. All messages exchanged with taosc or between dnodes go through the rpc module. Based on the type of a received message, the dnode module dispatches it to the message queue of a vnode or the mnode, or consumes it itself. The dnode's worker threads consume the messages in the queues and hand them to the mnode or a vnode for processing. Each module is described briefly below.
-
-## RPC Module
-
-This module handles communication between taosd and taosc, and between data nodes. TDengine does not adopt standard third-party tooling such as HTTP or gRPC; instead it implements its own communication module, RPC.
-
-Considering that write packets in IoT scenarios are generally small, RPC supports UDP connections in addition to TCP. When a packet is smaller than 15 KB, RPC connects over UDP; otherwise it connects over TCP. For query-type messages, RPC always uses TCP regardless of packet size. For UDP connections, RPC implements its own timeout, retransmission, and sequence-checking mechanisms to guarantee reliable delivery.
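-
-A minimal sketch of the transport choice just described (names are assumptions for illustration):
-
-```c
-#include <stdbool.h>
-#include <stddef.h>
-
-enum { RPC_TRANSPORT_UDP, RPC_TRANSPORT_TCP };
-
-/* Hypothetical sketch: queries always go over TCP; other messages use UDP
- * only when the packet is under the 15 KB threshold mentioned above. */
-int rpcChooseTransport(size_t msgLen, bool isQuery) {
-  if (isQuery) return RPC_TRANSPORT_TCP;
-  return (msgLen < 15 * 1024) ? RPC_TRANSPORT_UDP : RPC_TRANSPORT_TCP;
-}
-```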
-
-The RPC module also provides data compression: if a packet exceeds the system configuration parameter compressMsgSize in bytes, RPC automatically compresses the data in transit to save bandwidth.
-
-To guarantee data security and integrity, the RPC module uses MD5 digital signatures to verify the authenticity and integrity of the data.
-
-## DNODE Module
-
-This module is the entry point of the whole taosd and is responsible for the following tasks:
-
-- System initialization, including
-  - reading system configuration parameters from the file taos.cfg and data node configuration parameters from the file dnodeCfg.json;
-  - starting the RPC module and establishing the server connections for communicating with taosc and with other data nodes;
-  - starting and initializing the dnode's internal management, which scans the vnodes already present on the data node and opens them;
-  - initializing the configurable modules, such as mnode, http, and monitor.
-- Data node management, including
-  - periodically sending status messages to the mnode to report its own state;
-  - creating, altering, and deleting vnodes as instructed by the mnode;
-  - modifying its own configuration parameters as instructed by the mnode;
-- Message dispatch and consumption, including
-  - creating and maintaining one read queue and one write queue for each vnode and the mnode;
-  - dispatching messages arriving from taosc or other data nodes directly to the appropriate message queues by message type, or consuming them directly in its own management module;
-  - maintaining a read thread pool that consumes messages from the read queues and hands them to vnodes or the mnode for processing. For high concurrency, one read thread (worker) can consume messages from multiple queues, and one read queue can be consumed by multiple workers;
-  - maintaining a write thread pool that consumes messages from the write queues and hands them to vnodes or the mnode for processing. To serialize writes, one write queue is owned by exactly one write thread, while one write thread can own multiple write queues (see the sketch after this list).
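-
-A minimal sketch of that write-queue ownership rule (all names are assumptions for illustration):
-
-```c
-/* Hypothetical sketch: each write queue is pinned to exactly one write
- * thread, while one write thread may serve several queues; this serializes
- * all writes within a queue. Read queues need no such pinning. */
-int writeThreadForQueue(int queueId, int numWriteThreads) {
-  return queueId % numWriteThreads;   /* many queues -> one thread; never the reverse */
-}
-```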
-
-Message consumption in taosd is governed by the dnode module through the read and write thread pools; it is the hub of the system. The structure of this module is shown below:
-
-
-
-## VNODE Module
-
-A vnode is an independent unit of data storage and query logic. Because one vnode can hold only one DB, there is no notion of account, DB, or user inside a vnode. For better modularity, encapsulation, and future extensibility, it has many submodules, including TSDB for storage, query for queries, sync for data replication, WAL for the database log, cq (continuous query) for continuous queries, and event for event-triggered stream computing. These submodules interact only with the vnode module and have no call relationships with any other module. The module diagram is below:
-
-
-
-Downward, the vnode module interacts with dnodeVRead and dnodeVWrite; upward, it interacts with its submodules. Its main functions are:
-
-- Coordinating the interaction of the submodules. Submodules never call each other directly; everything goes through the vnode module;
-- For write operations coming from taosc or the mnode, the vnode module decomposes them into operations on the write-log (WAL), forwarding (sync), and local storage (TSDB) submodules;
-- Query operations are dispatched to the query module.
-
-A data node hosts multiple vnodes, so the vnode module has multiple running instances, each of which is fully independent.
-
-A vnode calls its submodules directly through APIs rather than via message queues, and the submodules interact only with the vnode module, never directly with dnode, rpc, or any other module.
-
-## MNODE Module
-
-The mnode is the brain of the whole system: it schedules the system's resources and manages and stores the meta data.
-
-A running system has only one mnode, but that mnode has multiple replicas (controlled by the system configuration parameter numOfMnodes). The replicas are distributed across different dnodes to keep the system highly reliable. Replication between the replicas is synchronous rather than asynchronous, to guarantee data consistency and prevent data loss. The replicas automatically elect a master; the other replicas are slaves. All data-modifying operations can only be performed on the master, while query operations can also run on slave nodes. In the code, the sync module is shared with vnodes, but the mnode is assigned the special vgroup ID 1, and its quorum is greater than 1. The cluster consists of multiple dnodes, so the number of running mnode replicas can never exceed the number of dnodes, nor the configured replica count. If an mnode replica is down for some time, then as long as more than half of the mnode replicas are still running, the running mnode will, based on the resource situation of the whole system, automatically start a new mnode on another dnode to maintain the number of running replicas.
-
-Through information exchange, each dnode keeps the End Point list of the mnode replicas and periodically (at an interval controlled by the system configuration parameter statusInterval) sends a status message to the master among them. The message body includes the dnode's CPU, memory, remaining storage, vnode count, and the state of each vnode (storage, raw data size, record count, role, and so on). The mnode thus knows the resource situation of the whole system: if a user creates a new table, it can decide on which dnode to create it; if dnodes are added or removed, or a dnode's data is detected to be too hot or the dnode has been offline too long, it can decide which vnodes to move in order to balance the load.
-
-The mnode is also responsible for creating, deleting, and updating accounts, users, DBs, stables, tables, vgroups, and dnodes. The mnode not only keeps the meta data of these entities in memory but also persists it. To save memory, however, the tag values of tables are not stored in the mnode (they are stored in vnodes), and subtables do not maintain their own schema but share it with their stable. To reduce query pressure on the mnode, taosc caches the schemas of tables and stables. Query-type operations can also be served by the slave mnodes to relieve the master.
-
-## TSDB Module
-
-The TSDB module is the engine inside a vnode responsible for fast, highly concurrent storage and retrieval of the meta data of the tables belonging to that vnode and of their collected time-series data. It also supports modifying table schemas and table tag values. TSDB provides APIs for the vnode, query, and other modules to call. TSDB stores two kinds of data: 1. meta data; 2. time-series data.
-
-### Meta Data
-
-The meta data stored in TSDB covers the types and schema definitions of the tables in its vnode; for supertables and their subtables it also covers the tag schema definitions and the subtables' tag values. Where meta data is concerned, TSDB behaves like a fully in-memory KV database: all table objects belonging to the vnode live in memory, making table lookups fast. In addition, TSDB maintains a full in-memory index of subtables on the value of the first tag column, which greatly speeds up tag-filtered queries. When the latest state of the meta data in TSDB is flushed to disk, it is written to the meta file in append-only form; the meta file only ever appends, so even a meta data deletion is written to the end of the file as a record. TSDB also supports meta data modifications, such as table schema changes, tag schema changes, and tag value changes.
-
-### Time-Series Data
-
-Each TSDB instance pre-allocates a memory buffer when created; the buffer size is configurable and modifiable. Time-series data collected from tables is first appended to this buffer when written into TSDB, and a timestamp-based in-memory index is built at the same time for fast queries. When the buffered data accumulates to a certain level (one third of the buffer's total size), a flush is triggered and the buffered data is persisted to disk files. In the memory buffer, time-series data is stored in row form.
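-
-A minimal sketch of the flush trigger just described (names are assumptions for illustration):
-
-```c
-#include <stdbool.h>
-#include <stdint.h>
-
-/* Hypothetical sketch: a commit (flush to disk) is triggered once the rows
- * buffered in memory occupy one third of the configured buffer size. */
-bool tsdbShouldCommit(int64_t bufferedBytes, int64_t totalBufferBytes) {
-  return bufferedBytes >= totalBufferBytes / 3;
-}
-```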
-
-When written into TSDB's data files, however, time-series data is stored by column. TSDB data files are organized in file groups; each group contains three files, .head, .data, and .last (for example the group v2f1801.head, v2f1801.data, v2f1801.last). File groups are sharded by time span, by default 10 days per group, configurable through the configuration file and database creation options. The file groups are arranged by ascending number, making it easy to locate the time-series data of a given time range and find the right file group quickly. Within the data files, time-series data is stored column-wise in blocks; each block contains data from only one table, ordered within the block by ascending timestamp. In a file group, the .head file stores the index and statistics of the data blocks, such as each block's position, compression algorithm, and timestamp range. A table's index entries in the .head file are ordered by the ascending time of the data stored in the blocks, which enables binary search and similar techniques. The .data and .last files store the actual data blocks: if the data accumulated in a block reaches a certain level, it is written to the .data file; otherwise it is written to the .last file and merged into the .data file at the next flush, which greatly reduces the number of blocks per file and avoids excessive data fragmentation.
-
-## Query Module
-
-This module handles query processing for the whole system. On the client side, it parses SQL syntax and sends query or write requests to vnodes, and it performs the second-stage aggregation for supertable queries. On the vnode side, the module calls the TSDB module to read the data stored in the system for query processing. The query module also defines all of the query functions the system supports; the implementation of query functions is decoupled from the query framework, so query functions can be added dynamically without modifying the query flow. For the detailed design, see "TDengine 2.0 Query Module Design".
-
-## SYNC Module
-
-This module implements multi-replica data replication, covering both vnode and mnode data, and supports both asynchronous and synchronous replication to meet the different replication needs of meta data and time-series data. Because it is shared by the mnode and vnodes, the system reserves the special vgroup ID 1 for mnode replicas; vnode group IDs therefore start from 2.
-
-Each vnode/mnode module instance has a corresponding sync module instance; the mapping is one to one. For the detailed design, see [TDengine 2.0 Data Replication Module Design](/tdinternal/replica/).
-
-## WAL Module
-
-This module writes newly inserted data into the write-ahead log (WAL) and is shared by vnodes and the mnode, so that data can be recovered from the WAL after a server crash or other failure.
-
-Each vnode/mnode module instance has a corresponding WAL module instance, in a strict one-to-one mapping. WAL flushing is controlled by the two parameters walLevel and fsync, depending on the scenario: to guarantee 100% that no data is lost, configure walLevel to 2 and fsync to 0, so that every insert request is acknowledged to the application only after it has been flushed to disk in real time, as in the snippet below.
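-
-A minimal taos.cfg snippet for the no-data-loss setting just described (the parameter names and values come from the paragraph above):
-
-```
-walLevel 2
-fsync    0
-```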
-
-## HTTP Module
-
-This module handles the system's external RESTful interface; it can be started or stopped by the dnode through configuration. (It exists only in versions 2.2 and earlier.)
-
-After performing various validity checks on a received RESTful request, the module converts it into a standard SQL statement and sends it through taosc's asynchronous interface to any dnode in the system. After receiving the processed result, it translates it back into the HTTP protocol and returns it to the application.
-
-If the HTTP module is started, a taosc instance is effectively started with it. Any dnode can start this module so that RESTful requests are processed in a distributed fashion.
-
-## Monitor Module
-
-This module monitors the running state of a dnode; it can be started or stopped by the dnode through configuration. In principle, every dnode should start one monitor instance.
-
-Monitor records key operations in TDengine, such as creating, deleting, and updating accounts, tables, and databases, and it periodically collects CPU, memory, network, and other resource usage (the collection period is controlled by the system configuration parameter monitorInterval). With this data collected, the monitor module writes it into the system's log database (whose name is controlled by the system configuration parameter monitorDbName).
-
-The monitor module uses taosc to write the collected data into the system, so every monitor instance has a running taosc instance.
diff --git a/docs-cn/21-tdinternal/12-tsz-compress.md b/docs-cn/21-tdinternal/12-tsz-compress.md
deleted file mode 100644
index baf5df15db..0000000000
--- a/docs-cn/21-tdinternal/12-tsz-compress.md
+++ /dev/null
@@ -1,44 +0,0 @@
----
-title: The TSZ Compression Algorithm
----
-
-The TSZ compression algorithm gives TDengine richer compression options for floating-point data types, supporting everything from lossy to lossless compression of floats. Compared with TDengine's original compression algorithms, TSZ offers more compression options and a higher compression ratio; even in lossless mode, its compression ratio for floats is about twice that of the original algorithm.
-
-## Suitable Scenarios
-
-TSZ compresses better than the original algorithm, but compression takes longer, so enabling TSZ reduces write speed somewhat, typically by around 20%. The write slowdown comes from the extra CPU computation required, which lengthens the time from raw data to compressed data. If your server has a high CPU configuration, the impact will be smaller or even negligible.
-
-Additionally, if devices produce large volumes of high-precision floats that occupy a great deal of storage, but the application does not actually need that much precision, TSZ's lossy compression can reduce the precision to a specified length and save storage space.
-
-In summary: TSZ suits scenarios where large volumes of floats are collected, their storage footprint is too large or storage is running short, and a very high compression ratio is required.
-
-## Usage Steps
-
-- Check version support: TDengine 2.4.0.10 and later support this feature.
-
-- Enable the feature via configuration: add the following line to TDengine's configuration file taos.cfg to turn on TSZ:
-
-```
-lossyColumns float|double
-```
-
-- Configure other options as needed; any option left unconfigured uses its default value.
-
-- Restart the service for the configuration to take effect.
-- Confirm the feature is enabled: if the output during service startup contains the content configured above, the feature is active:
-
-```
-02/22 10:49:27.607990 00002933 UTL lossyColumns float|double
-```
-
-## Notes
-
-- Confirm that your version supports the feature.
-
-- Apart from the configuration-success message printed at server startup, there is no further output indicating which compression algorithm is in use; you can compare database file sizes before and after enabling it to gauge the effect.
-
-- If there are only a few floating-point columns, the effect on overall data file size will not be very noticeable.
-
-- The floating-point portion of data files produced by this compression cannot be parsed by versions below 2.4.0.10, i.e. it is not backward compatible; after using it, avoid switching back to an older version, or the data will not be readable.
-
-- TSZ compression may be switched on and off repeatedly; data produced by either compression algorithm remains readable.
diff --git a/docs-cn/21-tdinternal/30-iot-big-data.md b/docs-cn/21-tdinternal/30-iot-big-data.md
deleted file mode 100644
index a234713f88..0000000000
--- a/docs-cn/21-tdinternal/30-iot-big-data.md
+++ /dev/null
@@ -1,9 +0,0 @@
----
-title: IoT Big Data
-description: "Characteristics of IoT and Industrial Internet big data; the functions and features an IoT big data platform should have; why generic big data architectures are unsuitable for IoT data; why TDengine is recommended for IoT, connected vehicle, and Industrial Internet big data platforms"
----
-
-- [Characteristics of IoT and Industrial Internet Big Data](https://www.taosdata.com/blog/2019/07/09/105.html)
-- [Functions and Features an IoT Big Data Platform Should Have](https://www.taosdata.com/blog/2019/07/29/542.html)
-- [Why Generic Big Data Architectures Are Unsuitable for IoT Data](https://www.taosdata.com/blog/2019/07/09/107.html)
-- [Why TDengine Is Recommended for IoT, Connected Vehicle, and Industrial Internet Big Data Platforms](https://www.taosdata.com/blog/2019/07/09/109.html)
diff --git a/docs-cn/21-tdinternal/dnode.webp b/docs-cn/21-tdinternal/dnode.webp
new file mode 100644
index 0000000000..a56c7e4594
Binary files /dev/null and b/docs-cn/21-tdinternal/dnode.webp differ
diff --git a/docs-cn/21-tdinternal/message.webp b/docs-cn/21-tdinternal/message.webp
new file mode 100644
index 0000000000..a2a42abff3
Binary files /dev/null and b/docs-cn/21-tdinternal/message.webp differ
diff --git a/docs-cn/21-tdinternal/modules.webp b/docs-cn/21-tdinternal/modules.webp
new file mode 100644
index 0000000000..718a6abccd
Binary files /dev/null and b/docs-cn/21-tdinternal/modules.webp differ
diff --git a/docs-cn/21-tdinternal/multi_tables.webp b/docs-cn/21-tdinternal/multi_tables.webp
new file mode 100644
index 0000000000..8f649e34a3
Binary files /dev/null and b/docs-cn/21-tdinternal/multi_tables.webp differ
diff --git a/docs-cn/21-tdinternal/replica-forward.webp b/docs-cn/21-tdinternal/replica-forward.webp
new file mode 100644
index 0000000000..512efd4eba
Binary files /dev/null and b/docs-cn/21-tdinternal/replica-forward.webp differ
diff --git a/docs-cn/21-tdinternal/replica-master.webp b/docs-cn/21-tdinternal/replica-master.webp
new file mode 100644
index 0000000000..57030a11f5
Binary files /dev/null and b/docs-cn/21-tdinternal/replica-master.webp differ
diff --git a/docs-cn/21-tdinternal/replica-restore.webp b/docs-cn/21-tdinternal/replica-restore.webp
new file mode 100644
index 0000000000..f282c2d4d2
Binary files /dev/null and b/docs-cn/21-tdinternal/replica-restore.webp differ
diff --git a/docs-cn/21-tdinternal/structure.webp b/docs-cn/21-tdinternal/structure.webp
new file mode 100644
index 0000000000..b77a42c074
Binary files /dev/null and b/docs-cn/21-tdinternal/structure.webp differ
diff --git a/docs-cn/21-tdinternal/vnode.webp b/docs-cn/21-tdinternal/vnode.webp
new file mode 100644
index 0000000000..fae3104c89
Binary files /dev/null and b/docs-cn/21-tdinternal/vnode.webp differ
diff --git a/docs-cn/21-tdinternal/write_master.webp b/docs-cn/21-tdinternal/write_master.webp
new file mode 100644
index 0000000000..9624036ed3
Binary files /dev/null and b/docs-cn/21-tdinternal/write_master.webp differ
diff --git a/docs-cn/21-tdinternal/write_slave.webp b/docs-cn/21-tdinternal/write_slave.webp
new file mode 100644
index 0000000000..7c45dec11b
Binary files /dev/null and b/docs-cn/21-tdinternal/write_slave.webp differ
diff --git a/docs-cn/25-application/01-telegraf.md b/docs-cn/25-application/01-telegraf.md
index f63a6701ee..5bfc94c534 100644
--- a/docs-cn/25-application/01-telegraf.md
+++ b/docs-cn/25-application/01-telegraf.md
@@ -16,7 +16,7 @@ IT operation and monitoring data are usually sensitive to time characteristics, for example
This article shows how, without writing a single line of code and by modifying just a few lines of configuration, you can quickly set up an IT DevOps system based on TDengine + Telegraf + Grafana. The architecture is shown in the figure below:
-
+
## Installation Steps
@@ -75,7 +75,7 @@ sudo systemctl start telegraf
Click the gear icon on the left and select `Plugins`; you should find the TDengine data source plugin icon.
Click the plus icon on the left and select `Import`, then download the dashboard JSON file from `https://github.com/taosdata/grafanaplugin/blob/master/examples/telegraf/grafana/dashboards/telegraf-dashboard-v0.1.0.json` and import it. You should then see a dashboard like the one below:
-
+![IT-DevOps-Solutions-telegraf-dashboard.webp](./IT-DevOps-Solutions-telegraf-dashboard.webp)
## Summary
diff --git a/docs-cn/25-application/02-collectd.md b/docs-cn/25-application/02-collectd.md
index 5e6bc6577b..5966f2d654 100644
--- a/docs-cn/25-application/02-collectd.md
+++ b/docs-cn/25-application/02-collectd.md
@@ -16,7 +16,7 @@ IT operation and monitoring data are usually sensitive to time characteristics, for example
This article shows how, without writing a single line of code and by modifying just a few lines of configuration, you can quickly set up an IT DevOps system based on TDengine + collectd / StatsD + Grafana. The architecture is shown in the figure below:
-
+
## Installation Steps
@@ -81,12 +81,12 @@ Add to the repeater section { host:'', port:
diff --git a/docs-cn/eco_system.png b/docs-cn/eco_system.png
deleted file mode 100644
index bf8bf8f1e0..0000000000
Binary files a/docs-cn/eco_system.png and /dev/null differ
diff --git a/docs-cn/eco_system.webp b/docs-cn/eco_system.webp
new file mode 100644
index 0000000000..d60c38e97c
Binary files /dev/null and b/docs-cn/eco_system.webp differ
diff --git a/docs-en/02-intro/eco_system.png b/docs-en/02-intro/eco_system.png
deleted file mode 100644
index bf8bf8f1e0..0000000000
Binary files a/docs-en/02-intro/eco_system.png and /dev/null differ
diff --git a/docs-en/02-intro/eco_system.webp b/docs-en/02-intro/eco_system.webp
new file mode 100644
index 0000000000..d60c38e97c
Binary files /dev/null and b/docs-en/02-intro/eco_system.webp differ
diff --git a/docs-en/02-intro/index.md b/docs-en/02-intro/index.md
index e2309943f3..628e87dd59 100644
--- a/docs-en/02-intro/index.md
+++ b/docs-en/02-intro/index.md
@@ -5,39 +5,39 @@ toc_max_heading_level: 2
TDengine is a high-performance, scalable time-series database with SQL support. Its code, including its cluster feature, is open source under GNU AGPL v3.0. Besides the database engine, it provides [caching](/develop/cache), [stream processing](/develop/continuous-query), [data subscription](/develop/subscribe) and other functionalities to reduce the complexity and cost of development and operation.
-This section introduces the major features, competitive advantages, suited scenarios and benchmarks to help you get a high level picture for TDengine.
+This section introduces the major features, competitive advantages, typical use-cases and benchmarks to help you get a high level overview of TDengine.
## Major Features
The major features are listed below:
-1. Besides [using SQL to insert](/develop/insert-data/sql-writing),it supports [Schemaless writing](/reference/schemaless/),and it supports [InfluxDB LINE](/develop/insert-data/influxdb-line),[OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON ](/develop/insert-data/opentsdb-json) and other protocols.
-2. Support for seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf),[Prometheus](/third-party/prometheus),[StatsD](/third-party/statsd),[collectd](/third-party/collectd),[icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). Without a line of code, those agents can write data points into TDengine just by configuration.
-3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation, etc.
-4. Support for [user defined functions](/develop/udf)
+1. While TDengine supports [using SQL to insert](/develop/insert-data/sql-writing), it also supports [Schemaless writing](/reference/schemaless/) just like NoSQL databases. TDengine also supports standard protocols like [InfluxDB LINE](/develop/insert-data/influxdb-line), [OpenTSDB Telnet](/develop/insert-data/opentsdb-telnet), [OpenTSDB JSON](/develop/insert-data/opentsdb-json) among others.
+2. TDengine supports seamless integration with third-party data collection agents like [Telegraf](/third-party/telegraf), [Prometheus](/third-party/prometheus), [StatsD](/third-party/statsd), [collectd](/third-party/collectd), [icinga2](/third-party/icinga2), [TCollector](/third-party/tcollector), [EMQX](/third-party/emq-broker), [HiveMQ](/third-party/hive-mq-broker). These agents can write data into TDengine with simple configuration and without a single line of code.
+3. Support for [all kinds of queries](/develop/query-data), including aggregation, nested query, downsampling, interpolation and others.
+4. Support for [user defined functions](/develop/udf).
5. Support for [caching](/develop/cache). TDengine always saves the last data point in cache, so Redis is not needed in some scenarios.
6. Support for [continuous query](/develop/continuous-query).
7. Support for [data subscription](/develop/subscribe) with the capability to specify filter conditions.
8. Support for [cluster](/cluster/), with the capability of increasing processing power by adding more nodes. High availability is supported by replication.
-9. Provides interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc query.
+9. Provides an interactive [command-line interface](/reference/taos-shell) for management, maintenance and ad-hoc queries.
10. Provides many ways to [import](/operation/import) and [export](/operation/export) data.
-11. Provides [monitoring](/operation/monitor) on TDengine running instances.
+11. Provides [monitoring](/operation/monitor) on running instances of TDengine.
12. Provides [connectors](/reference/connector/) for [C/C++](/reference/connector/cpp), [Java](/reference/connector/java), [Python](/reference/connector/python), [Go](/reference/connector/go), [Rust](/reference/connector/rust), [Node.js](/reference/connector/node) and other programming languages.
13. Provides a [REST API](/reference/rest-api/).
-14. Supports the seamless integration with [Grafana](/third-party/grafana) for visualization.
+14. Supports seamless integration with [Grafana](/third-party/grafana) for visualization.
15. Supports seamless integration with Google Data Studio.
-For more detail on features, please read through the whole documentation.
+For more details on features, please read through the entire documentation.
## Competitive Advantages
-TDengine makes full use of [the characteristics of time series data](https://tdengine.com/2019/07/09/86.html), such as structured, no transaction, rarely delete or update, etc., and builds its own innovative storage engine and computing engine to differentiate itself from other time series databases with the following advantages.
+Time-series data is structured, not transactional, and is rarely deleted or updated. TDengine makes full use of [these characteristics of time series data](https://tdengine.com/2019/07/09/86.html) to build its own innovative storage engine and computing engine to differentiate itself from other time series databases, with the following advantages.
-- **[High Performance](https://tdengine.com/fast)**: TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage cost and compute costs, with an innovatively designed and purpose-built storage engine.
+- **[High Performance](https://tdengine.com/fast)**: With an innovatively designed and purpose-built storage engine, TDengine outperforms other time series databases in data ingestion and querying while significantly reducing storage costs and compute costs.
- **[Scalable](https://tdengine.com/scalable)**: TDengine provides out-of-box scalability and high-availability through its native distributed design. Nodes can be added through simple configuration to achieve greater data processing power. In addition, this feature is open source.
-- **[SQL Support](https://tdengine.com/sql-support)**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to handle time-series data better, and supporting convenient and flexible schemaless data ingestion.
+- **[SQL Support](https://tdengine.com/sql-support)**: TDengine uses SQL as the query language, thereby reducing learning and migration costs, while adding SQL extensions to better handle time-series. Keeping NoSQL developers in mind, TDengine also supports convenient and flexible, schemaless data ingestion.
- **All in One**: TDengine has built-in caching, stream processing and data subscription functions. It is no longer necessary to integrate Kafka/Redis/HBase/Spark or other software in some scenarios. It makes the system architecture much simpler, cost-effective and easier to maintain.
@@ -45,24 +45,24 @@ TDengine makes full use of [the characteristics of time series data](https://tde
- **Zero Management**: Installation and cluster setup can be done in seconds. Data partitioning and sharding are executed automatically. TDengine’s running status can be monitored via Grafana or other DevOps tools.
-- **Zero Learning Costs**: With SQL as the query language and support for ubiquitous tools like Python, Java, C/C++, Go, Rust, and Node.js connectors, there are zero learning costs.
+- **Zero Learning Costs**: With SQL as the query language and support for ubiquitous tools like Python, Java, C/C++, Go, Rust, and Node.js connectors, and a REST API, there are zero learning costs.
-- **Interactive Console**: TDengine provides convenient console access to the database to run ad hoc queries, maintain the database, or manage the cluster without any programming.
+- **Interactive Console**: TDengine provides convenient console access to the database, through a CLI, to run ad hoc queries, maintain the database, or manage the cluster, without any programming.
-With TDengine, the total cost of ownership of time-series data platform can be greatly reduced. Because 1: with its superior performance, the computing and storage resources are reduced significantly; 2:with SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly; 3: with its simple architecture and zero management, the operation and maintenance costs are reduced.
+With TDengine, the total cost of ownership of your time-series data platform can be greatly reduced. 1: With its superior performance, the computing and storage resources are reduced significantly. 2: With SQL support, it can be seamlessly integrated with many third party tools, and learning costs/migration costs are reduced significantly. 3: With its simple architecture and zero management, the operation and maintenance costs are reduced.
## Technical Ecosystem
-In the time-series data processing platform, TDengine stands in a role like this diagram below:
+This is how TDengine would be situated, in a typical time-series data processing platform:
-
+
Figure 1. TDengine Technical Ecosystem
-On the left side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides interactive command-line interface and web interface for management and maintenance.
+On the left-hand side, there are data collection agents like OPC-UA, MQTT, Telegraf and Kafka. On the right-hand side, visualization/BI tools, HMI, Python/R, and IoT Apps can be connected. TDengine itself provides an interactive command-line interface and a web interface for management and maintenance.
-## Suited Scenarios
+## Typical Use Cases
-As a high-performance, scalable and SQL supported time-series database, TDengine's typical application scenarios include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM, etc. This section makes a more detailed analysis of the applicable scenarios.
+As a high-performance, scalable and SQL supported time-series database, TDengine's typical use cases include but are not limited to IoT, Industrial Internet, Connected Vehicles, IT operation and maintenance, energy, financial markets and other fields. TDengine is a purpose-built database optimized for the characteristics of time series data. As such, it cannot be used to process data from web crawlers, social media, e-commerce, ERP, CRM and so on. More generally, TDengine is not a suitable storage engine for non-time-series data. This section makes a more detailed analysis of the applicable scenarios.
### Characteristics and Requirements of Data Sources
diff --git a/docs-en/04-concept/index.md b/docs-en/04-concept/index.md
index abc553ab6d..d714bace1d 100644
--- a/docs-en/04-concept/index.md
+++ b/docs-en/04-concept/index.md
@@ -2,7 +2,7 @@
title: Concepts
---
-In order to explain the basic concepts and provide some sample code, the TDengine documentation takes smart meters as a typical time series data scenario. Assuming that each smart meter collects three metrics of current, voltage, and phase, there are multiple smart meters, and each meter has static attributes like location and group ID, the collected data will be similar to the following table:
+In order to explain the basic concepts and provide some sample code, the TDengine documentation takes smart meters as a typical time series use case. We assume the following: 1. each smart meter collects three metrics, i.e. current, voltage, and phase; 2. there are multiple smart meters; and 3. each meter has static attributes like location and group ID. Based on this, the collected data will look similar to the following table:
@@ -29,7 +29,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
10.3
219
0.31
-Beijing.Chaoyang
+San Jose
2
@@ -38,7 +38,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
10.2
220
0.23
-Beijing.Chaoyang
+San Jose
3
@@ -47,7 +47,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
11.5
221
0.35
-Beijing.Haidian
+Mountain View
3
@@ -56,7 +56,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
13.4
223
0.29
-Beijing.Haidian
+Mountain View
2
@@ -65,7 +65,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
12.6
218
0.33
-Beijing.Chaoyang
+San Jose
2
@@ -74,7 +74,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
11.8
221
0.28
-Beijing.Haidian
+Mountain View
2
@@ -83,7 +83,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
10.3
218
0.25
-Beijing.Chaoyang
+San Jose
3
@@ -92,7 +92,7 @@ In order to explain the basic concepts and provide some sample code, the TDengin
12.3
221
0.31
-Beijing.Chaoyang
+San Jose
2
@@ -112,7 +112,7 @@ Label/Tag refers to the static properties of sensors, equipment or other types o
## Data Collection Point
-Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipments, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car, so in this example the car would have three data collection points.
+Data Collection Point (DCP) refers to hardware or software that collects metrics based on preset time periods or triggered by events. A data collection point can collect one or multiple metrics, but these metrics are collected at the same time and have the same time stamp. For some complex equipment, there are often multiple data collection points, and the sampling rate of each collection point may be different, and fully independent. For example, for a car, there could be a data collection point to collect GPS position metrics, a data collection point to collect engine status metrics, and a data collection point to collect the environment metrics inside the car. So in this example the car would have three data collection points.
## Table
@@ -122,10 +122,10 @@ To make full use of time-series data characteristics, TDengine adopts a strategy
1. Since the metric data from different DCP are fully independent, the data source of each DCP is unique, and a table has only one writer. In this way, data points can be written in a lock-free manner, and the writing speed can be greatly improved.
2. For a DCP, the metric data generated by DCP is ordered by timestamp, so the write operation can be implemented by simple appending, which further greatly improves the data writing speed.
-3. The metric data from a DCP is continuously stored in block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude.
-4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, this allows for a higher compression rate.
+3. The metric data from a DCP is continuously stored, block by block. If you read data for a period of time, it can greatly reduce random read operations and improve read and query performance by orders of magnitude.
+4. Inside a data block for a DCP, columnar storage is used, and different compression algorithms are used for different data types. Metrics generally don't vary as significantly between themselves over a time range as compared to other metrics, which allows for a higher compression rate.
-If the metric data of multiple DCPs are traditionally written into a single table, due to the uncontrollable network delay, the timing of the data from different DCPs arriving at the server cannot be guaranteed, the writing operation must be protected by locks, and the metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest extent.**
+If the metric data of multiple DCPs are traditionally written into a single table, due to uncontrollable network delays, the timing of the data from different DCPs arriving at the server cannot be guaranteed, write operations must be protected by locks, and metric data from one DCP cannot be guaranteed to be continuously stored together. **One table for one data collection point can ensure the best performance of insert and query of a single data collection point to the greatest possible extent.**
TDengine suggests using DCP ID as the table name (like D1001 in the above table). Each DCP may collect one or multiple metrics (like the current, voltage, phase as above). Each metric has a corresponding column in the table. The data type for a column can be int, float, string and others. In addition, the first column in the table must be a timestamp. TDengine uses the time stamp as the index, and won’t build the index on any metrics stored. Column wise storage is used.
@@ -139,7 +139,7 @@ In the design of TDengine, **a table is used to represent a specific data collec
## Subtable
-When creating a table for a specific data collection point, the user can use a STable as a template and specifies the tag values of this specific DCP to create it. **The table created by using a STable as the template is called subtable** in TDengine. The difference between regular table and subtable is:
+When creating a table for a specific data collection point, the user can use a STable as a template and specify the tag values of this specific DCP to create it. **The table created by using a STable as the template is called subtable** in TDengine. The difference between regular table and subtable is:
1. Subtable is a table, all SQL commands applied on a regular table can be applied on subtable.
2. Subtable is a table with extensions, it has static tags (labels), and these tags can be added, deleted, and updated after it is created. But a regular table does not have tags.
3. A subtable belongs to only one STable, but a STable may have many subtables. Regular tables do not belong to a STable.
@@ -151,7 +151,7 @@ The relationship between a STable and the subtables created based on this STable
2. The schema of metrics or labels cannot be adjusted through subtables, and it can only be changed via STable. Changes to the schema of a STable takes effect immediately for all associated subtables.
3. STable defines only one template and does not store any data or label information by itself. Therefore, data cannot be written to a STable, only to subtables.
-Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which can greatly reduce the data sets to be scanned, thus greatly improving the performance of data aggregation across multiple DCPs.
+Queries can be executed on both a table (subtable) and a STable. For a query on a STable, TDengine will treat the data in all its subtables as a whole data set for processing. TDengine will first find the subtables that meet the tag filter conditions, then scan the time-series data of these subtables to perform aggregation operation, which reduces the number of data sets to be scanned which in turn greatly improves the performance of data aggregation across multiple DCPs.
In TDengine, it is recommended to use a subtable instead of a regular table for a DCP.
@@ -167,4 +167,4 @@ FQDN (Fully Qualified Domain Name) is the full domain name of a specific compute
Each node of a TDengine cluster is uniquely identified by an End Point, which consists of an FQDN and a Port, such as h1.tdengine.com:6030. In this way, when the IP changes, we can still use the FQDN to dynamically find the node without changing any configuration of the cluster. In addition, FQDN is used to facilitate unified access to the same cluster from the Intranet and the Internet.
-TDengine does not recommend using an IP address to access the cluster, FQDN is recommended for cluster management.
+TDengine does not recommend using an IP address to access the cluster. FQDN is recommended for cluster management.
diff --git a/docs-en/05-get-started/index.md b/docs-en/05-get-started/index.md
index 39b2d02eca..596d42d32f 100644
--- a/docs-en/05-get-started/index.md
+++ b/docs-en/05-get-started/index.md
@@ -10,7 +10,7 @@ import AptGetInstall from "./\_apt_get_install.mdx";
## Quick Install
-The full package of TDengine includes the server(taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, client driver(taosc), command-line program(CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux. In addition to the connectors of multiple languages, [RESTful interface](/reference/rest-api) is also provided by [taosAdapter](/reference/taosadapter) in TDengine. Prior to version 2.4.0.0, however, there is no taosAdapter, the RESTful interface is provided by the built-in HTTP service of taosd.
+The full package of TDengine includes the server(taosd), taosAdapter for connecting with third-party systems and providing a RESTful interface, client driver(taosc), command-line program(CLI, taos) and some tools. For the current version, the server taosd and taosAdapter can only be installed and run on Linux systems. In the future taosd and taosAdapter will also be supported on Windows, macOS and other systems. The client driver taosc and TDengine CLI can be installed and run on Windows or Linux. In addition to connectors for multiple languages, TDengine also provides a [RESTful interface](/reference/rest-api) through [taosAdapter](/reference/taosadapter). Prior to version 2.4.0.0, taosAdapter did not exist and the RESTful interface was provided by the built-in HTTP service of taosd.
TDengine supports X64/ARM64/MIPS64/Alpha64 hardware platforms, and will support ARM32, RISC-V and other CPU architectures in the future.
diff --git a/docs-en/07-develop/01-connect/index.md b/docs-en/07-develop/01-connect/index.md
index ee11c8f544..21b2149f44 100644
--- a/docs-en/07-develop/01-connect/index.md
+++ b/docs-en/07-develop/01-connect/index.md
@@ -1,7 +1,7 @@
---
sidebar_label: Connection
title: Connect to TDengine
-description: "This document explains how to establish connection to TDengine, and briefly introduce how to install and use TDengine connectors."
+description: "This document explains how to establish connections to TDengine, and briefly introduces how to install and use TDengine connectors."
---
import Tabs from "@theme/Tabs";
@@ -19,25 +19,24 @@ import InstallOnLinux from "../../14-reference/03-connector/\_windows_install.md
import VerifyLinux from "../../14-reference/03-connector/\_verify_linux.mdx";
import VerifyWindows from "../../14-reference/03-connector/\_verify_windows.mdx";
-Any application programs running on any kind of platforms can access TDengine through the REST API provided by TDengine. For the details, please refer to [REST API](/reference/rest-api/). Besides, application programs can use the connectors of multiple programming languages to access TDengine, including C/C++, Java, Python, Go, Node.js, C#, and Rust. This chapter describes how to establish connection to TDengine and briefly introduce how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/)
+Any application programs running on any kind of platform can access TDengine through the REST API provided by TDengine. For details, please refer to [REST API](/reference/rest-api/). Additionally, application programs can use the connectors of multiple programming languages including C/C++, Java, Python, Go, Node.js, C#, and Rust to access TDengine. This chapter describes how to establish a connection to TDengine and briefly introduces how to install and use connectors. For details about the connectors, please refer to [Connectors](/reference/connector/)
## Establish Connection
There are two ways for a connector to establish connections to TDengine:
-1. Connection through the REST API provided by taosAdapter component, this way is called "REST connection" hereinafter.
+1. Connection through the REST API provided by the taosAdapter component, this way is called "REST connection" hereinafter.
2. Connection through the TDengine client driver (taosc), this way is called "Native connection" hereinafter.
-Either way, same or similar APIs are provided by connectors to access database or execute SQL statements, no obvious difference can be observed.
-
Key differences:
-1. With REST connection, it's not necessary to install TDengine client driver (taosc), it's more friendly for cross-platform with the cost of 30% performance downgrade. When taosc has an upgrade, application does not need to make changes.
-2. With native connection, full compatibility of TDengine can be utilized, like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc. But taosc has to be installed, some platforms may not be supported.
+1. The TDengine client driver (taosc) has the highest performance with all the features of TDengine like [Parameter Binding](/reference/connector/cpp#parameter-binding-api), [Subscription](/reference/connector/cpp#subscription-and-consumption-api), etc.
+2. The TDengine client driver (taosc) is not supported across all platforms, and applications built on taosc may need to be modified when updating taosc to newer versions.
+3. The REST connection is more accessible with cross-platform support, however it results in a 30% performance downgrade.
## Install Client Driver taosc
-If choosing to use native connection and the application is not on the same host as TDengine server, TDengine client driver taosc needs to be installed on the host where the application is. If choosing to use REST connection or the application is on the same host as server side, this step can be skipped. It's better to use same version of taosc as the server.
+If you are choosing to use the native connection and the application is not on the same host as the TDengine server, the TDengine client driver taosc needs to be installed on the application host. If choosing to use the REST connection or the application is on the same host as the TDengine server, this step can be skipped. It's better to use the same version of taosc as the TDengine server.
### Install
diff --git a/docs-en/07-develop/02-model/index.mdx b/docs-en/07-develop/02-model/index.mdx
index 962a75338f..2b91dc5487 100644
--- a/docs-en/07-develop/02-model/index.mdx
+++ b/docs-en/07-develop/02-model/index.mdx
@@ -2,11 +2,11 @@
title: Data Model
---
-The data model employed by TDengine is similar to relational database, you need to create databases and tables. For a specific application, the design of databases, STables (abbreviated for super table), and tables need to be considered. This chapter will explain the big picture without syntax details.
+The data model employed by TDengine is similar to that of a relational database: you need to create databases and tables. Design the data model based on your own application scenarios, and you should design the STable (abbreviation for super table) schema to fit your data. This chapter will explain the big picture without getting into syntax details.
## Create Database
-The characteristics of data from different data collection points may be different, such as collection frequency, days to keep, number of replicas, data block size, whether it's allowed to update data, etc. For TDengine to operate with the best performance, it's strongly suggested to put the data with different characteristics into different databases because different storage policy can be set for each database. When creating a database, there are a lot of parameters that can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, time precision, the minimum and maximum number of rows in each data block, compress or not, the time range of the data in single data file, etc. Below is an example of the SQL statement for creating a database.
+The characteristics of data from different data collection points may be different, such as collection frequency, days to keep, number of replicas, data block size, whether it's allowed to update data, etc. For TDengine to operate with the best performance, it's strongly suggested to put the data with different characteristics into different databases because different storage policies can be set for each database. When creating a database, there are a lot of parameters that can be configured, such as the days to keep data, the number of replicas, the number of memory blocks, time precision, the minimum and maximum number of rows in each data block, compress or not, the time range of the data in single data file, etc. Below is an example of the SQL statement for creating a database.
```sql
CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
@@ -14,7 +14,7 @@ CREATE DATABASE power KEEP 365 DAYS 10 BLOCKS 6 UPDATE 1;
In the above SQL statement, a database named "power" will be created, the data in it will be kept for 365 days, which means the data older than 365 days will be deleted automatically, a new data file will be created every 10 days, the number of memory blocks is 6, data is allowed to be updated. For more details please refer to [Database](/taos-sql/database).
-After creating a database, the current database in use can be switched using SQL command `USE`, for example below SQL statement switches the current database to `power`. Without current database specified, table name must be preceded with the corresponding database name.
+After creating a database, the current database in use can be switched using SQL command `USE`, for example below SQL statement switches the current database to `power`. Without the current database specified, table name must be preceded with the corresponding database name.
```sql
USE power;
@@ -23,14 +23,14 @@ USE power;
:::note
- Any table or STable must belong to a database. To create a table or STable, the database it belongs to must be ready.
-- JOIN operation can't be performed tables from two different databases.
+- JOIN operations can't be performed on tables from two different databases.
- Timestamp needs to be specified when inserting rows or querying historical rows.
:::
## Create STable
-In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/tdinternal/arch#model_table1), below SQL statement can be used to create the super table.
+In a time-series application, there may be multiple kinds of data collection points. For example, in the electrical power system there are meters, transformers, bus bars, switches, etc. For easy and efficient aggregation of multiple tables, one STable needs to be created for each kind of data collection point. For example, for the meters in [table 1](/tdinternal/arch#model_table1), the below SQL statement can be used to create the super table.
```sql
CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
@@ -41,11 +41,11 @@ If you are using versions prior to 2.0.15, the `STable` keyword needs to be repl
:::
-Similar to creating a regular table, when creating a STable, name and schema need to be provided too. In the STable schema, the first column must be timestamp (like ts in the example), and other columns (like current, voltage and phase in the example) are the data collected. The type of a column can be integer, float, double, string ,etc. Besides, the schema for tags need to be provided, like location and groupId in the example. The type of a tag can be integer, float, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
+Similar to creating a regular table, when creating a STable, the name and schema need to be provided. In the STable schema, the first column must be timestamp (like ts in the example), and the other columns (like current, voltage and phase in the example) are the data collected. The column type can be integer, float, double, string, etc. In addition, the schema for tags needs to be provided, like location and groupId in the example. The tag type can be integer, float, string, etc. The static properties of a data collection point can be defined as tags, like the location, device type, device group ID, manager ID, etc. Tags in the schema can be added, removed or updated. Please refer to [STable](/taos-sql/stable) for more details.
-For each kind of data collection points, a corresponding STable must be created. There may be many STables in an application. For electrical power system, we need to create a STable respectively for meters, transformers, busbars, switches. There may be multiple kinds of data collection points on a single device, for example there may be one data collection point for electrical data like current and voltage and another point for environmental data like temperature, humidity and wind direction, multiple STables are required for such kind of device.
+For each kind of data collection point, a corresponding STable must be created. There may be many STables in an application. For an electrical power system, we need to create STables for meters, transformers, busbars, and switches respectively. There may also be multiple kinds of data collection points on a single device; for example, there may be one collection point for electrical data like current and voltage, and another for environmental data like temperature, humidity and wind direction. Multiple STables are required for such devices, as sketched below.
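+
+As a hypothetical sketch (the environmental schema is illustrative, not part of the original example), such a device could be modeled with two STables:
+
+```sql
+-- Electrical metrics collected together at one collection point
+CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
+-- Environmental metrics collected at a separate collection point on the same device
+CREATE STable environment (ts timestamp, temperature float, humidity float, wind_direction int) TAGS (location binary(64), groupId int);
+```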
-At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If there are more than 4096 of metrics to bo collected for a data collection point, multiple STables are required for such kind of data collection point. There can be multiple databases in system, while one or more STables can exist in a database.
+At most 4096 (or 1024 prior to version 2.1.7.0) columns are allowed in a STable. If there are more than 4096 metrics to be collected for a data collection point, multiple STables are required. There can be multiple databases in a system, and one or more STables can exist in a database.
## Create Table
@@ -57,7 +57,7 @@ CREATE TABLE d1001 USING meters TAGS ("Beijing.Chaoyang", 2);
In the above SQL statement, "d1001" is the table name, "meters" is the STable name, followed by the value of tag "Location" and the value of tag "groupId", which are "Beijing.Chaoyang" and "2" respectively in the example. The tag values can be updated after the table is created. Please refer to [Tables](/taos-sql/table) for details.
-In TDengine system, it's recommended to create a table for a data collection point via STable. Table created via STable is called subtable in some parts of TDengine document. All SQL commands applied on regular table can be applied on subtable.
+In the TDengine system, it's recommended to create a table for each data collection point via its STable. A table created via a STable is called a subtable in some parts of the TDengine documentation. All SQL commands that apply to regular tables can be applied to subtables.
:::warning
It's not recommended to create a table in a database while using a STable from another database as a template.
@@ -67,7 +67,7 @@ It's suggested to use the global unique ID of a data collection point as the tab
## Create Table Automatically
-In some circumstances, it's not sure whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exist.
+In some circumstances, it's unknown whether the table already exists when inserting rows. The table can be created automatically using the SQL statement below, and nothing will happen if the table already exists.
```sql
INSERT INTO d1001 USING meters TAGS ("Beijing.Chaoyang", 2) VALUES (now, 10.2, 219, 0.32);
@@ -79,6 +79,6 @@ For more details please refer to [Create Table Automatically](/taos-sql/insert#a
## Single Column vs Multiple Column
-Multiple columns data model is supported in TDengine. As long as multiple metrics are collected by same data collection point at same time, i.e. the timestamp are identical, these metrics can be put in single stable as columns. However, there is another kind of design, i.e. single column data model, a table is created for each metric, which means a STable is required for each kind of metric. For example, 3 STables are required for current, voltage and phase.
+A multiple-column data model is supported in TDengine. As long as multiple metrics are collected by the same data collection point at the same time, i.e. the timestamps are identical, these metrics can be put in a single STable as columns. However, there is another kind of design, the single-column data model, in which a table is created for each metric; this means a STable is required for each kind of metric. For example, 3 STables would be required for current, voltage and phase.
-It's recommended to use multiple column data model as much as possible because it's better in the performance of inserting or querying rows. In some cases, however, the metrics to be collected vary frequently and correspondingly the STable schema needs to be changed frequently too. In such case, it's more convenient to use single column data model.
+It's recommended to use the multiple-column data model as much as possible because it performs better when inserting or querying rows. In some cases, however, the metrics to be collected vary frequently, so the STable schema would need to be changed frequently too. In such cases, it's more convenient to use the single-column data model. The sketch below contrasts the two designs.
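+
+A minimal sketch, reusing the meters example (the single-column STable names are illustrative):
+
+```sql
+-- Multiple-column model: one STable holds all metrics sampled together
+CREATE STable meters (ts timestamp, current float, voltage int, phase float) TAGS (location binary(64), groupId int);
+
+-- Single-column model: one STable per metric
+CREATE STable current (ts timestamp, value float) TAGS (location binary(64), groupId int);
+CREATE STable voltage (ts timestamp, value int) TAGS (location binary(64), groupId int);
+CREATE STable phase (ts timestamp, value float) TAGS (location binary(64), groupId int);
+```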
diff --git a/docs-en/07-develop/03-insert-data/01-sql-writing.mdx b/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
index 9f66992d3d..ae170a2bef 100644
--- a/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
+++ b/docs-en/07-develop/03-insert-data/01-sql-writing.mdx
@@ -22,11 +22,11 @@ import CStmt from "./_c_stmt.mdx";
## Introduction
-Application program can execute `INSERT` statement through connectors to insert rows. TAOS CLI can be launched manually to insert data too.
+Application programs can execute `INSERT` statements through connectors to insert rows. The TAOS CLI can also be used to manually insert data.
### Insert Single Row
-Below SQL statement is used to insert one row into table "d1001".
+The below SQL statement is used to insert one row into table "d1001".
```sql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
@@ -34,7 +34,7 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
### Insert Multiple Rows
-Multiple rows can be inserted in single SQL statement. Below example inserts 2 rows into table "d1001".
+Multiple rows can be inserted in a single SQL statement. The example below inserts 2 rows into table "d1001".
```sql
INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
@@ -42,7 +42,7 @@ INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3,
### Insert into Multiple Tables
-Data can be inserted into multiple tables in same SQL statement. Below example inserts 2 rows into table "d1001" and 1 row into table "d1002".
+Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
```sql
INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
@@ -52,14 +52,14 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
:::info
-- Inserting in batch can gain better performance. Normally, the higher the batch size, the better the performance. Please be noted each single row can't exceed 16K bytes and each single SQL statement can't exceed 1M bytes.
-- Inserting with multiple threads can gain better performance too. However, depending on the system resources on the application side and the server side, with the number of inserting threads grows to a specific point, the performance may drop instead of growing. The proper number of threads need to be tested in a specific environment to find the best number.
+- Inserting in batches can improve performance. Normally, the larger the batch, the better the performance. Please note that a single row can't exceed 16K bytes and a single SQL statement can't exceed 1MB.
+- Inserting with multiple threads can also improve performance. However, depending on the system resources on the application side and the server side, when the number of insert threads grows beyond a certain point the performance may drop instead of improving. The proper number of threads needs to be determined by testing in the target environment.
:::
:::warning
-- If the timestamp for the row to be inserted already exists in the table, the behavior depends on the value of parameter `UPDATE`. If it's set to 0 (also the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
+- If the timestamp for the row to be inserted already exists in the table, the behavior depends on the value of parameter `UPDATE`. If it's set to 0 (the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
- The timestamp to be inserted must be newer than the current time minus the parameter `KEEP`. If `KEEP` is set to 3650 days, then data older than 3650 days can't be inserted. The timestamp to be inserted also can't be newer than the current time plus the parameter `DAYS`. If `DAYS` is set to 2, data with a timestamp more than 2 days in the future can't be inserted.
:::
@@ -95,13 +95,13 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
:::note
1. With either native connection or REST connection, the above samples can work well.
-2. Please be noted that `use db` can't be used with REST connection because REST connection is stateless, so in the samples `dbName.tbName` is used to specify the table name.
+2. Please note that `use db` can't be used with a REST connection because REST connections are stateless, so in the samples `dbName.tbName` is used to specify the table name.
:::
### Insert with Parameter Binding
-TDengine also provides Prepare API that support parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. From version 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has been improved significantly to improve the insert performance by avoiding the cost of parsing SQL statements.
+TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. From versions 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has been improved significantly, increasing insert performance by avoiding the cost of parsing SQL statements.
Parameter binding is available only with the native connection.
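+
+As a minimal sketch, the prepared statement text is ordinary SQL with `?` placeholders; the four placeholders below would be bound to the timestamp, current, voltage and phase values through the parameter binding APIs before execution:
+
+```sql
+INSERT INTO d1001 VALUES (?, ?, ?, ?);
+```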
diff --git a/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx b/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
index 66bb67c256..3656e5fd3b 100644
--- a/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
+++ b/docs-en/07-develop/03-insert-data/03-opentsdb-telnet.mdx
@@ -15,16 +15,16 @@ import CTelnet from "./_c_opts_telnet.mdx";
## Introduction
-A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs single column data model, so one line can only contains single data column. There can be multiple tags. Each line contains 4 parts as below:
+A single line of text is used in the OpenTSDB line protocol to represent one row of data. OpenTSDB employs a single-column data model, so one line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:
```
<metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
```
-- `metric` will be used as STable name.
-- `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. second and millisecond time precision are supported.\
+- `metric` will be used as the STable name.
+- `timestamp` is the timestamp of the current row of data. The time precision will be determined automatically based on the length of the timestamp. Second and millisecond time precision are supported.
- `value` is a metric which must be a numeric value; the corresponding column name is "value".
-- The last part is tag sets separated by space, all tags will be converted to nchar type automatically.
+- The last part is the tag set, separated by spaces; all tags will be converted to the nchar type automatically.
For example:
diff --git a/docs-en/07-develop/03-insert-data/index.md b/docs-en/07-develop/03-insert-data/index.md
index ee80d436f1..ba31a951ff 100644
--- a/docs-en/07-develop/03-insert-data/index.md
+++ b/docs-en/07-develop/03-insert-data/index.md
@@ -2,11 +2,11 @@
title: Insert
---
-TDengine supports multiple protocols of inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, OpenTSDB JSON protocol. Data can be inserted row by row, or in batch. Data from one or more collecting points can be inserted simultaneously. In the meantime, data can be inserted with multiple threads, out of order data and historical data can be inserted too. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create stable and table in advance if using schemaless protocols, and the schemas can be adjusted automatically according to the data to be inserted.
+TDengine supports multiple protocols for inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol. Data can be inserted row by row, or in batches. Data from one or more collection points can be inserted simultaneously. Data can be inserted with multiple threads, and out of order data and historical data can be inserted as well. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create STables and tables in advance if using schemaless protocols, and the schemas can be adjusted automatically based on the data being inserted.
```mdx-code-block
import DocCardList from '@theme/DocCardList';
import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
-```
\ No newline at end of file
+```
diff --git a/docs-en/07-develop/04-query-data/index.mdx b/docs-en/07-develop/04-query-data/index.mdx
index 4016f8453b..539b8867a7 100644
--- a/docs-en/07-develop/04-query-data/index.mdx
+++ b/docs-en/07-develop/04-query-data/index.mdx
@@ -20,7 +20,7 @@ import CAsync from "./_c_async.mdx";
## Introduction
-SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine CLI `taos` can also be used to execute SQL Ad-Hoc query. Here is the list of major query functionalities supported by TDengine:
+SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through the REST API or connectors. The TDengine CLI `taos` can also be used to execute ad-hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
- Query on single column or multiple columns
- Filter on tags or data columns: >, <, =, <\>, like
@@ -31,7 +31,7 @@ SQL is used by TDengine as the query language. Application programs can send SQL
- Join query with timestamp alignment
- Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
-For example, below SQL statement can be executed in TDengine CLI `taos` to select the rows whose voltage column is bigger than 215 and limit the output to only 2 rows.
+For example, the SQL statement below can be executed in TDengine CLI `taos` to select the rows whose voltage column is bigger than 215 and limit the output to only 2 rows.
```sql
select * from d1001 where voltage > 215 order by ts desc limit 2;
@@ -46,15 +46,15 @@ taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
Query OK, 2 row(s) in set (0.001100s)
```
-To meet the requirements in many use cases, some special functions have been added in TDengine, for example `twa` (Time Weighted Average), `spared` (The difference between the maximum and the minimum), `last_row` (the last row), more and more functions will be added to better perform in many use cases. Furthermore, continuous query is also supported in TDengine.
+To meet the requirements of many use cases, some special functions have been added in TDengine, for example `twa` (time weighted average), `spread` (the difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
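+
+A few hedged examples of these functions, reusing the meters schema from this chapter (the time range and tag value are illustrative; `twa` requires a time range in the `WHERE` clause):
+
+```sql
+-- Time weighted average of current over one day
+SELECT TWA(current) FROM d1001 WHERE ts >= '2018-10-01 00:00:00' AND ts <= '2018-10-02 00:00:00';
+-- Difference between the maximum and minimum voltage
+SELECT SPREAD(voltage) FROM d1001;
+-- The most recent row among all meters with groupId 2
+SELECT LAST_ROW(*) FROM meters WHERE groupId = 2;
+```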
For detailed query syntax please refer to [Select](/taos-sql/select).
## Aggregation among Tables
-In many use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collection points, and a table is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same kind of data collection points, can be. Aggregate functions applicable for tables can be used directly on STables, syntax is exactly same.
+In many use cases, there are multiple kinds of data collection points. A concept called STable (short for super table) is used in TDengine to represent a kind of data collection point, and a subtable is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for the static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. the same kind of data collection points. Aggregate functions applicable to tables can be used directly on STables; the syntax is exactly the same.
-In summary, for a STable, its subtables can be aggregated by a simple query on STable, it's kind of join operation. But tables belong to different STables could not be aggregated.
+In summary, the subtables of a STable can be aggregated by a simple query on the STable; this is a kind of join operation. However, tables belonging to different STables cannot be aggregated.
### Example 1
@@ -81,11 +81,11 @@ taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now -
Query OK, 1 row(s) in set (0.002136s)
```
-Join query is allowed between only the tables of same STable. In [Select](/taos-sql/select), all query operations are marked as whether it supports STable or not.
+Join queries are only allowed between the subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they support STables or not.
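+
+A minimal sketch of such a join (assuming d1001 and d1002 are subtables of the same meters STable); the two tables are aligned on the timestamp column:
+
+```sql
+SELECT d1001.ts, d1001.current, d1002.current FROM d1001, d1002 WHERE d1001.ts = d1002.ts;
+```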
## Down Sampling and Interpolation
-In IoT use cases, down sampling is widely used to aggregate the data by time range. `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, below SQL statement can be used to get the sum of current every 10 seconds from meters table d1001.
+In IoT use cases, down sampling is widely used to aggregate the data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from meters table d1001.
```
taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
@@ -96,7 +96,7 @@ taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
Query OK, 2 row(s) in set (0.000883s)
```
-Down sampling can also be used for STable. For example, below SQL statement can be used to get the sum of current from all meters in BeiJing.
+Down sampling can also be used on a STable. For example, the SQL statement below can be used to get the sum of current from all meters in Beijing.
```
taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s);
@@ -110,7 +110,7 @@ taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s
Query OK, 5 row(s) in set (0.001538s)
```
-Down sampling also supports time offset. For example, below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
+Down sampling also supports time offsets. For example, the SQL statement below can be used to get the sum of current from all meters, with each time window starting at an offset of 500 milliseconds from the whole-second boundary.
```
taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
@@ -124,7 +124,7 @@ taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
Query OK, 5 row(s) in set (0.001521s)
```
-In many use cases, it's hard to align the timestamp of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle by themselves in many systems. In TDengine, it's easy to achieve the alignment using down sampling.
+In many use cases, it's hard to align the timestamps of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with the same time interval, and application programs have to handle this by themselves. In TDengine, it's easy to achieve the alignment using down sampling.
Interpolation can be performed in TDengine if there is no data in a time range.
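+
+A hedged sketch of interpolation: the `FILL` clause specifies how empty time windows are populated (the time range is illustrative; `FILL(PREV)` repeats the previous value):
+
+```sql
+SELECT AVG(current) FROM d1001 WHERE ts >= '2017-07-14 00:00:00' AND ts < '2017-07-14 00:10:00' INTERVAL(10s) FILL(PREV);
+```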
@@ -162,16 +162,16 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database
:::note
-1. With either REST connection or native connection, the above sample code work well.
-2. Please be noted that `use db` can't be used in case of REST connection because it's stateless.
+1. With either a REST connection or a native connection, the above sample code works well.
+2. Please note that `use db` can't be used in the case of a REST connection because it's stateless.
:::
### Asynchronous Query
-Besides synchronous query, asynchronous query API is also provided by TDengine to insert or query data more efficiently. With similar hardware and software environment, async API is 2~4 times faster than sync APIs. Async API works in non-blocking mode, which means an operation can be returned without finishing so that the calling thread can switch to other works to improve the performance of the whole application system. Async APIs perform especially better in case of poor network.
+Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. With a similar hardware and software environment, the async API is 2~4 times faster than the sync APIs. The async API works in non-blocking mode, which means a call can return before the operation finishes, so that the calling thread can switch to other work to improve the performance of the whole application. Async APIs perform especially well over poor networks.
-Please be noted that async query can only be used with native connection.
+Please note that async query can only be used with a native connection.
diff --git a/docs-en/07-develop/05-continuous-query.mdx b/docs-en/07-develop/05-continuous-query.mdx
index 97e32a17ff..6e7ec64307 100644
--- a/docs-en/07-develop/05-continuous-query.mdx
+++ b/docs-en/07-develop/05-continuous-query.mdx
@@ -4,15 +4,15 @@ description: "Continuous query is a query that's executed automatically accordin
title: "Continuous Query"
---
-Continuous query is a query that's executed automatically according to predefined frequency to provide aggregate query capability by time window, it's actually a simplified time driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of continuous query can be pushed to client or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of time window and the forward sliding time need to be specified with parameter `INTERVAL` and `SLIDING` respectively.
+Continuous query is a query that's executed automatically at a predefined frequency to provide aggregate query capability by time window; it is essentially simplified, time-driven stream computing. Continuous query can be performed on a table or STable in TDengine. The result of a continuous query can be pushed to clients or written back to TDengine. Each query is executed on a time window, which moves forward with time. The size of the time window and the forward sliding time need to be specified with the parameters `INTERVAL` and `SLIDING` respectively.
-Continuous query in TDengine is time driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to time window to achieve down sampling of original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to client or written to TDengine.
+Continuous query in TDengine is time-driven, and can be defined using TAOS SQL directly without any extra operations. With continuous query, the result can be generated according to a time window to achieve down sampling of the original data. Once a continuous query is defined using TAOS SQL, the query is automatically executed at the end of each time window and the result is pushed back to clients or written to TDengine.
There are some differences between continuous query in TDengine and time window computation in stream computing:
- The computation is performed and the result is returned in real time in stream computing, but the computation in continuous query is only started when a time window closes. For example, if the time window is 1 day, then the result will only be generated at 23:59:59.
-- If a historical data row is written in to a time widow for which the computation has been finished, the computation will not be performed again and the result will not be pushed to client again either. If the result has been written into TDengine, there will be no update for the result.
-- In continuous query, if the result is pushed to client, the client status is not cached on the server side and Exactly-once is not guaranteed by the server either. If the client program crashes, a new time window will be generated from the time where the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed as valid and continuous.
+- If a historical data row is written into a time window for which the computation has already finished, the computation will not be performed again and the result will not be pushed to client applications again. If the results have already been written into TDengine, they will not be updated.
+- In continuous query, if the result is pushed to a client, the client status is not cached on the server side and exactly-once delivery is not guaranteed by the server. If the client program crashes, a new time window will be generated starting from the time when the continuous query is restarted. If the result is written into TDengine, the data written into TDengine can be guaranteed to be valid and continuous.
## Syntax
@@ -30,7 +30,7 @@ SLIDING: The time step for which the time window moves forward each time
## How to Use
-In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and sub tables have been created using below SQL statement.
+In this section the use case of meters will be used to introduce how to use continuous query. Assume the STable and subtables have been created using the SQL statements below.
```sql
create table meters (ts timestamp, current float, voltage int, phase float) tags (location binary(64), groupId int);
@@ -38,7 +38,7 @@ create table D1001 using meters tags ("Beijing.Chaoyang", 2);
create table D1002 using meters tags ("Beijing.Haidian", 2);
```
-The average voltage for each time window of one minute with 30 seconds as the length of moving forward can be retrieved using below SQL statement.
+The SQL statement below retrieves the average voltage for a one minute time window, with each time window moving forward by 30 seconds.
```sql
select avg(voltage) from meters interval(1m) sliding(30s);
@@ -50,13 +50,13 @@ Whenever the above SQL statement is executed, all the existing data will be comp
select avg(voltage) from meters where ts > {startTime} interval(1m) sliding(30s);
```
-Another easier way for same purpose is prepend `create table {tableName} as` before the `select`.
+An easier way to achieve this is to prepend `create table {tableName} as` to the `select` statement.
```sql
create table avg_vol as select avg(voltage) from meters interval(1m) sliding(30s);
```
-A table named as `avg_vol` will be created automatically, then every 30 seconds the `select` statement will be executed automatically on the data in the past 1 minutes, i.e. the latest time window, and the result is written into table `avg_vol`. The client program just needs to query from table `avg_vol`. For example:
+A table named `avg_vol` will be created automatically; then every 30 seconds the `select` statement will be executed on the data of the past 1 minute, i.e. the latest time window, and the result written into table `avg_vol`. The client program just needs to query table `avg_vol`. For example:
```sql
taos> select * from avg_vol;
@@ -68,16 +68,16 @@ taos> select * from avg_vol;
2020-07-29 13:39:00.000 | 223.0800000 |
```
-Please be noted that the minimum allowed time window is 10 milliseconds, and no upper limit.
+Please note that the minimum allowed time window is 10 milliseconds, and there is no upper limit.
-Besides, it's allowed to specify the start and end time of continuous query. If the start time is not specified, the timestamp of the first original row will be considered as the start time; if the end time is not specified, the continuous will be performed infinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in below SQL statement will be started from now and terminated one hour later.
+It's possible to specify the start and end time of a continuous query. If the start time is not specified, the timestamp of the first row will be considered as the start time; if the end time is not specified, the continuous query will run indefinitely, otherwise it will be terminated once the end time is reached. For example, the continuous query in the SQL statement below starts now and terminates one hour later.
```sql
create table avg_vol as select avg(voltage) from meters where ts > now and ts <= now + 1h interval(1m) sliding(30s);
```
-`now` in above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. Besides, to avoid the trouble caused by the delay of original data as much as possible, the actual computation in continuous query is also started with a little delay. That means, once a time window closes, the computation is not started immediately. Normally, the result can only be available a little time later, normally within one minute, after the time window closes.
+`now` in the above SQL statement stands for the time when the continuous query is created, not the time when the computation is actually performed. To minimize the trouble caused by delays in receiving data, the actual computation in a continuous query is started after a small delay. That means, once a time window closes, the computation is not started immediately. Normally, the results become available shortly, usually within one minute, after the time window closes.
## How to Manage
-`show streams` command can be used in TDengine CLI `taos` to show all the continuous queries in the system, and `kill stream` can be used to terminate a continuous query.
+The `show streams` command can be used in the TDengine CLI `taos` to show all the continuous queries in the system, and `kill stream` can be used to terminate a continuous query, as sketched below.
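+
+A minimal sketch of a management session (the stream id is illustrative; use the id reported by `show streams`):
+
+```sql
+show streams;      -- list all continuous queries in the system
+kill stream 1:1;   -- terminate the continuous query with the given id
+```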
diff --git a/docs-en/07-develop/06-subscribe.mdx b/docs-en/07-develop/06-subscribe.mdx
index 56f4ed83d8..964bb2fd8d 100644
--- a/docs-en/07-develop/06-subscribe.mdx
+++ b/docs-en/07-develop/06-subscribe.mdx
@@ -16,9 +16,9 @@ import CDemo from "./_sub_c.mdx";
## Introduction
-According to the time series nature of the data, data inserting in TDengine is similar to data publishing in message queues, they both can be considered as a new data record with timestamp is inserted into the system. Data is stored in ascending order of timestamp inside TDengine, so essentially each table in TDengine can be considered as a message queue.
+Due to the nature of time series data, data insertion in TDengine is similar to data publishing in message queues. Data is stored in ascending order of timestamp inside TDengine, so each table in TDengine can essentially be considered as a message queue.
-Lightweight service for data subscription and pushing is built in TDengine. With the API provided by TDengine, client programs can used `select` statement to subscribe the data from one or more tables. The subscription and and state maintenance is performed on the client side, the client programs polls the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start for retrieving new data is up to the client side.
+A lightweight service for data subscription and pushing is built in TDengine. With the API provided by TDengine, client programs can use `select` statements to subscribe to data from one or more tables. The subscription and state maintenance is performed on the client side; the client programs poll the server to check whether there is new data, and if so the new data will be pushed back to the client side. If the client program is restarted, where to start retrieving new data is up to the client side.
There are 3 major APIs related to subscription provided in the TDengine client driver.
@@ -28,9 +28,9 @@ taos_consume
taos_unsubscribe
```
-For more details about these API please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the use case of meters, in which the schema of STable and sub tables please refer to the previous section "continuous query". Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
+For more details about these APIs please refer to [C/C++ Connector](/reference/connector/cpp). Their usage will be introduced below using the meters use case, reusing the STable and subtable schema from the previous section [Continuous Query](/develop/continuous-query). Full sample code can be found [here](https://github.com/taosdata/TDengine/blob/master/examples/c/subscribe.c).
-If we want to get notification and take some actions if the current exceeds a threshold, like 10A, from some meters, there are two ways:
+If we want to be notified and take action when the current of some meters exceeds a threshold, like 10A, there are two ways:
The first way is to query each subtable and record the last timestamp matching the criteria, then after some time query the data newer than the recorded timestamp, and repeat this process. The SQL statements for this approach are as below.
@@ -40,7 +40,7 @@ select * from D1002 where ts > {last_timestamp2} and current > 10;
...
```
-The above way works, but the problem is that the number of `select` statements increases with the number of meters grows. Finally the performance of both client side and server side will be unacceptable once the number of meters grows to a big enough number.
+The above way works, but the problem is that the number of `select` statements increases with the number of meters. Additionally, the performance of both the client side and the server side will become unacceptable once the number of meters grows large enough.
A better way is to query the STable; only one `select` is needed regardless of the number of meters, like below:
@@ -48,7 +48,7 @@ A better way is to query on the STable, only one `select` is enough regardless o
select * from meters where ts > {last_timestamp} and current > 10;
```
-However, how to choose `last_timestamp` becomes a new problem if using this way. Firstly, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database, sometimes the difference between them may be very big. Secondly, the time when the data from different meters may arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fasted" meters is used as `last_timestamp`, some data from other meters may be missed.
+However, this presents a new problem: how to choose `last_timestamp`. First, the timestamp when the data is generated is different from the timestamp when the data is inserted into the database; sometimes the difference between them may be very big. Second, the time when the data from different meters arrives at the database may be different too. If the timestamp of the "slowest" meter is used as `last_timestamp` in the query, the data from other meters may be selected repeatedly; but if the timestamp of the "fastest" meter is used as `last_timestamp`, some data from other meters may be missed.
All the problems mentioned above can be resolved thoroughly using subscription provided by TDengine.
@@ -75,19 +75,19 @@ The parameter `sql` is a `select` statement in which `where` clause can be used
select * from meters where current > 10;
```
-Please be noted that, all the data will be processed because no start time is specified. If only the data from one day ago needs to be processed, a time related condition can be added:
+Please note that all the data will be processed because no start time is specified. If only the data from the past day needs to be processed, a time related condition can be added:
```sql
select * from meters where ts > now - 1d and current > 10;
```
-The parameter `topic` is the name of the subscription, it needs to be guaranteed unique in the client program, but it's not necessary to be globally unique because subscription is implemented in the APIs on client side.
+The parameter `topic` is the name of the subscription; it must be unique within the client program, but it doesn't need to be globally unique because subscription is implemented in the APIs on the client side.
-If the subscription named as `topic` doesn't exist, parameter `restart` would be ignored. If the subscription named as `topic` has been created before by the client program which then exited, when the client program is restarted to use this `topic`, parameter `restart` is used to determine retrieving data from beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from beginning, or if it is **false** (i.e. zero), the data already consumed before will not be processed again.
+If the subscription named `topic` doesn't exist, the parameter `restart` will be ignored. If the subscription named `topic` has been created before by the client program, when the client program is restarted with the subscription named `topic`, the parameter `restart` is used to determine whether to retrieve data from the beginning or from the last point where the subscription was broken. If the value of `restart` is **true** (i.e. a non-zero value), the data will be retrieved from the beginning; if it is **false** (i.e. zero), the data already consumed will not be processed again.
-The last parameter of `taos_subscribe` is the polling interval in unit of millisecond. In sync mode, if the time difference between two continuous invocations to `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` would be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations to the call back function.
+The last parameter of `taos_subscribe` is the polling interval in milliseconds. In sync mode, if the time difference between two consecutive invocations of `taos_consume` is smaller than the interval specified by `taos_subscribe`, `taos_consume` will be blocked until the interval is reached. In async mode, this interval is the minimum interval between two invocations of the callback function.
-The last second parameter of `taos_subscribe` is used to pass arguments to the call back function. `taos_subscribe` doesn't process this parameter and simply passes it to the call back function. This parameter is simply ignored in sync mode.
+The second to last parameter of `taos_subscribe` is used to pass arguments to the callback function. `taos_subscribe` doesn't process this parameter and simply passes it to the callback function. This parameter is ignored in sync mode.
After a subscription is created, its data can be consumed and processed. Below is sample code showing how to consume data in sync mode, in the `else` branch of `if (async)`.
@@ -149,22 +149,22 @@ void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
taos_unsubscribe(tsub, keep);
```
-The second parameter `keep` is used to specify whether to keep the subscription progress on the client sde. If it is **false**, i.e. **0**, then subscription will be restarted from beginning regardless of the `restart` parameter's value in when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with same name as `topic` for each subscription, the subscription will be restarted from beginning if the corresponding progress file is removed.
+The second parameter `keep` is used to specify whether to keep the subscription progress on the client side. If it is **false**, i.e. **0**, then the subscription will be restarted from the beginning regardless of the `restart` parameter's value when `taos_subscribe` is invoked again. The subscription progress information is stored in _{DataDir}/subscribe/_ , under which there is a file with the same name as `topic` for each subscription; the subscription will be restarted from the beginning if the corresponding progress file is removed.
Now let's see the effect of the above sample code, assuming the prerequisites below have been met.
- The sample code has been downloaded to the local system
- TDengine has been installed and launched properly on the same system
-- The database, STable, sub tables required in the sample code have been ready
+- The database, STable, and subtables required in the sample code are ready
-It's ready to launch below command in the directory where the sample code resides to compile and start the program.
+Run the commands below in the directory where the sample code resides to compile and start the program.
```bash
make
./subscribe -sql='select * from meters where current > 10;'
```
-After the program is started, open another terminal and launch TDengine CLI `taos`, then use below SQL commands to insert a row whose current is 12A into table **D1001**.
+After the program is started, open another terminal and launch the TDengine CLI `taos`, then use the SQL commands below to insert a row whose current is 12A into table **D1001**.
```sql
use test;
@@ -232,7 +232,7 @@ Query OK, 5 row(s) in set (0.004896s)
### Run the Examples
-The example programs firstly consume all historical data matching the criteria.
+The example programs first consume all historical data matching the criteria.
```bash
ts: 1597464000000 current: 12.0 voltage: 220 phase: 1 location: Beijing.Chaoyang groupid : 2
diff --git a/docs-en/07-develop/07-cache.md b/docs-en/07-develop/07-cache.md
index 13db6c3638..c99612ecfb 100644
--- a/docs-en/07-develop/07-cache.md
+++ b/docs-en/07-develop/07-cache.md
@@ -10,9 +10,9 @@ Caching the latest data provides the capability of retrieving data in millisecon
The memory space used by the TDengine cache is fixed in size, according to the configuration based on application requirements and system resources. An independent memory pool is allocated for and managed by each vnode (virtual node) in TDengine; there is no sharing of memory pools between vnodes. All the tables belonging to a vnode share all the cache memory of that vnode.
-Memory pool is divided into blocks and data is stored in row format in memory and each block follows FIFO policy. The size of each block is determined by configuration parameter `cache`, the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. It's better to set the size of each block to hold at least tends of rows.
+The memory pool is divided into blocks; data is stored in row format in memory and each block follows a FIFO policy. The size of each block is determined by the configuration parameter `cache`, and the number of blocks for each vnode is determined by `blocks`. For each vnode, the total cache size is `cache * blocks`. A cache block should be large enough that each table can store at least dozens of records in it.
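+
+A hypothetical sizing sketch (the numbers are illustrative): with `cache` set to 16 MB per block and `blocks` set to 6, each vnode gets 16 * 6 = 96 MB of cache.
+
+```sql
+CREATE DATABASE power CACHE 16 BLOCKS 6;
+```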
-`last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on monitoring screen. For example below SQL statement retrieves the latest voltage of all meters in Chaoyang district of Beijing.
+The `last_row` function can be used to retrieve the last row of a table or a STable to quickly show the current state of devices on a monitoring screen. For example, the SQL statement below retrieves the latest voltage of all meters in the Chaoyang district of Beijing.
```sql
select last_row(voltage) from meters where location='Beijing.Chaoyang';
diff --git a/docs-en/07-develop/index.md b/docs-en/07-develop/index.md
index 122dd0d870..e3f55f2907 100644
--- a/docs-en/07-develop/index.md
+++ b/docs-en/07-develop/index.md
@@ -2,15 +2,15 @@
title: Developer Guide
---
-To develop an application using TDengine to process time-series data, we recommend taking the following steps:
+To develop an application to process time-series data using TDengine, we recommend taking the following steps:
-1. Choose the way for connection to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language.
-2. Design the data model based on your own application scenarios. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" concept; learn about static labels, collected metrics, and subtables. According to the data characteristics, you may decide to create one or more databases, and you should design the STable schema to fit your data.
-3. Decide how to insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
-4. Based on business requirements, find out what SQL query statements need to be written.
+1. Choose the method to connect to TDengine. No matter what programming language you use, you can always use the REST interface to access TDengine, but you can also use connectors unique to each programming language.
+2. Design the data model based on your own use cases. Learn the [concepts](/concept/) of TDengine including "one table for one data collection point" and the "super table" (STable) concept; learn about static labels, collected metrics, and subtables. Depending on the characteristics of your data and your requirements, you may decide to create one or more databases, and you should design the STable schema to fit your data.
+3. Decide how you will insert data. TDengine supports writing using standard SQL, but also supports schemaless writing, so that data can be written directly without creating tables manually.
+4. Based on business requirements, find out what SQL query statements need to be written. You may be able to repurpose existing SQL.
5. If you want to run real-time analysis based on time series data, including various dashboards, it is recommended that you use the TDengine continuous query feature instead of deploying complex streaming processing systems such as Spark or Flink.
6. If your application has modules that need to consume inserted data, and they need to be notified when new data is inserted, it is recommended that you use the data subscription function provided by TDengine without the need to deploy Kafka.
-7. In many scenarios (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
+7. In many use cases (such as fleet management), the application needs to obtain the latest status of each data collection point. It is recommended that you use the cache function of TDengine instead of deploying Redis separately.
8. If you find that the SQL functions of TDengine cannot meet your requirements, then you can use user-defined functions to solve the problem.
This section is organized in the order described above. For ease of understanding, TDengine provides sample code for each supported programming language for each function. If you want to learn more about the use of SQL, please read the [SQL manual](/taos-sql/). For a more in-depth understanding of the use of each connector, please read the [Connector Reference Guide](/reference/connector/). If you also want to integrate TDengine with third-party systems, such as Grafana, please refer to the [third-party tools](/third-party/).
diff --git a/docs-en/10-cluster/01-deploy.md b/docs-en/10-cluster/01-deploy.md
index 8c921797ec..844a026ff6 100644
--- a/docs-en/10-cluster/01-deploy.md
+++ b/docs-en/10-cluster/01-deploy.md
@@ -6,15 +6,15 @@ title: Deployment
### Step 1
-The FQDN of all hosts need to be setup properly, all the FQDNs need to be configured in the /etc/hosts of each host. It must be guaranteed that each FQDN can be accessed (by ping, for example) from any other hosts.
+The FQDNs of all hosts need to be set up properly, and all of them need to be configured in the /etc/hosts file of each host. It must be confirmed that each FQDN can be accessed (by ping, for example) from any other host.
-On each host command `hostname -f` can be executed to get the hostname. `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or DNS configuration, need to be checked and revised to make any two hosts accessible to each other.
+On each host the command `hostname -f` can be executed to get the hostname. The `ping` command can be executed on each host to check whether any other host is accessible from it. If any host is not accessible, the network configuration, like /etc/hosts or the DNS configuration, needs to be checked and revised so that any two hosts are accessible to each other.
:::note
-- The host where the client program runs also needs to configured properly for FQDN, to make sure all hosts for client or server can be accessed from any other. In other words, the hosts where the client is running are also considered as a part of the cluster.
+- The host where the client program runs also needs its FQDN configured properly, to make sure all hosts, whether client or server, can be accessed from any other host. In other words, the hosts where the client runs are also considered part of the cluster.
-- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP for port 6030~6042 need to be open if firewall is enabled.
+- It's suggested to disable the firewall for all hosts in the cluster. At least TCP/UDP ports 6030~6042 need to be open if a firewall is enabled.
:::
@@ -28,7 +28,7 @@ Now it's time to install TDengine on all hosts without starting `taosd`, the ver
### Step 4
-Now each physical node (referred to as `dnode` hereinafter, it's abbreviation for "data node") of TDengine need to be configured properly. Please be noted that one dnode doesn't stand for one host, multiple TDengine nodes can be started on single host as long as they are configured properly without conflicting. More specifically each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as following.
+Now each physical node (referred to as a `dnode` hereinafter, an abbreviation for "data node") of TDengine needs to be configured properly. Please note that one dnode doesn't stand for one host; multiple TDengine dnodes can be started on a single host as long as they are configured properly without conflicts. More specifically, each instance of the configuration file `taos.cfg` stands for a dnode. Assuming the first dnode of the TDengine cluster is "h1.taosdata.com:6030", its `taos.cfg` is configured as follows.
```c
// firstEp is the end point to connect to when any dnode starts
@@ -44,9 +44,9 @@ serverPort 6030
#arbitrator ha.taosdata.com:6042
```
-`firstEp` and `fqdn` must be configured properly. In `taos.cfg` of all dnodes in TDengine cluster, `firstEp` must be configured to point to same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please also make sure all other configurations like `dataDir`, `logDir`, and other resources related parameters are not conflicting.
+`firstEp` and `fqdn` must be configured properly. In the `taos.cfg` of all dnodes in the TDengine cluster, `firstEp` must point to the same address, i.e. the first dnode of the cluster. `fqdn` and `serverPort` compose the address of each node itself. If you want to start multiple TDengine dnodes on a single host, please make sure all other configurations like `dataDir`, `logDir`, and other resource related parameters do not conflict.
-For all the dnodes in a TDengine cluster, below parameters must be configured as exactly same, any node whose configuration is different from dnodes already in the cluster can't join the cluster.
+For all the dnodes in a TDengine cluster, the parameters below must be configured exactly the same; any node whose configuration is different from dnodes already in the cluster can't join the cluster.
| **#** | **Parameter** | **Definition** |
| ----- | ------------------ | --------------------------------------------------------------------------------- |
@@ -61,7 +61,7 @@ For all the dnodes in a TDengine cluster, below parameters must be configured as
| 9 | maxVgroupsPerDb | Maximum number of vgroups that can be used by each DB |
:::note
-Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must be configured as same too for each dnode.
+Prior to version 2.0.19.0, besides the above parameters, `locale` and `charset` must also be configured the same for each dnode.
:::
@@ -92,7 +92,7 @@ From the above output, it is shown that the end point of the started dnode is "h
There are a few steps necessary to add other dnodes to the cluster.
-Firstly, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please making sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`, `firstEp` must be same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example.
+First, start `taosd` as instructed in [Get Started](/get-started/), assuming it's for the second dnode. Before starting `taosd`, please make sure the configuration is correct, especially `firstEp`, `FQDN` and `serverPort`; `firstEp` must be the same as the dnode shown in the section "Start First DNODE", i.e. "h1.taosdata.com" in this example.
Then, on the first dnode, use TDengine CLI `taos` to execute the command below to add the end point of the dnode to the cluster. In the command, "fqdn:port" should be quoted using double quotes.
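For example, assuming the second dnode is at "h2.taosdata.com:6030" (a hypothetical host name), the statement would look like this:

```sql
CREATE DNODE "h2.taosdata.com:6030";
```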
@@ -109,6 +109,6 @@ SHOW DNODES;
If the status of the newly added dnode is offline, please check:
- Whether the `taosd` process is running properly or not
-- In the log file `taosdlog.0` to see whether the fqdn and port are correct or not
+- Whether the fqdn and port are correct, by checking the log file `taosdlog.0`
The above process can be repeated to add more dnodes to the cluster.
diff --git a/docs-en/10-cluster/02-cluster-mgmt.md b/docs-en/10-cluster/02-cluster-mgmt.md
index 3fcd68b29c..9d717be236 100644
--- a/docs-en/10-cluster/02-cluster-mgmt.md
+++ b/docs-en/10-cluster/02-cluster-mgmt.md
@@ -3,7 +3,7 @@ sidebar_label: Operation
title: Manage DNODEs
---
-It has been introduced that how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, new dnode can be added to scale out the cluster, an existing dnode can be removed, even load balance can be performed manually.\
+The previous section [Deployment](/cluster/deploy) introduced how to deploy and start a cluster from scratch. Once a cluster is ready, the dnode status in the cluster can be shown at any time, a new dnode can be added to scale out the cluster, an existing dnode can be removed, and load balancing can even be performed manually.
:::note
All the commands to be introduced in this chapter need to be run through TDengine CLI; sometimes it's necessary to use root privileges.
@@ -12,7 +12,7 @@ All the commands to be introduced in this chapter need to be run through TDengin
## Show DNODEs
-below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command to check after adding or removing a dnode.
+The below command can be executed in TDengine CLI `taos` to list all dnodes in the cluster, including ID, end point (fqdn:port), status (ready, offline), number of vnodes, number of free vnodes, etc. It's suggested to execute this command after adding or removing a dnode to check the result.
```sql
SHOW DNODES;
@@ -39,7 +39,7 @@ USE SOME_DATABASE;
SHOW VGROUPS;
```
-The example output is as below:
+The example output is below:
```
taos> show dnodes;
@@ -87,7 +87,7 @@ taos> show dnodes;
Query OK, 2 row(s) in set (0.001017s)
```
-It can be seen that the status of the new dnode is "offline", once the dnode is started and connects the firstEp of the cluster, execute the command again and get below example output, from which it can be seen that two dnodes are both in "ready" status.
+It can be seen that the status of the new dnode is "offline". Once the dnode is started and connects to the firstEp of the cluster, execute the command again to get the example output below, which shows that both dnodes are in "ready" status.
```
taos> show dnodes;
@@ -100,7 +100,7 @@ Query OK, 2 row(s) in set (0.001316s)
## Drop DNODE
-Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, `dnodeId` can be gotten from `show dnodes`.
+Launch TDengine CLI `taos` and execute the command below to drop or remove a dnode from the cluster. In the command, you can get `dnodeId` from `show dnodes`.
```sql
DROP DNODE "fqdn:port";
@@ -112,7 +112,7 @@ or
DROP DNODE dnodeId;
```
-The example output is as below:
+The example output is below:
```
taos> show dnodes;
@@ -139,7 +139,7 @@ In the above example, when `show dnodes` is executed the first time, two dnodes
- Once a dnode is dropped, it can't rejoin the cluster. To rejoin, the dnode needs to be deployed again after cleaning up the data directory. Normally, before dropping a dnode, the data belonging to the dnode needs to be migrated to another place.
- Please note that `drop dnode` is different from stopping the `taosd` process. `drop dnode` just removes the dnode from the TDengine cluster. Only after a dnode is dropped can the corresponding `taosd` process be stopped.
- Once a dnode is dropped, other dnodes in the cluster will be notified of the drop and will not accept the request from the dropped dnode.
-- dnodeID is allocated automatically and can't be interfered manually. dnodeID is generated in ascending order without duplication.
+- dnodeID is allocated automatically and can't be manually modified. dnodeID is generated in ascending order without duplication.
:::
@@ -155,7 +155,7 @@ ALTER DNODE <source-dnodeId> BALANCE "VNODE:<vgId>-DNODE:<dest-dnodeId>";
In the above command, `source-dnodeId` is the dnodeId where the vnode currently resides, and `dest-dnodeId` specifies the target dnode. The vgId (vgroup ID) can be shown by `SHOW VGROUPS`.
-Firstly `show vgroups` is executed to show the vgroup distribution.
+First, `show vgroups` is executed to show the vgroup distribution.
```
taos> show vgroups;
@@ -172,7 +172,7 @@ taos> show vgroups;
Query OK, 8 row(s) in set (0.001314s)
```
-It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in node 1, now we want to move vgId 18 from dnode 3 to dnode 1. Execute below command in `taos`
+It can be seen that there are 5 vgroups in dnode 3 and 3 vgroups in dnode 1. Now we want to move vgId 18 from dnode 3 to dnode 1. Execute the below command in `taos`:
```
taos> alter dnode 3 balance "vnode:18-dnode:1";
@@ -207,7 +207,7 @@ It can be seen from above output that vgId 18 has been moved from dnode 3 to dno
:::note
- Manual load balancing can only be performed when the automatic load balancing is disabled, i.e. `balance` is set to 0.
-- Only vnode in normal state, i.e. master or slave, can be moved. vnode can't moved when its in status offline, unsynced or syncing.
+- Only a vnode in normal state, i.e. master or slave, can be moved. A vnode can't be moved when it's in offline, unsynced or syncing status.
- Before moving a vnode, it's necessary to make sure the target dnode has enough resources: CPU, memory and disk.
:::
diff --git a/docs-en/10-cluster/03-ha-and-lb.md b/docs-en/10-cluster/03-ha-and-lb.md
index 53c95be9e9..6e0c386abe 100644
--- a/docs-en/10-cluster/03-ha-and-lb.md
+++ b/docs-en/10-cluster/03-ha-and-lb.md
@@ -7,19 +7,19 @@ title: High Availability and Load Balancing
High availability of vnode and mnode can be achieved through replicas in TDengine.
-The number of vnodes is associated with each DB, there can be multiple DBs in a TDengine cluster. For the purpose of operation, different number of replicas can be configured properly for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, the default value is 1. With single replica, the high availability of the system can't be guaranteed. Whenever one node is down, data service would be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with error "more dnodes are needed". Below SQL statement is used to create a database named as "demo" with 3 replicas.
+The number of vnodes is associated with each DB; there can be multiple DBs in a TDengine cluster. A different number of replicas can be configured for each DB. When creating a database, the parameter `replica` is used to specify the number of replicas, and the default value is 1. With a single replica, the high availability of the system can't be guaranteed: whenever one node is down, the data service will be unavailable. The number of dnodes in the cluster must NOT be lower than the number of replicas set for any DB, otherwise the `create table` operation would fail with the error "more dnodes are needed". The SQL statement below is used to create a database named "demo" with 3 replicas.
```sql
CREATE DATABASE demo replica 3;
```
-The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each group is determined by the number of replicas set for the DB. The vnodes in each vgroups store exactly same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in online state, the vgroup is able to serve data access. Otherwise the vgroup can't handle any data access for reading or inserting data.
+The data in a DB is divided into multiple shards and stored in multiple vgroups. The number of vnodes in each vgroup is determined by the number of replicas set for the DB. The vnodes in each vgroup store exactly the same data. For the purpose of high availability, the vnodes in a vgroup must be located in different dnodes on different hosts. As long as over half of the vnodes in a vgroup are in an online state, the vgroup is able to provide data access. Otherwise the vgroup can't provide data access for reading or inserting data.
There may be data for multiple DBs in a dnode, so once a dnode is down, multiple DBs may be affected. However, it's hard to guarantee that the cluster works properly as long as over half of the dnodes are online, because vnodes are introduced and there may be a complex mapping between vnodes and dnodes.
## High Availability of Mnode
-Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using system parameter `numOfMNodes`, the valid time range is [1,3]. To make sure the data consistency between mnodes, the data replication between mnodes is performed in synchronous way.
+Each TDengine cluster is managed by `mnode`, which is a module of `taosd`. For the high availability of mnode, multiple mnodes can be configured using the system parameter `numOfMNodes`; the valid range is [1,3]. To ensure data consistency between mnodes, the data replication between mnodes is performed in a synchronous way.
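As a sketch, configuring two mnodes in `taos.cfg` might look like this (the value is illustrative):

```c
// number of mnodes in the cluster, valid range [1,3]; 2 or higher enables mnode HA
numOfMnodes 2
```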
There may be multiple dnodes in a cluster, but only one mnode can be started in each dnode. Which of the dnodes will be designated as mnodes is determined automatically by TDengine according to the cluster configuration and system resources. The command `show mnodes` can be executed in TDengine `taos` to show the mnodes in the cluster.
@@ -32,19 +32,19 @@ The end point and role/status (master, slave, unsynced, or offline) of all mnode
For the high availability of mnode, `numOfMnodes` needs to be configured to 2 or a higher value. Because the data consistency between mnodes must be guaranteed, the replica confirmation parameter `quorum` is set to 2 automatically if `numOfMNodes` is set to 2 or higher.
:::note
-If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas. How to configure for them are different and have been described.
+If high availability is important for your system, both vnode and mnode must be configured to have multiple replicas.
:::
## Load Balance
-Load balance will be triggered in 3 cades without manual intervention.
+Load balancing will be triggered in 3 cases without manual intervention.
- When a new dnode joins the cluster, automatic load balancing may be triggered, and some data from some dnodes may be transferred to the new dnode automatically.
- When a dnode is removed from the cluster, the data from this dnode will be transferred to other dnodes automatically.
- When a dnode is too hot, i.e. too much data has been stored in it, automatic load balancing may be triggered to migrate some vnodes from this dnode to other dnodes.
-- :::tip
- Automatic load balancing is controlled by parameter `balance`, 0 means disabled and 1 means enabled.
+:::tip
+Automatic load balancing is controlled by the parameter `balance`: 0 means disabled and 1 means enabled.
:::
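A minimal sketch of the corresponding `taos.cfg` entry:

```c
// automatic load balancing: 0 = disabled, 1 = enabled
balance 1
```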
@@ -54,7 +54,7 @@ When a dnode is offline, it can be detected by the TDengine cluster. There are t
- If the dnode becomes online again before the threshold configured in `offlineThreshold` is reached, it is still in the cluster and data replication is started automatically. The dnode can work properly after the data sync-up is finished.
-- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically. System alert will be generated and automatic load balancing will be triggered too if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not be joined in the cluster automatically, it can only be joined manually by the system operator.
+- If the dnode has been offline over the threshold configured in `offlineThreshold` in `taos.cfg`, the dnode will be removed from the cluster automatically (see the sketch below). A system alert will be generated and automatic load balancing will be triggered if `balance` is set to 1. When the removed dnode is restarted and becomes online, it will not join the cluster automatically; it can only be added back manually by the system operator.
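A minimal sketch of the related `taos.cfg` entry (the value is illustrative; see [Configuration Parameters](/reference/config/) for the exact unit and default):

```c
// how long a dnode may stay offline before being dropped from the cluster
offlineThreshold 864000
```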
:::note
If all the vnodes in a vgroup (or mnodes in the mnode group) are in offline or unsynced status, the master node can only be elected after all the vnodes or mnodes in the group become online and can exchange status; only then is the vgroup (or mnode group) able to provide service.
@@ -63,15 +63,15 @@ If all the vnodes in a vgroup (or mnodes in mnode group) are in offline or unsyn
## Arbitrator
-If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work master node can't be voted. Similar case is also applicable to mnode if the number of mnodes is set to an even number like 2.
+If the number of replicas is set to an even number like 2, when half of the vnodes in a vgroup don't work, a master node can't be elected. A similar case is also applicable to mnode if the number of mnodes is set to an even number like 2.
-To resolve this problem, a new arbitrator component named `tarbitrator`, abbreviated for TDengine Arbitrator, was introduced. Arbitrator simulates a vnode or mnode but it's only responsible for network communication and doesn't handle any actual data access. With Arbitrator, any vgroup or mnode group can be considered as having number of member nodes and master node can be selected.
+To resolve this problem, a new arbitrator component named `tarbitrator` (an abbreviation of TDengine Arbitrator) was introduced. The arbitrator simulates a vnode or mnode but is only responsible for network communication and doesn't handle any actual data access. As long as more than half of the vnodes or mnodes, including the arbitrator, are available, the vnode group or mnode group can provide data insertion or query services normally.
-Normally, it's suggested to configure replica number of each DB or system parameter `numOfMNodes` to an odd number. However, if a user is very sensitive to storage space, replica number of 2 plus arbitrator component can be used to achieve both lower cost of storage space and high availability.
+Normally, it's suggested to configure the number of replicas of each DB, or the system parameter `numOfMNodes`, to an odd number. However, if a user is very sensitive to storage space, 2 replicas plus the arbitrator component can be used to achieve both lower cost of storage space and high availability.
The arbitrator component is installed with the server package. For details about how to install it, please refer to [Install](/operation/pkg-install). The `-p` parameter of `tarbitrator` can be used to specify the port on which it provides service.
-In the configuration file `taos.cfg` of each dnode, parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. arbitrator component will be used automatically if the replica is configured to an even number and will be ignored if the replica is configured to an odd number.
+In the configuration file `taos.cfg` of each dnode, the parameter `arbitrator` needs to be configured to the end point of the `tarbitrator` process. The arbitrator component will be used automatically if the replica is configured to an even number, and will be ignored if the replica is configured to an odd number.
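As a sketch, the entry mirrors the commented sample shown in the deployment example earlier (the host name is hypothetical):

```c
// end point of the tarbitrator process, configured in taos.cfg of every dnode
arbitrator ha.taosdata.com:6042
```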
Arbitrator can be shown by executing a command in TDengine CLI `taos`; its role is shown as "arb".
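Presumably the regular dnode listing is used for this, e.g.:

```sql
SHOW DNODES;
```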
diff --git a/docs-en/10-cluster/index.md b/docs-en/10-cluster/index.md
index a19a54e01d..5a45a2ce7b 100644
--- a/docs-en/10-cluster/index.md
+++ b/docs-en/10-cluster/index.md
@@ -3,7 +3,7 @@ title: Cluster
keywords: ["cluster", "high availability", "load balance", "scale out"]
---
-TDengine has a native distributed design and provides the ability to scale out. A few of nodes can form a TDengine cluster. If you need to get higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
+TDengine has a native distributed design and provides the ability to scale out. A few nodes can form a TDengine cluster. If you need higher processing power, you just need to add more nodes into the cluster. TDengine uses virtual node technology to virtualize a node into multiple virtual nodes to achieve load balancing. At the same time, TDengine can group virtual nodes on different nodes into virtual node groups, and use the replication mechanism to ensure the high availability of the system. The cluster feature of TDengine is completely open source.
This chapter mainly introduces cluster deployment, maintenance, and how to achieve high availability and load balancing.
diff --git a/docs-en/12-taos-sql/08-interval.md b/docs-en/12-taos-sql/08-interval.md
index 5cc3fa8cb4..2044ff4f61 100644
--- a/docs-en/12-taos-sql/08-interval.md
+++ b/docs-en/12-taos-sql/08-interval.md
@@ -10,7 +10,7 @@ Window related clauses are used to divide the data set to be queried into subset
`INTERVAL` clause is used to generate time windows of the same time interval; `SLIDING` is used to specify the time step for which the time window moves forward. The query is performed on one time window each time, and the time window moves forward with time. When defining a continuous query, both the size of the time window and the step of forward sliding time need to be specified. As shown in the figure below, [t0s, t0e], [t1s, t1e], [t2s, t2e] are respectively the time ranges of three time windows on which continuous queries are executed. The time step for which the time window moves forward is marked by `sliding time`. Query, filter and aggregate operations are executed on each time window respectively. When the time step specified by `SLIDING` is the same as the time interval specified by `INTERVAL`, the sliding time window is actually a flip time window.
-
+
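As a concrete sketch, counting the rows of a hypothetical table d1001 in 10-second windows that slide forward every 5 seconds could look like this:

```sql
SELECT COUNT(*) FROM d1001
  WHERE ts >= '2017-07-14 00:00:00' AND ts < '2017-07-14 23:59:59'
  INTERVAL(10s) SLIDING(5s);
```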
`INTERVAL` and `SLIDING` should be used with aggregate functions and selection functions. The SQL statement below is illegal because no aggregate or selection function is used with `INTERVAL`.
@@ -30,7 +30,7 @@ When the time length specified by `SLIDING` is same as that specified by `INTERV
In case of using integer, bool, or string to represent the device status at a moment, the continuous rows with the same status belong to the same status window. Once the status changes, the status window closes. As shown in the following figure, there are two status windows according to status, [2019-04-28 14:22:07, 2019-04-28 14:22:10] and [2019-04-28 14:22:11, 2019-04-28 14:22:12]. Status window is not applicable to STable for now.
-
+
`STATE_WINDOW` is used to specify the column based on which to define status window, for example:
@@ -46,7 +46,7 @@ SELECT COUNT(*), FIRST(ts) FROM temp_tb_1 SESSION(ts, tol_val);
The primary key, i.e. timestamp, is used to determine which session window a row belongs to. If the time interval between two adjacent rows is within the time range specified by `tol_val`, they belong to the same session window; otherwise they belong to two different session windows. As shown in the figure below, if the limit of time interval for the session window is specified as 12 seconds, then the 6 rows in the figure constitute 2 session windows, [2019-04-28 14:22:10, 2019-04-28 14:22:30] and [2019-04-28 14:23:10, 2019-04-28 14:23:30], because the time difference between 2019-04-28 14:22:30 and 2019-04-28 14:23:10 is 40 seconds, which exceeds the time interval limit of 12 seconds.
-
+
If the time interval between two continuous rows is within the time interval specified by `tol_val`, they belong to the same session window; otherwise a new session window is started automatically. Session window is not supported on STable for now.
diff --git a/docs-en/12-taos-sql/index.md b/docs-en/12-taos-sql/index.md
index 611f2bf75e..32850e8c4b 100644
--- a/docs-en/12-taos-sql/index.md
+++ b/docs-en/12-taos-sql/index.md
@@ -3,9 +3,9 @@ title: TDengine SQL
description: "The syntax supported by TDengine SQL "
---
-This section explains the syntax about operating database, table, STable, inserting data, selecting data, functions and some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL.
+This section explains the syntax for operating databases, tables, STables, inserting data, selecting data, functions, and some tips that can be used in TDengine SQL. It would be easier to understand with some fundamental knowledge of SQL.
-TDengine SQL is the major interface for users to write data into or query from TDengine. For users to easily use, syntax similar to standard SQL is provided. However, please be noted that TDengine SQL is not standard SQL. Besides, because TDengine doesn't provide the functionality of deleting time series data, corresponding statements are not provided in TDengine SQL.
+TDengine SQL is the major interface for users to write data into or query from TDengine. For ease of use, syntax similar to standard SQL is provided. However, please note that TDengine SQL is not standard SQL. For instance, TDengine doesn't provide the functionality of deleting time series data, thus corresponding statements are not provided in TDengine SQL.
TDengine SQL doesn't support abbreviations of keywords; for example, `DESCRIBE` can't be abbreviated as `DESC`.
@@ -16,7 +16,7 @@ Syntax Specifications used in this chapter:
- | means one of a few options, excluding | itself.
- … means the item prior to it can be repeated multiple times.
-To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters. Assuming each meter collects 3 data: current, voltage, phase. The data model is as below:
+To better demonstrate the syntax, usage and rules of TAOS SQL, hereinafter it's assumed that there is a data set of meters. Assuming each meter collects 3 data measurements: current, voltage, phase. The data model is shown below:
```sql
taos> DESCRIBE meters;
diff --git a/docs-en/12-taos-sql/timewindow-1.webp b/docs-en/12-taos-sql/timewindow-1.webp
new file mode 100644
index 0000000000..82747558e9
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-1.webp differ
diff --git a/docs-en/12-taos-sql/timewindow-2.webp b/docs-en/12-taos-sql/timewindow-2.webp
new file mode 100644
index 0000000000..8f1314ae34
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-2.webp differ
diff --git a/docs-en/12-taos-sql/timewindow-3.webp b/docs-en/12-taos-sql/timewindow-3.webp
new file mode 100644
index 0000000000..5bd16e68e7
Binary files /dev/null and b/docs-en/12-taos-sql/timewindow-3.webp differ
diff --git a/docs-en/13-operation/11-optimize.md b/docs-en/13-operation/11-optimize.md
deleted file mode 100644
index 7cccfc8b0d..0000000000
--- a/docs-en/13-operation/11-optimize.md
+++ /dev/null
@@ -1,100 +0,0 @@
----
-title: Performance Optimization
----
-
-After a TDengine cluster has been running for long enough time, because of updating data, deleting tables and deleting expired data, there may be fragments in data files and query performance may be impacted. To resolve the problem of fragments, from version 2.1.3.0 a new SQL command `COMPACT` can be used to defragment the data files.
-
-```sql
-COMPACT VNODES IN (vg_id1, vg_id2, ...)
-```
-
-`COMPACT` can be used to defragment one or more vgroups. The defragmentation work will be put in task queue for scheduling execution by TDengine. `SHOW VGROUPS` command can be used to get the vgroup ids to be used in `COMPACT` command. There is a column `compacting` in the output of `SHOW GROUPS` to indicate the compacting status of the vgroup: 2 means the vgroup is waiting in task queue for compacting, 1 means compacting is in progress, and 0 means the vgroup has nothing to do with compacting.
-
-Please be noted that a lot of disk I/O is required for defragementation operation, during which the performance may be impacted significantly for data insertion and query, data insertion may be blocked shortly in extreme cases.
-
-## Optimize Storage Parameters
-
-The data in different use cases may have different characteristics, such as the days to keep, number of replicas, collection interval, record size, number of collection points, compression or not, etc. To achieve best efficiency in storage, the parameters in below table can be used, all of them can be either configured in `taos.cfg` as default configuration or in the command `create database`. For detailed definition of these parameters please refer to [Configuration Parameters](/reference/config/).
-
-| # | Parameter | Unit | Definition | **Value Range** | **Default Value** |
-| --- | --------- | ---- | ------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------- | ----------------- |
-| 1 | days | Day | The time range of the data stored in a single data file | 1-3650 | 10 |
-| 2 | keep | Day | The number of days the data is kept in the database | 1-36500 | 3650 |
-| 3 | cache | MB | The size of each memory block | 1-128 | 16 |
-| 4 | blocks | None | The number of memory blocks used by each vnode | 3-10000 | 6 |
-| 5 | quorum | None | The number of required confirmation in case of multiple replicas | 1-2 | 1 |
-| 6 | minRows | None | The minimum number of rows in a data file | 10-1000 | 100 |
-| 7 | maxRows | None | The maximum number of rows in a daa file | 200-10000 | 4096 |
-| 8 | comp | None | Whether to compress the data | 0:uncompressed; 1: One Phase compression; 2: Two Phase compression | 2 |
-| 9 | walLevel | None | wal sync level (named as "wal" in create database ) | 1:wal enabled without fsync; 2:wal enabled with fsync | 1 |
-| 10 | fsync | ms | The time to wait for invoking fsync when walLevel is set to 2; 0 means no wait | 3000 |
-| 11 | replica | none | The number of replications | 1-3 | 1 |
-| 12 | precision | none | Time precision | ms: millisecond; us: microsecond;ns: nanosecond | ms |
-| 13 | update | none | Whether to allow updating data | 0: not allowed; 1: a row must be updated as whole; 2: a part of columns in a row can be updated | 0 |
-| 14 | cacheLast | none | Whether the latest data of a table is cached in memory | 0: not cached; 1: the last row is cached; 2: the latest non-NULL value of each column is cached | 0 |
-
-For a specific use case, there may be multiple kinds of data with different characteristics, it's best to put data with same characteristics in same database. So there may be multiple databases in a system while each database can be configured with different storage parameters to achieve best performance. The above parameters can be used when creating a database to override the default setting in configuration file.
-
-```sql
- CREATE DATABASE demo DAYS 10 CACHE 32 BLOCKS 8 REPLICA 3 UPDATE 1;
-```
-
-The above SQL statement creates a database named as `demo`, in which each data file stores data across 10 days, the size of each memory block is 32 MB and each vnode is allocated with 8 blocks, the replica is set to 3, update operation is allowed, and all other parameters not specified in the command follow the default configuration in `taos.cfg`.
-
-Once a database is created, only some parameters can be changed and be effective immediately while others are can't.
-
-| **Parameter** | **Alterable** | **Value Range** | **Syntax** |
-| ------------- | ------------- | ---------------- | -------------------------------------- |
-| name | | | |
-| create time | | | |
-| ntables | | | |
-| vgroups | | | |
-| replica | **YES** | 1-3 | ALTER DATABASE REPLICA _n_ |
-| quorum | **YES** | 1-2 | ALTER DATABASE QUORUM _n_ |
-| days | | | |
-| keep | **YES** | days-365000 | ALTER DATABASE KEEP _n_ |
-| cache | | | |
-| blocks | **YES** | 3-1000 | ALTER DATABASE BLOCKS _n_ |
-| minrows | | | |
-| maxrows | | | |
-| wal | | | |
-| fsync | | | |
-| comp | **YES** | 0-2 | ALTER DATABASE COMP _n_ |
-| precision | | | |
-| status | | | |
-| update | | | |
-| cachelast | **YES** | 0 \| 1 \| 2 \| 3 | ALTER DATABASE CACHELAST _n_ |
-
-**Explanation:** Prior to version 2.1.3.0, `taosd` server process needs to be restarted for these parameters to take in effect if they are changed using `ALTER DATABASE`.
-
-When trying to join a new dnode into a running TDengine cluster, all the parameters related to cluster in the new dnode configuration must be consistent with the cluster, otherwise it can't join the cluster. The parameters that are checked when joining a dnode are as below. For detailed definition of these parameters please refer to [Configuration Parameters](/reference/config/).
-
-- numOfMnodes
-- mnodeEqualVnodeNum
-- offlineThreshold
-- statusInterval
-- maxTablesPerVnode
-- maxVgroupsPerDb
-- arbitrator
-- timezone
-- balance
-- flowctrl
-- slaveQuery
-- adjustMaster
-
-For the convenience of debugging, the log setting of a dnode can be changed temporarily. The temporary change will be lost once the server is restarted.
-
-```sql
-ALTER DNODE
-```
-
-- dnode_id: from output of "SHOW DNODES"
-- config: the parameter to be changed, as below
- - resetlog: close the old log file and create the new on
- - debugFlag: 131 (INFO/ERROR/WARNING), 135 (DEBUG), 143 (TRACE)
-
-For example
-
-```
-alter dnode 1 debugFlag 135;
-```
diff --git a/docs-en/14-reference/03-connector/03-connector.mdx b/docs-en/14-reference/03-connector/03-connector.mdx
index 6be914bdb4..38eba73d09 100644
--- a/docs-en/14-reference/03-connector/03-connector.mdx
+++ b/docs-en/14-reference/03-connector/03-connector.mdx
@@ -4,7 +4,7 @@ title: Connector
TDengine provides a rich set of APIs (application development interfaces). To facilitate users to develop their applications quickly, TDengine supports connectors for multiple programming languages, including official connectors for C/C++, Java, Python, Go, Node.js, C#, and Rust. These connectors support connecting to TDengine clusters using both native interfaces (taosc) and REST interfaces (not supported in a few languages yet). Community developers have also contributed several unofficial connectors, such as the ADO.NET connector, the Lua connector, and the PHP connector.
-
+
## Supported platforms
diff --git a/docs-en/14-reference/03-connector/connector.webp b/docs-en/14-reference/03-connector/connector.webp
new file mode 100644
index 0000000000..040cf5c26c
Binary files /dev/null and b/docs-en/14-reference/03-connector/connector.webp differ
diff --git a/docs-en/14-reference/03-connector/java.mdx b/docs-en/14-reference/03-connector/java.mdx
index 328907c4d7..0a1960be51 100644
--- a/docs-en/14-reference/03-connector/java.mdx
+++ b/docs-en/14-reference/03-connector/java.mdx
@@ -11,7 +11,7 @@ import TabItem from '@theme/TabItem';
'taos-jdbcdriver' is TDengine's official Java language connector, which allows Java developers to develop applications that access the TDengine database. 'taos-jdbcdriver' implements the interface of the JDBC driver standard and provides two forms of connectors. One is to connect to a TDengine instance natively through the TDengine client driver (taosc), which supports functions including data writing, querying, subscription, schemaless writing, and bind interface. The other is to connect to a TDengine instance through the REST interface provided by taosAdapter (2.4.0.0 and later). REST connections have slight differences from native connections in the set of features implemented.
-
+
The preceding diagram shows two ways for a Java app to access TDengine via connector:
diff --git a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png
deleted file mode 100644
index 7541aaf98a..0000000000
Binary files a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.png and /dev/null differ
diff --git a/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp
new file mode 100644
index 0000000000..37cf6d90a5
Binary files /dev/null and b/docs-en/14-reference/03-connector/tdengine-jdbc-connector.webp differ
diff --git a/docs-en/14-reference/04-taosadapter.md b/docs-en/14-reference/04-taosadapter.md
index 85fd2923b0..de42e8a883 100644
--- a/docs-en/14-reference/04-taosadapter.md
+++ b/docs-en/14-reference/04-taosadapter.md
@@ -24,7 +24,7 @@ taosAdapter provides the following features.
## taosAdapter architecture diagram
-
+
## taosAdapter Deployment Method
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png
deleted file mode 100644
index 4708f836fe..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp
new file mode 100644
index 0000000000..a78e18028a
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-1-cluster-status.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png
deleted file mode 100644
index f2684e6eed..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp
new file mode 100644
index 0000000000..b152418d09
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-2-dnodes.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png
deleted file mode 100644
index 74686691e4..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp
new file mode 100644
index 0000000000..f58f48b7f1
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-3-mnodes.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png
deleted file mode 100644
index 2796421556..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp
new file mode 100644
index 0000000000..00afcce013
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-4-requests.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png
deleted file mode 100644
index b0d3abbf21..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp
new file mode 100644
index 0000000000..567e5694f9
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-5-database.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png
deleted file mode 100644
index 2b54cbeb83..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp
new file mode 100644
index 0000000000..cc8a912810
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-6-dnode-usage.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png
deleted file mode 100644
index eb3848657f..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp
new file mode 100644
index 0000000000..651b716bc5
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-7-login-history.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png
deleted file mode 100644
index d94b2e02ac..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp
new file mode 100644
index 0000000000..8666193f59
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-8-taosadapter.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png
deleted file mode 100644
index 654df29345..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp
new file mode 100644
index 0000000000..7f38a76a2b
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/TDinsight-full.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png
deleted file mode 100644
index e3afa22c03..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp
new file mode 100644
index 0000000000..3d7fe932a2
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-manager-status.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png
deleted file mode 100644
index 198bf37141..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp
new file mode 100644
index 0000000000..517123954e
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-notification-channel.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png
deleted file mode 100644
index ace3aa3c2f..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp
new file mode 100644
index 0000000000..6666296ac1
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-query-demo.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png
deleted file mode 100644
index 7082e49f6b..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp
new file mode 100644
index 0000000000..6f74bc3a47
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-rule-condition-notifications.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png
deleted file mode 100644
index ffd4911b53..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp
new file mode 100644
index 0000000000..acda3b24a6
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/alert-rule-test.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png
deleted file mode 100644
index 802c7366f9..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp
new file mode 100644
index 0000000000..903e236e2a
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-button.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png
deleted file mode 100644
index 019ec921b6..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp
new file mode 100644
index 0000000000..14fcfe9d18
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-tdengine.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png
deleted file mode 100644
index 3963abb4ea..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp
new file mode 100644
index 0000000000..00b50cc619
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource-test.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png
deleted file mode 100644
index 837100464b..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp
new file mode 100644
index 0000000000..06d0ff6ed5
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-add-datasource.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png
deleted file mode 100644
index 98223df254..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp
new file mode 100644
index 0000000000..e2ec052b91
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-display.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png
deleted file mode 100644
index 07aba348f0..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp
new file mode 100644
index 0000000000..665c035f97
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-dashboard-import-options.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png
deleted file mode 100644
index 7e28939ead..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp
new file mode 100644
index 0000000000..7dc42eeba9
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/howto-import-dashboard.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png
deleted file mode 100644
index 981f640b14..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp
new file mode 100644
index 0000000000..7ef081900f
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-15167.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png
deleted file mode 100644
index 94ef4fa5fe..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp
new file mode 100644
index 0000000000..602452fc4c
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-dashboard-for-tdengine.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png
deleted file mode 100644
index 670cacc377..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp
new file mode 100644
index 0000000000..35a3ebba78
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import-via-grafana-dot-com.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png
deleted file mode 100644
index d74cd36c96..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp
new file mode 100644
index 0000000000..fb7958f1b9
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/import_dashboard.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png
deleted file mode 100644
index 0101e7430c..0000000000
Binary files a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.png and /dev/null differ
diff --git a/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp
new file mode 100644
index 0000000000..49f1d88f4a
Binary files /dev/null and b/docs-en/14-reference/07-tdinsight/assets/tdengine_dashboard.webp differ
diff --git a/docs-en/14-reference/07-tdinsight/index.md b/docs-en/14-reference/07-tdinsight/index.md
index 4850cecb33..dc337bf9ff 100644
--- a/docs-en/14-reference/07-tdinsight/index.md
+++ b/docs-en/14-reference/07-tdinsight/index.md
@@ -233,33 +233,33 @@ The default username/password is `admin`. Grafana will require a password change
Point to the **Configurations** -> **Data Sources** menu, and click the **Add data source** button.
-
+
Search for and select **TDengine**.
-
+
Configure the TDengine datasource.
-
+
Save and test. It will report 'TDengine Data source is working' under normal circumstances.
-
+
### Importing dashboards
Point to **+** / **Create** - **import** (or `/dashboard/import` url).
-
+
Type the dashboard ID `15167` in the **Import via grafana.com** location and **Load**.
-
+
Once the import is complete, the full page view of TDinsight is shown below.
-
+
## TDinsight dashboard details
@@ -269,7 +269,7 @@ Details of the metrics are as follows.
### Cluster Status
-
+
This section contains the current information and status of the cluster; the alert information is also here (from left to right, top to bottom).
@@ -289,7 +289,7 @@ This section contains the current information and status of the cluster, the ale
### DNodes Status
-
+
- **DNodes Status**: simple table view of `show dnodes`.
- **DNodes Lifetime**: the time elapsed since the dnode was created.
@@ -298,14 +298,14 @@ This section contains the current information and status of the cluster, the ale
### MNode Overview
-
+
1. **MNodes Status**: a simple table view of `show mnodes`.
2. **MNodes Number**: similar to `DNodes Number`, the change in the number of MNodes.
### Request
-
+
1. **Requests Rate (Inserts per Second)**: average number of inserts per second.
2. **Requests (Selects)**: number of query requests and their rate of change (count per second).
@@ -313,7 +313,7 @@ This section contains the current information and status of the cluster, the ale
### Database
-
+
Database usage, repeated for each value of the variable `$database`, i.e. multiple rows per database.
@@ -325,7 +325,7 @@ Database usage, repeated for each value of the variable `$database` i.e. multipl
### DNode Resource Usage
-
+
Data node resource usage display, with rows repeated for each value of the variable `$fqdn`, i.e. each data node. Includes:
@@ -346,13 +346,13 @@ Data node resource usage display with repeated multiple rows for the variable `$
### Login History
-
+
Currently, only the number of logins per minute is reported.
### Monitoring taosAdapter
-
+
Supports monitoring taosAdapter request statistics and status details. Includes:
diff --git a/docs-en/14-reference/taosAdapter-architecture.png b/docs-en/14-reference/taosAdapter-architecture.png
deleted file mode 100644
index 08a9018553..0000000000
Binary files a/docs-en/14-reference/taosAdapter-architecture.png and /dev/null differ
diff --git a/docs-en/14-reference/taosAdapter-architecture.webp b/docs-en/14-reference/taosAdapter-architecture.webp
new file mode 100644
index 0000000000..a4162b0a03
Binary files /dev/null and b/docs-en/14-reference/taosAdapter-architecture.webp differ
diff --git a/docs-en/20-third-party/01-grafana.mdx b/docs-en/20-third-party/01-grafana.mdx
index c1bfd4a96a..7239710e0a 100644
--- a/docs-en/20-third-party/01-grafana.mdx
+++ b/docs-en/20-third-party/01-grafana.mdx
@@ -62,15 +62,15 @@ GF_PLUGINS_ALLOW_LOADING_UNSIGNED_PLUGINS=tdengine-datasource
Users can log in to the Grafana server (username/password: admin/admin) directly through the URL `http://localhost:3000` and add a datasource through `Configuration -> Data Sources` on the left side, as shown in the following figure.
-
+
Click `Add data source` to enter the Add data source page, and enter TDengine in the query box to add it, as shown in the following figure.
-
+
Enter the datasource configuration page, and follow the default prompts to modify the corresponding configuration.
-
+
- Host: IP address of the server where the components of the TDengine cluster provide the REST service (offered by taosd before 2.4 and by taosAdapter since 2.4) and the port number of the TDengine REST service (6041); by default use `http://localhost:6041`.
- User: TDengine user name.
@@ -78,13 +78,13 @@ Enter the datasource configuration page, and follow the default prompts to modif
Click `Save & Test` to test. The following is shown on success.
-
+
### Create Dashboard
Go back to the main interface to create a Dashboard, and click Add Query to enter the panel query page:
-
+
As shown above, select the `TDengine` data source in the `Query`, and enter the corresponding SQL in the query box below.
@@ -94,7 +94,7 @@ As shown above, select the `TDengine` data source in the `Query` and enter the c
Follow the default prompt to query the average system memory usage for the specified interval on the server where the current TDengine deployment is located as follows.
-
+
> For more information on how to use Grafana to create the appropriate monitoring interface and for more details on using Grafana, refer to the official Grafana [documentation](https://grafana.com/docs/).
diff --git a/docs-en/20-third-party/09-emq-broker.md b/docs-en/20-third-party/09-emq-broker.md
index 13562ba7f7..560c6463b5 100644
--- a/docs-en/20-third-party/09-emq-broker.md
+++ b/docs-en/20-third-party/09-emq-broker.md
@@ -44,25 +44,25 @@ Since the configuration interface of EMQX differs from version to version, here
Use your browser to open the URL `http://IP:18083` and log in to the EMQX Dashboard. After initial installation, the username is `admin` and the password is `public`.
-
+
### Creating Rule
Select "Rule" in the "Rule Engine" on the left and click the "Create" button: !
-
+
### Edit SQL fields
-
+
### Add "action handler"
-
+
### Add "Resource"
-
+
Select "Data to Web Service" and click the "New Resource" button.
@@ -70,13 +70,13 @@ Select "Data to Web Service" and click the "New Resource" button.
Select "Data to Web Service" and fill in the request URL as the address and port of the server running taosAdapter (default is 6041). Leave the other properties at their default values.
-
+
### Edit "action"
Edit the resource configuration to add the key/value pair for Authorization. Please refer to the [TDengine REST API documentation](https://docs.taosdata.com/reference/rest-api/) for details on authorization. Enter the rule engine replacement template in the message body.
-
+
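The Authorization value is a standard HTTP Basic credential. Assuming the default `root`/`taosdata` account, it can be generated and verified as follows:

```bash
# Generate the Basic auth token for the default account (root:taosdata).
echo -n 'root:taosdata' | base64   # prints: cm9vdDp0YW9zZGF0YQ==

# Verify that the header works against the REST endpoint EMQX will call.
curl -H 'Authorization: Basic cm9vdDp0YW9zZGF0YQ==' \
  -d 'show databases;' http://localhost:6041/rest/sql
```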
## Compose program to mock data
@@ -163,7 +163,7 @@ Edit the resource configuration to add the key/value pairing for Authorization.
Note: `CLIENT_NUM` in the code can be set to a smaller value at the beginning of the test, to avoid overwhelming hardware that cannot handle a larger number of concurrent clients.
-
+
## Execute tests to simulate sending MQTT data
@@ -172,19 +172,19 @@ npm install mqtt mockjs --save --registry=https://registry.npm.taobao.org
node mock.js
```
-
+
## Verify that EMQX is receiving data
Refresh the EMQX Dashboard Rule Engine interface to see how many records were received correctly:
-
+
## Verify that data is written to TDengine
Use the TDengine CLI program to log in and query the appropriate databases and tables to verify that the data is being written to TDengine correctly:
-
+
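A minimal check from the TDengine CLI might look like the following; the database and table names are placeholders, so use whichever names your rule-engine template writes to:

```sql
-- Placeholder names: replace test and sensor_data with the database and
-- (super) table targeted by your EMQX action's message body template.
SHOW DATABASES;
USE test;
SELECT COUNT(*) FROM sensor_data;
```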
Please refer to the [TDengine official documentation](https://docs.taosdata.com/) for more details on how to use TDengine.
Please refer to the [EMQX official documentation](https://www.emqx.io/docs/en/v4.4/rule/rule-engine.html) for details on how to use EMQX.
diff --git a/docs-en/20-third-party/11-kafka.md b/docs-en/20-third-party/11-kafka.md
index b9c7a3814a..5aee6e044d 100644
--- a/docs-en/20-third-party/11-kafka.md
+++ b/docs-en/20-third-party/11-kafka.md
@@ -9,11 +9,11 @@ TDengine Kafka Connector contains two plugins: TDengine Source Connector and TDe
Kafka Connect is a component of Apache Kafka that enables other systems, such as databases, cloud services, and file systems, to connect to Kafka easily. Data can flow from other software to Kafka via Kafka Connect, and from Kafka to other systems via Kafka Connect. Plugins that read data from other software are called Source Connectors, and plugins that write data to other software are called Sink Connectors. Neither the Source Connector nor the Sink Connector connects directly to the Kafka Broker: the Source Connector transfers data to Kafka Connect, and the Sink Connector receives data from Kafka Connect.
-
+
The TDengine Source Connector is used to read data from TDengine in real time and send it to Kafka Connect. Users can use the TDengine Sink Connector to receive data from Kafka Connect and write it to TDengine.
-
+
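As an illustration only, a sink connector is typically described by a properties file that Kafka Connect loads. The sketch below uses standard Kafka Connect keys; the `connector.class` value and the `connection.*` keys are assumptions to be checked against the plugin's documentation:

```bash
# Hypothetical sketch of a TDengine sink connector config; verify every
# property name against the TDengine Kafka Connector documentation.
cat > tdengine-sink.properties <<'EOF'
name=TDengineSinkConnector
connector.class=com.taosdata.kafka.connect.sink.TDengineSinkConnector
tasks.max=1
topics=meters
connection.url=jdbc:TAOS://127.0.0.1:6030
connection.user=root
connection.password=taosdata
EOF
```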
## What is Confluent?
@@ -26,7 +26,7 @@ Confluent adds many extensions to Kafka. include:
5. GUI for managing and monitoring Kafka - Confluent Control Center
Some of these extensions are available in the community version of Confluent; others are only available in the enterprise version.
-
+
Confluent Enterprise Edition provides the `confluent` command-line tool to manage various components.
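For example, with a local development installation, all components can be started with a single command (subcommands vary across Confluent CLI versions):

```bash
# Start all locally installed Confluent Platform services
# (ZooKeeper, Kafka, Schema Registry, Connect, Control Center, ...).
confluent local services start
```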
diff --git a/docs-en/20-third-party/emqx/add-action-handler.png b/docs-en/20-third-party/emqx/add-action-handler.png
deleted file mode 100644
index 97a1f933ec..0000000000
Binary files a/docs-en/20-third-party/emqx/add-action-handler.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/add-action-handler.webp b/docs-en/20-third-party/emqx/add-action-handler.webp
new file mode 100644
index 0000000000..4a8d105f71
Binary files /dev/null and b/docs-en/20-third-party/emqx/add-action-handler.webp differ
diff --git a/docs-en/20-third-party/emqx/check-result-in-taos.png b/docs-en/20-third-party/emqx/check-result-in-taos.png
deleted file mode 100644
index c17a5c1ea2..0000000000
Binary files a/docs-en/20-third-party/emqx/check-result-in-taos.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/check-result-in-taos.webp b/docs-en/20-third-party/emqx/check-result-in-taos.webp
new file mode 100644
index 0000000000..8fa040a861
Binary files /dev/null and b/docs-en/20-third-party/emqx/check-result-in-taos.webp differ
diff --git a/docs-en/20-third-party/emqx/check-rule-matched.png b/docs-en/20-third-party/emqx/check-rule-matched.png
deleted file mode 100644
index 9e9a466946..0000000000
Binary files a/docs-en/20-third-party/emqx/check-rule-matched.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/check-rule-matched.webp b/docs-en/20-third-party/emqx/check-rule-matched.webp
new file mode 100644
index 0000000000..e5a6140357
Binary files /dev/null and b/docs-en/20-third-party/emqx/check-rule-matched.webp differ
diff --git a/docs-en/20-third-party/emqx/client-num.png b/docs-en/20-third-party/emqx/client-num.png
deleted file mode 100644
index fff48cbf3b..0000000000
Binary files a/docs-en/20-third-party/emqx/client-num.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/client-num.webp b/docs-en/20-third-party/emqx/client-num.webp
new file mode 100644
index 0000000000..a151b18484
Binary files /dev/null and b/docs-en/20-third-party/emqx/client-num.webp differ
diff --git a/docs-en/20-third-party/emqx/create-resource.png b/docs-en/20-third-party/emqx/create-resource.png
deleted file mode 100644
index 58da4c391a..0000000000
Binary files a/docs-en/20-third-party/emqx/create-resource.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/create-resource.webp b/docs-en/20-third-party/emqx/create-resource.webp
new file mode 100644
index 0000000000..bf9cccbe49
Binary files /dev/null and b/docs-en/20-third-party/emqx/create-resource.webp differ
diff --git a/docs-en/20-third-party/emqx/create-rule.png b/docs-en/20-third-party/emqx/create-rule.png
deleted file mode 100644
index 73b0b6ee3e..0000000000
Binary files a/docs-en/20-third-party/emqx/create-rule.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/create-rule.webp b/docs-en/20-third-party/emqx/create-rule.webp
new file mode 100644
index 0000000000..13e8fc83d4
Binary files /dev/null and b/docs-en/20-third-party/emqx/create-rule.webp differ
diff --git a/docs-en/20-third-party/emqx/edit-action.png b/docs-en/20-third-party/emqx/edit-action.png
deleted file mode 100644
index 2a43ee369a..0000000000
Binary files a/docs-en/20-third-party/emqx/edit-action.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/edit-action.webp b/docs-en/20-third-party/emqx/edit-action.webp
new file mode 100644
index 0000000000..7f6d2e36a8
Binary files /dev/null and b/docs-en/20-third-party/emqx/edit-action.webp differ
diff --git a/docs-en/20-third-party/emqx/edit-resource.png b/docs-en/20-third-party/emqx/edit-resource.png
deleted file mode 100644
index 0a0b356004..0000000000
Binary files a/docs-en/20-third-party/emqx/edit-resource.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/edit-resource.webp b/docs-en/20-third-party/emqx/edit-resource.webp
new file mode 100644
index 0000000000..fd5d278fab
Binary files /dev/null and b/docs-en/20-third-party/emqx/edit-resource.webp differ
diff --git a/docs-en/20-third-party/emqx/login-dashboard.png b/docs-en/20-third-party/emqx/login-dashboard.png
deleted file mode 100644
index d6c5035c98..0000000000
Binary files a/docs-en/20-third-party/emqx/login-dashboard.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/login-dashboard.webp b/docs-en/20-third-party/emqx/login-dashboard.webp
new file mode 100644
index 0000000000..f84cee668f
Binary files /dev/null and b/docs-en/20-third-party/emqx/login-dashboard.webp differ
diff --git a/docs-en/20-third-party/emqx/rule-engine.png b/docs-en/20-third-party/emqx/rule-engine.png
deleted file mode 100644
index db110a837b..0000000000
Binary files a/docs-en/20-third-party/emqx/rule-engine.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/rule-engine.webp b/docs-en/20-third-party/emqx/rule-engine.webp
new file mode 100644
index 0000000000..c1711c8cc7
Binary files /dev/null and b/docs-en/20-third-party/emqx/rule-engine.webp differ
diff --git a/docs-en/20-third-party/emqx/rule-header-key-value.png b/docs-en/20-third-party/emqx/rule-header-key-value.png
deleted file mode 100644
index b81b9a9684..0000000000
Binary files a/docs-en/20-third-party/emqx/rule-header-key-value.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/rule-header-key-value.webp b/docs-en/20-third-party/emqx/rule-header-key-value.webp
new file mode 100644
index 0000000000..e645b3822d
Binary files /dev/null and b/docs-en/20-third-party/emqx/rule-header-key-value.webp differ
diff --git a/docs-en/20-third-party/emqx/run-mock.png b/docs-en/20-third-party/emqx/run-mock.png
deleted file mode 100644
index 0da2581857..0000000000
Binary files a/docs-en/20-third-party/emqx/run-mock.png and /dev/null differ
diff --git a/docs-en/20-third-party/emqx/run-mock.webp b/docs-en/20-third-party/emqx/run-mock.webp
new file mode 100644
index 0000000000..ed33f1666d
Binary files /dev/null and b/docs-en/20-third-party/emqx/run-mock.webp differ
diff --git a/docs-en/20-third-party/grafana/add_datasource1.jpg b/docs-en/20-third-party/grafana/add_datasource1.jpg
deleted file mode 100644
index 1f0f5110f3..0000000000
Binary files a/docs-en/20-third-party/grafana/add_datasource1.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/add_datasource1.webp b/docs-en/20-third-party/grafana/add_datasource1.webp
new file mode 100644
index 0000000000..211edc4457
Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource1.webp differ
diff --git a/docs-en/20-third-party/grafana/add_datasource2.jpg b/docs-en/20-third-party/grafana/add_datasource2.jpg
deleted file mode 100644
index fa7a83e00e..0000000000
Binary files a/docs-en/20-third-party/grafana/add_datasource2.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/add_datasource2.webp b/docs-en/20-third-party/grafana/add_datasource2.webp
new file mode 100644
index 0000000000..8ab547231f
Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource2.webp differ
diff --git a/docs-en/20-third-party/grafana/add_datasource3.jpg b/docs-en/20-third-party/grafana/add_datasource3.jpg
deleted file mode 100644
index fc850ad08f..0000000000
Binary files a/docs-en/20-third-party/grafana/add_datasource3.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/add_datasource3.webp b/docs-en/20-third-party/grafana/add_datasource3.webp
new file mode 100644
index 0000000000..d8a733360a
Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource3.webp differ
diff --git a/docs-en/20-third-party/grafana/add_datasource4.jpg b/docs-en/20-third-party/grafana/add_datasource4.jpg
deleted file mode 100644
index 3ba73e50d4..0000000000
Binary files a/docs-en/20-third-party/grafana/add_datasource4.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/add_datasource4.webp b/docs-en/20-third-party/grafana/add_datasource4.webp
new file mode 100644
index 0000000000..b1e0fc6e2b
Binary files /dev/null and b/docs-en/20-third-party/grafana/add_datasource4.webp differ
diff --git a/docs-en/20-third-party/grafana/create_dashboard1.jpg b/docs-en/20-third-party/grafana/create_dashboard1.jpg
deleted file mode 100644
index 3b83c3a171..0000000000
Binary files a/docs-en/20-third-party/grafana/create_dashboard1.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/create_dashboard1.webp b/docs-en/20-third-party/grafana/create_dashboard1.webp
new file mode 100644
index 0000000000..55eb388833
Binary files /dev/null and b/docs-en/20-third-party/grafana/create_dashboard1.webp differ
diff --git a/docs-en/20-third-party/grafana/create_dashboard2.jpg b/docs-en/20-third-party/grafana/create_dashboard2.jpg
deleted file mode 100644
index fe5d768ac5..0000000000
Binary files a/docs-en/20-third-party/grafana/create_dashboard2.jpg and /dev/null differ
diff --git a/docs-en/20-third-party/grafana/create_dashboard2.webp b/docs-en/20-third-party/grafana/create_dashboard2.webp
new file mode 100644
index 0000000000..bb40e40718
Binary files /dev/null and b/docs-en/20-third-party/grafana/create_dashboard2.webp differ
diff --git a/docs-en/20-third-party/kafka/Kafka_Connect.png b/docs-en/20-third-party/kafka/Kafka_Connect.png
deleted file mode 100644
index f3dc02ea2a..0000000000
Binary files a/docs-en/20-third-party/kafka/Kafka_Connect.png and /dev/null differ
diff --git a/docs-en/20-third-party/kafka/Kafka_Connect.webp b/docs-en/20-third-party/kafka/Kafka_Connect.webp
new file mode 100644
index 0000000000..8f2000a749
Binary files /dev/null and b/docs-en/20-third-party/kafka/Kafka_Connect.webp differ
diff --git a/docs-en/20-third-party/kafka/confluentPlatform.png b/docs-en/20-third-party/kafka/confluentPlatform.png
deleted file mode 100644
index f8e69f2c7f..0000000000
Binary files a/docs-en/20-third-party/kafka/confluentPlatform.png and /dev/null differ
diff --git a/docs-en/20-third-party/kafka/confluentPlatform.webp b/docs-en/20-third-party/kafka/confluentPlatform.webp
new file mode 100644
index 0000000000..ff03d4e51a
Binary files /dev/null and b/docs-en/20-third-party/kafka/confluentPlatform.webp differ
diff --git a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png
deleted file mode 100644
index 26d8a866d7..0000000000
Binary files a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.png and /dev/null differ
diff --git a/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp
new file mode 100644
index 0000000000..120d534ec1
Binary files /dev/null and b/docs-en/20-third-party/kafka/streaming-integration-with-kafka-connect.webp differ
diff --git a/docs-en/21-tdinternal/01-arch.md b/docs-en/21-tdinternal/01-arch.md
index 9607c9b387..2c430908e4 100644
--- a/docs-en/21-tdinternal/01-arch.md
+++ b/docs-en/21-tdinternal/01-arch.md
@@ -11,7 +11,7 @@ The design of TDengine is based on the assumption that any hardware or software
The logical structure of the TDengine distributed architecture is shown in the following diagram:
-
+
Figure 1: TDengine architecture diagram
A complete TDengine system runs on one or more physical nodes. Logically, it includes data nodes (dnode), the TDengine client driver (TAOSC), and applications (app). There are one or more data nodes in the system, which form a cluster. The application interacts with the TDengine cluster through TAOSC's API. The following is a brief introduction to each logical unit.
@@ -54,7 +54,7 @@ A complete TDengine system runs on one or more physical nodes. Logically, it inc
To explain the relationships among vnode, mnode, TAOSC, and application, and their respective roles, a typical data writing process is analyzed below.
-
+
Figure 2: Typical process of TDengine
1. Application initiates a request to insert data through JDBC, ODBC, or other APIs.
@@ -123,7 +123,7 @@ If a database has N replicas, thus a virtual node group has N virtual nodes, but
The master vnode uses the following writing process:
-
+
Figure 3: TDengine Master writing process
1. The master vnode receives the application's data insertion request, verifies it, and moves to the next step;
@@ -137,7 +137,7 @@ Master Vnode uses a writing process as follows:
For a slave vnode, the write process is as follows:
-
+
Figure 4: TDengine Slave Writing Process
1. The slave vnode receives a data insertion request forwarded by the master vnode;
@@ -267,7 +267,7 @@ For the data collected by device D1001, the number of records per hour is counte
TDengine creates a separate table for each data collection point, but in practical applications it is often necessary to aggregate data from different collection points. To perform such aggregations efficiently, TDengine introduces the concept of the STable (super table). An STable represents a specific type of data collection point: it is a set of tables that share an identical schema, where each table carries its own static tags. A table can have multiple tags, and tags can be added, deleted, or modified at any time. By specifying tag filters, applications can aggregate or run statistics over all or a subset of the tables under an STable, which greatly simplifies application development. The process is shown in the following figure:
-
+
Figure 5: Diagram of multi-table aggregation query
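As an illustration, a tag-filtered aggregation over a hypothetical `meters` STable with a `location` tag could look like this:

```sql
-- Aggregate across every table of the STable whose location tag matches;
-- TAOSC obtains the matching tables from the management node, fans the
-- query out to the relevant vnodes, and merges the partial results.
SELECT AVG(current), MAX(voltage)
FROM meters
WHERE location = 'California.SanFrancisco'
INTERVAL(1h);
```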