Merge branch '2.0' of github.com:taosdata/TDengine into szhou/replace-function

shenglian zhou 2023-04-06 11:19:10 +08:00
commit 47719b1028
203 changed files with 2056 additions and 1102 deletions

View File

@ -121,7 +121,7 @@ ELSE ()
MESSAGE(STATUS "Compile with Address Sanitizer!")
ELSE ()
SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k")
ENDIF ()
# disable all assert

View File

@ -2,7 +2,7 @@
IF (DEFINED VERNUMBER)
SET(TD_VER_NUMBER ${VERNUMBER})
ELSE ()
SET(TD_VER_NUMBER "3.0.3.1")
SET(TD_VER_NUMBER "3.0.3.2")
ENDIF ()
IF (DEFINED VERCOMPATIBLE)

View File

@ -2,7 +2,7 @@
# taosadapter
ExternalProject_Add(taosadapter
GIT_REPOSITORY https://github.com/taosdata/taosadapter.git
GIT_TAG d8059ff
GIT_TAG cb1e89c
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE

View File

@ -2,7 +2,7 @@
# taos-tools
ExternalProject_Add(taos-tools
GIT_REPOSITORY https://github.com/taosdata/taos-tools.git
GIT_TAG 04296a5
GIT_TAG ddd654a
SOURCE_DIR "${TD_SOURCE_DIR}/tools/taos-tools"
BINARY_DIR ""
#BUILD_IN_SOURCE TRUE

View File

@ -18,14 +18,8 @@ To achieve absolutely no data loss, set wal_level to 2 and wal_fsync_period to 0
## Disaster Recovery
TDengine uses replication to provide high availability.
TDengine provides disaster recovery by using taosX to replicate data between two TDengine clusters deployed in two distant data centers. Assume there are two TDengine clusters, A and B, where A is the source and takes the write and query workload and B is the target. You can deploy `taosX` in the data center where cluster A resides; `taosX` consumes the data written into cluster A and writes it into cluster B. If the data center of cluster A is disrupted by a disaster, you can switch to cluster B to take the write and query workload, and deploy a `taosX` in the data center of cluster B to replicate data from cluster B back to cluster A once it has recovered, or to another cluster C if cluster A has not been recovered.
A TDengine cluster is managed by mnodes. You can configure up to three mnodes to ensure high availability. The data replication between mnode replicas is performed in a synchronous way to guarantee metadata consistency.
You can use the data replication feature of `taosX` to build more complicated disaster recovery solutions.
The number of replicas for time series data in TDengine is associated with each database. There can be many databases in a cluster and each database can be configured with a different number of replicas. When creating a database, the parameter `replica` is used to specify the number of replicas. To achieve high availability, set `replica` to 3.
The number of dnodes in a TDengine cluster must NOT be lower than the number of replicas of any database; otherwise, creating a table will fail.
As long as the dnodes of a TDengine cluster are deployed on different physical machines and the replica number is higher than 1, high availability can be achieved without any other assistance. For disaster recovery, dnodes of a TDengine cluster should be deployed in geographically different data centers.
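For example, a minimal sketch using the Python connector (the database name is illustrative, and the cluster is assumed to already have at least three dnodes):
```python
import taos  # TDengine native Python connector

conn = taos.connect()
# Keep three replicas of the time series data; table creation will fail
# if the cluster has fewer than three dnodes.
conn.execute("CREATE DATABASE IF NOT EXISTS power_ha REPLICA 3")
conn.close()
```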
Alternatively, you can use taosX to synchronize the data from one TDengine cluster to another cluster in a remote location. However, taosX is only available in TDengine enterprise version, for more information please contact tdengine.com.
taosX is only provided in TDengine enterprise edition, for more details please contact business@tdengine.com.

View File

@ -353,6 +353,86 @@ For a more detailed description of the `sql()` method, please refer to [RestClie
</TabItem>
</Tabs>
### Usage with req_id
By using the optional req_id parameter, you can specify a request ID that can be used for tracing.
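For instance, a minimal sketch with the native connection (the request ID value is arbitrary and only needs to be meaningful to your own tracing system):
```python
import taos

conn = taos.connect()
# Pass the same req_id that your tracing system assigns to this request.
result = conn.query("SELECT server_version()", req_id=1)
print(result.fetch_all())
conn.close()
```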
<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">
##### TaosConnection class
The `TaosConnection` class contains both an implementation of the PEP249 Connection interface (e.g., the `cursor()` method and the `close()` method) and many extensions (e.g., the `execute()`, `query()`, `schemaless_insert()`, and `subscribe()` methods).
```python title="execute method"
{{#include docs/examples/python/connection_usage_native_reference_with_req_id.py:insert}}
```
```python title="query method"
{{#include docs/examples/python/connection_usage_native_reference_with_req_id.py:query}}
```
:::tip
The queried results can only be fetched once. For example, only one of `fetch_all()` and `fetch_all_into_dict()` can be used in the example above. Repeated fetches will result in an empty list.
:::
##### Use of TaosResult class
In the above example of using the `TaosConnection` class, we have shown two ways to get the result of a query: `fetch_all()` and `fetch_all_into_dict()`. In addition, `TaosResult` also provides methods to iterate through the result set by rows (`rows_iter`) or by data blocks (`blocks_iter`). These two methods are more efficient when the query returns a large amount of data.
```python title="blocks_iter method"
{{#include docs/examples/python/result_set_with_req_id_examples.py}}
```
##### Use of the TaosCursor class
The `TaosConnection` class and the `TaosResult` class already implement all the functionality of the native interface. If you are familiar with the interfaces in the PEP249 specification, you can also use the methods provided by the `TaosCursor` class.
```python title="Use of TaosCursor"
{{#include docs/examples/python/cursor_usage_native_reference_with_req_id.py}}
```
:::note
The TaosCursor class uses native connections for write and query operations. In a client-side multi-threaded scenario, each cursor instance must be used by a single thread and must not be shared across threads; otherwise, the returned results may be incorrect.
:::
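For example, a minimal sketch (assuming the native connector and a running local server) that gives each worker thread its own connection and cursor instead of sharing one:
```python
import threading

import taos

def worker(i: int) -> None:
    # One connection and one cursor per thread; never share a cursor across threads.
    conn = taos.connect()
    cursor = conn.cursor()
    cursor.execute("SELECT server_version()")
    print(f"thread {i}:", cursor.fetchall())
    cursor.close()
    conn.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```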
</TabItem>
<TabItem value="rest" label="REST connection">
##### Use of TaosRestCursor class
The `TaosRestCursor` class is an implementation of the PEP249 Cursor interface.
```python title="Use of TaosRestCursor"
{{#include docs/examples/python/connect_rest_with_req_id_examples.py:basic}}
```
- `cursor.execute`: Used to execute arbitrary SQL statements.
- `cursor.rowcount`: For write operations, returns the number of rows written successfully. For query operations, returns the number of rows in the result set.
- `cursor.description`: Returns the description of the fields. Please refer to [TaosRestCursor](https://docs.taosdata.com/api/taospy/taosrest/cursor.html) for the specific format of the description information. A short usage sketch follows.
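For example, a minimal sketch (connection parameters are illustrative) that uses these attributes:
```python
from taosrest import connect

conn = connect(url="http://localhost:6041", user="root", password="taosdata", timeout=30)
cursor = conn.cursor()
cursor.execute("SELECT ts, current FROM power.meters LIMIT 2")
print("rows in result set:", cursor.rowcount)
print("columns:", [meta[0] for meta in cursor.description])
for row in cursor.fetchall():
    print(row)
```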
##### Use of the RestClient class
The `RestClient` class is a direct wrapper for the [REST API](/reference/rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
```python title="Use of RestClient"
{{#include docs/examples/python/rest_client_with_req_id_example.py}}
```
For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
```python
{{#include docs/examples/python/connect_websocket_with_req_id_examples.py:basic}}
```
- `conn.execute`: Used to execute arbitrary SQL statements; returns the number of rows affected.
- `conn.query`: Used to execute SQL query statements; returns the query results (see the sketch below).
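A minimal sketch (database and table names are illustrative) of both calls over a WebSocket connection:
```python
import taosws

conn = taosws.connect("taosws://root:taosdata@localhost:6041")
# execute() returns the number of affected rows.
conn.execute("create database if not exists demo_ws wal_retention_period 3600")
conn.execute("use demo_ws")
conn.execute("create table if not exists tb (ts timestamp, v int)")
affected = conn.execute("insert into tb values (now, 1)")
print("affected rows:", affected)
# query() returns an iterable result set.
result = conn.query("select * from tb")
for row in result:
    print(row)
```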
</TabItem>
</Tabs>
### Used with pandas
<Tabs defaultValue="rest">

View File

@ -99,6 +99,9 @@ The parameters described in this document by the effect that they have on the sy
## Monitoring Parameters
:::note
Please note that `taoskeeper` needs to be installed and running in order to create the `log` database and receive the metrics sent by `taosd`, providing the full monitoring solution.
### monitor
| Attribute | Description |

View File

@ -13,14 +13,12 @@ taosKeeper is a tool for TDengine that exports monitoring metrics. With taosKeep
## Installation
<!-- There are two ways to install taosKeeper: -->
There are two ways to install taosKeeper:
Methods of installing taosKeeper:
<!--- Installing the official TDengine installer will automatically install taosKeeper. Please refer to [TDengine installation](/operation/pkg-install) for details. -->
- You can compile taosKeeper separately and install it. Please refer to the [taosKeeper](https://github.com/taosdata/taoskeeper) repository for details. -->
You can compile taosKeeper separately and install it. Please refer to the [taosKeeper](https://github.com/taosdata/taoskeeper) repository for details.
- Installing the official TDengine installer will automatically install taosKeeper. Please refer to [TDengine installation](/operation/pkg-install) for details.
- You can compile taosKeeper separately and install it. Please refer to the [taosKeeper](https://github.com/taosdata/taoskeeper) repository for details.
## Configuration and Launch
### Configuration
@ -110,7 +108,7 @@ The following `launchctl` commands can help you manage taoskeeper service:
#### Launch With Configuration File
You can quickly launch taosKeeper with the following commands. If you do not specify a configuration file, `/etc/taos/keeper.toml` is used by default. If this file does not specify configurations, the default values are used.
You can quickly launch taosKeeper with the following commands. If you do not specify a configuration file, `/etc/taos/keeper.toml` is used by default. If this file does not specify configurations, the default values are used.
```shell
$ taoskeeper -c <keeper config file>
@ -188,19 +186,36 @@ $ curl http://127.0.0.1:6043/metrics
Sample result set (excerpt):
```shell
# HELP taos_cluster_info_connections_total
# HELP taos_cluster_info_connections_total
# TYPE taos_cluster_info_connections_total counter
taos_cluster_info_connections_total{cluster_id="5981392874047724755"} 16
# HELP taos_cluster_info_dbs_total
# HELP taos_cluster_info_dbs_total
# TYPE taos_cluster_info_dbs_total counter
taos_cluster_info_dbs_total{cluster_id="5981392874047724755"} 2
# HELP taos_cluster_info_dnodes_alive
# HELP taos_cluster_info_dnodes_alive
# TYPE taos_cluster_info_dnodes_alive counter
taos_cluster_info_dnodes_alive{cluster_id="5981392874047724755"} 1
# HELP taos_cluster_info_dnodes_total
# HELP taos_cluster_info_dnodes_total
# TYPE taos_cluster_info_dnodes_total counter
taos_cluster_info_dnodes_total{cluster_id="5981392874047724755"} 1
# HELP taos_cluster_info_first_ep
# HELP taos_cluster_info_first_ep
# TYPE taos_cluster_info_first_ep gauge
taos_cluster_info_first_ep{cluster_id="5981392874047724755",value="hlb:6030"} 1
```
### check_health
```
$ curl -i http://127.0.0.1:6043/check_health
```
Response:
```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 03 Apr 2023 07:20:38 GMT
Content-Length: 19
{"version":"1.0.0"}
```
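The same check can be scripted; a minimal sketch assuming the third-party `requests` package is installed:
```python
import requests

resp = requests.get("http://127.0.0.1:6043/check_health", timeout=5)
resp.raise_for_status()  # any non-2xx status means taosKeeper is not healthy
print(resp.json())       # e.g. {'version': '1.0.0'}
```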

View File

@ -77,7 +77,7 @@ sudo -u grafana grafana-cli plugins install tdengine-datasource
You can also download zip files from [GitHub](https://github.com/taosdata/grafanaplugin/releases/tag/latest) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and install manually. The commands are as follows:
```bash
GF_VERSION=3.2.7
GF_VERSION=3.3.1
# from GitHub
wget https://github.com/taosdata/grafanaplugin/releases/download/v$GF_VERSION/tdengine-datasource-$GF_VERSION.zip
# from Grafana

View File

@ -10,6 +10,14 @@ For TDengine 2.x installation packages by version, please visit [here](https://w
import Release from "/components/ReleaseV3";
## 3.0.3.2
<Release type="tdengine" version="3.0.3.2" />
## 3.0.3.1
<Release type="tdengine" version="3.0.3.1" />

View File

@ -10,6 +10,10 @@ For other historical version installers, please visit [here](https://www.taosdat
import Release from "/components/ReleaseV3";
## 2.4.11
<Release type="tools" version="2.4.11" />
## 2.4.10
<Release type="tools" version="2.4.10" />

View File

@ -70,7 +70,7 @@ static int32_t init_env() {
taos_free_result(pRes);
// create database
pRes = taos_query(pConn, "create database tmqdb");
pRes = taos_query(pConn, "create database tmqdb wal_retention_period 3600");
if (taos_errno(pRes) != 0) {
printf("error in create tmqdb, reason:%s\n", taos_errstr(pRes));
return -1;
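The `wal_retention_period 3600` option added above matters because topic subscriptions consume data from the WAL; with a retention period of 0, WAL files may be removed before consumers read them. A minimal Python sketch of the same pattern (table and topic names are illustrative):
```python
import taos

conn = taos.connect()
# Keep WAL files for 3600 seconds so that subscribers can consume them.
conn.execute("create database if not exists tmqdb wal_retention_period 3600")
conn.select_db("tmqdb")
conn.execute("create table if not exists sensor (ts timestamp, val int)")
conn.execute("create topic if not exists tmq_demo_topic as select ts, val from sensor")
conn.close()
```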

View File

@ -48,7 +48,7 @@ namespace TDengineExample
static void PrepareDatabase(IntPtr conn)
{
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test");
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test WAL_RETENTION_PERIOD 3600");
if (TDengine.ErrorNo(res) != 0)
{
throw new Exception("failed to create database, reason: " + TDengine.Error(res));

View File

@ -54,7 +54,7 @@ namespace TDengineExample
static void PrepareDatabase(IntPtr conn)
{
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test");
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test WAL_RETENTION_PERIOD 3600");
if (TDengine.ErrorNo(res) != 0)
{
throw new Exception("failed to create database, reason: " + TDengine.Error(res));

View File

@ -58,7 +58,7 @@ namespace TDengineExample
static void PrepareDatabase(IntPtr conn)
{
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test");
IntPtr res = TDengine.Query(conn, "CREATE DATABASE test WAL_RETENTION_PERIOD 3600");
if (TDengine.ErrorNo(res) != 0)
{
throw new Exception("failed to create database, reason: " + TDengine.Error(res));

View File

@ -11,7 +11,7 @@ namespace TDengineExample
IntPtr conn = GetConnection();
try
{
IntPtr res = TDengine.Query(conn, "CREATE DATABASE power");
IntPtr res = TDengine.Query(conn, "CREATE DATABASE power WAL_RETENTION_PERIOD 3600");
CheckRes(conn, res, "failed to create database");
res = TDengine.Query(conn, "USE power");
CheckRes(conn, res, "failed to change database");

View File

@ -76,7 +76,7 @@ namespace TDengineExample
static void PrepareSTable()
{
IntPtr res = TDengine.Query(conn, "CREATE DATABASE power");
IntPtr res = TDengine.Query(conn, "CREATE DATABASE power WAL_RETENTION_PERIOD 3600");
CheckResPtr(res, "failed to create database");
res = TDengine.Query(conn, "USE power");
CheckResPtr(res, "failed to change database");

View File

@ -15,7 +15,7 @@ func main() {
panic(err)
}
defer db.Close()
_, err = db.Exec("create database if not exists example_tmq")
_, err = db.Exec("create database if not exists example_tmq wal_retention_period 3600")
if err != nil {
panic(err)
}

View File

@ -35,7 +35,7 @@ public class SubscribeDemo {
try (Statement statement = connection.createStatement()) {
statement.executeUpdate("drop topic if exists " + TOPIC);
statement.executeUpdate("drop database if exists " + DB_NAME);
statement.executeUpdate("create database " + DB_NAME);
statement.executeUpdate("create database " + DB_NAME + " wal_retention_period 3600");
statement.executeUpdate("use " + DB_NAME);
statement.executeUpdate(
"CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT) TAGS (`groupid` INT, `location` BINARY(24))");

View File

@ -35,7 +35,7 @@ public class WebsocketSubscribeDemo {
Statement statement = connection.createStatement()) {
statement.executeUpdate("drop topic if exists " + TOPIC);
statement.executeUpdate("drop database if exists " + DB_NAME);
statement.executeUpdate("create database " + DB_NAME);
statement.executeUpdate("create database " + DB_NAME + " wal_retention_period 3600");
statement.executeUpdate("use " + DB_NAME);
statement.executeUpdate(
"CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT) TAGS (`groupid` INT, `location` BINARY(24))");

View File

@ -4,7 +4,7 @@ import taos
taos_conn = taos.connect()
taos_conn.execute('drop database if exists power')
taos_conn.execute('create database if not exists power')
taos_conn.execute('create database if not exists power wal_retention_period 3600')
taos_conn.execute("use power")
taos_conn.execute(
"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)")

View File

@ -0,0 +1,44 @@
# ANCHOR: connect
from taosrest import connect, TaosRestConnection, TaosRestCursor
conn = connect(url="http://localhost:6041",
user="root",
password="taosdata",
timeout=30)
# ANCHOR_END: connect
# ANCHOR: basic
# create STable
cursor = conn.cursor()
cursor.execute("DROP DATABASE IF EXISTS power", req_id=1)
cursor.execute("CREATE DATABASE power", req_id=2)
cursor.execute(
"CREATE STABLE power.meters (ts TIMESTAMP, current FLOAT, voltage INT, phase FLOAT) TAGS (location BINARY(64), groupId INT)", req_id=3)
# insert data
cursor.execute("""INSERT INTO power.d1001 USING power.meters TAGS('California.SanFrancisco', 2) VALUES ('2018-10-03 14:38:05.000', 10.30000, 219, 0.31000) ('2018-10-03 14:38:15.000', 12.60000, 218, 0.33000) ('2018-10-03 14:38:16.800', 12.30000, 221, 0.31000)
power.d1002 USING power.meters TAGS('California.SanFrancisco', 3) VALUES ('2018-10-03 14:38:16.650', 10.30000, 218, 0.25000)
power.d1003 USING power.meters TAGS('California.LosAngeles', 2) VALUES ('2018-10-03 14:38:05.500', 11.80000, 221, 0.28000) ('2018-10-03 14:38:16.600', 13.40000, 223, 0.29000)
power.d1004 USING power.meters TAGS('California.LosAngeles', 3) VALUES ('2018-10-03 14:38:05.000', 10.80000, 223, 0.29000) ('2018-10-03 14:38:06.500', 11.50000, 221, 0.35000)""", req_id=4)
print("inserted row count:", cursor.rowcount)
# query data
cursor.execute("SELECT * FROM power.meters LIMIT 3", req_id=5)
# get total rows
print("queried row count:", cursor.rowcount)
# get column names from cursor
column_names = [meta[0] for meta in cursor.description]
# get rows
data = cursor.fetchall()
print(column_names)
for row in data:
print(row)
# output:
# inserted row count: 8
# queried row count: 3
# ['ts', 'current', 'voltage', 'phase', 'location', 'groupid']
# [datetime.datetime(2018, 10, 3, 14, 38, 5, 500000, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 11.8, 221, 0.28, 'california.losangeles', 2]
# [datetime.datetime(2018, 10, 3, 14, 38, 16, 600000, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 13.4, 223, 0.29, 'california.losangeles', 2]
# [datetime.datetime(2018, 10, 3, 14, 38, 5, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 10.8, 223, 0.29, 'california.losangeles', 3]
# ANCHOR_END: basic

View File

@ -6,7 +6,7 @@ conn = taosws.connect("taosws://root:taosdata@localhost:6041")
# ANCHOR: basic
conn.execute("drop database if exists connwspy")
conn.execute("create database if not exists connwspy")
conn.execute("create database if not exists connwspy wal_retention_period 3600")
conn.execute("use connwspy")
conn.execute("create table if not exists stb (ts timestamp, c1 int) tags (t1 int)")
conn.execute("create table if not exists tb1 using stb tags (1)")

View File

@ -0,0 +1,29 @@
# ANCHOR: connect
import taosws
conn = taosws.connect("taosws://root:taosdata@localhost:6041")
# ANCHOR_END: connect
# ANCHOR: basic
conn.execute("drop database if exists connwspy", req_id=1)
conn.execute("create database if not exists connwspy", req_id=2)
conn.execute("use connwspy", req_id=3)
conn.execute("create table if not exists stb (ts timestamp, c1 int) tags (t1 int)", req_id=4)
conn.execute("create table if not exists tb1 using stb tags (1)", req_id=5)
conn.execute("insert into tb1 values (now, 1)", req_id=6)
conn.execute("insert into tb1 values (now, 2)", req_id=7)
conn.execute("insert into tb1 values (now, 3)", req_id=8)
r = conn.execute("select * from stb", req_id=9)
result = conn.query("select * from stb", req_id=10)
num_of_fields = result.field_count
print(num_of_fields)
for row in result:
print(row)
# output:
# 3
# ('2023-02-28 15:56:13.329 +08:00', 1, 1)
# ('2023-02-28 15:56:13.333 +08:00', 2, 1)
# ('2023-02-28 15:56:13.337 +08:00', 3, 1)

View File

@ -0,0 +1,45 @@
import taos
# ANCHOR: insert
conn = taos.connect()
# Execute an SQL statement and ignore the result set, just getting the number of affected rows. This is useful for DDL and DML statements.
conn.execute("DROP DATABASE IF EXISTS test", req_id=1)
conn.execute("CREATE DATABASE test", req_id=2)
# change database. same as execute "USE db"
conn.select_db("test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)", req_id=3)
affected_row = conn.execute("INSERT INTO t1 USING weather TAGS(1) VALUES (now, 23.5) (now+1m, 23.5) (now+2m, 24.4)", req_id=4)
print("affected_row", affected_row)
# output:
# affected_row 3
# ANCHOR_END: insert
# ANCHOR: query
# Execute an SQL statement and get its result set. This is useful for SELECT statements.
result = conn.query("SELECT * from weather", req_id=5)
# Get fields from result
fields = result.fields
for field in fields:
print(field) # {name: ts, type: 9, bytes: 8}
# output:
# {name: ts, type: 9, bytes: 8}
# {name: temperature, type: 6, bytes: 4}
# {name: location, type: 4, bytes: 4}
# Get data from result as list of tuple
data = result.fetch_all()
print(data)
# output:
# [(datetime.datetime(2022, 4, 27, 9, 4, 25, 367000), 23.5, 1), (datetime.datetime(2022, 4, 27, 9, 5, 25, 367000), 23.5, 1), (datetime.datetime(2022, 4, 27, 9, 6, 25, 367000), 24.399999618530273, 1)]
# Or get data from result as a list of dict
# map_data = result.fetch_all_into_dict()
# print(map_data)
# output:
# [{'ts': datetime.datetime(2022, 4, 27, 9, 1, 15, 343000), 'temperature': 23.5, 'location': 1}, {'ts': datetime.datetime(2022, 4, 27, 9, 2, 15, 343000), 'temperature': 23.5, 'location': 1}, {'ts': datetime.datetime(2022, 4, 27, 9, 3, 15, 343000), 'temperature': 24.399999618530273, 'location': 1}]
# ANCHOR_END: query
conn.close()

View File

@ -0,0 +1,32 @@
import taos
conn = taos.connect()
cursor = conn.cursor()
cursor.execute("DROP DATABASE IF EXISTS test", req_id=1)
cursor.execute("CREATE DATABASE test", req_id=2)
cursor.execute("USE test", req_id=3)
cursor.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)", req_id=4)
for i in range(1000):
location = str(i % 10)
tb = "t" + location
cursor.execute(f"INSERT INTO {tb} USING weather TAGS({location}) VALUES (now+{i}a, 23.5) (now+{i + 1}a, 23.5)", req_id=5+i)
cursor.execute("SELECT count(*) FROM weather", req_id=1005)
data = cursor.fetchall()
print("count:", data[0][0])
cursor.execute("SELECT tbname, * FROM weather LIMIT 2", req_id=1006)
col_names = [meta[0] for meta in cursor.description]
print(col_names)
rows = cursor.fetchall()
print("row_count:", cursor.rowcount)
print(rows)
cursor.close()
conn.close()
# output:
# count: 2000
# ['tbname', 'ts', 'temperature', 'location']
# row_count: -1
# [('t0', datetime.datetime(2022, 4, 27, 14, 54, 24, 392000), 23.5, 0), ('t0', datetime.datetime(2022, 4, 27, 14, 54, 24, 393000), 23.5, 0)]

View File

@ -5,7 +5,7 @@ LOCATIONS = ['California.SanFrancisco', 'California.LosAngles', 'California.SanD
'California.PaloAlto', 'California.Campbell', 'California.MountainView', 'California.Sunnyvale',
'California.SantaClara', 'California.Cupertino']
CREATE_DATABASE_SQL = 'create database if not exists {} keep 365 duration 10 buffer 16 wal_level 1'
CREATE_DATABASE_SQL = 'create database if not exists {} keep 365 duration 10 buffer 16 wal_level 1 wal_retention_period 3600'
USE_DATABASE_SQL = 'use {}'
DROP_TABLE_SQL = 'drop table if exists meters'
DROP_DATABASE_SQL = 'drop database if exists {}'

View File

@ -0,0 +1,9 @@
from taosrest import RestClient
client = RestClient("http://localhost:6041", user="root", password="taosdata")
res: dict = client.sql("SELECT ts, current FROM power.meters LIMIT 1", req_id=1)
print(res)
# output:
# {'status': 'succ', 'head': ['ts', 'current'], 'column_meta': [['ts', 9, 8], ['current', 6, 4]], 'data': [[datetime.datetime(2018, 10, 3, 14, 38, 5, tzinfo=datetime.timezone(datetime.timedelta(seconds=28800), '+08:00')), 10.3]], 'rows': 1}

View File

@ -0,0 +1,33 @@
import taos
conn = taos.connect()
conn.execute("DROP DATABASE IF EXISTS test", req_id=1)
conn.execute("CREATE DATABASE test", req_id=2)
conn.select_db("test")
conn.execute("CREATE STABLE weather(ts TIMESTAMP, temperature FLOAT) TAGS (location INT)", req_id=3)
# prepare data
for i in range(2000):
location = str(i % 10)
tb = "t" + location
conn.execute(f"INSERT INTO {tb} USING weather TAGS({location}) VALUES (now+{i}a, 23.5) (now+{i + 1}a, 23.5)", req_id=4+i)
result: taos.TaosResult = conn.query("SELECT * FROM weather", req_id=2004)
block_index = 0
blocks: taos.TaosBlocks = result.blocks_iter()
for rows, length in blocks:
print("block ", block_index, " length", length)
print("first row in this block:", rows[0])
block_index += 1
conn.close()
# possible output:
# block 0 length 1200
# first row in this block: (datetime.datetime(2022, 4, 27, 15, 14, 52, 46000), 23.5, 0)
# block 1 length 1200
# first row in this block: (datetime.datetime(2022, 4, 27, 15, 14, 52, 76000), 23.5, 3)
# block 2 length 1200
# first row in this block: (datetime.datetime(2022, 4, 27, 15, 14, 52, 99000), 23.5, 6)
# block 3 length 400
# first row in this block: (datetime.datetime(2022, 4, 27, 15, 14, 52, 122000), 23.5, 9)

View File

@ -6,7 +6,7 @@ def init_tmq_env(db, topic):
conn = taos.connect()
conn.execute("drop topic if exists {}".format(topic))
conn.execute("drop database if exists {}".format(db))
conn.execute("create database if not exists {}".format(db))
conn.execute("create database if not exists {} wal_retention_period 3600".format(db))
conn.select_db(db)
conn.execute(
"create stable if not exists stb1 (ts timestamp, c1 int, c2 float, c3 varchar(16)) tags(t1 int, t3 varchar(16))")

View File

@ -4,7 +4,7 @@ description: '快速设置 TDengine 环境并体验其高效写入和查询'
---
import xiaot from './xiaot.webp'
import xiaot_new from './xiaot-new.webp'
import xiaot_new from './xiaot-03.webp'
import channel from './channel.webp'
import official_account from './official-account.webp'

Binary file not shown. (After: size 54 KiB)

View File

@ -17,7 +17,7 @@ import TabItem from '@theme/TabItem';
- JDBC native connection: the Java application, on physical node 1 (pnode1), uses TSDBDriver to call the client driver (libtaos.so or taos.dll) API directly and sends write and query requests to the taosd instance located on physical node 2 (pnode2).
- JDBC REST connection: the Java application uses RestfulDriver to wrap the SQL into a REST request and sends it to the REST server (taosAdapter) on physical node 2, which forwards the request to taosd and returns the result.
The REST connection does not depend on the TDengine client driver, works across platforms, and is more convenient and flexible, but its performance is about 30% lower than the native connector.
The REST connection does not depend on the TDengine client driver, works across platforms, and is more convenient and flexible.
:::info
TDengine's JDBC driver implementation stays as consistent as possible with relational database drivers, but because TDengine differs from relational and object-relational databases in use cases and technical characteristics, `taos-jdbcdriver` also differs from traditional JDBC drivers in some respects. Note the following points when using it:

View File

@ -353,6 +353,85 @@ TaosCursor 类使用原生连接进行写入、查询操作。在客户端多线
</TabItem>
</Tabs>
### Usage with req_id
By using the optional req_id parameter, you can specify a request ID that can be used for tracing.
<Tabs defaultValue="rest">
<TabItem value="native" label="native connection">
##### Use of the TaosConnection class
The `TaosConnection` class contains both an implementation of the PEP249 Connection interface (e.g., the `cursor()` method and the `close()` method) and many extensions (e.g., the `execute()`, `query()`, `schemaless_insert()`, and `subscribe()` methods).
```python title="execute method"
{{#include docs/examples/python/connection_usage_native_reference_with_req_id.py:insert}}
```
```python title="query method"
{{#include docs/examples/python/connection_usage_native_reference_with_req_id.py:query}}
```
:::tip
The queried results can only be fetched once. For example, only one of `fetch_all()` and `fetch_all_into_dict()` can be used in the example above. Repeated fetches will result in an empty list.
:::
##### Use of the TaosResult class
In the above example of using the `TaosConnection` class, we have shown two ways to get the result of a query: `fetch_all()` and `fetch_all_into_dict()`. In addition, `TaosResult` also provides methods to iterate through the result set by rows (`rows_iter`) or by data blocks (`blocks_iter`). These two methods are more efficient when the query returns a large amount of data.
```python title="blocks_iter method"
{{#include docs/examples/python/result_set_with_req_id_examples.py}}
```
##### Use of the TaosCursor class
The `TaosConnection` class and the `TaosResult` class already implement all the functionality of the native interface. If you are familiar with the interfaces in the PEP249 specification, you can also use the methods provided by the `TaosCursor` class.
```python title="Use of TaosCursor"
{{#include docs/examples/python/cursor_usage_native_reference_with_req_id.py}}
```
:::note
The TaosCursor class uses native connections for write and query operations. In a client-side multi-threaded scenario, each cursor instance must be used by a single thread and must not be shared across threads; otherwise, the returned results may be incorrect.
:::
</TabItem>
<TabItem value="rest" label="REST connection">
##### Use of the TaosRestCursor class
The `TaosRestCursor` class is an implementation of the PEP249 Cursor interface.
```python title="Use of TaosRestCursor"
{{#include docs/examples/python/connect_rest_with_req_id_examples.py:basic}}
```
- `cursor.execute`: Used to execute arbitrary SQL statements.
- `cursor.rowcount`: For write operations, returns the number of rows written successfully. For query operations, returns the number of rows in the result set.
- `cursor.description`: Returns the description of the fields. Please refer to [TaosRestCursor](https://docs.taosdata.com/api/taospy/taosrest/cursor.html) for the specific format of the description information.
##### Use of the RestClient class
The `RestClient` class is a direct wrapper for the [REST API](../rest-api). It contains only a `sql()` method for executing arbitrary SQL statements and returning the result.
```python title="Use of RestClient"
{{#include docs/examples/python/rest_client_with_req_id_example.py}}
```
For a more detailed description of the `sql()` method, please refer to [RestClient](https://docs.taosdata.com/api/taospy/taosrest/restclient.html).
</TabItem>
<TabItem value="websocket" label="WebSocket connection">
```python
{{#include docs/examples/python/connect_websocket_with_req_id_examples.py:basic}}
```
- `conn.execute`: Used to execute arbitrary SQL statements; returns the number of rows affected.
- `conn.query`: Used to execute SQL query statements; returns the query results.
</TabItem>
</Tabs>
### Used with pandas
<Tabs defaultValue="rest">

View File

@ -269,7 +269,7 @@ description: TDengine 保留关键字的详细列表
- SPLIT
- STABLE
- STABLES
- STAR
- START
- STATE
- STATE_WINDOW
- STATEMENT

View File

@ -43,8 +43,6 @@ sudo apt-get update
sudo apt-get install grafana
```
### Install Grafana on CentOS / RHEL
</TabItem>
<TabItem value="redhat" label="CentOS / RHEL based systems">
@ -79,7 +77,37 @@ sudo yum install \
</Tabs>
<Tabs defaultValue="auto" groupId="deploy">
### Install the TDengine data source plugin
<Tabs defaultValue="manual" groupId="deploy">
<TabItem value="manual" label="Set up TDinsight manually">
Install the latest version of the TDengine data source plugin from GitHub.
```bash
get_latest_release() {
curl --silent "https://api.github.com/repos/taosdata/grafanaplugin/releases/latest" |
grep '"tag_name":' |
sed -E 's/.*"v([^"]+)".*/\1/'
}
TDENGINE_PLUGIN_VERSION=$(get_latest_release)
sudo grafana-cli \
--pluginUrl https://github.com/taosdata/grafanaplugin/releases/download/v$TDENGINE_PLUGIN_VERSION/tdengine-datasource-$TDENGINE_PLUGIN_VERSION.zip \
plugins install tdengine-datasource
```
:::note
For plugin versions 3.1.6 and earlier, add the following settings to the configuration file `/etc/grafana/grafana.ini` to enable unsigned plugins.
```ini
[plugins]
allow_loading_unsigned_plugins = tdengine-datasource
```
:::
</TabItem>
<TabItem value="auto" label="Deploy TDinsight automatically">
We provide an automated installation script, [`TDinsight.sh`](https://github.com/taosdata/grafanaplugin/releases/latest/download/TDinsight.sh), so that users can install and configure it quickly.
@ -175,33 +203,7 @@ sudo ./TDengine.sh -n TDengine-Env1 -a http://another:6041 -u root -p taosdata -
In particular, when you use Grafana Cloud or another organization, `-O` can be used to set the organization ID, `-G` specifies the Grafana plugin installation directory, and `-e` makes the dashboards editable.
</TabItem>
<TabItem value="manual" label="Set up TDinsight manually">
### Install the TDengine data source plugin
Install the latest version of the TDengine data source plugin from GitHub.
```bash
get_latest_release() {
curl --silent "https://api.github.com/repos/taosdata/grafanaplugin/releases/latest" |
grep '"tag_name":' |
sed -E 's/.*"v([^"]+)".*/\1/'
}
TDENGINE_PLUGIN_VERSION=$(get_latest_release)
sudo grafana-cli \
--pluginUrl https://github.com/taosdata/grafanaplugin/releases/download/v$TDENGINE_PLUGIN_VERSION/tdengine-datasource-$TDENGINE_PLUGIN_VERSION.zip \
plugins install tdengine-datasource
```
:::note
For plugin versions 3.1.6 and earlier, add the following settings to the configuration file `/etc/grafana/grafana.ini` to enable unsigned plugins.
```ini
[plugins]
allow_loading_unsigned_plugins = tdengine-datasource
```
:::
</Tabs>
### Start the Grafana service
@ -233,8 +235,7 @@ sudo systemctl enable grafana-server
![TDengine Database TDinsight data source test](./assets/howto-add-datasource-test.webp)
</TabItem>
</Tabs>
### Import the dashboard

View File

@ -99,6 +99,9 @@ taos --dump-config
## Monitoring
:::note
Please note that the full monitoring solution requires the `taoskeeper` service to be installed and running. taoskeeper receives the monitoring metric data and creates the `log` database.
### monitor
| Attribute | Description |

View File

@ -13,12 +13,11 @@ taosKeeper 是 TDengine 3.0 版本监控指标的导出工具,通过简单的
## Installation
<!-- There are two ways to install taosKeeper: -->
There are two ways to install taosKeeper:
Methods of installing taosKeeper:
<!-- - Installing the official TDengine installer will automatically install taosKeeper. Please refer to [TDengine installation](/operation/pkg-install) for details. -->
- Installing the official TDengine installer will automatically install taosKeeper. Please refer to [TDengine installation](/operation/pkg-install) for details.
<!-- - You can compile taosKeeper separately and install it. Please refer to the [taosKeeper](https://github.com/taosdata/taoskeeper) repository for details. -->
- You can compile taosKeeper separately and install it. Please refer to the [taosKeeper](https://github.com/taosdata/taoskeeper) repository for details.
## Configuration and Launch
@ -112,7 +111,7 @@ Active: inactive (dead)
#### Launch With Configuration File
You can quickly launch taosKeeper with the following command. If you do not specify a configuration file, `/etc/taos/keeper.toml` is used by default; if that file does not exist, the default configuration is used.
You can quickly launch taosKeeper with the following command. If you do not specify a configuration file, `/etc/taos/keeper.toml` is used by default; if that file does not exist, the default configuration is used.
```shell
$ taoskeeper -c <keeper config file>
@ -190,19 +189,36 @@ $ curl http://127.0.0.1:6043/metrics
Sample result set (excerpt):
```shell
# HELP taos_cluster_info_connections_total
# HELP taos_cluster_info_connections_total
# TYPE taos_cluster_info_connections_total counter
taos_cluster_info_connections_total{cluster_id="5981392874047724755"} 16
# HELP taos_cluster_info_dbs_total
# HELP taos_cluster_info_dbs_total
# TYPE taos_cluster_info_dbs_total counter
taos_cluster_info_dbs_total{cluster_id="5981392874047724755"} 2
# HELP taos_cluster_info_dnodes_alive
# HELP taos_cluster_info_dnodes_alive
# TYPE taos_cluster_info_dnodes_alive counter
taos_cluster_info_dnodes_alive{cluster_id="5981392874047724755"} 1
# HELP taos_cluster_info_dnodes_total
# HELP taos_cluster_info_dnodes_total
# TYPE taos_cluster_info_dnodes_total counter
taos_cluster_info_dnodes_total{cluster_id="5981392874047724755"} 1
# HELP taos_cluster_info_first_ep
# HELP taos_cluster_info_first_ep
# TYPE taos_cluster_info_first_ep gauge
taos_cluster_info_first_ep{cluster_id="5981392874047724755",value="hlb:6030"} 1
```
### check_health
```
$ curl -i http://127.0.0.1:6043/check_health
```
Response:
```
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Mon, 03 Apr 2023 07:20:38 GMT
Content-Length: 19
{"version":"1.0.0"}
```

View File

@ -19,12 +19,8 @@ TDengine 接收到应用的请求数据包时,先将请求的原始数据包
## Disaster Recovery
A TDengine cluster provides high availability through its replica mechanism and also offers a certain degree of disaster recovery capability.
TDengine disaster recovery is implemented by deploying two TDengine clusters in two data centers at different sites and using the data replication capability of taosX. Assume the two clusters are cluster A and cluster B, where cluster A is the source cluster that takes write requests and provides query services. In the data center where cluster A resides, you can configure taosX to use TDengine's data subscription capability to consume the newly written data in cluster A in real time and replicate it to cluster B. If a disaster makes the data center of cluster A unavailable, you can switch to cluster B for data writing and querying, and configure taosX in the data center of cluster B to replicate data to the recovered cluster A or to a newly built cluster C.
A TDengine cluster is managed by mnodes. To ensure high reliability of the mnode, you can configure three mnode replicas. Data replication between mnode replicas is performed synchronously to guarantee strong consistency of metadata.
You can also use the data replication capability of taosX to build more complicated disaster recovery solutions.
The number of replicas for time series data in a TDengine cluster is associated with each database. A cluster can contain many databases, and each database can be configured with a different number of replicas. When creating a database, the parameter `replica` specifies the number of replicas. To achieve high reliability, set the number of replicas to 3.
The number of dnodes in a TDengine cluster must be greater than or equal to the number of replicas; otherwise, creating a table will fail.
When the nodes of a TDengine cluster are deployed on different physical machines and multiple replicas are configured, the system achieves high reliability without any other software or tools. TDengine Enterprise can additionally deploy replicas in different data centers to achieve geo-redundant disaster recovery.
taosX is only provided in TDengine Enterprise Edition; for details, please contact business@taosdata.com.

View File

@ -77,7 +77,7 @@ sudo -u grafana grafana-cli plugins install tdengine-datasource
Alternatively, you can download the .zip file from [GitHub](https://github.com/taosdata/grafanaplugin/releases/tag/latest) or [Grafana](https://grafana.com/grafana/plugins/tdengine-datasource/?tab=installation) and extract it into the Grafana plugin directory. A command-line download example is as follows:
```bash
GF_VERSION=3.2.9
GF_VERSION=3.3.1
# from GitHub
wget https://github.com/taosdata/grafanaplugin/releases/download/v$GF_VERSION/tdengine-datasource-$GF_VERSION.zip
# from Grafana

View File

@ -10,6 +10,10 @@ TDengine 2.x 各版本安装包请访问[这里](https://www.taosdata.com/all-do
import Release from "/components/ReleaseV3";
## 3.0.3.2
<Release type="tdengine" version="3.0.3.2" />
## 3.0.3.1
<Release type="tdengine" version="3.0.3.1" />

View File

@ -10,6 +10,10 @@ taosTools 各版本安装包下载链接如下:
import Release from "/components/ReleaseV3";
## 2.4.11
<Release type="tools" version="2.4.11" />
## 2.4.10
<Release type="tools" version="2.4.10" />

View File

@ -1,16 +1,15 @@
local _M = {}
local driver = require "luaconnector51"
local water_mark = 0
local occupied = 0
local connection_pool = {}
td_pool_watermark = 0
td_pool_occupied = 0
td_connection_pool = {}
function _M.new(o,config)
function _M.new(o, config)
o = o or {}
o.connection_pool = connection_pool
o.water_mark = water_mark
o.occupied = occupied
if #connection_pool == 0 then
o.connection_pool = td_connection_pool
o.watermark = td_pool_watermark
o.occupied = td_pool_occupied
if #td_connection_pool == 0 then
for i = 1, config.connection_pool_size do
local res = driver.connect(config)
if res.code ~= 0 then
@ -18,8 +17,8 @@ function _M.new(o,config)
return nil
else
local object = {obj = res.conn, state = 0}
table.insert(o.connection_pool,i, object)
ngx.log(ngx.INFO, "add connection, now pool size:"..#(o.connection_pool))
table.insert(td_connection_pool, i, object)
ngx.log(ngx.INFO, "add connection, now pool size:"..#(td_connection_pool))
end
end
@ -32,13 +31,13 @@ function _M:get_connection()
local connection_obj
for i = 1, #connection_pool do
connection_obj = connection_pool[i]
for i = 1, #td_connection_pool do
connection_obj = td_connection_pool[i]
if connection_obj.state == 0 then
connection_obj.state = 1
occupied = occupied +1
if occupied > water_mark then
water_mark = occupied
td_pool_occupied = td_pool_occupied + 1
if td_pool_occupied > td_pool_watermark then
td_pool_watermark = td_pool_occupied
end
return connection_obj["obj"]
end
@ -49,21 +48,27 @@ function _M:get_connection()
return nil
end
function _M:get_water_mark()
function _M:get_watermark()
return water_mark
return td_pool_watermark
end
function _M:get_current_load()
return td_pool_occupied
end
function _M:release_connection(conn)
local connection_obj
for i = 1, #connection_pool do
connection_obj = connection_pool[i]
for i = 1, #td_connection_pool do
connection_obj = td_connection_pool[i]
if connection_obj["obj"] == conn then
connection_obj["state"] = 0
occupied = occupied -1
td_pool_occupied = td_pool_occupied -1
return
end
end

View File

@ -4,8 +4,21 @@ local Pool = require "tdpool"
local config = require "config"
ngx.say("start time:"..os.time())
local pool = Pool.new(Pool,config)
local conn = pool:get_connection()
local pool = Pool.new(Pool, config)
local another_pool = Pool.new(Pool, config)
local conn, conn1, conn2
conn = pool:get_connection()
conn1 = pool:get_connection()
conn2 = pool:get_connection()
local temp_conn = another_pool:get_connection()
ngx.say("pool size:"..config.connection_pool_size)
ngx.say("pool watermark:"..pool:get_watermark())
ngx.say("pool current load:"..pool:get_current_load())
pool:release_connection(conn1)
pool:release_connection(conn2)
another_pool:release_connection(temp_conn)
ngx.say("pool watermark:"..pool:get_watermark())
ngx.say("pool current load:"..pool:get_current_load())
local res = driver.query(conn,"drop database if exists nginx")
if res.code ~=0 then
@ -31,7 +44,6 @@ end
res = driver.query(conn,"create table m1 (ts timestamp, speed int,owner binary(20))")
if res.code ~=0 then
ngx.say("create table---failed: "..res.error)
else
ngx.say("create table--- pass.")
end
@ -83,8 +95,5 @@ while not flag do
-- ngx.say("i am here once...")
ngx.sleep(0.001) -- time unit is second
end
ngx.say("pool water_mark:"..pool:get_water_mark())
pool:release_connection(conn)
ngx.say("end time:"..os.time())

View File

@ -64,7 +64,7 @@ static FORCE_INLINE int64_t taosGetTimestampToday(int32_t precision) {
: 1000000000;
time_t t = taosTime(NULL);
struct tm tm;
taosLocalTime(&t, &tm);
taosLocalTime(&t, &tm, NULL);
tm.tm_hour = 0;
tm.tm_min = 0;
tm.tm_sec = 0;

View File

@ -31,21 +31,49 @@
extern "C" {
#endif
#if defined(CUS_NAME) || defined(CUS_PROMPT) || defined(CUS_EMAIL)
#include "cus_name.h"
#endif
#ifdef WINDOWS
#define TD_TMP_DIR_PATH "C:\\Windows\\Temp\\"
#ifdef CUS_NAME
#define TD_CFG_DIR_PATH "C:\\"CUS_NAME"\\cfg\\"
#define TD_DATA_DIR_PATH "C:\\"CUS_NAME"\\data\\"
#define TD_LOG_DIR_PATH "C:\\"CUS_NAME"\\log\\"
#else
#define TD_CFG_DIR_PATH "C:\\TDengine\\cfg\\"
#define TD_DATA_DIR_PATH "C:\\TDengine\\data\\"
#define TD_LOG_DIR_PATH "C:\\TDengine\\log\\"
#endif // CUS_NAME
#elif defined(_TD_DARWIN_64)
#ifdef CUS_PROMPT
#define TD_TMP_DIR_PATH "/tmp/"CUS_PROMPT"d/"
#define TD_CFG_DIR_PATH "/etc/"CUS_PROMPT"/"
#define TD_DATA_DIR_PATH "/var/lib/"CUS_PROMPT"/"
#define TD_LOG_DIR_PATH "/var/log/"CUS_PROMPT"/"
#else
#define TD_TMP_DIR_PATH "/tmp/taosd/"
#define TD_CFG_DIR_PATH "/etc/taos/"
#define TD_DATA_DIR_PATH "/var/lib/taos/"
#define TD_LOG_DIR_PATH "/var/log/taos/"
#endif // CUS_PROMPT
#else
#define TD_TMP_DIR_PATH "/tmp/"
#ifdef CUS_PROMPT
#define TD_CFG_DIR_PATH "/etc/"CUS_PROMPT"/"
#define TD_DATA_DIR_PATH "/var/lib/"CUS_PROMPT"/"
#define TD_LOG_DIR_PATH "/var/log/"CUS_PROMPT"/"
#else
#define TD_CFG_DIR_PATH "/etc/taos/"
#define TD_DATA_DIR_PATH "/var/lib/taos/"
#define TD_LOG_DIR_PATH "/var/log/taos/"
#endif // CUS_PROMPT
#endif
typedef struct TdDir *TdDirPtr;

View File

@ -91,7 +91,7 @@ static FORCE_INLINE int64_t taosGetMonoTimestampMs() {
}
char *taosStrpTime(const char *buf, const char *fmt, struct tm *tm);
struct tm *taosLocalTime(const time_t *timep, struct tm *result);
struct tm *taosLocalTime(const time_t *timep, struct tm *result, char *buf);
struct tm *taosLocalTimeNolock(struct tm *result, const time_t *timep, int dst);
time_t taosTime(time_t *t);
time_t taosMktime(struct tm *timep);

include/util/cus_name.h (new file, 31 lines)
View File

@ -0,0 +1,31 @@
/*
* Copyright (c) 2019 TAOS Data, Inc. <jhtao@taosdata.com>
*
* This program is free software: you can use, redistribute, and/or modify
* it under the terms of the GNU Affero General Public License, version 3
* or later ("AGPL"), as published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
#ifndef _CUS_NAME_H_
#define _CUS_NAME_H_
#ifndef CUS_NAME
#define CUS_NAME "TDengine"
#endif
#ifndef CUS_PROMPT
#define CUS_PROMPT "taos"
#endif
#ifndef CUS_EMAIL
#define CUS_EMAIL "<support@taosdata.com>"
#endif
#endif // _CUS_NAME_H_

View File

@ -289,6 +289,7 @@ int32_t* taosGetErrno();
#define TSDB_CODE_MND_INVALID_DB_ACCT TAOS_DEF_ERROR_CODE(0, 0x0389) // internal
#define TSDB_CODE_MND_DB_OPTION_UNCHANGED TAOS_DEF_ERROR_CODE(0, 0x038A) //
#define TSDB_CODE_MND_DB_INDEX_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x038B)
#define TSDB_CODE_MND_DB_RETENTION_PERIOD_ZERO TAOS_DEF_ERROR_CODE(0, 0x038C)
// #define TSDB_CODE_MND_INVALID_DB_OPTION_DAYS TAOS_DEF_ERROR_CODE(0, 0x0390) // 2.x
// #define TSDB_CODE_MND_INVALID_DB_OPTION_KEEP TAOS_DEF_ERROR_CODE(0, 0x0391) // 2.x
// #define TSDB_CODE_MND_INVALID_TOPIC TAOS_DEF_ERROR_CODE(0, 0x0392) // 2.x

View File

@ -25,7 +25,7 @@ extern "C" {
#define tjsonGetNumberValue(pJson, pName, val, code) \
do { \
int64_t _tmp = 0; \
uint64_t _tmp = 0; \
code = tjsonGetBigIntValue(pJson, pName, &_tmp); \
val = _tmp; \
} while (0)

View File

@ -29,7 +29,7 @@ extern "C" {
int32_t strdequote(char *src);
size_t strtrim(char *src);
char *strnchr(const char *haystack, char needle, int32_t len, bool skipquote);
TdUcs4 *wcsnchr(const TdUcs4 *haystack, TdUcs4 needle, size_t len);
TdUcs4* wcsnchr(const TdUcs4* haystack, TdUcs4 needle, size_t len);
char **strsplit(char *src, const char *delim, int32_t *num);
char *strtolower(char *dst, const char *src);
@ -37,11 +37,11 @@ char *strntolower(char *dst, const char *src, int32_t n);
char *strntolower_s(char *dst, const char *src, int32_t n);
int64_t strnatoi(char *num, int32_t len);
size_t tstrncspn(const char *str, size_t ssize, const char *reject, size_t rsize);
size_t twcsncspn(const TdUcs4 *wcs, size_t size, const TdUcs4 *reject, size_t rsize);
size_t tstrncspn(const char *str, size_t ssize, const char *reject, size_t rsize);
size_t twcsncspn(const TdUcs4 *wcs, size_t size, const TdUcs4 *reject, size_t rsize);
char *strbetween(char *string, char *begin, char *end);
char *paGetToken(char *src, char **token, int32_t *tokenLen);
char *strbetween(char *string, char *begin, char *end);
char *paGetToken(char *src, char **token, int32_t *tokenLen);
int32_t taosByteArrayToHexStr(char bytes[], int32_t len, char hexstr[]);
int32_t taosHexStrToByteArray(char hexstr[], char bytes[]);
@ -92,26 +92,12 @@ static FORCE_INLINE int32_t taosGetTbHashVal(const char *tbname, int32_t tblen,
}
}
#define TSDB_CHECK(condition, CODE, LINO, LABEL, ERRNO) \
if (!(condition)) { \
(CODE) = (ERRNO); \
(LINO) = __LINE__; \
goto LABEL; \
}
#define TSDB_CHECK_CODE(CODE, LINO, LABEL) \
if ((CODE)) { \
(LINO) = __LINE__; \
if (CODE) { \
LINO = __LINE__; \
goto LABEL; \
}
#define TSDB_CHECK_NULL(ptr, CODE, LINO, LABEL, ERRNO) \
if ((ptr) == NULL) { \
(CODE) = (ERRNO); \
(LINO) = __LINE__; \
goto LABEL; \
}
#ifdef __cplusplus
}
#endif

View File

@ -1,7 +1,6 @@
########################################################
# #
# Configuration #
# Any questions, please email support@taosdata.com #
# #
########################################################
@ -13,7 +12,7 @@
############### 1. Cluster End point ############################
# The end point of the first dnode in the cluster to be connected to when this dnode or a CLI `taos` is started
# The end point of the first dnode in the cluster to be connected to when this dnode or the CLI utility is started
# firstEp hostname:6030
# The end point of the second dnode to be connected to if the firstEp is not available
@ -25,7 +24,7 @@
# The FQDN of the host on which this dnode will be started. It can be IP address
# fqdn hostname
# The port for external access after this dnode is started
# The port for external access after this dnode is started
# serverPort 6030
# The maximum number of connections a dnode can accept
@ -96,7 +95,7 @@
# if free disk space is less than this value, this dnode will fail to start
# minimalDataDirGB 2.0
# enable/disable system monitor
# enable/disable system monitor
# monitor 1
# The following parameter is used to limit the maximum number of lines in log files.
@ -114,8 +113,8 @@
# The following parameters are used for debug purpose only by this dnode.
# debugFlag is a 8 bits mask: FILE-SCREEN-UNUSED-HeartBeat-DUMP-TRACE_WARN-ERROR
# Available debug levels are:
# 131: output warning and error
# Available debug levels are:
# 131: output warning and error
# 135: output debug, warning and error
# 143: output trace, debug, warning and error to log
# 199: output debug, warning and error to both screen and file
@ -130,7 +129,7 @@
# debug flag for util
# uDebugFlag 131
# debug flag for rpc
# debug flag for rpc
# rpcDebugFlag 131
# debug flag for jni
@ -139,7 +138,7 @@
# debug flag for query
# qDebugFlag 131
# debug flag for taosc driver
# debug flag for client driver
# cDebugFlag 131
# debug flag for dnode messages

View File

@ -1,5 +1,5 @@
[Unit]
Description=TDengine server service
Description=server service
After=network-online.target
Wants=network-online.target

View File

@ -4,7 +4,7 @@
# is required to use systemd to manage services at boot
set -e
#set -x
# set -x
verMode=edge
pagMode=full
@ -34,21 +34,25 @@ benchmarkName="taosBenchmark"
dumpName="taosdump"
demoName="taosdemo"
xname="taosx"
explorerName="${clientName}-explorer"
clientName2="taos"
serverName2="taosd"
serverName2="${clientName2}d"
configFile2="${clientName2}.cfg"
productName2="TDengine"
emailName2="taosdata.com"
xname2="${clientName2}x"
adapterName2="${clientName2}adapter"
explorerName="${clientName2}-explorer"
benchmarkName2="${clientName2}Benchmark"
demoName2="${clientName2}demo"
dumpName2="${clientName2}dump"
uninstallScript2="rm${clientName2}"
historyFile="${clientName2}_history"
logDir="/var/log/${clientName2}"
configDir="/etc/${clientName2}"
installDir="/usr/local/${clientName}"
installDir="/usr/local/${clientName2}"
data_dir=${dataDir}
log_dir=${logDir}
@ -206,15 +210,15 @@ function install_main_path() {
function install_bin() {
# Remove links
${csudo}rm -f ${bin_link_dir}/${clientName} || :
${csudo}rm -f ${bin_link_dir}/${serverName} || :
${csudo}rm -f ${bin_link_dir}/${clientName2} || :
${csudo}rm -f ${bin_link_dir}/${serverName2} || :
${csudo}rm -f ${bin_link_dir}/${udfdName} || :
${csudo}rm -f ${bin_link_dir}/${adapterName} || :
${csudo}rm -f ${bin_link_dir}/${uninstallScript} || :
${csudo}rm -f ${bin_link_dir}/${demoName} || :
${csudo}rm -f ${bin_link_dir}/${benchmarkName} || :
${csudo}rm -f ${bin_link_dir}/${dumpName} || :
${csudo}rm -f ${bin_link_dir}/${xname} || :
${csudo}rm -f ${bin_link_dir}/${uninstallScript2} || :
${csudo}rm -f ${bin_link_dir}/${demoName2} || :
${csudo}rm -f ${bin_link_dir}/${benchmarkName2} || :
${csudo}rm -f ${bin_link_dir}/${dumpName2} || :
${csudo}rm -f ${bin_link_dir}/${xname2} || :
${csudo}rm -f ${bin_link_dir}/${explorerName} || :
${csudo}rm -f ${bin_link_dir}/set_core || :
${csudo}rm -f ${bin_link_dir}/TDinsight.sh || :
@ -222,24 +226,23 @@ function install_bin() {
${csudo}cp -r ${script_dir}/bin/* ${install_main_dir}/bin && ${csudo}chmod 0555 ${install_main_dir}/bin/*
#Make link
[ -x ${install_main_dir}/bin/${clientName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${clientName} ${bin_link_dir}/${clientName} || :
[ -x ${install_main_dir}/bin/${serverName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${serverName} ${bin_link_dir}/${serverName} || :
[ -x ${install_main_dir}/bin/${clientName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${clientName2} ${bin_link_dir}/${clientName2} || :
[ -x ${install_main_dir}/bin/${serverName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${serverName2} ${bin_link_dir}/${serverName2} || :
[ -x ${install_main_dir}/bin/${udfdName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${udfdName} ${bin_link_dir}/${udfdName} || :
[ -x ${install_main_dir}/bin/${adapterName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${adapterName} ${bin_link_dir}/${adapterName} || :
[ -x ${install_main_dir}/bin/${benchmarkName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${benchmarkName} ${bin_link_dir}/${demoName} || :
[ -x ${install_main_dir}/bin/${benchmarkName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${benchmarkName} ${bin_link_dir}/${benchmarkName} || :
[ -x ${install_main_dir}/bin/${dumpName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${dumpName} ${bin_link_dir}/${dumpName} || :
[ -x ${install_main_dir}/bin/${xname} ] && ${csudo}ln -sf ${install_main_dir}/bin/${xname} ${bin_link_dir}/${xname} || :
[ -x ${install_main_dir}/bin/${adapterName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${adapterName2} ${bin_link_dir}/${adapterName2} || :
[ -x ${install_main_dir}/bin/${benchmarkName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${benchmarkName2} ${bin_link_dir}/${demoName2} || :
[ -x ${install_main_dir}/bin/${benchmarkName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${benchmarkName2} ${bin_link_dir}/${benchmarkName2} || :
[ -x ${install_main_dir}/bin/${dumpName2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${dumpName2} ${bin_link_dir}/${dumpName2} || :
[ -x ${install_main_dir}/bin/${xname2} ] && ${csudo}ln -sf ${install_main_dir}/bin/${xname2} ${bin_link_dir}/${xname2} || :
[ -x ${install_main_dir}/bin/${explorerName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${explorerName} ${bin_link_dir}/${explorerName} || :
[ -x ${install_main_dir}/bin/TDinsight.sh ] && ${csudo}ln -sf ${install_main_dir}/bin/TDinsight.sh ${bin_link_dir}/TDinsight.sh || :
[ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -sf ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript} || :
[ -x ${install_main_dir}/bin/set_core.sh ] && ${csudo}ln -sf ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
if [ "$clientName2" == "${clientName}" ]; then
[ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript} || :
fi
[ -x ${install_main_dir}/bin/set_core.sh ] && ${csudo}ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
if [ "$verMode" == "cluster" ] && [ "$clientName" != "$clientName2" ]; then
[ -x ${install_main_dir}/bin/${clientName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${clientName} ${bin_link_dir}/${clientName2} || :
[ -x ${install_main_dir}/bin/${benchmarkName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${benchmarkName} ${bin_link_dir}/${benchmarkName2} || :
[ -x ${install_main_dir}/bin/${dumpName} ] && ${csudo}ln -sf ${install_main_dir}/bin/${dumpName} ${bin_link_dir}/${dumpName2} || :
[ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -sf ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript2} || :
[ -x ${install_main_dir}/bin/remove.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove.sh ${bin_link_dir}/${uninstallScript2} || :
fi
}
@ -399,7 +402,7 @@ function set_hostname() {
${csudo}sed -i -r "s/#*\s*(HOSTNAME=\s*).*/\1$newHostname/" /etc/sysconfig/network || :
fi
${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${cfg_install_dir}/${configFile}
${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$newHostname/" ${cfg_install_dir}/${configFile2}
serverFqdn=$newHostname
if [[ -e /etc/hosts ]]; then
@ -433,7 +436,7 @@ function set_ipAsFqdn() {
echo -e -n "${GREEN}Unable to get local ip, use 127.0.0.1${NC}"
localFqdn="127.0.0.1"
# Write the local FQDN to configuration file
${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${cfg_install_dir}/${configFile}
${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${cfg_install_dir}/${configFile2}
serverFqdn=$localFqdn
echo
return
@ -455,7 +458,7 @@ function set_ipAsFqdn() {
read -p "Please choose an IP from local IP list:" localFqdn
else
# Write the local FQDN to configuration file
${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${cfg_install_dir}/${configFile}
${csudo}sed -i -r "s/#*\s*(fqdn\s*).*/\1$localFqdn/" ${cfg_install_dir}/${configFile2}
serverFqdn=$localFqdn
break
fi
@ -519,15 +522,15 @@ function install_adapter_config() {
function install_config() {
if [ ! -f "${cfg_install_dir}/${configFile}" ]; then
if [ ! -f "${cfg_install_dir}/${configFile2}" ]; then
${csudo}mkdir -p ${cfg_install_dir}
[ -f ${script_dir}/cfg/${configFile} ] && ${csudo}cp ${script_dir}/cfg/${configFile} ${cfg_install_dir}
[ -f ${script_dir}/cfg/${configFile2} ] && ${csudo}cp ${script_dir}/cfg/${configFile2} ${cfg_install_dir}
${csudo}chmod 644 ${cfg_install_dir}/*
else
${csudo}cp -f ${script_dir}/cfg/${configFile} ${cfg_install_dir}/${configFile}.new
${csudo}cp -f ${script_dir}/cfg/${configFile2} ${cfg_install_dir}/${configFile2}.new
fi
${csudo}ln -sf ${cfg_install_dir}/${configFile} ${install_main_dir}/cfg
${csudo}ln -sf ${cfg_install_dir}/${configFile2} ${install_main_dir}/cfg
[ ! -z $1 ] && return 0 || : # only install client
@ -548,7 +551,7 @@ function install_config() {
read firstEp
while true; do
if [ ! -z "$firstEp" ]; then
${csudo}sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${cfg_install_dir}/${configFile}
${csudo}sed -i -r "s/#*\s*(firstEp\s*).*/\1$firstEp/" ${cfg_install_dir}/${configFile2}
break
else
break
@ -600,8 +603,8 @@ function install_web() {
function clean_service_on_sysvinit() {
if ps aux | grep -v grep | grep ${serverName} &>/dev/null; then
${csudo}service ${serverName} stop || :
if ps aux | grep -v grep | grep ${serverName2} &>/dev/null; then
${csudo}service ${serverName2} stop || :
fi
if ps aux | grep -v grep | grep tarbitrator &>/dev/null; then
@ -609,30 +612,30 @@ function clean_service_on_sysvinit() {
fi
if ((${initd_mod} == 1)); then
if [ -e ${service_config_dir}/${serverName} ]; then
${csudo}chkconfig --del ${serverName} || :
if [ -e ${service_config_dir}/${serverName2} ]; then
${csudo}chkconfig --del ${serverName2} || :
fi
if [ -e ${service_config_dir}/tarbitratord ]; then
${csudo}chkconfig --del tarbitratord || :
fi
elif ((${initd_mod} == 2)); then
if [ -e ${service_config_dir}/${serverName} ]; then
${csudo}insserv -r ${serverName} || :
if [ -e ${service_config_dir}/${serverName2} ]; then
${csudo}insserv -r ${serverName2} || :
fi
if [ -e ${service_config_dir}/tarbitratord ]; then
${csudo}insserv -r tarbitratord || :
fi
elif ((${initd_mod} == 3)); then
if [ -e ${service_config_dir}/${serverName} ]; then
${csudo}update-rc.d -f ${serverName} remove || :
if [ -e ${service_config_dir}/${serverName2} ]; then
${csudo}update-rc.d -f ${serverName2} remove || :
fi
if [ -e ${service_config_dir}/tarbitratord ]; then
${csudo}update-rc.d -f tarbitratord remove || :
fi
fi
${csudo}rm -f ${service_config_dir}/${serverName} || :
${csudo}rm -f ${service_config_dir}/${serverName2} || :
${csudo}rm -f ${service_config_dir}/tarbitratord || :
if $(which init &>/dev/null); then
@ -653,24 +656,24 @@ function install_service_on_sysvinit() {
fi
if ((${initd_mod} == 1)); then
${csudo}chkconfig --add ${serverName} || :
${csudo}chkconfig --level 2345 ${serverName} on || :
${csudo}chkconfig --add ${serverName2} || :
${csudo}chkconfig --level 2345 ${serverName2} on || :
elif ((${initd_mod} == 2)); then
${csudo}insserv ${serverName} || :
${csudo}insserv -d ${serverName} || :
${csudo}insserv ${serverName2} || :
${csudo}insserv -d ${serverName2} || :
elif ((${initd_mod} == 3)); then
${csudo}update-rc.d ${serverName} defaults || :
${csudo}update-rc.d ${serverName2} defaults || :
fi
}
function clean_service_on_systemd() {
taosd_service_config="${service_config_dir}/${serverName}.service"
if systemctl is-active --quiet ${serverName}; then
service_config="${service_config_dir}/${serverName2}.service"
if systemctl is-active --quiet ${serverName2}; then
echo "${productName} is running, stopping it..."
${csudo}systemctl stop ${serverName} &>/dev/null || echo &>/dev/null
${csudo}systemctl stop ${serverName2} &>/dev/null || echo &>/dev/null
fi
${csudo}systemctl disable ${serverName} &>/dev/null || echo &>/dev/null
${csudo}rm -f ${taosd_service_config}
${csudo}systemctl disable ${serverName2} &>/dev/null || echo &>/dev/null
${csudo}rm -f ${service_config}
tarbitratord_service_config="${service_config_dir}/tarbitratord.service"
if systemctl is-active --quiet tarbitratord; then
@ -687,19 +690,19 @@ function clean_service_on_systemd() {
function install_service_on_systemd() {
clean_service_on_systemd
[ -f ${script_dir}/cfg/${serverName}.service ] &&
${csudo}cp ${script_dir}/cfg/${serverName}.service \
[ -f ${script_dir}/cfg/${serverName2}.service ] &&
${csudo}cp ${script_dir}/cfg/${serverName2}.service \
${service_config_dir}/ || :
# if [ "$verMode" == "cluster" ] && [ "$clientName" != "$clientName2" ]; then
# [ -f ${script_dir}/cfg/${serverName}.service ] &&
# ${csudo}cp ${script_dir}/cfg/${serverName}.service \
# [ -f ${script_dir}/cfg/${serverName2}.service ] &&
# ${csudo}cp ${script_dir}/cfg/${serverName2}.service \
# ${service_config_dir}/${serverName2}.service || :
# fi
${csudo}systemctl daemon-reload
${csudo}systemctl enable ${serverName}
${csudo}systemctl enable ${serverName2}
${csudo}systemctl daemon-reload
}
@ -719,7 +722,7 @@ function install_service() {
elif ((${service_mod} == 1)); then
install_service_on_sysvinit
else
kill_process ${serverName}
kill_process ${serverName2}
fi
}
@ -756,10 +759,10 @@ function is_version_compatible() {
if [ -f ${script_dir}/driver/vercomp.txt ]; then
min_compatible_version=$(cat ${script_dir}/driver/vercomp.txt)
else
min_compatible_version=$(${script_dir}/bin/${serverName} -V | head -1 | cut -d ' ' -f 5)
min_compatible_version=$(${script_dir}/bin/${serverName2} -V | head -1 | cut -d ' ' -f 5)
fi
exist_version=$(${installDir}/bin/${serverName} -V | head -1 | cut -d ' ' -f 3)
exist_version=$(${installDir}/bin/${serverName2} -V | head -1 | cut -d ' ' -f 3)
vercomp $exist_version "3.0.0.0"
case $? in
2)
@ -829,13 +832,13 @@ function updateProduct() {
echo -e "${GREEN}Start to update ${productName2}...${NC}"
# Stop the service if running
if ps aux | grep -v grep | grep ${serverName} &>/dev/null; then
if ps aux | grep -v grep | grep ${serverName2} &>/dev/null; then
if ((${service_mod} == 0)); then
${csudo}systemctl stop ${serverName} || :
${csudo}systemctl stop ${serverName2} || :
elif ((${service_mod} == 1)); then
${csudo}service ${serverName} stop || :
${csudo}service ${serverName2} stop || :
else
kill_process ${serverName}
kill_process ${serverName2}
fi
sleep 1
fi
@ -862,21 +865,21 @@ function updateProduct() {
openresty_work=false
echo
echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile}"
[ -f ${configDir}/taosadapter.toml ] && [ -f ${installDir}/bin/taosadapter ] && \
echo -e "${GREEN_DARK}To configure ${clientName2} Adapter ${NC}: edit ${configDir}/taosadapter.toml"
echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile2}"
[ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To configure ${clientName2} Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
if ((${service_mod} == 0)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName}${NC}"
[ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}systemctl start taosadapter ${NC}"
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName2}${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
elif ((${service_mod} == 1)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName} start${NC}"
[ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}service taosadapter start${NC}"
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName2} start${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
else
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ./${serverName}${NC}"
[ -f ${installDir}/bin/taosadapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: taosadapter &${NC}"
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ./${serverName2}${NC}"
[ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${clientName2}adapter &${NC}"
fi
if [ ${openresty_work} = 'true' ]; then
@ -887,7 +890,7 @@ function updateProduct() {
if ((${prompt_force} == 1)); then
echo ""
echo -e "${RED}Please run '${serverName} --force-keep-file' at first time for the exist ${productName2} $exist_version!${NC}"
echo -e "${RED}Please run '${serverName2} --force-keep-file' at first time for the exist ${productName2} $exist_version!${NC}"
fi
echo
echo -e "\033[44;32;1m${productName2} is updated successfully!${NC}"
@ -899,7 +902,7 @@ function updateProduct() {
echo -e "\033[44;32;1m${productName2} client is updated successfully!${NC}"
fi
rm -rf $(tar -tf ${tarName} | grep -v "^\./$")
rm -rf $(tar -tf ${tarName} | grep -Ev "^\./$|^\/")
}
function installProduct() {
@ -944,21 +947,21 @@ function installProduct() {
# Ask if to start the service
echo
echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile}"
[ -f ${configDir}/taosadapter.toml ] && [ -f ${installDir}/bin/taosadapter ] && \
echo -e "${GREEN_DARK}To configure ${clientName2} Adapter ${NC}: edit ${configDir}/taosadapter.toml"
echo -e "${GREEN_DARK}To configure ${productName2} ${NC}: edit ${cfg_install_dir}/${configFile2}"
[ -f ${configDir}/${clientName2}adapter.toml ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To configure ${clientName2} Adapter ${NC}: edit ${configDir}/${clientName2}adapter.toml"
if ((${service_mod} == 0)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName}${NC}"
[ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}systemctl start taosadapter ${NC}"
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}systemctl start ${serverName2}${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}systemctl start ${clientName2}adapter ${NC}"
elif ((${service_mod} == 1)); then
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName} start${NC}"
[ -f ${service_config_dir}/taosadapter.service ] && [ -f ${installDir}/bin/taosadapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}service taosadapter start${NC}"
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${csudo}service ${serverName2} start${NC}"
[ -f ${service_config_dir}/${clientName2}adapter.service ] && [ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${csudo}service ${clientName2}adapter start${NC}"
else
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${serverName}${NC}"
[ -f ${installDir}/bin/taosadapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: taosadapter &${NC}"
echo -e "${GREEN_DARK}To start ${productName2} ${NC}: ${serverName2}${NC}"
[ -f ${installDir}/bin/${clientName2}adapter ] && \
echo -e "${GREEN_DARK}To start ${clientName2} Adapter ${NC}: ${clientName2}adapter &${NC}"
fi
if [ ! -z "$firstEp" ]; then
@ -991,7 +994,7 @@ function installProduct() {
fi
touch ~/.${historyFile}
rm -rf $(tar -tf ${tarName} | grep -v "^\./$")
rm -rf $(tar -tf ${tarName} | grep -Ev "^\./$|^\/")
}
## ==============================Main program starts from here============================
@ -1002,7 +1005,7 @@ if [ "$verType" == "server" ]; then
echo -e "\033[44;31;5mThe default data directory ${data_dir} contains old data of ${productName2} 2.x, please clear it before installing!\033[0m"
else
# Install server and client
if [ -x ${bin_dir}/${serverName} ]; then
if [ -x ${bin_dir}/${serverName2} ]; then
update_flag=1
updateProduct
else
@ -1012,7 +1015,7 @@ if [ "$verType" == "server" ]; then
elif [ "$verType" == "client" ]; then
interactiveFqdn=no
# Only install client
if [ -x ${bin_dir}/${clientName} ]; then
if [ -x ${bin_dir}/${clientName2} ]; then
update_flag=1
updateProduct client
else

View File

@ -95,7 +95,7 @@ function install_main_path() {
${csudo}mkdir -p ${install_main_dir}/cfg
${csudo}mkdir -p ${install_main_dir}/bin
${csudo}mkdir -p ${install_main_dir}/driver
if [ $productName == "TDengine" ]; then
if [ "$productName2" == "TDengine" ]; then
${csudo}mkdir -p ${install_main_dir}/examples
fi
${csudo}mkdir -p ${install_main_dir}/include
@ -118,18 +118,19 @@ function install_bin() {
#Make link
[ -x ${install_main_dir}/bin/${clientName} ] && ${csudo}ln -s ${install_main_dir}/bin/${clientName} ${bin_link_dir}/${clientName} || :
if [ "$osType" != "Darwin" ]; then
[ -x ${install_main_dir}/bin/taosdemo ] && ${csudo}ln -s ${install_main_dir}/bin/taosdemo ${bin_link_dir}/taosdemo || :
[ -x ${install_main_dir}/bin/${demoName2} ] && ${csudo}ln -s ${install_main_dir}/bin/${demoName2} ${bin_link_dir}/${demoName2} || :
fi
[ -x ${install_main_dir}/bin/remove_client.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove_client.sh ${bin_link_dir}/${uninstallScript} || :
[ -x ${install_main_dir}/bin/set_core.sh ] && ${csudo}ln -s ${install_main_dir}/bin/set_core.sh ${bin_link_dir}/set_core || :
if [ "$verMode" == "cluster" ] && [ "$clientName" != "$clientName2" ]; then
#Make link
[ -x ${install_main_dir}/bin/${clientName} ] && ${csudo}ln -s ${install_main_dir}/bin/${clientName} ${bin_link_dir}/${clientName2} || :
[ -x ${install_main_dir}/bin/${clientName2} ] && ${csudo}ln -s ${install_main_dir}/bin/${clientName2} ${bin_link_dir}/${clientName2} || :
if [ "$osType" != "Darwin" ]; then
[ -x ${install_main_dir}/bin/taosdemo ] && ${csudo}ln -s ${install_main_dir}/bin/taosdemo ${bin_link_dir}/${demoName2} || :
[ -x ${install_main_dir}/bin/${demoName2} ] && ${csudo}ln -s ${install_main_dir}/bin/${demoName2} ${bin_link_dir}/${demoName2} || :
[ -x ${install_main_dir}/bin/${benchmarkName2} ] && ${csudo}ln -s ${install_main_dir}/bin/${benchmarkName2} ${bin_link_dir}/${benchmarkName2} || :
fi
[ -x ${install_main_dir}/bin/remove_client.sh ] && ${csudo}ln -s ${install_main_dir}/bin/remove_client.sh ${bin_link_dir}/${uninstallScript2} || :
[ -x ${install_main_dir}/bin/remove_client.sh ] && ${csudo}ln -sf ${install_main_dir}/bin/remove_client.sh ${bin_link_dir}/${uninstallScript2} || :
fi
}
@ -305,7 +306,7 @@ function update_TDengine() {
echo
echo -e "\033[44;32;1m${productName2} client is updated successfully!${NC}"
rm -rf $(tar -tf ${tarName})
rm -rf $(tar -tf ${tarName} | grep -Ev "^\./$|^\/")
}
function install_TDengine() {
@ -332,7 +333,7 @@ function install_TDengine() {
echo
echo -e "\033[44;32;1m${productName2} client is installed successfully!${NC}"
rm -rf $(tar -tf ${tarName})
rm -rf $(tar -tf ${tarName} | grep -Ev "^\./$|^\/")
}

View File

@ -2,7 +2,7 @@
#
# Generate tar.gz package for linux client in all os system
set -e
# set -x
set -x
curr_dir=$(pwd)
compile_dir=$1
@ -23,9 +23,12 @@ clientName2="${12}"
productName="TDengine"
clientName="taos"
benchmarkName="taosBenchmark"
configFile="taos.cfg"
tarName="package.tar.gz"
benchmarkName2="${clientName2}Benchmark"
if [ "$osType" != "Darwin" ]; then
script_dir="$(dirname $(readlink -f $0))"
top_dir="$(readlink -f ${script_dir}/../..)"
@ -53,11 +56,12 @@ fi
# Directories and files.
if [ "$verMode" == "cluster" ]; then
sed -i 's/verMode=edge/verMode=cluster/g' ${script_dir}/remove_client.sh
sed -i "s/clientName2=\"taos\"/clientName2=\"${clientName2}\"/g" ${script_dir}/remove_client.sh
sed -i "s/productName2=\"TDengine\"/productName2=\"${productName2}\"/g" ${script_dir}/remove_client.sh
fi
#if [ "$verMode" == "cluster" ]; then
# sed -i 's/verMode=edge/verMode=cluster/g' ${script_dir}/remove_client.sh
# sed -i "s/clientName2=\"taos\"/clientName2=\"${clientName2}\"/g" ${script_dir}/remove_client.sh
# sed -i "s/configFile2=\"taos\"/configFile2=\"${clientName2}\"/g" ${script_dir}/remove_client.sh
# sed -i "s/productName2=\"TDengine\"/productName2=\"${productName2}\"/g" ${script_dir}/remove_client.sh
#fi
if [ "$osType" != "Darwin" ]; then
if [ "$pagMode" == "lite" ]; then
@ -66,6 +70,7 @@ if [ "$osType" != "Darwin" ]; then
${script_dir}/remove_client.sh"
else
bin_files="${build_dir}/bin/${clientName} \
${build_dir}/bin/${benchmarkName} \
${script_dir}/remove_client.sh \
${script_dir}/set_core.sh \
${script_dir}/get_client.sh"
@ -153,6 +158,7 @@ if [ "$verMode" == "cluster" ]; then
sed -i 's/verMode=edge/verMode=cluster/g' install_client_temp.sh
sed -i "s/serverName2=\"taosd\"/serverName2=\"${serverName2}\"/g" install_client_temp.sh
sed -i "s/clientName2=\"taos\"/clientName2=\"${clientName2}\"/g" install_client_temp.sh
sed -i "s/configFile2=\"taos.cfg\"/configFile2=\"${clientName2}.cfg\"/g" install_client_temp.sh
sed -i "s/productName2=\"TDengine\"/productName2=\"${productName2}\"/g" install_client_temp.sh
sed -i "s/emailName2=\"taosdata.com\"/emailName2=\"${cusEmail2}\"/g" install_client_temp.sh

View File

@ -96,7 +96,7 @@ else
${taostools_bin_files} \
${taosx_bin} \
${explorer_bin_files} \
${build_dir}/bin/taosadapter \
${build_dir}/bin/${clientName}adapter \
${build_dir}/bin/udfd \
${script_dir}/remove.sh \
${script_dir}/set_core.sh \
@ -135,12 +135,12 @@ mkdir -p ${install_dir}/inc && cp ${header_files} ${install_dir}/inc
mkdir -p ${install_dir}/cfg && cp ${cfg_dir}/${configFile} ${install_dir}/cfg/${configFile}
if [ -f "${compile_dir}/test/cfg/taosadapter.toml" ]; then
cp ${compile_dir}/test/cfg/taosadapter.toml ${install_dir}/cfg || :
if [ -f "${compile_dir}/test/cfg/${clientName}adapter.toml" ]; then
cp ${compile_dir}/test/cfg/${clientName}adapter.toml ${install_dir}/cfg || :
fi
if [ -f "${compile_dir}/test/cfg/taosadapter.service" ]; then
cp ${compile_dir}/test/cfg/taosadapter.service ${install_dir}/cfg || :
if [ -f "${compile_dir}/test/cfg/${clientName}adapter.service" ]; then
cp ${compile_dir}/test/cfg/${clientName}adapter.service ${install_dir}/cfg || :
fi
if [ -f "${cfg_dir}/${serverName}.service" ]; then
@ -152,16 +152,16 @@ mkdir -p ${install_dir}/init.d && cp ${init_file_deb} ${install_dir}/init.d/${se
mkdir -p ${install_dir}/init.d && cp ${init_file_rpm} ${install_dir}/init.d/${serverName}.rpm
if [ $adapterName != "taosadapter" ]; then
mv ${install_dir}/cfg/taosadapter.toml ${install_dir}/cfg/$adapterName.toml
mv ${install_dir}/cfg/${clientName2}adapter.toml ${install_dir}/cfg/$adapterName.toml
sed -i "s/path = \"\/var\/log\/taos\"/path = \"\/var\/log\/${productName}\"/g" ${install_dir}/cfg/$adapterName.toml
sed -i "s/password = \"taosdata\"/password = \"${defaultPasswd}\"/g" ${install_dir}/cfg/$adapterName.toml
mv ${install_dir}/cfg/taosadapter.service ${install_dir}/cfg/$adapterName.service
mv ${install_dir}/cfg/${clientName2}adapter.service ${install_dir}/cfg/$adapterName.service
sed -i "s/TDengine/${productName}/g" ${install_dir}/cfg/$adapterName.service
sed -i "s/taosAdapter/${adapterName}/g" ${install_dir}/cfg/$adapterName.service
sed -i "s/taosadapter/${adapterName}/g" ${install_dir}/cfg/$adapterName.service
mv ${install_dir}/bin/taosadapter ${install_dir}/bin/${adapterName}
mv ${install_dir}/bin/${clientName2}adapter ${install_dir}/bin/${adapterName}
mv ${install_dir}/bin/taosd-dump-cfg.gdb ${install_dir}/bin/${serverName}-dump-cfg.gdb
fi
@ -233,8 +233,10 @@ if [ "$verMode" == "cluster" ]; then
sed 's/verMode=edge/verMode=cluster/g' ${install_dir}/bin/remove.sh >>remove_temp.sh
sed -i "s/serverName2=\"taosd\"/serverName2=\"${serverName2}\"/g" remove_temp.sh
sed -i "s/clientName2=\"taos\"/clientName2=\"${clientName2}\"/g" remove_temp.sh
sed -i "s/configFile2=\"taos.cfg\"/configFile2=\"${clientName2}.cfg\"/g" remove_temp.sh
sed -i "s/productName2=\"TDengine\"/productName2=\"${productName2}\"/g" remove_temp.sh
sed -i "s/emailName2=\"taosdata.com\"/emailName2=\"${cusEmail2}\"/g" remove_temp.sh
cusDomain=`echo "${cusEmail2}" | sed 's/^[^@]*@//'`
sed -i "s/emailName2=\"taosdata.com\"/emailName2=\"${cusDomain}\"/g" remove_temp.sh
mv remove_temp.sh ${install_dir}/bin/remove.sh
fi
if [ "$verMode" == "cloud" ]; then
@ -262,8 +264,10 @@ if [ "$verMode" == "cluster" ]; then
sed -i 's/verMode=edge/verMode=cluster/g' install_temp.sh
sed -i "s/serverName2=\"taosd\"/serverName2=\"${serverName2}\"/g" install_temp.sh
sed -i "s/clientName2=\"taos\"/clientName2=\"${clientName2}\"/g" install_temp.sh
sed -i "s/configFile2=\"taos.cfg\"/configFile2=\"${clientName2}.cfg\"/g" install_temp.sh
sed -i "s/productName2=\"TDengine\"/productName2=\"${productName2}\"/g" install_temp.sh
sed -i "s/emailName2=\"taosdata.com\"/emailName2=\"${cusEmail2}\"/g" install_temp.sh
cusDomain=`echo "${cusEmail2}" | sed 's/^[^@]*@//'`
sed -i "s/emailName2=\"taosdata.com\"/emailName2=\"${cusDomain}\"/g" install_temp.sh
mv install_temp.sh ${install_dir}/install.sh
fi
if [ "$verMode" == "cloud" ]; then

View File

@ -40,11 +40,16 @@ serverName2="taosd"
clientName2="taos"
productName2="TDengine"
adapterName2="${clientName2}adapter"
demoName2="${clientName2}demo"
benchmarkName2="${clientName2}Benchmark"
dumpName2="${clientName2}dump"
keeperName2="${clientName2}keeper"
xName2="${clientName2}x"
explorerName2="${clientName2}-explorer"
uninstallScript2="rm${clientName2}"
installDir="/usr/local/${clientName}"
installDir="/usr/local/${clientName2}"
#install main path
install_main_dir=${installDir}
@ -55,8 +60,8 @@ local_bin_link_dir="/usr/local/bin"
service_config_dir="/etc/systemd/system"
taos_service_name=${serverName}
taosadapter_service_name="taosadapter"
taos_service_name=${serverName2}
taosadapter_service_name="${clientName2}adapter"
tarbitrator_service_name="tarbitratord"
csudo=""
if command -v sudo >/dev/null; then
@ -84,14 +89,14 @@ else
fi
function kill_taosadapter() {
pid=$(ps -ef | grep "taosadapter" | grep -v "grep" | awk '{print $2}')
pid=$(ps -ef | grep "${adapterName2}" | grep -v "grep" | awk '{print $2}')
if [ -n "$pid" ]; then
${csudo}kill -9 $pid || :
fi
}
function kill_taosd() {
pid=$(ps -ef | grep ${serverName} | grep -v "grep" | awk '{print $2}')
pid=$(ps -ef | grep ${serverName2} | grep -v "grep" | awk '{print $2}')
if [ -n "$pid" ]; then
${csudo}kill -9 $pid || :
fi
@ -109,17 +114,17 @@ function clean_bin() {
${csudo}rm -f ${bin_link_dir}/${clientName} || :
${csudo}rm -f ${bin_link_dir}/${serverName} || :
${csudo}rm -f ${bin_link_dir}/udfd || :
${csudo}rm -f ${bin_link_dir}/taosadapter || :
${csudo}rm -f ${bin_link_dir}/taosBenchmark || :
${csudo}rm -f ${bin_link_dir}/taosdemo || :
${csudo}rm -f ${bin_link_dir}/taosdump || :
${csudo}rm -f ${bin_link_dir}/${uninstallScript} || :
${csudo}rm -f ${bin_link_dir}/${adapterName2} || :
${csudo}rm -f ${bin_link_dir}/${benchmarkName2} || :
${csudo}rm -f ${bin_link_dir}/${demoName2} || :
${csudo}rm -f ${bin_link_dir}/${dumpName2} || :
${csudo}rm -f ${bin_link_dir}/${uninstallScript} || :
${csudo}rm -f ${bin_link_dir}/tarbitrator || :
${csudo}rm -f ${bin_link_dir}/set_core || :
${csudo}rm -f ${bin_link_dir}/TDinsight.sh || :
${csudo}rm -f ${bin_link_dir}/taoskeeper || :
${csudo}rm -f ${bin_link_dir}/taosx || :
${csudo}rm -f ${bin_link_dir}/taos-explorer || :
${csudo}rm -f ${bin_link_dir}/${keeperName2} || :
${csudo}rm -f ${bin_link_dir}/${xName2} || :
${csudo}rm -f ${bin_link_dir}/${explorerName2} || :
if [ "$verMode" == "cluster" ] && [ "$clientName" != "$clientName2" ]; then
${csudo}rm -f ${bin_link_dir}/${clientName2} || :
@ -130,8 +135,8 @@ function clean_bin() {
}
function clean_local_bin() {
${csudo}rm -f ${local_bin_link_dir}/taosBenchmark || :
${csudo}rm -f ${local_bin_link_dir}/taosdemo || :
${csudo}rm -f ${local_bin_link_dir}/${benchmarkName2} || :
${csudo}rm -f ${local_bin_link_dir}/${demoName2} || :
}
function clean_lib() {
@ -173,7 +178,7 @@ function clean_service_on_systemd() {
${csudo}systemctl disable ${taos_service_name} &>/dev/null || echo &>/dev/null
${csudo}rm -f ${taosd_service_config}
taosadapter_service_config="${service_config_dir}/taosadapter.service"
taosadapter_service_config="${service_config_dir}/${clientName2}adapter.service"
if systemctl is-active --quiet ${taosadapter_service_name}; then
echo "${productName2} ${clientName2}Adapter is running, stopping it..."
${csudo}systemctl stop ${taosadapter_service_name} &>/dev/null || echo &>/dev/null
@ -235,8 +240,8 @@ function clean_service_on_sysvinit() {
function clean_service_on_launchctl() {
${csudouser}launchctl unload -w /Library/LaunchDaemons/com.taosdata.taosd.plist > /dev/null 2>&1 || :
${csudo}rm /Library/LaunchDaemons/com.taosdata.taosd.plist > /dev/null 2>&1 || :
${csudouser}launchctl unload -w /Library/LaunchDaemons/com.taosdata.taosadapter.plist > /dev/null 2>&1 || :
${csudo}rm /Library/LaunchDaemons/com.taosdata.taosadapter.plist > /dev/null 2>&1 || :
${csudouser}launchctl unload -w /Library/LaunchDaemons/com.taosdata.${clientName2}adapter.plist > /dev/null 2>&1 || :
${csudo}rm /Library/LaunchDaemons/com.taosdata.${clientName2}adapter.plist > /dev/null 2>&1 || :
}
function clean_service() {

View File

@ -15,11 +15,12 @@ uninstallScript="rmtaos"
clientName2="taos"
productName2="TDengine"
benchmarkName2="${clientName}Benchmark"
dumpName2="${clientName}dump"
uninstallScript2="rm${clientName}"
benchmarkName2="${clientName2}Benchmark"
demoName2="${clientName2}demo"
dumpName2="${clientName2}dump"
uninstallScript2="rm${clientName2}"
installDir="/usr/local/${clientName}"
installDir="/usr/local/${clientName2}"
#install main path
install_main_dir=${installDir}
@ -44,14 +45,17 @@ function kill_client() {
function clean_bin() {
# Remove link
${csudo}rm -f ${bin_link_dir}/${clientName} || :
${csudo}rm -f ${bin_link_dir}/taosdemo || :
${csudo}rm -f ${bin_link_dir}/taosdump || :
${csudo}rm -f ${bin_link_dir}/${clientName2} || :
${csudo}rm -f ${bin_link_dir}/${demoName2} || :
${csudo}rm -f ${bin_link_dir}/${benchmarkName2} || :
${csudo}rm -f ${bin_link_dir}/${dumpName2} || :
${csudo}rm -f ${bin_link_dir}/${uninstallScript} || :
${csudo}rm -f ${bin_link_dir}/set_core || :
if [ "$verMode" == "cluster" ] && [ "$clientName" != "$clientName2" ]; then
${csudo}rm -f ${bin_link_dir}/${clientName2} || :
${csudo}rm -f ${bin_link_dir}/${demoName2} || :
${csudo}rm -f ${bin_link_dir}/${benchmarkName2} || :
${csudo}rm -f ${bin_link_dir}/${dumpName2} || :
${csudo}rm -f ${bin_link_dir}/${uninstallScript2} || :
fi

View File

@ -30,6 +30,10 @@
#include "tsched.h"
#include "ttime.h"
#if defined(CUS_NAME) || defined(CUS_PROMPT) || defined(CUS_EMAIL)
#include "cus_name.h"
#endif
#define TSC_VAR_NOT_RELEASE 1
#define TSC_VAR_RELEASED 0
@ -541,9 +545,15 @@ void taos_init_imp(void) {
deltaToUtcInitOnce();
if (taosCreateLog("taoslog", 10, configDir, NULL, NULL, NULL, NULL, 1) != 0) {
char logDirName[64] = {0};
#ifdef CUS_PROMPT
snprintf(logDirName, 64, "%slog", CUS_PROMPT);
#else
snprintf(logDirName, 64, "taoslog");
#endif
if (taosCreateLog(logDirName, 10, configDir, NULL, NULL, NULL, NULL, 1) != 0) {
// ignore create log failed, only print
printf(" WARING: Create taoslog failed:%s. configDir=%s\n", strerror(errno), configDir);
printf(" WARING: Create %s failed:%s. configDir=%s\n", logDirName, strerror(errno), configDir);
}
if (taosInitCfg(configDir, NULL, NULL, NULL, NULL, 1) != 0) {

View File

@ -114,6 +114,8 @@ static const SSysDbTableSchema userFuncSchema[] = {
{.name = "create_time", .bytes = 8, .type = TSDB_DATA_TYPE_TIMESTAMP, .sysInfo = false},
{.name = "code_len", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false},
{.name = "bufsize", .bytes = 4, .type = TSDB_DATA_TYPE_INT, .sysInfo = false},
{.name = "func_language", .bytes = TSDB_TYPE_STR_MAX_LEN - 1 + VARSTR_HEADER_SIZE, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false},
{.name = "func_body", .bytes = TSDB_MAX_BINARY_LEN, .type = TSDB_DATA_TYPE_VARCHAR, .sysInfo = false},
};
static const SSysDbTableSchema userIdxSchema[] = {

View File

@ -1864,7 +1864,9 @@ static char* formatTimestamp(char* buf, int64_t val, int precision) {
}
}
struct tm ptm = {0};
taosLocalTime(&tt, &ptm);
if (taosLocalTime(&tt, &ptm, buf) == NULL) {
return buf;
}
size_t pos = strftime(buf, 35, "%Y-%m-%d %H:%M:%S", &ptm);
if (precision == TSDB_TIME_PRECISION_NANO) {

View File

@ -228,7 +228,11 @@ static int32_t taosLoadCfg(SConfig *pCfg, const char **envCmd, const char *input
taosExpandDir(inputCfgDir, cfgDir, PATH_MAX);
if (taosIsDir(cfgDir)) {
#ifdef CUS_PROMPT
snprintf(cfgFile, sizeof(cfgFile), "%s" TD_DIRSEP "%s.cfg", cfgDir, CUS_PROMPT);
#else
snprintf(cfgFile, sizeof(cfgFile), "%s" TD_DIRSEP "taos.cfg", cfgDir);
#endif
} else {
tstrncpy(cfgFile, cfgDir, sizeof(cfgDir));
}
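
The hunk above derives the config file name from the customized prompt; the directory must come first in the format arguments so the result is "<cfgDir>/<prompt>.cfg", mirroring the taos.cfg branch. A minimal standalone sketch of that composition, assuming CUS_PROMPT and TD_DIRSEP expand to the hypothetical values shown:

    #include <limits.h>   /* PATH_MAX */
    #include <stdio.h>

    /* assumed expansions -- in the real build these come from cus_name.h and the OS layer */
    #define CUS_PROMPT "taos"
    #define TD_DIRSEP  "/"

    int main(void) {
      char cfgDir[PATH_MAX] = "/etc/taos";       /* placeholder directory */
      char cfgFile[PATH_MAX + 100] = {0};

      /* directory first, then the prompt-derived file name: "/etc/taos/taos.cfg" */
      snprintf(cfgFile, sizeof(cfgFile), "%s" TD_DIRSEP "%s.cfg", cfgDir, CUS_PROMPT);
      printf("%s\n", cfgFile);
      return 0;
    }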

View File

@ -727,7 +727,7 @@ int64_t taosTimeAdd(int64_t t, int64_t duration, char unit, int32_t precision) {
struct tm tm;
time_t tt = (time_t)(t / TSDB_TICK_PER_SECOND(precision));
taosLocalTime(&tt, &tm);
taosLocalTime(&tt, &tm, NULL);
int32_t mon = tm.tm_year * 12 + tm.tm_mon + (int32_t)numOfMonth;
tm.tm_year = mon / 12;
tm.tm_mon = mon % 12;
@ -750,11 +750,11 @@ int32_t taosTimeCountInterval(int64_t skey, int64_t ekey, int64_t interval, char
struct tm tm;
time_t t = (time_t)skey;
taosLocalTime(&t, &tm);
taosLocalTime(&t, &tm, NULL);
int32_t smon = tm.tm_year * 12 + tm.tm_mon;
t = (time_t)ekey;
taosLocalTime(&t, &tm);
taosLocalTime(&t, &tm, NULL);
int32_t emon = tm.tm_year * 12 + tm.tm_mon;
if (unit == 'y') {
@ -774,7 +774,7 @@ int64_t taosTimeTruncate(int64_t t, const SInterval* pInterval, int32_t precisio
start /= (int64_t)(TSDB_TICK_PER_SECOND(precision));
struct tm tm;
time_t tt = (time_t)start;
taosLocalTime(&tt, &tm);
taosLocalTime(&tt, &tm, NULL);
tm.tm_sec = 0;
tm.tm_min = 0;
tm.tm_hour = 0;
@ -867,13 +867,17 @@ const char* fmtts(int64_t ts) {
if (ts > -62135625943 && ts < 32503651200) {
time_t t = (time_t)ts;
taosLocalTime(&t, &tm);
if (taosLocalTime(&t, &tm, buf) == NULL) {
return buf;
}
pos += strftime(buf + pos, sizeof(buf), "s=%Y-%m-%d %H:%M:%S", &tm);
}
if (ts > -62135625943000 && ts < 32503651200000) {
time_t t = (time_t)(ts / 1000);
taosLocalTime(&t, &tm);
if (taosLocalTime(&t, &tm, buf) == NULL) {
return buf;
}
if (pos > 0) {
buf[pos++] = ' ';
buf[pos++] = '|';
@ -885,7 +889,9 @@ const char* fmtts(int64_t ts) {
{
time_t t = (time_t)(ts / 1000000);
taosLocalTime(&t, &tm);
if (taosLocalTime(&t, &tm, buf) == NULL) {
return buf;
}
if (pos > 0) {
buf[pos++] = ' ';
buf[pos++] = '|';
@ -937,7 +943,9 @@ void taosFormatUtcTime(char* buf, int32_t bufLen, int64_t t, int32_t precision)
ASSERT(false);
}
taosLocalTime(&quot, &ptm);
if (taosLocalTime(&quot, &ptm, buf) == NULL) {
return;
}
int32_t length = (int32_t)strftime(ts, 40, "%Y-%m-%dT%H:%M:%S", &ptm);
length += snprintf(ts + length, fractionLen, format, mod);
length += (int32_t)strftime(ts + length, 40 - length, "%z", &ptm);

View File

@ -19,6 +19,21 @@
#include "tconfig.h"
#include "tglobal.h"
#if defined(CUS_NAME) || defined(CUS_PROMPT) || defined(CUS_EMAIL)
#include "cus_name.h"
#else
#ifndef CUS_NAME
#define CUS_NAME "TDengine"
#endif
#ifndef CUS_PROMPT
#define CUS_PROMPT "taos"
#endif
#ifndef CUS_EMAIL
#define CUS_EMAIL "<support@taosdata.com>"
#endif
#endif
// clang-format off
#define DM_APOLLO_URL "The apollo string to use when configuring the server, such as: -a 'jsonFile:./tests/cfg.json', cfg.json text can be '{\"fqdn\":\"td1\"}'."
#define DM_CFG_DIR "Configuration directory."
@ -228,7 +243,7 @@ static void dmDumpCfg() {
}
static int32_t dmInitLog() {
return taosCreateLog("taosdlog", 1, configDir, global.envCmd, global.envFile, global.apolloUrl, global.pArgs, 0);
return taosCreateLog(CUS_PROMPT"dlog", 1, configDir, global.envCmd, global.envFile, global.apolloUrl, global.pArgs, 0);
}
static void taosCleanupArgs() {

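dmInitLog now builds the log file prefix from CUS_PROMPT through adjacent string-literal concatenation, so a default build still logs to "taosdlog" while a customized build picks up its own prompt. A tiny sketch of that expansion; the fallback define mirrors the one added above and the printf is only for illustration:

    #include <stdio.h>

    #ifndef CUS_PROMPT
    #define CUS_PROMPT "taos"   /* fallback, as in the hunk above */
    #endif

    int main(void) {
      /* adjacent literals are concatenated at compile time: "taos" "dlog" -> "taosdlog" */
      const char *logName = CUS_PROMPT "dlog";
      printf("%s\n", logName);
      return 0;
    }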
View File

@ -32,6 +32,7 @@ bool mndTopicExistsForDb(SMnode *pMnode, SDbObj *pDb);
const char *mndTopicGetShowName(const char topic[TSDB_TOPIC_FNAME_LEN]);
int32_t mndSetTopicCommitLogs(SMnode *pMnode, STrans *pTrans, SMqTopicObj *pTopic);
int32_t mndGetNumOfTopics(SMnode *pMnode, char *dbName, int32_t *pNumOfTopics);
#ifdef __cplusplus
}

View File

@ -846,6 +846,18 @@ static int32_t mndProcessAlterDbReq(SRpcMsg *pReq) {
goto _OVER;
}
int32_t numOfTopics = 0;
if (mndGetNumOfTopics(pMnode, pDb->name, &numOfTopics) != 0) {
goto _OVER;
}
if (numOfTopics != 0 && alterReq.walRetentionPeriod == 0) {
terrno = TSDB_CODE_MND_DB_RETENTION_PERIOD_ZERO;
mError("db:%s, not allowed to set WAL_RETENTION_PERIOD 0 when there are topics defined. numOfTopics:%d", pDb->name,
numOfTopics);
goto _OVER;
}
memcpy(&dbObj, pDb, sizeof(SDbObj));
if (dbObj.cfg.pRetensions != NULL) {
dbObj.cfg.pRetensions = taosArrayDup(pDb->cfg.pRetensions, NULL);

View File

@ -543,6 +543,7 @@ static int32_t mndRetrieveFuncs(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBl
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataSetVal(pColInfo, numOfRows, (const char *)b2, false);
taosMemoryFree(b2);
} else {
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataSetVal(pColInfo, numOfRows, NULL, true);
@ -569,6 +570,26 @@ static int32_t mndRetrieveFuncs(SRpcMsg *pReq, SShowObj *pShow, SSDataBlock *pBl
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
colDataSetVal(pColInfo, numOfRows, (const char *)&pFunc->bufSize, false);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
char* language = "";
if (pFunc->scriptType == TSDB_FUNC_SCRIPT_BIN_LIB) {
language = "C";
} else if (pFunc->scriptType == TSDB_FUNC_SCRIPT_PYTHON) {
language = "Python";
}
char varLang[TSDB_TYPE_STR_MAX_LEN + 1] = {0};
varDataSetLen(varLang, strlen(language));
strcpy(varDataVal(varLang), language);
colDataSetVal(pColInfo, numOfRows, (const char *)varLang, false);
pColInfo = taosArrayGet(pBlock->pDataBlock, cols++);
int32_t varCodeLen = (pFunc->codeSize + VARSTR_HEADER_SIZE) > TSDB_MAX_BINARY_LEN ? TSDB_MAX_BINARY_LEN : pFunc->codeSize + VARSTR_HEADER_SIZE;
char *b4 = taosMemoryMalloc(varCodeLen);
memcpy(varDataVal(b4), pFunc->pCode, varCodeLen - VARSTR_HEADER_SIZE);
varDataSetLen(b4, varCodeLen - VARSTR_HEADER_SIZE);
colDataSetVal(pColInfo, numOfRows, (const char*)b4, false);
taosMemoryFree(b4);
numOfRows++;
sdbRelease(pSdb, pFunc);
}
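
The new func_language and func_body columns are emitted as var-length strings: the payload goes into varDataVal() and its length is recorded with varDataSetLen() before colDataSetVal() is called. A self-contained sketch of that buffer layout, assuming the usual 2-byte VARSTR length header; the macros are redefined here only for illustration:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* assumed layout: 2-byte length header followed by the payload bytes */
    #define VARSTR_HEADER_SIZE  sizeof(uint16_t)
    #define varDataLen(v)       (*(uint16_t *)(v))
    #define varDataVal(v)       ((char *)(v) + VARSTR_HEADER_SIZE)
    #define varDataSetLen(v, l) (varDataLen(v) = (uint16_t)(l))

    int main(void) {
      const char *language = "Python";
      char varLang[64 + 1] = {0};

      varDataSetLen(varLang, strlen(language));       /* header first */
      strcpy(varDataVal(varLang), language);          /* then the payload */

      printf("len=%u payload=%s\n", (unsigned)varDataLen(varLang), varDataVal(varLang));
      return 0;
    }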

View File

@ -653,7 +653,7 @@ _OVER:
pMsg->msgType == TDMT_MND_TRANS_TIMER || pMsg->msgType == TDMT_MND_TTL_TIMER ||
pMsg->msgType == TDMT_MND_UPTIME_TIMER) {
mTrace("timer not process since mnode restored:%d stopped:%d, sync restored:%d role:%s ", pMnode->restored,
pMnode->stopped, state.restored, syncStr(state.restored));
pMnode->stopped, state.restored, syncStr(state.state));
return -1;
}

View File

@ -605,6 +605,12 @@ static int32_t mndProcessCreateTopicReq(SRpcMsg *pReq) {
goto _OVER;
}
if (pDb->cfg.walRetentionPeriod == 0) {
terrno = TSDB_CODE_MND_DB_RETENTION_PERIOD_ZERO;
mError("db:%s, not allowed to create topic when WAL_RETENTION_PERIOD is zero", pDb->name);
goto _OVER;
}
code = mndCreateTopic(pMnode, pReq, &createTopicReq, pDb, pReq->info.conn.user);
if (code == 0) {
code = TSDB_CODE_ACTION_IN_PROGRESS;
@ -793,7 +799,7 @@ static int32_t mndProcessDropTopicReq(SRpcMsg *pReq) {
return TSDB_CODE_ACTION_IN_PROGRESS;
}
static int32_t mndGetNumOfTopics(SMnode *pMnode, char *dbName, int32_t *pNumOfTopics) {
int32_t mndGetNumOfTopics(SMnode *pMnode, char *dbName, int32_t *pNumOfTopics) {
*pNumOfTopics = 0;
SSdb *pSdb = pMnode->pSdb;
@ -943,4 +949,4 @@ int32_t mndDropTopicByDB(SMnode *pMnode, STrans *pTrans, SDbObj *pDb) {
return code;
}
#endif
#endif

View File

@ -88,98 +88,6 @@ _exit:
return code;
}
extern int32_t tsdbDelFileToJson(const SDelFile *pDelFile, cJSON *pJson);
extern int32_t tsdbJsonToDelFile(const cJSON *pJson, SDelFile *pDelFile);
extern int32_t tsdbDFileSetToJson(const SDFileSet *pSet, cJSON *pJson);
extern int32_t tsdbJsonToDFileSet(const cJSON *pJson, SDFileSet *pDelFile);
static int32_t tsdbFSToJsonStr(STsdbFS *pFS, char **ppStr) {
int32_t code = 0;
int32_t lino = 0;
cJSON *pJson;
ppStr[0] = NULL;
pJson = cJSON_CreateObject();
TSDB_CHECK_NULL(pJson, code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
// format version
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "format", 1), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
// SDelFile
if (pFS->pDelFile) {
code = tsdbDelFileToJson(pFS->pDelFile, cJSON_AddObjectToObject(pJson, "del"));
TSDB_CHECK_CODE(code, lino, _exit);
}
// aDFileSet
cJSON *aSetJson = cJSON_AddArrayToObject(pJson, "file set");
TSDB_CHECK_NULL(aSetJson, code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
for (int32_t iSet = 0; iSet < taosArrayGetSize(pFS->aDFileSet); iSet++) {
cJSON *pSetJson = cJSON_CreateObject();
TSDB_CHECK_NULL(pSetJson, code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
cJSON_AddItemToArray(aSetJson, pSetJson);
code = tsdbDFileSetToJson(taosArrayGet(pFS->aDFileSet, iSet), pSetJson);
TSDB_CHECK_CODE(code, lino, _exit);
}
// print
ppStr[0] = cJSON_Print(pJson);
TSDB_CHECK_NULL(ppStr[0], code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
_exit:
cJSON_Delete(pJson);
if (code) tsdbError("%s failed at line %d since %s", __func__, lino, tstrerror(code));
return code;
}
static int32_t tsdbJsonStrToFS(const char *pStr, STsdbFS *pFS) {
int32_t code = 0;
int32_t lino;
cJSON *pJson = cJSON_Parse(pStr);
TSDB_CHECK(pJson, code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
const cJSON *pItem;
// format version
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "format")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
// SDelFile
if (cJSON_IsObject(pItem = cJSON_GetObjectItem(pJson, "del"))) {
pFS->pDelFile = (SDelFile *)taosMemoryCalloc(1, sizeof(SDelFile));
TSDB_CHECK_NULL(pFS->pDelFile, code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
code = tsdbJsonToDelFile(pItem, pFS->pDelFile);
TSDB_CHECK_CODE(code, lino, _exit);
pFS->pDelFile->nRef = 1;
} else {
pFS->pDelFile = NULL;
}
// aDFileSet
taosArrayClear(pFS->aDFileSet);
const cJSON *pSetJson;
TSDB_CHECK(cJSON_IsArray(pItem = cJSON_GetObjectItem(pJson, "file set")), code, lino, _exit,
TSDB_CODE_FILE_CORRUPTED);
cJSON_ArrayForEach(pSetJson, pItem) {
SDFileSet *pSet = (SDFileSet *)taosArrayReserve(pFS->aDFileSet, 1);
TSDB_CHECK_NULL(pSet, code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
code = tsdbJsonToDFileSet(pSetJson, pSet);
TSDB_CHECK_CODE(code, lino, _exit);
}
_exit:
cJSON_Delete(pJson);
if (code) tsdbError("%s failed at line %d since %s", __func__, lino, tstrerror(code));
return code;
}
static int32_t tsdbSaveFSToFile(STsdbFS *pFS, const char *fname) {
int32_t code = 0;
int32_t lino = 0;
@ -224,84 +132,6 @@ _exit:
return code;
}
static int32_t tsdbSaveFSToJsonFile(STsdbFS *pFS, const char *fname) {
int32_t code;
int32_t lino;
char *pData;
code = tsdbFSToJsonStr(pFS, &pData);
if (code) return code;
TdFilePtr pFD = taosOpenFile(fname, TD_FILE_WRITE | TD_FILE_CREATE | TD_FILE_TRUNC);
if (pFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
TSDB_CHECK_CODE(code, lino, _exit);
}
int64_t n = taosWriteFile(pFD, pData, strlen(pData) + 1);
if (n < 0) {
code = TAOS_SYSTEM_ERROR(errno);
taosCloseFile(&pFD);
TSDB_CHECK_CODE(code, lino, _exit);
}
if (taosFsyncFile(pFD) < 0) {
code = TAOS_SYSTEM_ERROR(errno);
taosCloseFile(&pFD);
TSDB_CHECK_CODE(code, lino, _exit);
}
taosCloseFile(&pFD);
_exit:
taosMemoryFree(pData);
if (code) {
tsdbError("%s failed at line %d since %s", __func__, lino, tstrerror(code));
}
return code;
}
static int32_t tsdbLoadFSFromJsonFile(const char *fname, STsdbFS *pFS) {
int32_t code = 0;
int32_t lino = 0;
char *pData = NULL;
TdFilePtr pFD = taosOpenFile(fname, TD_FILE_READ);
if (pFD == NULL) {
code = TAOS_SYSTEM_ERROR(errno);
TSDB_CHECK_CODE(code, lino, _exit);
}
int64_t size;
if (taosFStatFile(pFD, &size, NULL) < 0) {
code = TAOS_SYSTEM_ERROR(errno);
taosCloseFile(&pFD);
TSDB_CHECK_CODE(code, lino, _exit);
}
if ((pData = taosMemoryMalloc(size)) == NULL) {
code = TSDB_CODE_OUT_OF_MEMORY;
taosCloseFile(&pFD);
TSDB_CHECK_CODE(code, lino, _exit);
}
if (taosReadFile(pFD, pData, size) < 0) {
code = TAOS_SYSTEM_ERROR(errno);
taosCloseFile(&pFD);
TSDB_CHECK_CODE(code, lino, _exit);
}
taosCloseFile(&pFD);
TSDB_CHECK_CODE(code = tsdbJsonStrToFS(pData, pFS), lino, _exit);
_exit:
if (pData) taosMemoryFree(pData);
if (code) tsdbError("%s failed at line %d since %s", __func__, lino, tstrerror(code));
return code;
}
int32_t tsdbFSCreate(STsdbFS *pFS) {
int32_t code = 0;
@ -439,8 +269,7 @@ int32_t tDFileSetCmprFn(const void *p1, const void *p2) {
return 0;
}
static void tsdbGetCurrentFName(STsdb *pTsdb, char *current, char *current_t, char *current_json,
char *current_json_t) {
static void tsdbGetCurrentFName(STsdb *pTsdb, char *current, char *current_t) {
SVnode *pVnode = pTsdb->pVnode;
if (pVnode->pTfs) {
if (current) {
@ -451,14 +280,6 @@ static void tsdbGetCurrentFName(STsdb *pTsdb, char *current, char *current_t, ch
snprintf(current_t, TSDB_FILENAME_LEN - 1, "%s%s%s%sCURRENT.t", tfsGetPrimaryPath(pTsdb->pVnode->pTfs), TD_DIRSEP,
pTsdb->path, TD_DIRSEP);
}
if (current_json) {
snprintf(current_json, TSDB_FILENAME_LEN - 1, "%s%s%s%scurrent.json", tfsGetPrimaryPath(pTsdb->pVnode->pTfs),
TD_DIRSEP, pTsdb->path, TD_DIRSEP);
}
if (current_json_t) {
snprintf(current_json_t, TSDB_FILENAME_LEN - 1, "%s%s%s%scurrent.json.t", tfsGetPrimaryPath(pTsdb->pVnode->pTfs),
TD_DIRSEP, pTsdb->path, TD_DIRSEP);
}
} else {
if (current) {
snprintf(current, TSDB_FILENAME_LEN - 1, "%s%sCURRENT", pTsdb->path, TD_DIRSEP);
@ -466,12 +287,6 @@ static void tsdbGetCurrentFName(STsdb *pTsdb, char *current, char *current_t, ch
if (current_t) {
snprintf(current_t, TSDB_FILENAME_LEN - 1, "%s%sCURRENT.t", pTsdb->path, TD_DIRSEP);
}
if (current_json) {
snprintf(current_json, TSDB_FILENAME_LEN - 1, "%s%scurrent.json", pTsdb->path, TD_DIRSEP);
}
if (current_json_t) {
snprintf(current_json_t, TSDB_FILENAME_LEN - 1, "%s%scurrent.json.t", pTsdb->path, TD_DIRSEP);
}
}
}
@ -887,15 +702,20 @@ _exit:
return code;
}
static int32_t tsdbFSCommitImpl(STsdb *pTsdb, const char *fname, const char *tfname, bool isJson) {
// EXPOSED APIS ====================================================================================
int32_t tsdbFSCommit(STsdb *pTsdb) {
int32_t code = 0;
int32_t lino = 0;
STsdbFS fs = {0};
if (!taosCheckExistFile(tfname)) goto _exit;
char current[TSDB_FILENAME_LEN] = {0};
char current_t[TSDB_FILENAME_LEN] = {0};
tsdbGetCurrentFName(pTsdb, current, current_t);
if (!taosCheckExistFile(current_t)) goto _exit;
// rename the file
if (taosRenameFile(tfname, fname) < 0) {
if (taosRenameFile(current_t, current) < 0) {
code = TAOS_SYSTEM_ERROR(errno);
TSDB_CHECK_CODE(code, lino, _exit);
}
@ -904,11 +724,7 @@ static int32_t tsdbFSCommitImpl(STsdb *pTsdb, const char *fname, const char *tfn
code = tsdbFSCreate(&fs);
TSDB_CHECK_CODE(code, lino, _exit);
if (isJson) {
code = tsdbLoadFSFromJsonFile(fname, &fs);
} else {
code = tsdbLoadFSFromFile(fname, &fs);
}
code = tsdbLoadFSFromFile(current, &fs);
TSDB_CHECK_CODE(code, lino, _exit);
// apply file change
@ -923,19 +739,18 @@ _exit:
return code;
}
// EXPOSED APIS ====================================================================================
int32_t tsdbFSCommit(STsdb *pTsdb) {
char current_json[TSDB_FILENAME_LEN] = {0};
char current_json_t[TSDB_FILENAME_LEN] = {0};
tsdbGetCurrentFName(pTsdb, NULL, NULL, current_json, current_json_t);
return tsdbFSCommitImpl(pTsdb, current_json, current_json_t, true);
}
int32_t tsdbFSRollback(STsdb *pTsdb) {
int32_t code = 0;
char current_json_t[TSDB_FILENAME_LEN] = {0};
tsdbGetCurrentFName(pTsdb, NULL, NULL, NULL, current_json_t);
(void)taosRemoveFile(current_json_t);
int32_t lino = 0;
char current_t[TSDB_FILENAME_LEN] = {0};
tsdbGetCurrentFName(pTsdb, NULL, current_t);
(void)taosRemoveFile(current_t);
_exit:
if (code) {
tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, lino, tstrerror(errno));
}
return code;
}
@ -951,33 +766,13 @@ int32_t tsdbFSOpen(STsdb *pTsdb, int8_t rollback) {
// open impl
char current[TSDB_FILENAME_LEN] = {0};
char current_t[TSDB_FILENAME_LEN] = {0};
char current_json[TSDB_FILENAME_LEN] = {0};
char current_json_t[TSDB_FILENAME_LEN] = {0};
tsdbGetCurrentFName(pTsdb, current, current_t, current_json, current_json_t);
tsdbGetCurrentFName(pTsdb, current, current_t);
if (taosCheckExistFile(current)) {
// CURRENT file exists
code = tsdbLoadFSFromFile(current, &pTsdb->fs);
TSDB_CHECK_CODE(code, lino, _exit);
if (taosCheckExistFile(current_t)) {
if (rollback) {
(void)taosRemoveFile(current_t);
} else {
code = tsdbFSCommitImpl(pTsdb, current, current_t, false);
TSDB_CHECK_CODE(code, lino, _exit);
}
}
code = tsdbSaveFSToJsonFile(&pTsdb->fs, current_json);
TSDB_CHECK_CODE(code, lino, _exit);
(void)taosRemoveFile(current);
} else if (taosCheckExistFile(current_json)) {
// current.json exists
code = tsdbLoadFSFromJsonFile(current_json, &pTsdb->fs);
TSDB_CHECK_CODE(code, lino, _exit);
if (taosCheckExistFile(current_json_t)) {
if (rollback) {
code = tsdbFSRollback(pTsdb);
TSDB_CHECK_CODE(code, lino, _exit);
@ -987,10 +782,11 @@ int32_t tsdbFSOpen(STsdb *pTsdb, int8_t rollback) {
}
}
} else {
// empty TSDB
ASSERT(!rollback);
code = tsdbSaveFSToJsonFile(&pTsdb->fs, current_json);
// empty one
code = tsdbSaveFSToFile(&pTsdb->fs, current);
TSDB_CHECK_CODE(code, lino, _exit);
ASSERT(!rollback);
}
// scan and fix FS
@ -1228,12 +1024,12 @@ _exit:
int32_t tsdbFSPrepareCommit(STsdb *pTsdb, STsdbFS *pFSNew) {
int32_t code = 0;
int32_t lino = 0;
char current_json_t[TSDB_FILENAME_LEN];
char tfname[TSDB_FILENAME_LEN];
tsdbGetCurrentFName(pTsdb, NULL, NULL, NULL, current_json_t);
tsdbGetCurrentFName(pTsdb, NULL, tfname);
// generate current.json
code = tsdbSaveFSToJsonFile(pFSNew, current_json_t);
// gnrt CURRENT.t
code = tsdbSaveFSToFile(pFSNew, tfname);
TSDB_CHECK_CODE(code, lino, _exit);
_exit:

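This hunk drops the experimental current.json/current.json.t pair and returns to a single binary CURRENT file committed through a temporary CURRENT.t: tsdbFSPrepareCommit writes and fsyncs the new file set to CURRENT.t, and tsdbFSCommit renames it over CURRENT, so a crash leaves either the old or the new state intact. A minimal sketch of that write-temp-then-rename idiom using plain POSIX calls; the file name and payload are placeholders, not the TSDB on-disk format:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* write the new state to "<name>.t", fsync it, then atomically rename over <name> */
    static int commit_file(const char *name, const void *data, size_t len) {
      char tname[256];
      snprintf(tname, sizeof(tname), "%s.t", name);

      int fd = open(tname, O_WRONLY | O_CREAT | O_TRUNC, 0644);
      if (fd < 0) return -1;
      if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        return -1;
      }
      close(fd);

      /* rename is atomic on POSIX file systems: readers see the old or the new file, never a half-written one */
      return rename(tname, name);
    }

    int main(void) {
      const char *payload = "file-set snapshot";
      return commit_file("CURRENT", payload, strlen(payload)) == 0 ? 0 : 1;
    }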
View File

@ -92,11 +92,11 @@ static int32_t tGetSmaFile(uint8_t *p, SSmaFile *pSmaFile) {
}
// EXPOSED APIS ==================================================
static char *getFileNamePrefix(STsdb *pTsdb, SDiskID did, int32_t fid, uint64_t commitId, char fname[]) {
const char *p1 = tfsGetDiskPath(pTsdb->pVnode->pTfs, did);
int32_t len = strlen(p1);
static char* getFileNamePrefix(STsdb *pTsdb, SDiskID did, int32_t fid, uint64_t commitId, char fname[]) {
const char* p1 = tfsGetDiskPath(pTsdb->pVnode->pTfs, did);
int32_t len = strlen(p1);
char *p = memcpy(fname, p1, len);
char* p = memcpy(fname, p1, len);
p += len;
*(p++) = TD_DIRSEP[0];
@ -121,25 +121,25 @@ static char *getFileNamePrefix(STsdb *pTsdb, SDiskID did, int32_t fid, uint64_t
}
void tsdbHeadFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SHeadFile *pHeadF, char fname[]) {
char *p = getFileNamePrefix(pTsdb, did, fid, pHeadF->commitID, fname);
char* p = getFileNamePrefix(pTsdb, did, fid, pHeadF->commitID, fname);
memcpy(p, ".head", 5);
p[5] = 0;
}
void tsdbDataFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SDataFile *pDataF, char fname[]) {
char *p = getFileNamePrefix(pTsdb, did, fid, pDataF->commitID, fname);
char* p = getFileNamePrefix(pTsdb, did, fid, pDataF->commitID, fname);
memcpy(p, ".data", 5);
p[5] = 0;
}
void tsdbSttFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SSttFile *pSttF, char fname[]) {
char *p = getFileNamePrefix(pTsdb, did, fid, pSttF->commitID, fname);
char* p = getFileNamePrefix(pTsdb, did, fid, pSttF->commitID, fname);
memcpy(p, ".stt", 4);
p[4] = 0;
}
void tsdbSmaFileName(STsdb *pTsdb, SDiskID did, int32_t fid, SSmaFile *pSmaF, char fname[]) {
char *p = getFileNamePrefix(pTsdb, did, fid, pSmaF->commitID, fname);
char* p = getFileNamePrefix(pTsdb, did, fid, pSmaF->commitID, fname);
memcpy(p, ".sma", 4);
p[4] = 0;
}
@ -280,272 +280,6 @@ int32_t tGetDFileSet(uint8_t *p, SDFileSet *pSet) {
return n;
}
static int32_t tDiskIdToJson(const SDiskID *pDiskId, cJSON *pJson) {
int32_t code = 0;
int32_t lino;
if (pJson == NULL) return TSDB_CODE_OUT_OF_MEMORY;
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "level", pDiskId->level), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "id", pDiskId->id), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
_exit:
return code;
}
static int32_t tJsonToDiskId(const cJSON *pJson, SDiskID *pDiskId) {
int32_t code = 0;
int32_t lino;
const cJSON *pItem;
// level
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "level")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pDiskId->level = (int32_t)pItem->valuedouble;
// id
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "id")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pDiskId->id = (int32_t)pItem->valuedouble;
_exit:
return code;
}
static int32_t tHeadFileToJson(const SHeadFile *pHeadF, cJSON *pJson) {
int32_t code = 0;
int32_t lino;
if (pJson == NULL) return TSDB_CODE_OUT_OF_MEMORY;
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "commit id", pHeadF->commitID), code, lino, _exit,
TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "size", pHeadF->size), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "offset", pHeadF->offset), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
_exit:
return code;
}
static int32_t tJsonToHeadFile(const cJSON *pJson, SHeadFile *pHeadF) {
int32_t code = 0;
int32_t lino;
const cJSON *pItem;
// commit id
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "commit id")), code, lino, _exit,
TSDB_CODE_FILE_CORRUPTED);
pHeadF->commitID = (int64_t)pItem->valuedouble;
// size
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "size")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pHeadF->size = (int64_t)pItem->valuedouble;
// offset
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "offset")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pHeadF->offset = (int64_t)pItem->valuedouble;
_exit:
return code;
}
static int32_t tDataFileToJson(const SDataFile *pDataF, cJSON *pJson) {
int32_t code = 0;
int32_t lino;
if (pJson == NULL) return TSDB_CODE_OUT_OF_MEMORY;
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "commit id", pDataF->commitID), code, lino, _exit,
TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "size", pDataF->size), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
_exit:
return code;
}
static int32_t tJsonToDataFile(const cJSON *pJson, SDataFile *pDataF) {
int32_t code = 0;
int32_t lino;
const cJSON *pItem;
// commit id
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "commit id")), code, lino, _exit,
TSDB_CODE_FILE_CORRUPTED);
pDataF->commitID = (int64_t)pItem->valuedouble;
// size
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "size")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pDataF->size = (int64_t)pItem->valuedouble;
_exit:
return code;
}
static int32_t tSmaFileToJson(const SSmaFile *pSmaF, cJSON *pJson) {
int32_t code = 0;
int32_t lino;
if (pJson == NULL) return TSDB_CODE_OUT_OF_MEMORY;
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "commit id", pSmaF->commitID), code, lino, _exit,
TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "size", pSmaF->size), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
_exit:
return code;
}
static int32_t tJsonToSmaFile(const cJSON *pJson, SSmaFile *pSmaF) {
int32_t code = 0;
int32_t lino;
// commit id
const cJSON *pItem;
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "commit id")), code, lino, _exit,
TSDB_CODE_FILE_CORRUPTED);
pSmaF->commitID = (int64_t)pItem->valuedouble;
// size
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "size")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pSmaF->size = (int64_t)pItem->valuedouble;
_exit:
return code;
}
static int32_t tSttFileToJson(const SSttFile *pSttF, cJSON *pJson) {
int32_t code = 0;
int32_t lino;
if (pJson == NULL) return TSDB_CODE_OUT_OF_MEMORY;
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "commit id", pSttF->commitID), code, lino, _exit,
TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "size", pSttF->size), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "offset", pSttF->offset), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
_exit:
return code;
}
static int32_t tJsonToSttFile(const cJSON *pJson, SSttFile *pSttF) {
int32_t code = 0;
int32_t lino;
const cJSON *pItem;
// commit id
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "commit id")), code, lino, _exit,
TSDB_CODE_FILE_CORRUPTED);
pSttF->commitID = (int64_t)pItem->valuedouble;
// size
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "size")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pSttF->size = (int64_t)pItem->valuedouble;
// offset
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "offset")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pSttF->offset = (int64_t)pItem->valuedouble;
_exit:
return code;
}
int32_t tsdbDFileSetToJson(const SDFileSet *pSet, cJSON *pJson) {
int32_t code = 0;
int32_t lino;
if (pJson == NULL) return TSDB_CODE_OUT_OF_MEMORY;
code = tDiskIdToJson(&pSet->diskId, cJSON_AddObjectToObject(pJson, "disk id"));
TSDB_CHECK_CODE(code, lino, _exit);
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "fid", pSet->fid), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
// head
code = tHeadFileToJson(pSet->pHeadF, cJSON_AddObjectToObject(pJson, "head"));
TSDB_CHECK_CODE(code, lino, _exit);
// data
code = tDataFileToJson(pSet->pDataF, cJSON_AddObjectToObject(pJson, "data"));
TSDB_CHECK_CODE(code, lino, _exit);
// sma
code = tSmaFileToJson(pSet->pSmaF, cJSON_AddObjectToObject(pJson, "sma"));
TSDB_CHECK_CODE(code, lino, _exit);
// stt array
cJSON *aSttJson = cJSON_AddArrayToObject(pJson, "stt");
TSDB_CHECK_NULL(aSttJson, code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
for (int32_t iStt = 0; iStt < pSet->nSttF; iStt++) {
cJSON *pSttJson = cJSON_CreateObject();
TSDB_CHECK_NULL(pSttJson, code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
cJSON_AddItemToArray(aSttJson, pSttJson);
code = tSttFileToJson(pSet->aSttF[iStt], pSttJson);
TSDB_CHECK_CODE(code, lino, _exit);
}
_exit:
return code;
}
int32_t tsdbJsonToDFileSet(const cJSON *pJson, SDFileSet *pSet) {
int32_t code = 0;
int32_t lino;
const cJSON *pItem;
// disk id
TSDB_CHECK(cJSON_IsObject(pItem = cJSON_GetObjectItem(pJson, "disk id")), code, lino, _exit,
TSDB_CODE_FILE_CORRUPTED);
code = tJsonToDiskId(pItem, &pSet->diskId);
TSDB_CHECK_CODE(code, lino, _exit);
// fid
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "fid")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pSet->fid = (int32_t)pItem->valuedouble;
// head
TSDB_CHECK(cJSON_IsObject(pItem = cJSON_GetObjectItem(pJson, "head")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
TSDB_CHECK_NULL(pSet->pHeadF = (SHeadFile *)taosMemoryMalloc(sizeof(SHeadFile)), code, lino, _exit,
TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_CODE(code = tJsonToHeadFile(pItem, pSet->pHeadF), lino, _exit);
pSet->pHeadF->nRef = 1;
// data
TSDB_CHECK(cJSON_IsObject(pItem = cJSON_GetObjectItem(pJson, "data")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
TSDB_CHECK_NULL(pSet->pDataF = (SDataFile *)taosMemoryMalloc(sizeof(SDataFile)), code, lino, _exit,
TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_CODE(code = tJsonToDataFile(pItem, pSet->pDataF), lino, _exit);
pSet->pDataF->nRef = 1;
// sma
TSDB_CHECK(cJSON_IsObject(pItem = cJSON_GetObjectItem(pJson, "sma")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
TSDB_CHECK_NULL(pSet->pSmaF = (SSmaFile *)taosMemoryMalloc(sizeof(SSmaFile)), code, lino, _exit,
TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_CODE(code = tJsonToSmaFile(pItem, pSet->pSmaF), lino, _exit);
pSet->pSmaF->nRef = 1;
// stt array
const cJSON *element;
pSet->nSttF = 0;
TSDB_CHECK(cJSON_IsArray(pItem = cJSON_GetObjectItem(pJson, "stt")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
cJSON_ArrayForEach(element, pItem) {
TSDB_CHECK(cJSON_IsObject(element), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pSet->aSttF[pSet->nSttF] = (SSttFile *)taosMemoryMalloc(sizeof(SSttFile));
TSDB_CHECK_NULL(pSet->aSttF[pSet->nSttF], code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_CODE(code = tJsonToSttFile(element, pSet->aSttF[pSet->nSttF]), lino, _exit);
pSet->aSttF[pSet->nSttF]->nRef = 1;
pSet->nSttF++;
}
_exit:
if (code) tsdbError("%s failed at line %d since %s", __func__, lino, tstrerror(code));
return code;
}
// SDelFile ===============================================
void tsdbDelFileName(STsdb *pTsdb, SDelFile *pFile, char fname[]) {
snprintf(fname, TSDB_FILENAME_LEN - 1, "%s%s%s%sv%dver%" PRId64 "%s", tfsGetPrimaryPath(pTsdb->pVnode->pTfs),
@ -571,42 +305,3 @@ int32_t tGetDelFile(uint8_t *p, SDelFile *pDelFile) {
return n;
}
int32_t tsdbDelFileToJson(const SDelFile *pDelFile, cJSON *pJson) {
if (pJson == NULL) return TSDB_CODE_OUT_OF_MEMORY;
int32_t code = 0;
int32_t lino;
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "commit id", pDelFile->commitID), code, lino, _exit,
TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "size", pDelFile->size), code, lino, _exit, TSDB_CODE_OUT_OF_MEMORY);
TSDB_CHECK_NULL(cJSON_AddNumberToObject(pJson, "offset", pDelFile->offset), code, lino, _exit,
TSDB_CODE_OUT_OF_MEMORY);
_exit:
return code;
}
int32_t tsdbJsonToDelFile(const cJSON *pJson, SDelFile *pDelFile) {
int32_t code = 0;
int32_t lino;
const cJSON *pItem;
// commit id
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "commit id")), code, lino, _exit,
TSDB_CODE_FILE_CORRUPTED);
pDelFile->commitID = cJSON_GetNumberValue(pItem);
// size
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "size")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pDelFile->size = cJSON_GetNumberValue(pItem);
// offset
TSDB_CHECK(cJSON_IsNumber(pItem = cJSON_GetObjectItem(pJson, "offset")), code, lino, _exit, TSDB_CODE_FILE_CORRUPTED);
pDelFile->offset = cJSON_GetNumberValue(pItem);
_exit:
return code;
}

View File

@ -88,7 +88,7 @@ static void getNextTimeWindow(SInterval* pInterval, STimeWindow* tw, int32_t ord
struct tm tm;
time_t t = (time_t)key;
taosLocalTime(&t, &tm);
taosLocalTime(&t, &tm, NULL);
int mon = (int)(tm.tm_year * 12 + tm.tm_mon + interval * factor);
tm.tm_year = mon / 12;

View File

@ -281,7 +281,7 @@ static void getNextTimeWindow(SInterval* pInterval, int32_t precision, int32_t o
struct tm tm;
time_t t = (time_t)key;
taosLocalTime(&t, &tm);
taosLocalTime(&t, &tm, NULL);
int mon = (int)(tm.tm_year * 12 + tm.tm_mon + interval * factor);
tm.tm_year = mon / 12;

View File

@ -213,8 +213,9 @@ static int32_t addTimezoneParam(SNodeList* pList) {
char buf[6] = {0};
time_t t = taosTime(NULL);
struct tm tmInfo;
taosLocalTime(&t, &tmInfo);
strftime(buf, sizeof(buf), "%z", &tmInfo);
if (taosLocalTime(&t, &tmInfo, buf) != NULL) {
strftime(buf, sizeof(buf), "%z", &tmInfo);
}
int32_t len = (int32_t)strlen(buf);
SValueNode* pVal = (SValueNode*)nodesMakeNode(QUERY_NODE_VALUE);

View File

@ -1070,8 +1070,15 @@ int32_t callUdfScalarFunc(char *udfName, SScalarParam *input, int32_t numOfCols,
if (code != 0) {
return code;
}
SUdfcUvSession *session = handle;
code = doCallUdfScalarFunc(handle, input, numOfCols, output);
if (code != TSDB_CODE_SUCCESS) {
fnError("udfc scalar function execution failure");
releaseUdfFuncHandle(udfName);
return code;
}
if (output->columnData == NULL) {
fnError("udfc scalar function calculate error. no column data");
code = TSDB_CODE_UDF_INVALID_OUTPUT_TYPE;

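callUdfScalarFunc now releases the cached function handle when doCallUdfScalarFunc fails instead of leaving it acquired, so a failing UDF no longer pins its connection. A rough sketch of that acquire/call/release-on-error shape; the handle type and helper names here are placeholders, not the real udfc API:

    #include <stdio.h>

    /* placeholder handle cache -- the real udfc session management is more involved */
    typedef struct { int refCount; } Handle;
    static Handle gHandle = {0};

    static int  acquireUdfHandle(const char *name, Handle **h) { (void)name; gHandle.refCount++; *h = &gHandle; return 0; }
    static void releaseUdfHandle(const char *name)             { (void)name; gHandle.refCount--; }
    static int  doCall(Handle *h, int failForDemo)             { (void)h; return failForDemo ? -1 : 0; }

    static int callScalarFunc(const char *name, int failForDemo) {
      Handle *h = NULL;
      int code = acquireUdfHandle(name, &h);
      if (code != 0) return code;

      code = doCall(h, failForDemo);
      if (code != 0) {
        releaseUdfHandle(name);   /* new behavior: give the handle back on failure too */
        return code;
      }

      releaseUdfHandle(name);     /* success path releases as before */
      return 0;
    }

    int main(void) {
      callScalarFunc("udf1", 1);
      printf("refCount after failed call: %d\n", gHandle.refCount);  /* 0: no leaked reference */
      return 0;
    }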
View File

@ -1067,9 +1067,15 @@ int32_t toISO8601Function(SScalarParam *pInput, int32_t inputNum, SScalarParam *
}
struct tm tmInfo;
taosLocalTime((const time_t *)&timeVal, &tmInfo);
int32_t len = 0;
if (taosLocalTime((const time_t *)&timeVal, &tmInfo, buf) == NULL) {
len = (int32_t)strlen(buf);
goto _end;
}
strftime(buf, sizeof(buf), "%Y-%m-%dT%H:%M:%S", &tmInfo);
int32_t len = (int32_t)strlen(buf);
len = (int32_t)strlen(buf);
// add timezone string
if (tzLen > 0) {
@ -1103,6 +1109,7 @@ int32_t toISO8601Function(SScalarParam *pInput, int32_t inputNum, SScalarParam *
len += fracLen;
}
_end:
memmove(buf + VARSTR_HEADER_SIZE, buf, len);
varDataSetLen(buf, len);

View File

@ -290,14 +290,22 @@ int32_t walEndSnapshot(SWal *pWal) {
int ts = taosGetTimestampSec();
ver = TMAX(ver - pWal->vers.logRetention, pWal->vers.firstVer - 1);
bool hasTopic = false;
int64_t refVer = ver;
void *pIter = NULL;
while (1) {
pIter = taosHashIterate(pWal->pRefHash, pIter);
if (pIter == NULL) break;
SWalRef *pRef = *(SWalRef **)pIter;
if (pRef->refVer == -1) continue;
ver = TMIN(ver, pRef->refVer - 1);
refVer = TMIN(refVer, pRef->refVer - 1);
wDebug("vgId:%d, wal found ref %" PRId64 ", refId %" PRId64, pWal->cfg.vgId, pRef->refVer, pRef->refId);
hasTopic = true;
}
// compatible mode
if (pWal->cfg.retentionPeriod == 0 && hasTopic) {
ver = TMIN(ver, refVer);
}
int deleteCnt = 0;
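
With the refVer/compatible-mode change above, consumer references only hold back WAL deletion when wal_retention_period is 0 (the legacy "compatible" behaviour); otherwise the retention settings alone decide how far deletion may advance. A simplified, self-contained model of that decision:

```c
#include <stdint.h>
#include <stdio.h>

#define TMIN(a, b) ((a) < (b) ? (a) : (b))

/* Simplified model of the logic in the hunk: refVers holds the refVer of each
 * WAL reference (-1 means "not pinned"); retentionPeriod == 0 keeps the old
 * behaviour where consumer refs must hold back deletion. */
static int64_t walSafeDeleteVer(int64_t ver, const int64_t *refVers, int n, int retentionPeriod) {
  int64_t refVer = ver;
  int     hasTopic = 0;
  for (int i = 0; i < n; i++) {
    if (refVers[i] == -1) continue;
    refVer = TMIN(refVer, refVers[i] - 1);
    hasTopic = 1;
  }
  if (retentionPeriod == 0 && hasTopic) {
    ver = TMIN(ver, refVer);
  }
  return ver;  /* everything up to and including ver may be deleted */
}

int main(void) {
  int64_t refs[] = {120, -1, 95};
  printf("%lld\n", (long long)walSafeDeleteVer(200, refs, 3, 0));     /* 94: refs hold deletion back */
  printf("%lld\n", (long long)walSafeDeleteVer(200, refs, 3, 3600));  /* 200: retention period governs */
  return 0;
}
```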

View File

@ -17,6 +17,10 @@
#include "os.h"
#include "taoserror.h"
#if defined(CUS_NAME) || defined(CUS_PROMPT) || defined(CUS_EMAIL)
#include "cus_name.h"
#endif
#define PROCESS_ITEM 12
#define UUIDLEN37 37
@ -252,7 +256,11 @@ int32_t taosGetEmail(char *email, int32_t maxLen) {
#ifdef WINDOWS
// ASSERT(0);
#elif defined(_TD_DARWIN_64)
#ifdef CUS_PROMPT
const char *filepath = "/usr/local/"CUS_PROMPT"/email";
#else
const char *filepath = "/usr/local/taos/email";
#endif // CUS_PROMPT
TdFilePtr pFile = taosOpenFile(filepath, TD_FILE_READ);
if (pFile == NULL) return false;
@ -264,8 +272,12 @@ int32_t taosGetEmail(char *email, int32_t maxLen) {
taosCloseFile(&pFile);
return 0;
#else
#ifdef CUS_PROMPT
const char *filepath = "/usr/local/"CUS_PROMPT"/email";
#else
const char *filepath = "/usr/local/taos/email";
#endif // CUS_PROMPT
TdFilePtr pFile = taosOpenFile(filepath, TD_FILE_READ);
if (pFile == NULL) return false;
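
Both branches above pick the email file location at compile time: when a customised build defines CUS_PROMPT (via cus_name.h), the literal is spliced into the path by string-literal concatenation; otherwise the stock /usr/local/taos/email is used. A minimal sketch, assuming CUS_PROMPT expands to a quoted string such as "mybrand":

```c
#include <stdio.h>

/* Build with -DCUS_PROMPT='"mybrand"' to get the customised path,
 * or without it to fall back to the stock "taos" location. */
#ifdef CUS_PROMPT
static const char *emailPath = "/usr/local/" CUS_PROMPT "/email";
#else
static const char *emailPath = "/usr/local/taos/email";
#endif

int main(void) {
  printf("email file: %s\n", emailPath);
  return 0;
}
```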

View File

@ -407,12 +407,21 @@ time_t taosMktime(struct tm *timep) {
#endif
}
struct tm *taosLocalTime(const time_t *timep, struct tm *result) {
struct tm *taosLocalTime(const time_t *timep, struct tm *result, char *buf) {
struct tm *res = NULL;
if (result == NULL) {
return localtime(timep);
res = localtime(timep);
if (res == NULL && buf != NULL) {
sprintf(buf, "NaN");
}
return res;
}
#ifdef WINDOWS
if (*timep < 0) {
if (buf != NULL) {
sprintf(buf, "NaN");
}
return NULL;
// TODO: bugs in following code
SYSTEMTIME ss, s;
@ -421,6 +430,9 @@ struct tm *taosLocalTime(const time_t *timep, struct tm *result) {
struct tm tm1;
time_t tt = 0;
if (localtime_s(&tm1, &tt) != 0 ) {
if (buf != NULL) {
sprintf(buf, "NaN");
}
return NULL;
}
ss.wYear = tm1.tm_year + 1900;
@ -449,11 +461,17 @@ struct tm *taosLocalTime(const time_t *timep, struct tm *result) {
result->tm_isdst = 0;
} else {
if (localtime_s(result, timep) != 0) {
if (buf != NULL) {
sprintf(buf, "NaN");
}
return NULL;
}
}
#else
localtime_r(timep, result);
res = localtime_r(timep, result);
if (res == NULL && buf != NULL) {
sprintf(buf, "NaN");
}
#endif
return result;
}
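
With the extra buf parameter, every failure path shown above (a NULL result, negative timestamps on Windows, localtime_s errors) now leaves the literal "NaN" in the caller's buffer when one is supplied, so callers that go straight to printing never emit garbage. A stand-in sketch of the non-Windows branch and a typical caller; this is not the real wrapper, which also covers the Windows cases above:

```c
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Minimal stand-in mirroring the non-Windows branch of the patched function:
 * on failure it writes "NaN" into buf (when buf is non-NULL, assumed >= 4 bytes)
 * and returns NULL. */
static struct tm *localTimeOrNaN(const time_t *timep, struct tm *result, char *buf) {
  struct tm *res = localtime_r(timep, result);
  if (res == NULL && buf != NULL) snprintf(buf, sizeof("NaN"), "NaN");
  return res;
}

int main(void) {
  char      buf[64] = {0};
  time_t    t = time(NULL);
  struct tm tmInfo;
  if (localTimeOrNaN(&t, &tmInfo, buf) != NULL) {
    strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &tmInfo);
  }
  /* buf is always printable: either the formatted time or "NaN". */
  printf("%s\n", buf);
  return 0;
}
```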

View File

@ -893,7 +893,7 @@ void taosGetSystemTimezone(char *outTimezoneStr, enum TdTimezone *tsTimezone) {
*/
time_t tx1 = taosGetTimestampSec();
struct tm tm1;
taosLocalTime(&tx1, &tm1);
taosLocalTime(&tx1, &tm1, NULL);
daylight = tm1.tm_isdst;
/*
@ -921,7 +921,7 @@ void taosGetSystemTimezone(char *outTimezoneStr, enum TdTimezone *tsTimezone) {
*/
time_t tx1 = taosGetTimestampSec();
struct tm tm1;
taosLocalTime(&tx1, &tm1);
taosLocalTime(&tx1, &tm1, NULL);
/* load time zone string from /etc/timezone */
// FILE *f = fopen("/etc/timezone", "r");
errno = 0;
@ -1008,7 +1008,7 @@ void taosGetSystemTimezone(char *outTimezoneStr, enum TdTimezone *tsTimezone) {
*/
time_t tx1 = taosGetTimestampSec();
struct tm tm1;
taosLocalTime(&tx1, &tm1);
taosLocalTime(&tx1, &tm1, NULL);
/*
* format example:

View File

@ -224,6 +224,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_MND_INVALID_DB, "Invalid database name
TAOS_DEFINE_ERROR(TSDB_CODE_MND_TOO_MANY_DATABASES, "Too many databases for account")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_DB_IN_DROPPING, "Database in dropping status")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_DB_NOT_EXIST, "Database not exist")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_DB_RETENTION_PERIOD_ZERO, "WAL retention period is zero")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_INVALID_DB_ACCT, "Invalid database account")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_DB_OPTION_UNCHANGED, "Database options not changed")
TAOS_DEFINE_ERROR(TSDB_CODE_MND_DB_INDEX_NOT_EXIST, "Index not exist")

View File

@ -121,7 +121,7 @@ static FORCE_INLINE void taosUpdateDaylight() {
struct timeval timeSecs;
taosGetTimeOfDay(&timeSecs);
time_t curTime = timeSecs.tv_sec;
ptm = taosLocalTime(&curTime, &Tm);
ptm = taosLocalTime(&curTime, &Tm, NULL);
tsDaylightActive = ptm->tm_isdst;
}
static FORCE_INLINE int32_t taosGetDaylight() { return tsDaylightActive; }
@ -437,7 +437,7 @@ static inline int32_t taosBuildLogHead(char *buffer, const char *flags) {
taosGetTimeOfDay(&timeSecs);
time_t curTime = timeSecs.tv_sec;
ptm = taosLocalTime(&curTime, &Tm);
ptm = taosLocalTime(&curTime, &Tm, NULL);
return sprintf(buffer, "%02d/%02d %02d:%02d:%02d.%06d %08" PRId64 " %s", ptm->tm_mon + 1, ptm->tm_mday, ptm->tm_hour,
ptm->tm_min, ptm->tm_sec, (int32_t)timeSecs.tv_usec, taosGetSelfPthreadId(), flags);
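
The logging hunks pass NULL for the new buf argument because a failed conversion here is simply tolerated. For reference, a rough standalone equivalent of the log-head formatting, using gettimeofday/localtime_r in place of the taosGetTimeOfDay/taosLocalTime wrappers and omitting the thread-id field:

```c
#include <stdio.h>
#include <sys/time.h>
#include <time.h>

/* Build a "MM/DD HH:MM:SS.microseconds FLAGS" prefix; returns the length written
 * or 0 when local-time conversion fails. */
static int buildLogHead(char *buffer, size_t cap, const char *flags) {
  struct timeval tv;
  gettimeofday(&tv, NULL);
  time_t    sec = tv.tv_sec;
  struct tm tmInfo;
  if (localtime_r(&sec, &tmInfo) == NULL) return 0;
  return snprintf(buffer, cap, "%02d/%02d %02d:%02d:%02d.%06d %s",
                  tmInfo.tm_mon + 1, tmInfo.tm_mday, tmInfo.tm_hour,
                  tmInfo.tm_min, tmInfo.tm_sec, (int)tv.tv_usec, flags);
}

int main(void) {
  char head[128];
  if (buildLogHead(head, sizeof(head), "INFO ") > 0) printf("%s example message\n", head);
  return 0;
}
```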

View File

@ -16,7 +16,7 @@
"num_of_records_per_req": 10,
"databases": [{
"dbinfo": {
"name": "db",
"name": "opentsdb_telnet",
"drop": "yes"
},
"super_tables": [{

View File

@ -52,7 +52,7 @@ python3 conn_rest_pandas.py
taos -s "drop database if exists power"
# 11
taos -s "create database if not exists test"
taos -s "create database if not exists test wal_retention_period 3600"
python3 connect_native_reference.py
# 12

View File

@ -120,6 +120,7 @@
,,y,system-test,./pytest.sh python3 ./test.py -f 0-others/fsync.py
,,n,system-test,python3 ./test.py -f 0-others/compatibility.py
,,n,system-test,python3 ./test.py -f 0-others/tag_index_basic.py
,,n,system-test,python3 ./test.py -f 0-others/udfpy_main.py
,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/alter_database.py
,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/influxdb_line_taosc_insert.py
,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/opentsdb_telnet_line_taosc_insert.py
@ -1102,9 +1103,9 @@
,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/json_tag.py
,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/query_json.py
,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/sample_csv_json.py
#,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/sml_json_alltypes.py
,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/sml_json_alltypes.py
,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/taosdemoTestQueryWithJson.py -R
#,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/telnet_tcp.py -R
,,n,develop-test,python3 ./test.py -f 5-taos-tools/taosbenchmark/telnet_tcp.py -R
#docs-examples test
,,n,docs-examples-test,bash python.sh

View File

@ -17,6 +17,7 @@ import time
import datetime
import inspect
import importlib
import traceback
from util.log import *
@ -75,6 +76,7 @@ class TDCases:
case.run()
except Exception as e:
tdLog.notice(repr(e))
traceback.print_exc()
tdLog.exit("%s failed" % (fileName))
case.stop()
runNum += 1

View File

@ -251,7 +251,7 @@ class TDSql:
if self.queryResult[row][col] != data:
if self.cursor.istype(col, "TIMESTAMP"):
# assume the user wants to check a nanosecond timestamp when longer data is passed
if isinstance(data,str) :
if (len(data) >= 28):
if self.queryResult[row][col] == _parse_ns_timestamp(data):
@ -260,7 +260,7 @@ class TDSql:
else:
caller = inspect.getframeinfo(inspect.stack()[1][0])
args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data)
tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args)
else:
if self.queryResult[row][col].astimezone(datetime.timezone.utc) == _parse_datetime(data).astimezone(datetime.timezone.utc):
# tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}")
@ -270,12 +270,12 @@ class TDSql:
args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data)
tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args)
return
elif isinstance(data,int) :
if len(str(data)) == 16 :
elif isinstance(data,int):
if len(str(data)) == 16:
precision = 'us'
elif len(str(data)) == 13 :
elif len(str(data)) == 13:
precision = 'ms'
elif len(str(data)) == 19 :
elif len(str(data)) == 19:
precision = 'ns'
else:
caller = inspect.getframeinfo(inspect.stack()[1][0])
@ -303,11 +303,21 @@ class TDSql:
args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data)
tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args)
return
elif isinstance(data,datetime.datetime):
dt_obj = self.queryResult[row][col]
delt_data = data-datetime.datetime.fromtimestamp(0,data.tzinfo)
delt_result = self.queryResult[row][col] - datetime.datetime.fromtimestamp(0,self.queryResult[row][col].tzinfo)
if delt_data == delt_result:
tdLog.info("check successfully")
else:
caller = inspect.getframeinfo(inspect.stack()[1][0])
args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data)
tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args)
return
else:
caller = inspect.getframeinfo(inspect.stack()[1][0])
args = (caller.filename, caller.lineno, self.sql, row, col, self.queryResult[row][col], data)
tdLog.exit("%s(%d) failed: sql:%s row:%d col:%d data:%s != expect:%s" % args)
if str(self.queryResult[row][col]) == str(data):
# tdLog.info(f"sql:{self.sql}, row:{row} col:{col} data:{self.queryResult[row][col]} == expect:{data}")

View File

@ -42,6 +42,25 @@ sql show functions;
if $rows != 4 then
return -1
endi
sql select func_language, func_body,name from information_schema.ins_functions order by name
if $rows != 4 then
return -1
endi
if $data00 != @C@ then
return -1
endi
if $data10 != @C@ then
return -1
endi
if $data20 != @Python@ then
return -1
endi
if $data30 != @Python@ then
return -1
endi
sql select bit_and(f, f) from t;
if $rows != 2 then
return -1

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -83,6 +86,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -155,6 +161,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -226,6 +235,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -83,6 +86,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table for stb
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -186,6 +192,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table for ctb
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -288,6 +297,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table for ntb
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -118,6 +121,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -175,6 +181,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -147,6 +150,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table for ctb
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -234,6 +240,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table for ntb
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -168,6 +171,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table for ctb
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -259,6 +265,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table for ntb
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -83,6 +86,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -154,6 +160,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -225,6 +234,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -82,6 +85,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -197,6 +203,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -299,6 +308,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -115,6 +118,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -172,6 +178,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -156,6 +159,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -244,6 +250,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -83,6 +86,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -152,6 +158,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -223,6 +232,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

View File

@ -34,6 +34,9 @@ $showRow = 0
sql connect
sql use $dbName
print == alter database
sql alter database $dbName wal_retention_period 3600
print == create topics from super table
sql create topic topic_stb_column as select ts, c3 from stb
sql create topic topic_stb_all as select ts, c1, c2, c3 from stb
@ -147,6 +150,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table for ctb
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)
@ -224,6 +230,9 @@ sql create database $cdbName vgroups 1
sleep 500
sql use $cdbName
print == alter database
sql alter database $cdbName wal_retention_period 3600
print == create consume info table and consume result table for ntb
sql create table consumeinfo (ts timestamp, consumerid int, topiclist binary(1024), keylist binary(1024), expectmsgcnt bigint, ifcheckdata int, ifmanualcommit int)
sql create table consumeresult (ts timestamp, consumerid int, consummsgcnt bigint, consumrowcnt bigint, checkresult int)

Some files were not shown because too many files have changed in this diff.