TDLC Command Line Interface Tool Access

Last updated: 2026-03-04 11:17:11
TDLC is a client command tool provided by Tencent Cloud Data Lake Compute (DLC). Using the TDLC tool, you can submit SQL and Spark tasks to the DLC data engine.
TDLC is written in Go, based on the Cobra framework, and supports configuring multiple buckets and cross-bucket operations. You can view the usage of TDLC by running ./tdlc [command] --help.

Downloading and Installation

The TDLC command line interface provides binary packages for the Windows, Mac, and Linux operating systems. After a simple installation and configuration, the tool is ready to use. Select the download that matches your client's operating system.
Operating System
TDLC Binary Package Download Link
Windows
Mac
Linux
Rename the downloaded file to tdlc. Open a command line on your client and navigate to the download path. On Mac/Linux, run chmod +x tdlc to grant the file executable permission. Then run ./tdlc; if the following content is displayed, the installation succeeded and the tool is ready for use.
Tencentcloud DLC command tools is used to play around with DLC.
With TDLC user can manage engines, execute SQLs and submit Spark Jobs.

Usage:
tdlc [flags]
tdlc [command]

Available Commands:
config
help Help about any command
spark Submit spark app to engines.
sql Executing SQL.
version

Flags:
--endpoint string Endpoint of Tencentcloud account. (default "dlc.tencentcloudapi.com")
--engine string DLC engine. (default "public-engine")
-h, --help help for tdlc
--region string Region of Tencentcloud account.
--role-arn string Required by spark jar app.
--secret-id string SecretId of Tencentcloud account.
--secret-key string SecretKey of Tencentcloud account.
--token string Token of Tencentcloud account.

Use "tdlc [command] --help" for more information about a command.
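The Mac/Linux preparation steps above can be simulated with a placeholder file; in practice, tdlc is the renamed download, not an empty file created here for illustration:

```shell
# Work in a throwaway directory and stand in a placeholder for the binary
cd "$(mktemp -d)"
touch tdlc            # in practice: the renamed downloaded binary
chmod +x tdlc         # grant executable permission, as described above
test -x ./tdlc && echo "tdlc is executable"
```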

Usage Instructions

Global Parameters

TDLC provides the following global parameters.
Global Parameter       Description
--endpoint string      Service endpoint; defaults to dlc.tencentcloudapi.com.
--engine string        DLC data engine name; defaults to public-engine. A dedicated data engine is recommended.
--region string        Region to use, e.g. ap-nanjing, ap-beijing, ap-guangzhou, ap-shanghai, ap-chengdu, ap-chongqing, na-siliconvalley, ap-singapore, ap-hongkong.
--role-arn string      Required when submitting Spark jobs that access COS files: the role ARN that carries the needed permissions. For details, see Configuring Data Access Policy.
--secret-id string     Tencent Cloud account SecretId.
--secret-key string    Tencent Cloud account SecretKey.
--token string         (Optional) Tencent Cloud account temporary token.

CONFIG Command

The config command sets commonly used parameters; configured values are used as defaults. Parameters passed on the command line override configured values.
Command    Description
list       List the current default configuration.
set        Set configuration values.
unset      Reset configuration values.
Example:
./tdlc config list
./tdlc config set secret-id={1} secret-key={2} region={b}
./tdlc config unset region
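The precedence rule above (command-line parameters override configured defaults) can be pictured with ordinary shell parameter expansion; the variable names are illustrative only and not part of TDLC:

```shell
# Value stored via `./tdlc config set region=ap-beijing` (illustrative)
CONFIG_REGION="ap-beijing"
# Value passed on the command line, e.g. --region ap-guangzhou
FLAG_REGION="ap-guangzhou"
# The command-line value wins when present; the stored value is the fallback
EFFECTIVE_REGION="${FLAG_REGION:-$CONFIG_REGION}"
echo "$EFFECTIVE_REGION"    # prints ap-guangzhou
```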

SQL Subcommands

The SQL subcommand currently supports Presto and SparkSQL clusters. It accepts the following parameters.

Parameter        Description
-e, --exec       Execute a SQL statement.
-f, --file       Execute SQL files. If there are multiple SQL files, separate them with ;.
--no-result      Do not fetch results after execution.
-p, --progress   Display execution progress.
-q, --quiet      Quiet mode: do not wait for the task status after submission.
Example:
./tdlc sql -e "SELECT 1" --secret-id aa --secret-key bb --region ap-beijing --engine public-engine
./tdlc sql -f ~/biz.sql --no-result 
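As the table notes, -f can take multiple SQL files separated by ;. As a sketch (not the tool's actual parser), such an argument can be split into individual paths with standard shell word splitting; the file names are placeholders:

```shell
FILE_ARG="/tmp/a.sql;/tmp/b.sql"   # hypothetical value passed to -f
COUNT=0
OLD_IFS=$IFS; IFS=';'
# Split the argument on ';' and visit each file path in turn
for f in $FILE_ARG; do
  COUNT=$((COUNT + 1))
  echo "would execute: $f"
done
IFS=$OLD_IFS
echo "$COUNT files"    # prints: 2 files
```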

SPARK Subcommands

Spark subcommands include the following commands, which can be used to submit Spark jobs, view running logs, and terminate tasks.
Command   Description
submit    Submit a task through spark-submit.
run       Execute Spark jobs.
log       View running logs.
list      View the list of Spark jobs.
kill      Terminate tasks.
The spark submit subcommand supports the following parameters; file-related parameters accept local files or cosn:// paths.

Parameter         Description
--driver-size     Driver specification: small (default), medium, large, or xlarge. For memory-optimized clusters, use m.small, m.medium, m.large, or m.xlarge.
--executor-size   Executor specification: small (default), medium, large, or xlarge. For memory-optimized clusters, use m.small, m.medium, m.large, or m.xlarge.
--executor-num    Number of executors.
--files           Dependent files.
--archives        Dependent compressed archives.
--class           Main class of the Java/Scala application.
--jars            Dependent JAR files, comma-separated.
--name            Application name.
--py-files        Dependent Python files; .zip, .egg, and .py formats are supported.
--conf            Additional configurations.
--network         Network configuration, e.g. --network "network name".
Example:
./tdlc spark submit --name spark-demo1 --engine sparkjar --jars /root/sparkjar-dep.jar --class com.demo.Example /root/sparkjar-main.jar arg1
./tdlc spark submit --name spark-demo2 cosn://bucket1/abc.py arg1
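For reference, the first submit example can be assembled as a single command string from the parameters above; this sketch only echoes the command instead of running it (the binary, jars, class, and arguments come from the example):

```shell
# Build the spark submit invocation piece by piece, then display it
CMD="./tdlc spark submit --name spark-demo1 --engine sparkjar \
--jars /root/sparkjar-dep.jar --class com.demo.Example \
/root/sparkjar-main.jar arg1"
echo "$CMD"
```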
