Tencent Cloud

Cloud Log Service


Shipping Overview

Last updated: 2024-01-20 17:44:35

Shipping to COS

CLS can ship data in a log topic to COS to meet the needs of the following scenarios:
Logs are shipped to COS and stored in the STANDARD storage class. If you need another storage class, change it in COS after shipping. For more information, see Overview.
Log data is processed by offline computing or other computing programs. Such data is shipped to COS first and then loaded by Data Lake Compute (data lake) or EMR (big data platform) for further analysis. For more information, see Using Data Lake Compute (Hive) to Analyze CLS Logs. We recommend CSV or Parquet as the shipping format in this scenario.

Billing Description

Log shipping generates private network read traffic fees (cross-region shipping is currently not supported), and CLS bills based on the compressed data volume (Snappy/GZIP/LZOP). For example, if your raw logs total 100 GB and you choose Snappy compression, around 50 GB will be billable. At a read traffic price of 0.032 USD/GB, the fee is 50 GB * 0.032 USD/GB = 1.6 USD.
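The calculation above can be sketched as a small helper. This is a minimal illustration only: the ~50% Snappy compression ratio and the 0.032 USD/GB price come from the example above and may differ for your data and region, so check current CLS pricing before relying on it.

```python
def estimate_shipping_fee(raw_gb: float,
                          compression_ratio: float = 0.5,
                          price_per_gb_usd: float = 0.032) -> float:
    """Estimate private network read traffic fees for shipping to COS.

    compression_ratio: billable size / raw size (~0.5 for Snappy on
    typical text logs, per the example above; GZIP usually compresses
    further, lowering the billable volume).
    price_per_gb_usd: read traffic unit price; verify against the
    current CLS price list for your region.
    """
    billable_gb = raw_gb * compression_ratio
    return billable_gb * price_per_gb_usd

# Example from the text: 100 GB of raw logs with Snappy compression
print(round(estimate_shipping_fee(100), 2))  # prints 1.6
```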

Feature Limits

Historical data cannot be shipped.
Cross-region shipping is not supported. The log topic and COS bucket must be in the same region.
Cross-account shipping is not supported.

Shipping Format

| Data Shipping Format | Description | Recommended Scenario |
| --- | --- | --- |
| CSV | Log data is shipped to COS with fields joined by a specified separator, such as a space, tab, comma, semicolon, or vertical bar. | Computing in Data Lake Compute; shipping raw logs (logs collected in a single line, in multiple lines, or with separators). |
| JSON | Log data is shipped to COS in JSON format. | A common data format; select it as needed. |
| Parquet | Log data is shipped to COS in Parquet format. | Structured log data with convertible data types (not logs collected in a single line or multiple lines). This format is mainly used for Hive batch processing. |
Note:
After log data is shipped to COS, COS storage fees will be incurred. For billing details, see Billing Overview.
To cleanse log data before shipping it to COS, see Log Filtering and Distribution.
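As an illustration of the separator-based (CSV) shipping format, a shipped file can be parsed with Python's standard csv module. The field names, sample values, and the vertical-bar separator below are hypothetical; the actual field order depends on the keys configured in your log topic.

```python
import csv
import io

# Hypothetical sample of a CSV-shipped log file using "|" as the separator.
shipped = (
    "2024-01-20 17:44:35|GET|/index.html|200\n"
    "2024-01-20 17:44:36|POST|/api/login|401\n"
)

# csv.reader handles quoting and embedded separators, unlike str.split().
reader = csv.reader(io.StringIO(shipped), delimiter="|")
for log_time, method, path, status in reader:
    print(log_time, method, path, status)
```

The same parsing works on files downloaded from the COS bucket after decompression; for Parquet-shipped data, a columnar reader such as pyarrow or a Hive external table is the usual choice instead.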
