Tencent Cloud

Data Transfer Service

Data Sync Guide

Last updated: 2024-07-08 19:02:56
DTS allows you to sync the full and incremental data of the source database to CKafka, so that you can quickly obtain and consume business change data. This document describes how to use DTS to sync data from TDSQL for MySQL to CKafka.
Currently, TDSQL for MySQL is the only supported source database type.

Prerequisites

The source and target databases must meet the requirements for the sync feature and version as instructed in Databases Supported by Data Sync.
Source database permissions required for the sync task account:
GRANT RELOAD, LOCK TABLES, REPLICATION CLIENT, REPLICATION SLAVE, SELECT ON *.* TO 'migration account'@'%' IDENTIFIED BY 'migration password';
FLUSH PRIVILEGES;
You need to modify the message retention period and message size limit in the target CKafka instance.
We recommend setting the message retention period to 3 days. Data beyond the retention period will be cleared, so consume the data in time within the set period. The message size limit is the maximum size of a single message that CKafka can receive; set it to be greater than the maximum size of a single row of data in the source database tables so that data can be delivered to CKafka normally.
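CKafka settings are normally adjusted in the CKafka console. If your instance also exposes the standard Kafka admin interface, the two limits above correspond to the `retention.ms` and `max.message.bytes` topic configs; a sketch using the standard Kafka CLI (the broker address, topic name, and 10 MB size limit are placeholders):

```shell
# Placeholder broker address and topic name -- replace with your own.
# 3 days = 3 * 24 * 3600 * 1000 ms = 259200000 ms retention;
# 10485760 bytes (10 MB) is an example max single-message size.
kafka-configs.sh --bootstrap-server ckafka-host:9092 \
  --alter --entity-type topics --entity-name Topic_A \
  --add-config retention.ms=259200000,max.message.bytes=10485760
```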

Directions

1. Log in to the data sync task purchase page, select appropriate configuration items, and click Buy Now.
Billing Mode: Monthly subscription and pay-as-you-go billing modes are supported.
Source Instance Type: Select TDSQL for MySQL, which cannot be changed after purchase.
Source Instance Region: Select the source instance region, which cannot be changed after purchase.
Target Instance Type: Select Kafka, which cannot be changed after purchase.
Target Instance Region: Select the target instance region, which cannot be changed after purchase.
Specification: Select a specification based on your business needs. The higher the specification, the higher the performance. For more information, see Billing Overview.
2. After making the purchase, return to the data sync task list to view the task you just created. Then, click Configure in the Operation column to enter the Configure Sync Task page.
3. On the Configure Sync Task page, configure Instance ID, Account, and Password for the source instance, configure Instance ID for the target instance, test connectivity, and click Next.
Task Configuration
Task Name: DTS automatically generates a task name, which is customizable.
Running Mode: Immediate execution and scheduled execution are supported.
Source Instance Settings
Source Instance Type: The source database type selected during purchase, which cannot be changed.
Source Instance Region: The source instance region selected during purchase, which cannot be changed.
Access Type: Select a type based on your scenario. In this scenario, you can only select Database.
Account/Password: Enter the source database account and password.
Target Instance Settings
Target Instance Type: The target instance type selected during purchase, which cannot be changed.
Target Instance Region: The target instance region selected during purchase, which cannot be changed.
Access Type: Select a type based on your scenario. In this scenario, select CKafka instance.
Instance ID: Select the instance ID of the target instance.
4. On the Set sync options and objects page, set the following items: Data Initialization Option, Policy for Syncing Data to Kafka, Data Sync Option, and Sync Object Option. Then click Save and Go Next.
Data Initialization Option
Initialization Type: Both options below are selected by default; deselect them as needed.
Structure initialization: Table structures in the source instance will be initialized into the target instance before the sync task runs.
Full data initialization: Data in the source instance will be initialized into the target instance before the sync task runs. If you select Full data initialization only, you need to create the table structures in the target database in advance.
Format of Data Delivered to Kafka: Avro adopts a binary format with higher consumption efficiency, while JSON adopts an easier-to-use lightweight text format.
Policy for Syncing Data to Kafka
Topic Sync Policy:
Deliver to custom topic: Customize the topic name for delivery. The target Kafka will then automatically create a topic with the custom name, and the synced data is randomly delivered to different partitions under that topic. If the target Kafka fails to create the topic, the task will report an error.
Deliver to a single topic: Select an existing topic on the target side, and then deliver data based on one of several partitioning policies. Data can be delivered to a single partition of the specified topic, or delivered to different partitions by table name or by table name + primary key.
Rules for delivering to custom topic:
If you add multiple rules, the database and table rules are matched one by one from top to bottom. If no rule is matched, data is delivered to the topic of the last rule; if multiple rules are matched, data is delivered to the topics of all matched rules.
Example 1: There are tables named "Student" and "Teacher" in a database named "Users" on database instance X. To deliver all the data in the "Users" database to a topic named "Topic_A", configure the rules as follows:
Enter Topic_A for Topic Name, ^Users$ for Database Name Match, and .* for Table Name Match.
Enter Topic_default for Topic Name, Databases that don't match the above rules for Database Name Match, and Tables that don't match the above rules for Table Name Match.
Example 2: With the same instance, to deliver the data in the "Student" and "Teacher" tables to the topics "Topic_A" and "Topic_default" respectively, configure the rules as follows:
Enter Topic_A for Topic Name, ^Users$ for Database Name Match, and ^Student$ for Table Name Match.
Enter Topic_default for Topic Name, Databases that don't match the above rules for Database Name Match, and Tables that don't match the above rules for Table Name Match.
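The matching semantics above (top-to-bottom check, all matched topics receive the data, last rule as catch-all) can be sketched as follows. This is an illustrative model only, not DTS's internal implementation; the rule list mirrors the second rule example above.

```python
import re

RULES = [
    # (topic name, database regex, table regex); None marks the catch-all rule.
    ("Topic_A", r"^Users$", r"^Student$"),
    ("Topic_default", None, None),  # "don't match the above rules"
]

def topics_for(database: str, table: str) -> list:
    # Collect every rule whose database AND table regexes both match.
    matched = [
        topic for topic, db_re, tbl_re in RULES
        if db_re is not None
        and re.match(db_re, database) and re.match(tbl_re, table)
    ]
    # Fall back to the last rule (the catch-all) when nothing matches.
    return matched or [RULES[-1][0]]

print(topics_for("Users", "Student"))   # ['Topic_A']
print(topics_for("Users", "Teacher"))   # ['Topic_default']
```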
Rules for delivering to a single topic:
After you select a specified topic, the system partitions data based on the specified policy as follows.
Deliver all data to partition 0: Deliver all the synced data of the source database to the first partition.
By table name: Partition the synced data from the source database by table name. After setting, data with the same table name is written to the same partition.
By table name + primary key: Partition the synced data from the source database by table name and primary key. This policy is suitable for frequently accessed data. After setting, data from such tables is distributed to different partitions by table name and primary key, improving concurrent consumption efficiency.
Topic for DDL Storage: (Optional) If you need to deliver the DDL operations of the source database to a specified topic separately, you can select it here. After setting, DDL operations are delivered to partition 0 of the selected topic by default; if not set, they are delivered based on the topic rules selected above.
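The three single-topic partitioning policies above can be sketched as a partition-selection function. The function and policy names are illustrative assumptions; DTS's actual hashing is internal. A stable hash (rather than Python's salted built-in hash()) is used so the same table or key always maps to the same partition across runs.

```python
import hashlib

def _stable_hash(key: str) -> int:
    # Deterministic 64-bit hash derived from MD5.
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

def choose_partition(policy: str, num_partitions: int,
                     table: str, primary_key: str = "") -> int:
    if policy == "partition_0":       # deliver everything to the first partition
        return 0
    if policy == "by_table":          # same table -> same partition
        return _stable_hash(table) % num_partitions
    if policy == "by_table_and_pk":   # spread a hot table across partitions
        return _stable_hash(f"{table}:{primary_key}") % num_partitions
    raise ValueError(f"unknown policy: {policy}")

# Same table always lands in the same partition under "by table name":
assert choose_partition("by_table", 8, "Student") == choose_partition("by_table", 8, "Student")
```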
Data Sync Option
SQL Type: The following operations are supported: INSERT, DELETE, UPDATE, and DDL.
Sync Object Option
Database and Table Objects of Source Instance: Only database/table objects can be synced.
5. On the task verification page, complete the verification. After all check items are passed, click Start Task. If the verification fails, fix the problem as instructed in Check Item Overview and initiate the verification again.
Failed: A check item failed and the task is blocked. You need to fix the problem and run the verification task again.
Alarm: A check item doesn't completely meet the requirements; the task can continue, but the business may be affected. Based on the alarm message, assess whether to ignore the alarm or fix the problem before continuing the task.
6. Return to the data sync task list, and you can see that the task has entered the Running status.
Note
You can click More > Stop in the Operation column to stop a sync task. Before doing so, ensure that data sync has been completed.
7. (Optional) You can click a task name to enter the task details page and view the task initialization status and monitoring data.

Subsequent Operations

After the data is synced to the target Kafka, it can be consumed. We provide you with a consumption demo.
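Any Kafka-compatible client can consume the synced data. A minimal sketch using the kafka-python package is below; the topic name, broker address, and the JSON field names in the record ("database", "table", "eventType", "data") are placeholders, not the actual DTS message schema, so check the schema delivered by your task.

```python
import json

def parse_record(raw: bytes) -> dict:
    # Hypothetical JSON change-record layout -- adjust to the real schema.
    record = json.loads(raw)
    return {
        "db": record.get("database"),
        "table": record.get("table"),
        "op": record.get("eventType"),
        "row": record.get("data"),
    }

def consume(topic: str, servers: list) -> None:
    # Requires kafka-python (pip install kafka-python) and network access
    # to the CKafka instance; not executed in this sketch.
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(topic, bootstrap_servers=servers,
                             auto_offset_reset="earliest")
    for msg in consumer:
        print(parse_record(msg.value))
```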
