-1. In **Connectivity Method**, select **VPC Peering** or **Public IP**, fill in your Kafka brokers endpoints. You can use commas `,` to separate multiple endpoints.
+1. In **Connectivity Method**, select **Public**, fill in your Kafka brokers endpoints. You can use commas `,` to separate multiple endpoints.
2. Select an **Authentication** option according to your Kafka authentication configuration.
- If your Kafka does not require authentication, keep the default option **Disable**.
@@ -139,11 +86,11 @@ The steps vary depending on the connectivity method you select.
6. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
+
1. In **Connectivity Method**, select **Private Link**.
-2. In **Private Endpoint**, select the private endpoint that you created in the [Network](#network) section. Make sure the AZs of the private endpoint match the AZs of the Kafka deployment.
-3. Fill in the **Bootstrap Ports** that you obtained from the [Network](#network) section. It is recommended that you set at least one port for one AZ. You can use commas `,` to separate multiple ports.
+2. In **Private Link Connection**, select the private link connection that you created in the [Network](#network) section. Make sure the AZs of the private link connection match the AZs of the Kafka deployment.
+3. Fill in the **Bootstrap Port** that you obtained from the [Network](#network) section.
4. Select an **Authentication** option according to your Kafka authentication configuration.
- If your Kafka does not require authentication, keep the default option **Disable**.
@@ -151,90 +98,26 @@ The steps vary depending on the connectivity method you select.
5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
6. Select a **Compression** type for the data in this changefeed.
7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
-8. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
+8. Fill in the **TLS Server Name** if your Kafka requires TLS SNI verification, for example, when connecting to Confluent Cloud Dedicated clusters.
+9. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
-
-
-
-
-1. In **Connectivity Method**, select **Private Link**.
-2. In **Private Endpoint**, select the private endpoint that you created in the [Network](#network) section. Make sure the AZs of the private endpoint match the AZs of the Kafka deployment.
-3. Fill in the **Bootstrap Ports** that you obtained from the [Network](#network) section. It is recommended that you set at least one port for one AZ. You can use commas `,` to separate multiple ports.
-4. Select an **Authentication** option according to your Kafka authentication configuration.
-
- - If your Kafka does not require authentication, keep the default option **Disable**.
- - If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
-6. Select a **Compression** type for the data in this changefeed.
-7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
-8. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
-
-
-
-
-
-
-
-1. In **Connectivity Method**, select **Private Service Connect**.
-2. In **Private Endpoint**, select the private endpoint that you created in the [Network](#network) section.
-3. Fill in the **Bootstrap Ports** that you obtained from the [Network](#network) section. It is recommended that you provide more than one port. You can use commas `,` to separate multiple ports.
-4. Select an **Authentication** option according to your Kafka authentication configuration.
-
- - If your Kafka does not require authentication, keep the default option **Disable**.
- - If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
-6. Select a **Compression** type for the data in this changefeed.
-7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
-8. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
-9. TiDB Cloud creates the endpoint for **Private Service Connect**, which might take several minutes.
-10. Once the endpoint is created, log in to your cloud provider console and accept the connection request.
-11. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds.
-
-
-
-
-
-
-
-1. In **Connectivity Method**, select **Private Link**.
-2. In **Private Endpoint**, select the private endpoint that you created in the [Network](#network) section.
-3. Fill in the **Bootstrap Ports** that you obtained in the [Network](#network) section. It is recommended that you set at least one port for one AZ. You can use commas `,` to separate multiple ports.
-4. Select an **Authentication** option according to your Kafka authentication configuration.
-
- - If your Kafka does not require authentication, keep the default option **Disable**.
- - If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
-5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
-6. Select a **Compression** type for the data in this changefeed.
-7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
-8. Click **Next** to test the network connection. If the test succeeds, you will be directed to the next page.
-9. TiDB Cloud creates the endpoint for **Private Link**, which might take several minutes.
-10. Once the endpoint is created, log in to the [Azure portal](https://portal.azure.com/) and accept the connection request.
-11. Return to the [TiDB Cloud console](https://tidbcloud.com) to confirm that you have accepted the connection request. TiDB Cloud will test the connection and proceed to the next page if the test succeeds.
-
-
-
## Step 3. Set the changefeed
-1. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](/table-filter.md).
+1. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](https://docs.pingcap.com/tidb/stable/table-filter/#syntax).
+ - **Replication Scope**: you can choose to only replicate tables with valid keys or replicate all selected tables.
+ - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click `apply`, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under the `Filter results`.
- **Case Sensitive**: you can set whether the matching of database and table names in filter rules is case-sensitive. By default, matching is case-insensitive.
- - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules in the box on the right. You can add up to 100 filter rules.
- - **Tables with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
- - **Tables without valid keys**: this column shows tables that lack primary keys or unique keys. These tables present a challenge during replication because the absence of a unique identifier can result in inconsistent data when the downstream handles duplicate events. To ensure data consistency, it is recommended to add unique keys or primary keys to these tables before initiating the replication. Alternatively, you can add filter rules to exclude these tables. For example, you can exclude the table `test.tbl1` by using the rule `"!test.tbl1"`.
+ - **Filter results with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
+ - **Filter results without valid keys**: this column shows tables that lack primary keys or unique keys. These tables present a challenge during replication because the absence of a unique identifier can result in inconsistent data when the downstream handles duplicate events. To ensure data consistency, it is recommended to add unique keys or primary keys to these tables before initiating the replication. Alternatively, you can add filter rules to exclude these tables. For example, you can exclude the table `test.tbl1` by using the rule `"!test.tbl1"`.
2. Customize **Event Filter** to filter the events that you want to replicate.
- - **Tables matching**: you can set which tables the event filter will be applied to in this column. The rule syntax is the same as that used for the preceding **Table Filter** area. You can add up to 10 event filter rules per changefeed.
- - **Event Filter**: you can use the following event filters to exclude specific events from the changefeed:
- - **Ignore event**: excludes specified event types.
- - **Ignore SQL**: excludes DDL events that match specified expressions. For example, `^drop` excludes statements starting with `DROP`, and `add column` excludes statements containing `ADD COLUMN`.
- - **Ignore insert value expression**: excludes `INSERT` statements that meet specific conditions. For example, `id >= 100` excludes `INSERT` statements where `id` is greater than or equal to 100.
- - **Ignore update new value expression**: excludes `UPDATE` statements where the new value matches a specified condition. For example, `gender = 'male'` excludes updates that result in `gender` being `male`.
- - **Ignore update old value expression**: excludes `UPDATE` statements where the old value matches a specified condition. For example, `age < 18` excludes updates where the old value of `age` is less than 18.
- - **Ignore delete value expression**: excludes `DELETE` statements that meet a specified condition. For example, `name = 'john'` excludes `DELETE` statements where `name` is `'john'`.
+ - **Tables matching**: you can set which tables the event filter will be applied to in this column. The rule syntax is the same as that used for the preceding **Table Filter** area.
+ - **Event Filter**: you can choose the events you want to ingnore.
3. Customize **Column Selector** to select columns from events and send only the data changes related to those columns to the downstream.
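The **Filter results without valid keys** column in the preceding table filter step flags tables that lack a primary key or unique index. If you want to find such tables before you configure the filter, a rough check like the following can help. This is a minimal sketch that assumes a database named `test`; it does not verify the non-null requirement on unique index columns, so review the results before relying on them.

```sql
-- Rough check: list tables in the `test` database that declare neither a
-- PRIMARY KEY nor a UNIQUE constraint, so you can add a key to them or
-- exclude them with a filter rule such as `!test.tbl1`.
SELECT t.TABLE_SCHEMA, t.TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES AS t
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS AS c
    ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
    AND c.TABLE_NAME = t.TABLE_NAME
    AND c.CONSTRAINT_TYPE IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.TABLE_SCHEMA = 'test'
    AND t.TABLE_TYPE = 'BASE TABLE'
    AND c.CONSTRAINT_NAME IS NULL;
```

You can then add a primary key or unique key to these tables, or exclude them from the changefeed with a filter rule.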
@@ -257,7 +140,7 @@ The steps vary depending on the connectivity method you select.
6. If you select **Avro** as your data format, you will see some Avro-specific configurations on the page. You can fill in these configurations as follows:
- In the **Decimal** and **Unsigned BigInt** configurations, specify how TiDB Cloud handles the decimal and unsigned bigint data types in Kafka messages.
-    - In the **Schema Registry** area, fill in your schema registry endpoint. If you enable **HTTP Authentication**, the fields for user name and password are displayed and automatically filled in with your TiDB cluster endpoint and password.
+    - In the **Schema Registry** area, fill in your schema registry endpoint. If you enable **HTTP Authentication**, the user name and password fields are displayed for you to fill in.
7. In the **Topic Distribution** area, select a distribution mode, and then fill in the topic name configurations according to the mode.
@@ -285,7 +168,7 @@ The steps vary depending on the connectivity method you select.
- **Distribute changelogs by primary key or index value to Kafka partition**
- If you want the changefeed to send Kafka messages of a table to different partitions, choose this distribution method. The primary key or index value of a row changelog will determine which partition the changelog is sent to. This distribution method provides a better partition balance and ensures row-level orderliness.
+ If you want the changefeed to send Kafka messages of a table to different partitions, choose this distribution method. The primary key or index value of a row changelog will determine which partition the changelog is sent to. Keep the **Index Name** field empty if you want to use the primary key. This distribution method provides a better partition balance and ensures row-level orderliness.
- **Distribute changelogs by table to Kafka partition**
@@ -308,14 +191,7 @@ The steps vary depending on the connectivity method you select.
11. Click **Next**.
-## Step 4. Configure your changefeed specification
-
-1. In the **Changefeed Specification** area, specify the number of Replication Capacity Units (RCUs) or Changefeed Capacity Units (CCUs) to be used by the changefeed.
-2. In the **Changefeed Name** area, specify a name for the changefeed.
-3. Click **Next** to check the configurations you set and go to the next page.
-
-## Step 5. Review the configurations
-
-On this page, you can review all the changefeed configurations that you set.
+## Step 4. Review and create your changefeed specification
-If you find any error, you can go back to fix the error. If there is no error, you can click the check box at the bottom, and then click **Create** to create the changefeed.
+1. In the **Changefeed Name** area, specify a name for the changefeed.
+2. Review all the changefeed configurations that you set. Click **Previous** to go back to the previous configuration pages if you want to modify some configurations. Click **Submit** if all configurations are correct to create the changefeed.
\ No newline at end of file
From 90c8a4f904002ee0dc4ead1264f999b6c78517d4 Mon Sep 17 00:00:00 2001
From: shi yuhang <52435083+shiyuhang0@users.noreply.github.com>
Date: Fri, 26 Dec 2025 15:44:13 +0800
Subject: [PATCH 06/13] Apply suggestions from code review
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---
TOC-tidb-cloud-essential.md | 4 ++--
tidb-cloud/essential-changefeed-overview.md | 4 ++--
.../essential-changefeed-sink-to-kafka.md | 18 +++++++++---------
.../essential-changefeed-sink-to-mysql.md | 16 ++++++++--------
4 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/TOC-tidb-cloud-essential.md b/TOC-tidb-cloud-essential.md
index 7cc0d54fc2051..93a9d7893325e 100644
--- a/TOC-tidb-cloud-essential.md
+++ b/TOC-tidb-cloud-essential.md
@@ -234,8 +234,8 @@
- [Connect AWS DMS to TiDB Cloud clusters](/tidb-cloud/tidb-cloud-connect-aws-dms.md)
- Stream Data
- [Changefeed Overview](/tidb-cloud/essential-changefeed-overview.md)
- - [To MySQL Sink](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
- - [To Kafka Sink](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
+ - [Sink to MySQL](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
+ - [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
- Vector Search 
- [Overview](/vector-search/vector-search-overview.md)
- Get Started
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 912186477d584..9ed3b05fc242c 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -58,7 +58,7 @@ ticloud serverless changefeed get -c
--changefeed-id
-1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB ckuster.
+1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster.
2. Locate the corresponding changefeed you want to pause or resume, and click **...** > **Pause/Resume** in the **Action** column.
@@ -85,7 +85,7 @@ ticloud serverless changefeed resume -c --changefeed-id **Note:**
>
-> TiDB Cloud currently only allows editing changefeeds in the paused status.
+> TiDB Cloud currently only allows editing changefeeds that are in the `Paused` state.
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index 5f548e4b3d595..f578de8ce79e9 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -33,7 +33,7 @@ Ensure that your TiDB Cloud cluster can connect to the Apache Kafka service. You
Private Link Connection leverages **Private Link** technologies from cloud providers to enable resources in your VPC to connect to services in other VPCs using private IP addresses, as if those services were hosted directly within your VPC.
-TiDB Cloud currently supports Private Link Connection only for self-hosted Kafka and Confluent Cloud dedicated cluster. It does not support direct integration with MSK, or other Kafka SaaS services.
+TiDB Cloud currently supports Private Link Connection only for self-hosted Kafka and Confluent Cloud Dedicated Cluster. It does not support direct integration with MSK, or other Kafka SaaS services.
See the following instructions to set up a Private Link connection according to your Kafka deployment and cloud provider:
@@ -45,9 +45,9 @@ See the following instructions to set up a Private Link connection according to
-If you want to provide Public access to your Apache Kafka service, assign Public IP addresses or domain names to all your Kafka brokers.
+If you want to provide public access to your Apache Kafka service, assign public IP addresses or domain names to all your Kafka brokers.
-It is **NOT** recommended to use Public access in a production environment.
+It is not recommended to use public access in a production environment.
@@ -59,7 +59,7 @@ To allow TiDB Cloud changefeeds to stream data to Apache Kafka and create Kafka
- The `Create` and `Write` permissions are added for the topic resource type in Kafka.
- The `DescribeConfigs` permission is added for the cluster resource type in Kafka.
-For example, if your Kafka cluster is in Confluent Cloud, you can see [Resources](https://docs.confluent.io/platform/current/kafka/authorization.html#resources) and [Adding ACLs](https://docs.confluent.io/platform/current/kafka/authorization.html#adding-acls) in Confluent documentation for more information.
+For example, if your Kafka cluster is in Confluent Cloud, refer to [Resources](https://docs.confluent.io/platform/current/kafka/authorization.html#resources) and [Adding ACLs](https://docs.confluent.io/platform/current/kafka/authorization.html#adding-acls) in the Confluent documentation for more information.
## Step 1. Open the Changefeed page for Apache Kafka
@@ -74,7 +74,7 @@ The steps vary depending on the connectivity method you select.
-1. In **Connectivity Method**, select **Public**, fill in your Kafka brokers endpoints. You can use commas `,` to separate multiple endpoints.
+1. In **Connectivity Method**, select **Public**, and fill in your Kafka broker endpoints. You can use commas `,` to separate multiple endpoints.
2. Select an **Authentication** option according to your Kafka authentication configuration.
- If your Kafka does not require authentication, keep the default option **Disable**.
@@ -109,7 +109,7 @@ The steps vary depending on the connectivity method you select.
1. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](https://docs.pingcap.com/tidb/stable/table-filter/#syntax).
- **Replication Scope**: you can choose to only replicate tables with valid keys or replicate all selected tables.
- - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click `apply`, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under the `Filter results`.
+ - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click **Apply**, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under **Filter results**.
- **Case Sensitive**: you can set whether the matching of database and table names in filter rules is case-sensitive. By default, matching is case-insensitive.
- **Filter results with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
- **Filter results without valid keys**: this column shows tables that lack primary keys or unique keys. These tables present a challenge during replication because the absence of a unique identifier can result in inconsistent data when the downstream handles duplicate events. To ensure data consistency, it is recommended to add unique keys or primary keys to these tables before initiating the replication. Alternatively, you can add filter rules to exclude these tables. For example, you can exclude the table `test.tbl1` by using the rule `"!test.tbl1"`.
@@ -117,7 +117,7 @@ The steps vary depending on the connectivity method you select.
2. Customize **Event Filter** to filter the events that you want to replicate.
- **Tables matching**: you can set which tables the event filter will be applied to in this column. The rule syntax is the same as that used for the preceding **Table Filter** area.
- - **Event Filter**: you can choose the events you want to ingnore.
+ - **Event Filter**: you can choose the events you want to ignore.
3. Customize **Column Selector** to select columns from events and send only the data changes related to those columns to the downstream.
@@ -130,7 +130,7 @@ The steps vary depending on the connectivity method you select.
- Avro is a compact, fast, and binary data format with rich data structures, which is widely used in various flow systems. For more information, see [Avro data format](https://docs.pingcap.com/tidb/stable/ticdc-avro-protocol).
- Canal-JSON is a plain JSON text format, which is easy to parse. For more information, see [Canal-JSON data format](https://docs.pingcap.com/tidb/stable/ticdc-canal-json).
- - Open Protocol is a row-level data change notification protocol that provides data sources for monitoring, caching, full-text indexing, analysis engines, and primary-secondary replication between different databases. For more information, see [Open Protocol data format](https://docs.pingcap.com/tidb/stable/ticdc-open-protocol).
+ - Open Protocol is a row-level data change notification protocol that provides data sources for monitoring, caching, full-text indexing, analysis engines, and primary-secondary replication between different databases. For more information, see [Open Protocol data format](https://docs.pingcap.com/tidb/stable/ticdc-open-protocol).
- Debezium is a tool for capturing database changes. It converts each captured database change into a message called an "event" and sends these events to Kafka. For more information, see [Debezium data format](https://docs.pingcap.com/tidb/stable/ticdc-debezium).
5. Enable the **TiDB Extension** option if you want to add TiDB-extension fields to the Kafka message body.
@@ -180,7 +180,7 @@ The steps vary depending on the connectivity method you select.
- **Distribute changelogs by column value to Kafka partition**
- If you want the changefeed to send Kafka messages of a table to different partitions, choose this distribution method. The specified column values of a row changelog will determine which partition the changelog is sent to. This distribution method ensures orderliness in each partition and guarantees that the changelog with the same column values is send to the same partition.
+ If you want the changefeed to send Kafka messages of a table to different partitions, choose this distribution method. The specified column values of a row changelog will determine which partition the changelog is sent to. This distribution method ensures orderliness in each partition and guarantees that the changelog with the same column values is sent to the same partition.
9. In the **Topic Configuration** area, configure the following numbers. The changefeed will automatically create the Kafka topics according to the numbers.
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index 9d1e32791686f..286bef181be75 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -34,7 +34,7 @@ If your MySQL service can be accessed over the public network, you can choose to
-Private link connection leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
+Private link connections leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
You can connect your TiDB Cloud cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
@@ -48,7 +48,7 @@ The **Sink to MySQL** connector can only sink incremental data from your TiDB Cl
To load the existing data:
-1. Extend the [tidb_gc_life_time](https://docs.pingcap.com/tidb/stable/system-variables#tidb_gc_life_time-new-in-v50) to be longer than the total time of the following two operations, so that historical data during the time is not garbage collected by TiDB.
+1. Extend the [tidb_gc_life_time](https://docs.pingcap.com/tidb/stable/system-variables#tidb_gc_life_time-new-in-v50) to be longer than the total time of the following two operations, so that historical data during this period is not garbage collected by TiDB.
- The time to export and import the existing data
- The time to create **Sink to MySQL**
@@ -82,7 +82,7 @@ After completing the prerequisites, you can sink your data to MySQL.
- If you choose **Public**, fill in your MySQL endpoint.
- If you choose **Private Link**, select the private link connection that you created in the [Network](#network) section, and then fill in the MySQL port for your MySQL service.
-4. In **Authentication**, fill in the MySQL user name, password and TLS Encryption of your MySQL service. TiDB Cloud does not support self-signed certificates for MySQL TLS connections currently.
+4. In **Authentication**, fill in the MySQL user name and password, and configure TLS encryption for your MySQL service. Currently, TiDB Cloud does not support self-signed certificates for MySQL TLS connections.
5. Click **Next** to test whether TiDB can connect to MySQL successfully:
@@ -92,7 +92,7 @@ After completing the prerequisites, you can sink your data to MySQL.
6. Customize **Table Filter** to filter the tables that you want to replicate. For the rule syntax, refer to [table filter rules](https://docs.pingcap.com/tidb/stable/table-filter/#syntax).
- **Replication Scope**: you can choose to only replicate tables with valid keys or replicate all selected tables.
- - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click `apply`, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under the `Filter results`.
+ - **Filter Rules**: you can set filter rules in this column. By default, there is a rule `*.*`, which stands for replicating all tables. When you add a new rule and click **Apply**, TiDB Cloud queries all the tables in TiDB and displays only the tables that match the rules under **Filter results**.
- **Case Sensitive**: you can set whether the matching of database and table names in filter rules is case-sensitive. By default, matching is case-insensitive.
- **Filter results with valid keys**: this column displays the tables that have valid keys, including primary keys or unique indexes.
- **Filter results without valid keys**: this column shows tables that lack primary keys or unique keys. These tables present a challenge during replication because the absence of a unique identifier can result in inconsistent data when the downstream handles duplicate events. To ensure data consistency, it is recommended to add unique keys or primary keys to these tables before initiating the replication. Alternatively, you can add filter rules to exclude these tables. For example, you can exclude the table `test.tbl1` by using the rule `"!test.tbl1"`.
@@ -100,20 +100,20 @@ After completing the prerequisites, you can sink your data to MySQL.
7. Customize **Event Filter** to filter the events that you want to replicate.
- **Tables matching**: you can set which tables the event filter will be applied to in this column. The rule syntax is the same as that used for the preceding **Table Filter** area.
- - **Event Filter**: you can choose the events you want to ingnore.
+ - **Event Filter**: you can choose the events you want to ignore.
8. In **Start Replication Position**, configure the starting position for your MySQL sink.
- - If you have [loaded the existing data](#load-existing-data-optional) using Export, select **From Time** and fill in the snapshot time that you get from Export. Pay attention the time zone.
+ - If you have [loaded the existing data](#load-existing-data-optional) using Export, select **From Time** and fill in the snapshot time that you get from Export. Pay attention to the time zone.
- If you do not have any data in the upstream TiDB cluster, select **Start replication from now on**.
9. Click **Next** to configure your changefeed specification.
- In the **Changefeed Name** area, specify a name for the changefeed.
-10. If you confirm that all configurations are correct, click **Submit**. If you want to modify some configurations, click **Previous** to go back to the previous configuration page.
+10. If you confirm that all configurations are correct, click **Submit**. If you want to modify some configurations, click **Previous** to go back to the previous configuration page.
-11. The sink starts soon, and you can see the status of the sink changes from **Creating** to **Running**.
+11. The sink starts soon, and you can see the sink status change from **Creating** to **Running**.
Click the changefeed name, and you can see more details about the changefeed, such as the checkpoint, replication latency, and other metrics.
From c7f2950145a4fa4eb753525195d7fc2ef01c200d Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Fri, 26 Dec 2025 16:00:10 +0800
Subject: [PATCH 07/13] fix changefeed
---
tidb-cloud/essential-changefeed-overview.md | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 9ed3b05fc242c..136df8c168697 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -30,7 +30,7 @@ On the **Changefeed** page, you can create a changefeed, view a list of existing
To create a changefeed, refer to the tutorials:
-- [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-apache-kafka.md)
+- [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
- [Sink to MySQL](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
## View a changefeed
@@ -80,7 +80,6 @@ ticloud serverless changefeed resume -c
--changefeed-id
-
## Edit a changefeed
> **Note:**
From 3623e4a17fcdaf4f899909374275dd80f757a43a Mon Sep 17 00:00:00 2001
From: shiyuhang <1136742008@qq.com>
Date: Fri, 26 Dec 2025 16:46:29 +0800
Subject: [PATCH 08/13] fix link
---
tidb-cloud/essential-changefeed-sink-to-kafka.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index f578de8ce79e9..5a0b2efeec95e 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -59,7 +59,7 @@ To allow TiDB Cloud changefeeds to stream data to Apache Kafka and create Kafka
- The `Create` and `Write` permissions are added for the topic resource type in Kafka.
- The `DescribeConfigs` permission is added for the cluster resource type in Kafka.
-For example, if your Kafka cluster is in Confluent Cloud, refer to [Resources](https://docs.confluent.io/platform/current/kafka/authorization.html#resources) and [Adding ACLs](https://docs.confluent.io/platform/current/kafka/authorization.html#adding-acls) in the Confluent documentation for more information.
+For example, if your Kafka cluster is in Confluent Cloud, refer to [Resources](https://docs.confluent.io/platform/current/kafka/authorization.html#resources) and [Adding ACLs](https://docs.confluent.io/platform/current/security/authorization/acls/manage-acls.html#add-acls) in the Confluent documentation for more information.
## Step 1. Open the Changefeed page for Apache Kafka
From 18e8c41ca19c3891f868d427606193beee655544 Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Sun, 4 Jan 2026 14:54:55 +0800
Subject: [PATCH 09/13] Update essential-changefeed-overview.md
---
tidb-cloud/essential-changefeed-overview.md | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 136df8c168697..500e7e93926bb 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -9,6 +9,7 @@ TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data servic
> **Note:**
>
+> - The changefeed feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
> - Currently, TiDB Cloud only allows up to 10 changefeeds per {{{ .essential }}} cluster.
> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
@@ -35,6 +36,8 @@ To create a changefeed, refer to the tutorials:
## View a changefeed
+You can view a changefeed using the TiDB Cloud console or the TiDB Cloud CLI.
+
@@ -55,11 +58,13 @@ ticloud serverless changefeed get -c
--changefeed-id
1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster.
-2. Locate the corresponding changefeed you want to pause or resume, and click **...** > **Pause/Resume** in the **Action** column.
+2. Locate the corresponding changefeed you want to pause or resume. In the **Action** column, click **...** > **Pause/Resume**.
@@ -86,11 +91,13 @@ ticloud serverless changefeed resume -c --changefeed-id
> TiDB Cloud currently only allows editing changefeeds that are in the `Paused` state.
+You can edit a changefeed using the TiDB Cloud console or the TiDB Cloud CLI.
+
1. Navigate to the [**Changefeed**](#view-the-changefeed-page) page of your target TiDB cluster.
-2. Locate the changefeed you want to pause, and click **...** > **Pause** in the **Action** column.
+2. Locate the changefeed you want to pause. In the **Action** column, click **...** > **Pause**.
3. When the changefeed status changes to `Paused`, click **...** > **Edit** to edit the corresponding changefeed.
TiDB Cloud populates the changefeed configuration by default. You can modify the following configurations:
@@ -128,6 +135,8 @@ ticloud serverless changefeed edit -c
--changefeed-id
@@ -147,7 +156,7 @@ ticloud serverless changefeed delete -c
--changefeed-id
Date: Sun, 4 Jan 2026 15:00:52 +0800
Subject: [PATCH 10/13] Refactor docs to use .essential variable for product
name
Replaced hardcoded 'TiDB Cloud' references with the templated '{{{ .essential }}}' variable in changefeed sink documentation for Kafka and MySQL. Added beta feature notes for both sinks and updated instructions and restrictions to use the variable for consistency and easier product branding.
---
.../essential-changefeed-sink-to-kafka.md | 20 ++++++++++-------
.../essential-changefeed-sink-to-mysql.md | 22 +++++++++++--------
2 files changed, 25 insertions(+), 17 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index 5a0b2efeec95e..14f044195a0c5 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -1,17 +1,21 @@
---
title: Sink to Apache Kafka
-summary: This document explains how to create a changefeed to stream data from TiDB Cloud to Apache Kafka. It includes restrictions, prerequisites, and steps to configure the changefeed for Apache Kafka. The process involves setting up network connections, adding permissions for Kafka ACL authorization, and configuring the changefeed specification.
+summary: This document explains how to create a changefeed to stream data from {{{ .essential }}} to Apache Kafka. It includes restrictions, prerequisites, and steps to configure the changefeed for Apache Kafka. The process involves setting up network connections, adding permissions for Kafka ACL authorization, and configuring the changefeed specification.
---
# Sink to Apache Kafka
-This document describes how to create a changefeed to stream data from TiDB Cloud to Apache Kafka.
+This document describes how to create a changefeed to stream data from {{{ .essential }}} to Apache Kafka.
+
+> **Note:**
+>
+> - The sink to Apache Kafka feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
## Restrictions
-- For each TiDB Cloud cluster, you can create up to 10 changefeeds.
-- Currently, TiDB Cloud does not support uploading self-signed TLS certificates to connect to Kafka brokers.
-- Because TiDB Cloud uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios).
+- For each {{{ .essential }}} cluster, you can create up to 10 changefeeds.
+- Currently, {{{ .essential }}} does not support uploading self-signed TLS certificates to connect to Kafka brokers.
+- Because {{{ .essential }}} uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios).
- If the table to be replicated does not have a primary key or a non-null unique index, the absence of a unique constraint during replication could result in duplicated data being inserted downstream in some retry scenarios.
## Prerequisites
@@ -23,7 +27,7 @@ Before creating a changefeed to stream data to Apache Kafka, you need to complet
### Network
-Ensure that your TiDB Cloud cluster can connect to the Apache Kafka service. You can choose one of the following connection methods:
+Ensure that your {{{ .essential }}} cluster can connect to the Apache Kafka service. You can choose one of the following connection methods:
- Public Access: suitable for a quick setup.
- Private Link Connection: meeting security compliance and ensuring network quality.
@@ -33,7 +37,7 @@ Ensure that your TiDB Cloud cluster can connect to the Apache Kafka service. You
Private Link Connection leverages **Private Link** technologies from cloud providers to enable resources in your VPC to connect to services in other VPCs using private IP addresses, as if those services were hosted directly within your VPC.
-TiDB Cloud currently supports Private Link Connection only for self-hosted Kafka and Confluent Cloud Dedicated Cluster. It does not support direct integration with MSK, or other Kafka SaaS services.
+{{{ .essential }}} currently supports Private Link Connection only for self-hosted Kafka and Confluent Cloud Dedicated Cluster. It does not support direct integration with MSK, or other Kafka SaaS services.
See the following instructions to set up a Private Link connection according to your Kafka deployment and cloud provider:
@@ -54,7 +58,7 @@ It is not recommended to use public access in a production environment.
### Kafka ACL authorization
-To allow TiDB Cloud changefeeds to stream data to Apache Kafka and create Kafka topics automatically, ensure that the following permissions are added in Kafka:
+To allow {{{ .essential }}} changefeeds to stream data to Apache Kafka and create Kafka topics automatically, ensure that the following permissions are added in Kafka:
- The `Create` and `Write` permissions are added for the topic resource type in Kafka.
- The `DescribeConfigs` permission is added for the cluster resource type in Kafka.
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index 286bef181be75..f70791acd1bd3 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -1,16 +1,20 @@
---
title: Sink to MySQL
-summary: This document explains how to stream data from TiDB Cloud to MySQL using the Sink to MySQL changefeed. It includes restrictions, prerequisites, and steps to create a MySQL sink for data replication. The process involves setting up network connections, loading existing data to MySQL, and creating target tables in MySQL. After completing the prerequisites, users can create a MySQL sink to replicate data to MySQL.
+summary: This document explains how to stream data from {{{ .essential }}} to MySQL using the Sink to MySQL changefeed. It includes restrictions, prerequisites, and steps to create a MySQL sink for data replication. The process involves setting up network connections, loading existing data to MySQL, and creating target tables in MySQL. After completing the prerequisites, users can create a MySQL sink to replicate data to MySQL.
---
# Sink to MySQL
-This document describes how to stream data from TiDB Cloud to MySQL using the **Sink to MySQL** changefeed.
+This document describes how to stream data from {{{ .essential }}} to MySQL using the **Sink to MySQL** changefeed.
+
+> **Note:**
+>
+> The sink to MySQL feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
## Restrictions
-- For each TiDB Cloud cluster, you can create up to 10 changefeeds.
-- Because TiDB Cloud uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios).
+- For each{{{ .essential }}} cluster, you can create up to 10 changefeeds.
+- Because {{{ .essential }}} uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios).
- If the table to be replicated does not have a primary key or a non-null unique index, the absence of a unique constraint during replication could result in duplicated data being inserted downstream in some retry scenarios.
## Prerequisites
@@ -23,7 +27,7 @@ Before creating a changefeed, you need to complete the following prerequisites:
### Network
-Make sure that your TiDB Cloud cluster can connect to the MySQL service.
+Make sure that your {{{ .essential }}} cluster can connect to the MySQL service.
@@ -36,7 +40,7 @@ If your MySQL service can be accessed over the public network, you can choose to
Private link connections leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
-You can connect your TiDB Cloud cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
+You can connect your {{{ .essential }}} cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
@@ -44,7 +48,7 @@ You can connect your TiDB Cloud cluster to your MySQL service securely through a
### Load existing data (optional)
-The **Sink to MySQL** connector can only sink incremental data from your TiDB Cloud cluster to MySQL after a certain timestamp. If you already have data in your TiDB Cloud cluster, you can export and load the existing data of your TiDB Cloud cluster into MySQL before enabling **Sink to MySQL**.
+The **Sink to MySQL** connector can only sink incremental data from your {{{ .essential }}} cluster to MySQL after a certain timestamp. If you already have data in your {{{ .essential }}} cluster, you can export and load the existing data of your {{{ .essential }}} cluster into MySQL before enabling **Sink to MySQL**.
To load the existing data:
@@ -61,7 +65,7 @@ To load the existing data:
SET GLOBAL tidb_gc_life_time = '72h';
```
-2. Use [Export](/tidb-cloud/serverless-export.md) to export data from your TiDB Cloud cluster, then use community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html) to load data to the MySQL service.
+2. Use [Export](/tidb-cloud/serverless-export.md) to export data from your {{{ .essential }}} cluster, then use community tools such as [mydumper/myloader](https://centminmod.com/mydumper.html) to load data to the MySQL service.
3. Use the snapshot time of [Export](/tidb-cloud/serverless-export.md) as the start position of MySQL sink.
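Before extending `tidb_gc_life_time` as described above, you might want to record its current value so that you can restore it after the sink is created. A minimal check:

```sql
-- Check the current GC life time and note it down so that you can restore
-- it after the changefeed is created (the default value is 10m).
SHOW VARIABLES LIKE 'tidb_gc_life_time';
```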
@@ -73,7 +77,7 @@ If you do not load the existing data, you need to create corresponding target ta
After completing the prerequisites, you can sink your data to MySQL.
-1. Navigate to the overview page of the target TiDB Cloud cluster, and then click **Data** > **Changefeed** in the left navigation pane.
+1. Navigate to the overview page of the target {{{ .essential }}} cluster, and then click **Data** > **Changefeed** in the left navigation pane.
2. Click **Create Changefeed**, and select **MySQL** as **Destination**.
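As noted in the prerequisites, if you do not load the existing data, the corresponding target tables must already exist in MySQL before the changefeed starts replicating. The following is a minimal sketch that assumes a hypothetical upstream table `test.users`; adjust the schema to match your own tables.

```sql
-- Hypothetical example: create a target table in MySQL whose schema matches
-- the upstream TiDB table `test.users` before starting the changefeed.
CREATE DATABASE IF NOT EXISTS test;
CREATE TABLE IF NOT EXISTS test.users (
    id BIGINT NOT NULL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    created_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
);
```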
From e854ebb9f44b89dbccfdf9251c705786cf2881f7 Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Sun, 4 Jan 2026 15:01:49 +0800
Subject: [PATCH 11/13] Update TOC-tidb-cloud-essential.md
---
TOC-tidb-cloud-essential.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/TOC-tidb-cloud-essential.md b/TOC-tidb-cloud-essential.md
index 93a9d7893325e..a498cc45e7b33 100644
--- a/TOC-tidb-cloud-essential.md
+++ b/TOC-tidb-cloud-essential.md
@@ -232,7 +232,7 @@
- [CSV Configurations for Importing Data](/tidb-cloud/csv-config-for-import-data.md)
- [Troubleshoot Access Denied Errors during Data Import from Amazon S3](/tidb-cloud/troubleshoot-import-access-denied-error.md)
- [Connect AWS DMS to TiDB Cloud clusters](/tidb-cloud/tidb-cloud-connect-aws-dms.md)
-- Stream Data
+- Stream Data 
- [Changefeed Overview](/tidb-cloud/essential-changefeed-overview.md)
- [Sink to MySQL](/tidb-cloud/essential-changefeed-sink-to-mysql.md)
- [Sink to Apache Kafka](/tidb-cloud/essential-changefeed-sink-to-kafka.md)
From df83c8d061dd341f55d2f2cb5d6b2cf44b990c0d Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Sun, 4 Jan 2026 15:34:26 +0800
Subject: [PATCH 12/13] Update changefeed docs to indicate beta status
Added '(Beta)' to titles and headings in changefeed overview and sink documents for Kafka and MySQL. Removed redundant beta notes from the body text to streamline documentation and clarify feature status.
---
tidb-cloud/essential-changefeed-overview.md | 5 ++---
tidb-cloud/essential-changefeed-sink-to-kafka.md | 8 ++------
tidb-cloud/essential-changefeed-sink-to-mysql.md | 8 ++------
3 files changed, 6 insertions(+), 15 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-overview.md b/tidb-cloud/essential-changefeed-overview.md
index 500e7e93926bb..969efd31148d7 100644
--- a/tidb-cloud/essential-changefeed-overview.md
+++ b/tidb-cloud/essential-changefeed-overview.md
@@ -1,15 +1,14 @@
---
-title: Changefeed
+title: Changefeed (Beta)
summary: TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data services.
---
-# Changefeed
+# Changefeed (Beta)
TiDB Cloud changefeed helps you stream data from TiDB Cloud to other data services. Currently, TiDB Cloud supports streaming data to Apache Kafka and MySQL.
> **Note:**
>
-> - The changefeed feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
> - Currently, TiDB Cloud only allows up to 10 changefeeds per {{{ .essential }}} cluster.
> - For [{{{ .starter }}}](/tidb-cloud/select-cluster-tier.md#starter) clusters, the changefeed feature is unavailable.
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index 14f044195a0c5..8d0da652edc83 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -1,16 +1,12 @@
---
-title: Sink to Apache Kafka
+title: Sink to Apache Kafka (Beta)
summary: This document explains how to create a changefeed to stream data from {{{ .essential }}} to Apache Kafka. It includes restrictions, prerequisites, and steps to configure the changefeed for Apache Kafka. The process involves setting up network connections, adding permissions for Kafka ACL authorization, and configuring the changefeed specification.
---
-# Sink to Apache Kafka
+# Sink to Apache Kafka (Beta)
This document describes how to create a changefeed to stream data from {{{ .essential }}} to Apache Kafka.
-> **Note:**
->
-> - The sink to Apache Kafka feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
-
## Restrictions
- For each {{{ .essential }}} cluster, you can create up to 10 changefeeds.
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index f70791acd1bd3..48ba99b68b456 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -1,16 +1,12 @@
---
-title: Sink to MySQL
+title: Sink to MySQL (Beta)
summary: This document explains how to stream data from {{{ .essential }}} to MySQL using the Sink to MySQL changefeed. It includes restrictions, prerequisites, and steps to create a MySQL sink for data replication. The process involves setting up network connections, loading existing data to MySQL, and creating target tables in MySQL. After completing the prerequisites, users can create a MySQL sink to replicate data to MySQL.
---
-# Sink to MySQL
+# Sink to MySQL (Beta)
This document describes how to stream data from {{{ .essential }}} to MySQL using the **Sink to MySQL** changefeed.
-> **Note:**
->
-> The sink to MySQL feature is in beta. It might be changed without prior notice. If you find a bug, you can report an [issue](https://github.com/pingcap/tidb/issues) on GitHub.
-
## Restrictions
- For each{{{ .essential }}} cluster, you can create up to 10 changefeeds.
From b531428705a01e02891d22eb6c7c5162f38e9216 Mon Sep 17 00:00:00 2001
From: houfaxin
Date: Sun, 4 Jan 2026 18:29:13 +0800
Subject: [PATCH 13/13] refine wording
---
.../essential-changefeed-sink-to-kafka.md | 13 +++++----
.../essential-changefeed-sink-to-mysql.md | 29 +++++++++----------
2 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/tidb-cloud/essential-changefeed-sink-to-kafka.md b/tidb-cloud/essential-changefeed-sink-to-kafka.md
index 8d0da652edc83..d029e302be43b 100644
--- a/tidb-cloud/essential-changefeed-sink-to-kafka.md
+++ b/tidb-cloud/essential-changefeed-sink-to-kafka.md
@@ -25,8 +25,8 @@ Before creating a changefeed to stream data to Apache Kafka, you need to complet
Ensure that your {{{ .essential }}} cluster can connect to the Apache Kafka service. You can choose one of the following connection methods:
-- Public Access: suitable for a quick setup.
- Private Link Connection: meeting security compliance and ensuring network quality.
+- Public Network: suitable for a quick setup.
@@ -37,9 +37,9 @@ Private Link Connection leverages **Private Link** technologies from cloud provi
See the following instructions to set up a Private Link connection according to your Kafka deployment and cloud provider:
-- [Connect to Confluent Cloud via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-confluent.md)
-- [Connect to AWS Self-Hosted Kafka via Private Link Connection](/tidbcloud/serverless-private-link-connection-to-self-hosted-kafka-in-aws.md)
-- [Connect to Alibaba Cloud Self-Hosted Kafka via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-self-hosted-kafka-in-alicloud.md)
+- [Connect to Confluent Cloud on AWS via a Private Link Connection](/tidb-cloud/serverless-private-link-connection-to-aws-confluent.md)
+- [Connect to AWS Self-Hosted Kafka via Private Link Connection](/tidb-cloud/serverless-private-link-connection-to-self-hosted-kafka-in-aws.md)
+- [Connect to Alibaba Cloud Self-Hosted Kafka via a Private Link Connection](/tidb-cloud/serverless-private-link-connection-to-self-hosted-kafka-in-alicloud.md)
@@ -65,7 +65,7 @@ For example, if your Kafka cluster is in Confluent Cloud, refer to [Resources](h
1. Log in to the [TiDB Cloud console](https://tidbcloud.com).
2. Navigate to the overview page of the target TiDB Cloud cluster, and then click **Data** > **Changefeed** in the left navigation pane.
-3. Click **Create Changefeed**, and select **Kafka** as **Destination**.
+3. Click **Create Changefeed**, and then select **Kafka** as **Destination**.
## Step 2. Configure the changefeed target
@@ -95,6 +95,7 @@ The steps vary depending on the connectivity method you select.
- If your Kafka does not require authentication, keep the default option **Disable**.
- If your Kafka requires authentication, select the corresponding authentication type, and then fill in the **user name** and **password** of your Kafka account for authentication.
+
5. Select your **Kafka Version**. If you do not know which one to use, use **Kafka v2**.
6. Select a **Compression** type for the data in this changefeed.
7. Enable the **TLS Encryption** option if your Kafka has enabled TLS encryption and you want to use TLS encryption for the Kafka connection.
@@ -194,4 +195,4 @@ The steps vary depending on the connectivity method you select.
## Step 4. Review and create your changefeed specification
1. In the **Changefeed Name** area, specify a name for the changefeed.
-2. Review all the changefeed configurations that you set. Click **Previous** to go back to the previous configuration pages if you want to modify some configurations. Click **Submit** if all configurations are correct to create the changefeed.
\ No newline at end of file
+2. Review all the changefeed configurations that you set. Click **Previous** to go back to the previous configuration pages if you want to modify some configurations. Click **Submit** if all configurations are correct to create the changefeed.
diff --git a/tidb-cloud/essential-changefeed-sink-to-mysql.md b/tidb-cloud/essential-changefeed-sink-to-mysql.md
index 48ba99b68b456..4d86f8568a618 100644
--- a/tidb-cloud/essential-changefeed-sink-to-mysql.md
+++ b/tidb-cloud/essential-changefeed-sink-to-mysql.md
@@ -9,7 +9,7 @@ This document describes how to stream data from {{{ .essential }}} to MySQL usin
## Restrictions
-- For each{{{ .essential }}} cluster, you can create up to 10 changefeeds.
+- For each {{{ .essential }}} cluster, you can create up to 10 changefeeds.
- Because {{{ .essential }}} uses TiCDC to establish changefeeds, it has the same [restrictions as TiCDC](https://docs.pingcap.com/tidb/stable/ticdc-overview#unsupported-scenarios).
- If the table to be replicated does not have a primary key or a non-null unique index, the absence of a unique constraint during replication could result in duplicated data being inserted downstream in some retry scenarios.
@@ -23,20 +23,23 @@ Before creating a changefeed, you need to complete the following prerequisites:
### Network
-Make sure that your {{{ .essential }}} cluster can connect to the MySQL service.
+Make sure that your {{{ .essential }}} cluster can connect to the MySQL service. You can choose one of the following connection methods:
+
+- Private Link Connection: meeting security compliance and ensuring network quality.
+- Public Network: suitable for a quick setup.
-
+
-If your MySQL service can be accessed over the public network, you can choose to connect to MySQL through a public IP or domain name.
+Private link connections leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
-
+You can connect your {{{ .essential }}} cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidb-cloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidb-cloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
-
+
-Private link connections leverage **Private Link** technologies from cloud providers, enabling resources in your VPC to connect to services in other VPCs through private IP addresses, as if those services were hosted directly within your VPC.
+
-You can connect your {{{ .essential }}} cluster to your MySQL service securely through a private link connection. If the private link connection is not available for your MySQL service, follow [Connect to Amazon RDS via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-aws-rds.md) or [Connect to Alibaba Cloud ApsaraDB RDS for MySQL via a Private Link Connection](/tidbcloud/serverless-private-link-connection-to-alicloud-rds.md) to create one.
+If your MySQL service can be accessed over the public network, you can choose to connect to MySQL through a public IP or domain name.
@@ -55,8 +58,6 @@ To load the existing data:
For example:
- {{< copyable "sql" >}}
-
```sql
SET GLOBAL tidb_gc_life_time = '72h';
```
@@ -119,8 +120,6 @@ After completing the prerequisites, you can sink your data to MySQL.
12. If you have [loaded the existing data](#load-existing-data-optional) using Export, you need to restore the GC time to its original value (the default value is `10m`) after the sink is created:
-{{< copyable "sql" >}}
-
-```sql
-SET GLOBAL tidb_gc_life_time = '10m';
-```
+ ```sql
+ SET GLOBAL tidb_gc_life_time = '10m';
+ ```