Kafka Connect Sink Connectors

Explore how Kafka Connect sink connectors enable exporting data from Kafka topics to relational databases. Learn how to configure the JDBC Sink connector and understand task parallelism, data transformations, and schema handling for building reliable data pipelines.

Kafka Connect has Sink connectors for various systems, including databases. Here, we will learn how to build a data pipeline from Kafka to PostgreSQL using the popular JDBC Sink connector.

Here is a high-level representation of the solution we will implement. Let’s dive into its individual building blocks and understand how they work.

End-to-end solution using the JDBC Sink connector

Kafka JDBC Sink connector

The Kafka Connect JDBC Sink connector allows us to export data from Apache Kafka topics to any relational database with a JDBC driver. The connector provides many capabilities, most of which are configured or enabled through connector properties. Here are some of the important ones:

  • It can create the destination table if it is missing (the auto.create configuration should be set to true).

  • If auto.evolve is set to true and the connector encounters a record with a field that has no corresponding column in the table, it can issue an ALTER command to add the missing column, performing a limited auto-evolution of the table schema.

  • It makes idempotent writes possible with upserts (controlled by the insert.mode property; the default is insert). Upsert semantics mean the connector atomically inserts a new row, or updates the existing row if one with the same primary key is already present.
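
To make these settings concrete, here is a minimal sketch of a JDBC Sink connector configuration that puts the properties above together. The topic name (orders), database name, credentials, and primary-key field (order_id) are placeholder assumptions for illustration; the property names themselves (auto.create, auto.evolve, insert.mode, pk.mode, pk.fields) come from the JDBC Sink connector.

```json
{
  "name": "orders-postgres-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "2",
    "topics": "orders",

    "connection.url": "jdbc:postgresql://postgres:5432/ordersdb",
    "connection.user": "postgres",
    "connection.password": "postgres",

    "auto.create": "true",
    "auto.evolve": "true",

    "insert.mode": "upsert",
    "pk.mode": "record_value",
    "pk.fields": "order_id"
  }
}
```

Assuming the Connect worker's REST API is reachable on its default port (8083), this JSON can be POSTed to the /connectors endpoint (for example with curl) to create the connector. With insert.mode set to upsert, the primary key derived from pk.mode and pk.fields determines whether an incoming record inserts a new row or updates an existing one.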