Hello. Kafka connectors serve different purposes depending on their type, so from what you've described it's hard to tell exactly which role you mean. I'd recommend checking each connector's description. For example:
https://docs.confluent.io/kafka-connectors/jdbc/current/sink-connector/overview.html : The Kafka Connect JDBC Sink connector allows you to export data from Apache Kafka® topics to any relational database with a JDBC driver. This connector can support a wide variety of databases. The connector polls data from Kafka to write to the database based on the topics subscription. It is possible to achieve idempotent writes with upserts. Auto-creation of tables and limited auto-evolution is also supported.
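To make the sink side concrete, here is a minimal sketch of a self-managed JDBC Sink connector configuration in the standalone .properties format. All values (topic name, connection URL, credentials, key column) are placeholders I've assumed, not taken from your environment:

```properties
# Hypothetical JDBC Sink connector config (placeholder values throughout)
name=jdbc-sink-example
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=1
# Kafka topic(s) to export into the database
topics=orders
# Any relational database with a JDBC driver works; MySQL shown here
connection.url=jdbc:mysql://localhost:3306/mydb
connection.user=user
connection.password=password
# upsert mode gives the idempotent writes the docs mention
insert.mode=upsert
pk.mode=record_key
pk.fields=id
# Table auto-creation and limited auto-evolution, as described above
auto.create=true
auto.evolve=true
```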
https://docs.confluent.io/cloud/current/connectors/cc-mysql-source.html : The Kafka Connect MySQL Source connector for Confluent Cloud can obtain a snapshot of the existing data in a MySQL database and then monitor and record all subsequent row-level changes to that data. The connector supports Avro, JSON Schema, Protobuf, or JSON (schemaless) output data formats. All of the events for each table are recorded in a separate Apache Kafka® topic. The events can then be easily consumed by applications and services. Note that deleted records are not captured.
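The Cloud connector above is configured through the Confluent Cloud UI/CLI rather than a properties file, but for comparison, a self-managed JDBC Source connector polling MySQL might look roughly like this. Again a sketch under assumptions: every value below is a placeholder, and this JDBC-based source polls with queries rather than reading the binlog like a CDC connector:

```properties
# Hypothetical self-managed JDBC Source connector config (placeholder values)
name=mysql-source-example
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:mysql://localhost:3306/mydb
connection.user=user
connection.password=password
# Pick up only new/updated rows via an incrementing id plus a timestamp column
mode=timestamp+incrementing
incrementing.column.name=id
timestamp.column.name=updated_at
# Tables to read; each table is written to its own topic
table.whitelist=orders
# Topic name = topic.prefix + table name, e.g. mysql-orders
topic.prefix=mysql-
```

In short, the direction is what distinguishes them: a sink connector exports Kafka topics into an external system, while a source connector imports external data into Kafka topics, so the connector class you run determines its role.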