Confluent local + Debezium + MySQL + Schema Registry

References

【1】Quickly set up a local Confluent Kafka test environment: https://www.modb.pro/db/54580

【2】confluent local current:https://docs.confluent.io/confluent-cli/current/command-reference/local/confluent_local_current.html


I. Steps to install Confluent

1. Download the Confluent Platform

Download link: Confluent Platform (an email address is required)

2. Install the Confluent CLI

If the confluent CLI is not present under the confluent-7.0.1/bin directory,
install it by following the steps in: Install Confluent CLI

Set the environment variable:

export CONFLUENT_HOME=/Users/sun/Downloads/confluent-7.0.1/

3. confluent local commands

confluent local

./confluent local current
// output: the temporary data directory used by confluent local
/var/folders/5w/33wydx3s1ys12smyf72762mr0000gq/T/confluent.917162

// run the following with root privileges
./confluent local services start
./confluent local services status
./confluent local destroy

4. The debezium-connector-mysql plugin

Download the debezium-connector-mysql plugin and extract it into:

share/java/debezium-connector-mysql

5. Verify connectors

curl http://localhost:8083/connectors

6. Start MySQL locally (instead of Docker)

Because my.cnf inside the Docker image was read-only and binlog could not be enabled there, MySQL is installed locally instead.
Note: MySQL must have binlog enabled.

1) Install and start MySQL with brew

brew uninstall mysql --force           # remove any previous installation
rm -fr /usr/local/var/mysql/           # wipe the old data directory (destructive!)
brew install mysql                     # install MySQL 8
brew services start mysql
brew services list
mysqladmin -uroot -p password 123456   # set the root password to 123456
mysql -uroot -p

[1] install MySQL 8 in MacOS
[2] MySQL configuration files
[3] Setting The Binary Log Format

2) Check whether binlog is enabled

show variables like '%log_bin%';

show variables like '%binlog%';

[1] Enabling binlog for MySQL in Docker
[2] Enabling MySQL binlog logging in Docker
[3] Enabling the MySQL binlog

7. Add the Debezium MySQL connector configuration

[1] Debezium MySQL connector configuration properties

[2] An introduction to Debezium and examples of calling the Connector API dynamically

curl -i -X POST -H "Accept:application/json" -H "Content-Type:application/json" localhost:8083/connectors -d '
{
    "name": "test_avro_source",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.user": "root",
        "database.server.id": "122222",
        "database.history.kafka.bootstrap.servers": "172.16.6.218:9092",
        "database.history.kafka.topic": "history_test_avro",
        "database.server.name": "test_avro",
        "database.port": "3306",
        "include.schema.changes": "true",
        "value.converter.schema.registry.url": "http://172.16.6.218:8081",
        "decimal.handling.mode": "string",
        "include.schema.comments": "true",
        "database.hostname": "localhost",
        "database.password": "123456",
        "name": "test_avro_source",
        "value.converter": "io.confluent.connect.avro.AvroConverter",
        "key.converter": "io.confluent.connect.avro.AvroConverter",
        "key.converter.schema.registry.url": "http://localhost:8081",
        "database.include.list": "workdb",
        "snapshot.mode": "schema_only"
    }
}
'
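The payload can also be sanity-checked locally before POSTing it, e.g. to catch a missing property early. A minimal sketch in Python (the required-key list is an assumption drawn from the properties used in this config, not the connector's full validation logic):

```python
# Connector payload mirroring the curl body above (secondary options omitted).
config = {
    "name": "test_avro_source",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "localhost",
        "database.port": "3306",
        "database.user": "root",
        "database.password": "123456",
        "database.server.id": "122222",
        "database.server.name": "test_avro",
        "database.include.list": "workdb",
        "database.history.kafka.bootstrap.servers": "172.16.6.218:9092",
        "database.history.kafka.topic": "history_test_avro",
        "snapshot.mode": "schema_only",
    },
}

# Keys the Debezium MySQL connector cannot start without (assumed list).
REQUIRED = [
    "connector.class", "database.hostname", "database.port",
    "database.user", "database.password", "database.server.name",
    "database.history.kafka.bootstrap.servers", "database.history.kafka.topic",
]

def missing_keys(payload: dict) -> list:
    """Return required connector properties absent from payload['config']."""
    present = payload.get("config", {})
    return [k for k in REQUIRED if k not in present]

print(missing_keys(config))  # → []
```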

8. Check connectors

curl http://localhost:8083/connectors
curl http://localhost:8083/connectors/test_avro_source
curl -i -X GET localhost:8083/connectors/test_avro_source/status
curl -i -X POST localhost:8083/connectors/test_avro_source/restart
curl -i -X GET localhost:8083/connectors/test_avro_source/tasks
curl -i -X GET localhost:8083/connectors/test_avro_source/tasks/0/status
curl -i -X POST localhost:8083/connectors/test_avro_source/tasks/0/restart
curl -i -X DELETE localhost:8083/connectors/test_avro_source
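The `/status` endpoint returns JSON describing the connector and its tasks; a small helper can decide whether everything is RUNNING, which is handy in a wait loop after registering the connector. A sketch with an illustrative payload (the sample below is not captured from this setup, only shaped like the Kafka Connect REST response):

```python
def all_running(status: dict) -> bool:
    """True when the connector and every one of its tasks report state RUNNING."""
    if status.get("connector", {}).get("state") != "RUNNING":
        return False
    tasks = status.get("tasks", [])
    # A connector with no tasks yet is not considered fully up.
    return bool(tasks) and all(t.get("state") == "RUNNING" for t in tasks)

# Illustrative response shape of GET /connectors/<name>/status.
sample = {
    "name": "test_avro_source",
    "connector": {"state": "RUNNING", "worker_id": "127.0.0.1:8083"},
    "tasks": [{"id": 0, "state": "RUNNING", "worker_id": "127.0.0.1:8083"}],
}

print(all_running(sample))  # → True
```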

9. Create a table and insert data

CREATE TABLE `table_name_1` (
  `id` int NOT NULL,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci;

INSERT INTO test_avro.table_name_1 (id, name) VALUES (1, '22111a');

10. Verify the corresponding topic exists and contains data

cd confluent-7.0.1/bin
// verify the corresponding topic exists
kafka-topics --bootstrap-server localhost:9092 --list

// verify the topic contains data (Avro-deserialized output)
kafka-avro-console-consumer --bootstrap-server localhost:9092 --topic test_avro.table_name_1 --from-beginning


====>
{
    "before":{
        "test_avro.table_name_1.Value":{
            "id":1,
            "name":{
                "string":"22111"
            }
        }
    },
    "after":{
        "test_avro.test_avro.table_name_1.Value":{
            "id":1,
            "name":{
                "string":"22111a"
            }
        }
    },
    "source":{
        "version":"1.5.0.Final",
        "connector":"mysql",
        "name":"test_avro",
        "ts_ms":1646621858000,
        "snapshot":{
            "string":"false"
        },
        "db":"test_avro",
        "sequence":null,
        "table":{
            "string":"table_name_1"
        },
        "server_id":1,
        "gtid":null,
        "file":"binlog.000044",
        "pos":4697,
        "row":0,
        "thread":null,
        "query":null
    },
    "op":"u",
    "ts_ms":{
        "long":1646621858881
    },
    "transaction":null
}
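In the Avro JSON encoding above, nullable and record-typed fields are wrapped in single-key union objects such as `{"string": "22111a"}`. A small helper can strip those type tags to recover plain rows; a sketch assuming the envelope shape shown above (the single-key heuristic would misfire on a genuine one-column record):

```python
def unwrap(value):
    """Recursively strip Avro-JSON union wrappers like {"string": x} or {"...Value": {...}}.

    Heuristic: any single-key dict is treated as a union branch and flattened.
    """
    if isinstance(value, dict):
        if len(value) == 1:
            return unwrap(next(iter(value.values())))
        return {k: unwrap(v) for k, v in value.items()}
    return value

# The before/after images from the consumer output above.
record = {
    "before": {"test_avro.table_name_1.Value": {"id": 1, "name": {"string": "22111"}}},
    "after": {"test_avro.test_avro.table_name_1.Value": {"id": 1, "name": {"string": "22111a"}}},
    "op": "u",
}

print(unwrap(record["after"]))  # → {'id': 1, 'name': '22111a'}
```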

// verify the topic's data as raw Avro-serialized bytes (without deserialization)
kafka-console-consumer --bootstrap-server localhost:9092 --topic test_avro

Other commands

// delete topic 1111
kafka-topics --bootstrap-server localhost:9092 --topic 1111 --delete
// create topic 1111
kafka-topics --bootstrap-server localhost:9092 --topic 1111 --create
最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請聯(lián)系作者
【社區(qū)內(nèi)容提示】社區(qū)部分內(nèi)容疑似由AI輔助生成,瀏覽時請結(jié)合常識與多方信息審慎甄別。
平臺聲明:文章內(nèi)容(如有圖片或視頻亦包括在內(nèi))由作者上傳并發(fā)布,文章內(nèi)容僅代表作者本人觀點,簡書系信息發(fā)布平臺,僅提供信息存儲服務(wù)。

相關(guān)閱讀更多精彩內(nèi)容

友情鏈接更多精彩內(nèi)容