Syncing MySQL Data to Elasticsearch in Real Time with Binlog and Kafka (Part 3): Producing Binlog Messages to Kafka

Contents

1. Syncing MySQL Data to Elasticsearch in Real Time with Binlog and Kafka (Part 1): Enabling the Binlog
2. Syncing MySQL Data to Elasticsearch in Real Time with Binlog and Kafka (Part 2): Installing and Running Kafka
3. Syncing MySQL Data to Elasticsearch in Real Time with Binlog and Kafka (Part 3): Producing Binlog Messages to Kafka
4. Syncing MySQL Data to Elasticsearch in Real Time with Binlog and Kafka (Part 4): Consuming Kafka Messages and Syncing Data to ES


Preface

- Project modules

BinlogMiddleware

1. The binlog middleware parses the binlog and publishes each changed row as a JSON message to a Kafka queue.

KafkaMiddleware

2. The Kafka middleware consumes the messages from the Kafka queue and writes the data into Elasticsearch.

- Infrastructure

(1) MySQL
(2) Kafka (buffers the MySQL change messages in a queue)
(3) Elasticsearch

- Source code

Gitee: https://gitee.com/OrgXxxx/SyncMysqlToElasticsearch

Introduction:

The BinlogMiddleware service listens to the binlog and sends the change events to the Kafka queue (i.e., it acts as the Kafka producer).

  • This example listens to the user and role tables of the teemoliu database. To keep things simple, each table has only two columns: id and name.
  • The middleware writes messages to the Kafka queue in the following format:
{"event":"teemoliu.user.update","value":[1,"TeemoLiu"]}
{"event":"teemoliu.role.insert","value":[1,"管理員"]}
  • Project structure:


    (project structure screenshot omitted)
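Before diving into the code, the event key convention behind those sample messages can be sketched in plain Java. This helper is hypothetical (the project itself builds the full message with fastjson, as shown later); it only illustrates how the `event` field is composed:

```java
public class MessageFormatSketch {

    // Event key convention used by the middleware: <database>.<table>.<operation>
    static String eventKey(String database, String table, String operation) {
        return database + "." + table + "." + operation;
    }

    public static void main(String[] args) {
        // Matches the "event" field of the sample messages above
        System.out.println(eventKey("teemoliu", "user", "update"));   // teemoliu.user.update
        System.out.println(eventKey("teemoliu", "role", "insert"));   // teemoliu.role.insert
    }
}
```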

1. Create a Spring Boot project.

2. Add the Maven dependencies.

<dependency>
  <groupId>com.github.shyiko</groupId>
  <artifactId>mysql-binlog-connector-java</artifactId>
  <version>0.16.1</version>
</dependency>
<dependency>
  <groupId>com.alibaba</groupId>
  <artifactId>fastjson</artifactId>
  <version>1.2.49</version>
</dependency>
<dependency>
  <groupId>org.springframework.kafka</groupId>
  <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>1.1.1</version>
</dependency>

3. The configuration file:

# Disable the embedded web server; this service exposes no HTTP endpoints
# (on Spring Boot 2.x, use spring.main.web-application-type=none instead)
spring.main.web-environment=false

# binlog configuration
server.id=1
binlog.host=localhost
binlog.port=3306
binlog.user=root
binlog.password=root
# Tables to listen to
binlog.database.table=teemoliu.user,teemoliu.role

# kafka
spring.kafka.bootstrap-servers=localhost:9092
kafka.topic=binlog
kafka.partNum=3
kafka.repeatNum=1

4. Create the binlog data transfer object.

public class BinlogDto {
    private String event;
    private Object value;

    public BinlogDto(String event, Object value) {
        this.event = event;
        this.value = value;
    }

    public BinlogDto() {
    }

    public String getEvent() {
        return event;
    }

    public void setEvent(String event) {
        this.event = event;
    }

    public Object getValue() {
        return value;
    }

    public void setValue(Object value) {
        this.value = value;
    }
}

5. Create the Kafka data transfer object.

import java.util.Date;

public class Message {
    private Long id;
    private String msg;
    private Date sendTime;

    public Message(Long id, String msg, Date sendTime) {
        this.id = id;
        this.msg = msg;
        this.sendTime = sendTime;
    }

    public Message() {
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getMsg() {
        return msg;
    }

    public void setMsg(String msg) {
        this.msg = msg;
    }

    public Date getSendTime() {
        return sendTime;
    }

    public void setSendTime(Date sendTime) {
        this.sendTime = sendTime;
    }
}

6. The binlog listener, BinlogClientRunner:

import java.io.Serializable;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import com.alibaba.fastjson.JSON;
import com.github.shyiko.mysql.binlog.BinaryLogClient;
import com.github.shyiko.mysql.binlog.event.DeleteRowsEventData;
import com.github.shyiko.mysql.binlog.event.EventData;
import com.github.shyiko.mysql.binlog.event.TableMapEventData;
import com.github.shyiko.mysql.binlog.event.UpdateRowsEventData;
import com.github.shyiko.mysql.binlog.event.WriteRowsEventData;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.CommandLineRunner;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;

@Component
public class BinlogClientRunner implements CommandLineRunner {

    @Value("${binlog.host}")
    private String host;

    @Value("${binlog.port}")
    private int port;

    @Value("${binlog.user}")
    private String user;

    @Value("${binlog.password}")
    private String password;

    // binlog server_id (must be unique among all replication clients of this MySQL server)
    @Value("${server.id}")
    private long serverId;

    // Kafka topic
    @Value("${kafka.topic}")
    private String topic;

    // Kafka partition count
    @Value("${kafka.partNum}")
    private int partNum;

    // Kafka replication factor
    @Value("${kafka.repeatNum}")
    private short repeatNum;

    // Kafka bootstrap servers
    @Value("${spring.kafka.bootstrap-servers}")
    private String kafkaHost;

    // Tables to listen to
    @Value("${binlog.database.table}")
    private String databaseTable;

    @Autowired
    KafkaSender kafkaSender;

    // @Async takes effect only if the application class is annotated with @EnableAsync;
    // client.connect() below blocks the calling thread.
    @Async
    @Override
    public void run(String... args) throws Exception {

        // Create the topic
        kafkaSender.createTopic(kafkaHost, topic, partNum, repeatNum);
        // The watched tables, as "database.table" strings
        List<String> tableList = Arrays.asList(databaseTable.split(","));
        // Maps a binlog table id to its "database.table" name
        HashMap<Long, String> tableMap = new HashMap<Long, String>();
        // Create the binlog client
        BinaryLogClient client = new BinaryLogClient(host, port, user, password);
        client.setServerId(serverId);
        client.registerEventListener((event -> {
            // binlog event
            EventData data = event.getData();
            if (data != null) {
                // TABLE_MAP events precede row events and carry the table id -> name mapping
                if (data instanceof TableMapEventData) {
                    TableMapEventData tableMapEventData = (TableMapEventData) data;
                    tableMap.put(tableMapEventData.getTableId(), tableMapEventData.getDatabase() + "." + tableMapEventData.getTable());
                }
                // Updated rows
                if (data instanceof UpdateRowsEventData) {
                    UpdateRowsEventData updateRowsEventData = (UpdateRowsEventData) data;
                    String tableName = tableMap.get(updateRowsEventData.getTableId());
                    if (tableName != null && tableList.contains(tableName)) {
                        String eventKey = tableName + ".update";
                        // Each entry holds the row before (key) and after (value) the update
                        for (Map.Entry<Serializable[], Serializable[]> row : updateRowsEventData.getRows()) {
                            String msg = JSON.toJSONString(new BinlogDto(eventKey, row.getValue()));
                            kafkaSender.send(topic, msg);
                        }
                    }
                }
                // Inserted rows
                else if (data instanceof WriteRowsEventData) {
                    WriteRowsEventData writeRowsEventData = (WriteRowsEventData) data;
                    String tableName = tableMap.get(writeRowsEventData.getTableId());
                    if (tableName != null && tableList.contains(tableName)) {
                        String eventKey = tableName + ".insert";
                        for (Serializable[] row : writeRowsEventData.getRows()) {
                            String msg = JSON.toJSONString(new BinlogDto(eventKey, row));
                            kafkaSender.send(topic, msg);
                        }
                    }
                }
                // Deleted rows
                else if (data instanceof DeleteRowsEventData) {
                    DeleteRowsEventData deleteRowsEventData = (DeleteRowsEventData) data;
                    String tableName = tableMap.get(deleteRowsEventData.getTableId());
                    if (tableName != null && tableList.contains(tableName)) {
                        String eventKey = tableName + ".delete";
                        for (Serializable[] row : deleteRowsEventData.getRows()) {
                            String msg = JSON.toJSONString(new BinlogDto(eventKey, row));
                            kafkaSender.send(topic, msg);
                        }
                    }
                }
            }
        }));
        client.connect();
    }
}
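The KafkaSender wired into the runner above is not shown in this post. As a minimal sketch of what it could look like (the actual class in the repository may differ), one could create the topic with the kafka-clients AdminClient and publish through a plain KafkaProducer:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.springframework.stereotype.Component;

// Hypothetical sketch of the KafkaSender referenced by BinlogClientRunner.
@Component
public class KafkaSender {

    private KafkaProducer<String, String> producer;

    // Build the producer configuration for the given broker list.
    static Properties producerConfig(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    // Request topic creation and initialize the producer. The CreateTopicsResult
    // future is not awaited here, so an already-existing topic fails the request silently.
    public void createTopic(String bootstrapServers, String topic, int partNum, short repeatNum) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(Collections.singleton(new NewTopic(topic, partNum, repeatNum)));
        }
        producer = new KafkaProducer<>(producerConfig(bootstrapServers));
    }

    // Publish one JSON message to the topic.
    public void send(String topic, String msg) {
        producer.send(new ProducerRecord<>(topic, msg));
    }
}
```

With `kafka.partNum=3` and `kafka.repeatNum=1` from the configuration above, the runner creates a three-partition, single-replica `binlog` topic before the binlog client connects.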