
Setting Up Canal HA

lipiwang 2024-11-27 17:20

Preface

Canal is middleware written in Java that parses a database's incremental logs and provides incremental-data subscription and consumption. At present, Canal mainly supports parsing the MySQL binlog; a Canal client then processes the parsed data.

How It Works

MySQL replication happens in three steps:

1) The master writes changes to its binary log (binlog);

2) The slave sends a dump request to the MySQL master and copies the master's binary log events into its relay log;

3) The slave reads and replays the events in the relay log, applying the changes to its own database.

Canal's working principle is simple: it disguises itself as a slave and pretends to replicate data from the master, so the master streams binlog events to it.

Environment

Prepare three test machines: hadoop102, hadoop103, and hadoop104. Install ZooKeeper and Kafka as clusters across them, and install a MySQL server on hadoop102.

The Canal 1.1.4 installation packages

Installing canal-admin

(1) Extract the canal-admin package

[atguigu@hadoop102 ~]$ cd /opt/software/
[atguigu@hadoop102 software]$ mkdir -p /opt/module/canal-admin
[atguigu@hadoop102 software]$ tar -zxvf canal.admin-1.1.4.tar.gz -C /opt/module/canal-admin/

(2) Initialize the metadata database

[atguigu@hadoop102 canal-admin]$ vim conf/application.yml
server:
  port: 8089
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8


spring.datasource:
  address: hadoop102:3306
  database: canal_manager
  username: root
  password: 123456
  driver-class-name: com.mysql.jdbc.Driver
  url: jdbc:mysql://${spring.datasource.address}/${spring.datasource.database}?useUnicode=true&characterEncoding=UTF-8&useSSL=false
  hikari:
    maximum-pool-size: 30
    minimum-idle: 1


canal:
  adminUser: admin
  adminPasswd: admin
[atguigu@hadoop102 software]$ cd /opt/module/canal-admin/
[atguigu@hadoop102 canal-admin]$ mysql -uroot -p123456
mysql>  source conf/canal_manager.sql

(3) Start canal-admin

[atguigu@hadoop102 canal-admin]$ sh bin/startup.sh

(4) Check the log

[atguigu@hadoop102 canal-admin]$ tail -f logs/admin.log
2019-12-28 14:55:00.725 [main] INFO  o.s.jmx.export.annotation.AnnotationMBeanExporter - Bean with name 'dataSource' has been autodetected for JMX exposure
2019-12-28 14:55:00.742 [main] INFO  o.s.jmx.export.annotation.AnnotationMBeanExporter - Located MBean 'dataSource': registering with JMX server as MBean [com.zaxxer.hikari:name=dataSource,type=HikariDataSource]
2019-12-28 14:55:00.750 [main] INFO  org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8089"]
2019-12-28 14:55:00.813 [main] INFO  org.apache.tomcat.util.net.NioSelectorPool - Using a shared selector for servlet write/read
2019-12-28 14:55:00.935 [main] INFO  o.s.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 8089 (http) with context path ''
2019-12-28 14:55:00.938 [main] INFO  com.alibaba.otter.canal.admin.CanalAdminApplication - Started CanalAdminApplication in 3.005 seconds (JVM running for 3.423)

(5) Visit port 8089 on hadoop102.

(6) Log in with the default credentials admin / 123456.

(7) Stop canal-admin

[atguigu@hadoop102 canal-admin]$ sh bin/stop.sh

Enabling the MySQL Binlog

(1) Edit the MySQL configuration file

[atguigu@hadoop102 canal-admin]$ whereis my.cnf
my: /etc/my.cnf
[atguigu@hadoop102 canal-admin]$ sudo vim /etc/my.cnf
[mysqld]
server_id=1
log-bin=mysql-bin
binlog_format=row

(2) Restart the MySQL service and check that the settings took effect

[atguigu@hadoop102 canal-admin]$ sudo service mysql restart
[atguigu@hadoop102 canal-admin]$ mysql -uroot -p123456
mysql> show variables like 'binlog_format';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| binlog_format | ROW   |
+---------------+-------+
mysql> show variables like 'log_bin';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+

(3) Once the configuration is in effect, create a canal user and grant it the required privileges

mysql> CREATE USER canal IDENTIFIED BY 'canal'; 
mysql> GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
mysql> FLUSH PRIVILEGES; 
mysql> show grants for 'canal' ;
+----------------------------------------------------------------------------------------------------------------------------------------------+
| Grants for canal@%                                                                                                                           |
+----------------------------------------------------------------------------------------------------------------------------------------------+
| GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%' IDENTIFIED BY PASSWORD '*E3619321C1A937C46A0D8BD1DAC39F93B27D4458' |
+----------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Installing canal-server

(1) Extract the server package

[atguigu@hadoop102 canal-admin]$ cd /opt/software/
[atguigu@hadoop102 software]$ mkdir -p /opt/module/canal-server
[atguigu@hadoop102 software]$ tar -zxvf canal.deployer-1.1.4.tar.gz -C /opt/module/canal-server/

(2) Start canal-server

[atguigu@hadoop102 canal-server]$ sh bin/startup.sh

(3) Stop canal-server

[atguigu@hadoop102 canal-server]$ sh bin/stop.sh

Setting Up HA Mode

(1) Start ZooKeeper

[atguigu@hadoop102 ~]$ /opt/module/zookeeper-3.4.10/bin/zkServer.sh start
[atguigu@hadoop103 ~]$ /opt/module/zookeeper-3.4.10/bin/zkServer.sh start
[atguigu@hadoop104 ~]$ /opt/module/zookeeper-3.4.10/bin/zkServer.sh start

(2) Start Kafka

[atguigu@hadoop102 ~]$ /opt/module/kafka_2.11-0.11.0.0/bin/kafka-server-start.sh -daemon /opt/module/kafka_2.11-0.11.0.0/config/server.properties
[atguigu@hadoop103 ~]$ /opt/module/kafka_2.11-0.11.0.0/bin/kafka-server-start.sh -daemon /opt/module/kafka_2.11-0.11.0.0/config/server.properties
[atguigu@hadoop104 ~]$ /opt/module/kafka_2.11-0.11.0.0/bin/kafka-server-start.sh -daemon /opt/module/kafka_2.11-0.11.0.0/config/server.properties 

(3) Add the ZooKeeper addresses, comment out file-instance.xml, and uncomment default-instance.xml. Make this change on both canal-server nodes

[atguigu@hadoop102 canal-server]$ vim conf/canal.properties
canal.zkServers =hadoop102:2181,hadoop103:2181,hadoop104:2181
#canal.instance.global.spring.xml = classpath:spring/file-instance.xml
canal.instance.global.spring.xml = classpath:spring/default-instance.xml


[atguigu@hadoop103 canal-server]$ vim conf/canal.properties
canal.zkServers =hadoop102:2181,hadoop103:2181,hadoop104:2181
#canal.instance.global.spring.xml = classpath:spring/file-instance.xml
canal.instance.global.spring.xml = classpath:spring/default-instance.xml

(4) Go to the conf/example directory and edit the instance configuration

[atguigu@hadoop102 canal-server]$ cd conf/example/
[atguigu@hadoop102 example]$ vim instance.properties
canal.instance.mysql.slaveId = 100
canal.instance.master.address = hadoop102:3306


[atguigu@hadoop103 canal-server]$ cd conf/example/
[atguigu@hadoop103 example]$ vim instance.properties
canal.instance.mysql.slaveId = 101
canal.instance.master.address = hadoop102:3306
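The slaveId is the server id canal reports to MySQL when posing as a replica, so each HA node needs a distinct value (100 and 101 above). A minimal sanity-check sketch; the property snippets are inlined as strings purely for illustration:

```python
# Check that every canal-server node uses a distinct
# canal.instance.mysql.slaveId, so MySQL can tell the
# pseudo-slaves apart. Contents inlined for illustration.
node_configs = {
    "hadoop102": "canal.instance.mysql.slaveId = 100\n"
                 "canal.instance.master.address = hadoop102:3306",
    "hadoop103": "canal.instance.mysql.slaveId = 101\n"
                 "canal.instance.master.address = hadoop102:3306",
}

def slave_id(props: str) -> int:
    """Extract canal.instance.mysql.slaveId from a properties snippet."""
    for line in props.splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "canal.instance.mysql.slaveId":
            return int(value.strip())
    raise KeyError("canal.instance.mysql.slaveId not found")

ids = [slave_id(cfg) for cfg in node_configs.values()]
assert len(ids) == len(set(ids)), "slaveIds must be unique across nodes"
print(ids)  # [100, 101]
```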

(5) Start both canal-server nodes

[atguigu@hadoop102 canal-server]$ sh bin/startup.sh 
[atguigu@hadoop103 canal-server]$ sh bin/startup.sh

(6) Check the logs. Startup succeeded if there are no errors. Note that only one node (the active one) writes instance log entries; the standby node's log stays empty.

[atguigu@hadoop102 canal-server]$ tail -f logs/example/example.log

(7) The current working node can be looked up in ZooKeeper; the output below shows that hadoop102 is the active node

[atguigu@hadoop102 canal-server]$ /opt/module/zookeeper-3.4.10/bin/zkCli.sh 
[zk: localhost:2181(CONNECTED) 0] ls /
[kafa_2.11, zookeeper, yarn-leader-election, hadoop-ha, otter, rmstore]
[zk: localhost:2181(CONNECTED) 1] get /otter/canal/destinations/example/running  
{"active":true,"address":"192.168.1.102:11111"}
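The running znode holds a small JSON payload that clients and the standby server watch to locate the active node. A minimal sketch of reading it; the payload is the one shown above (in practice you would fetch it through a ZooKeeper client such as kazoo rather than a literal string):

```python
import json

# Payload read from /otter/canal/destinations/example/running (see above).
running = '{"active":true,"address":"192.168.1.102:11111"}'

info = json.loads(running)
host, port = info["address"].rsplit(":", 1)
assert info["active"]
print(f"active canal-server: {host} (tcp port {port})")
```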

(8) Stop canal

[atguigu@hadoop102 canal-server]$ sh bin/stop.sh 
[atguigu@hadoop103 canal-server]$ sh bin/stop.sh

Integrating with Kafka

Since version 1.1.1, canal server can deliver the binlog data it receives directly to a message queue. Kafka and RocketMQ are currently supported out of the box.

(1) Edit the instance configuration file (on both nodes)

[atguigu@hadoop102 canal-server]$ vim conf/example/instance.properties
canal.mq.topic=test  ## topic the data is sent to
canal.mq.partition=0


[atguigu@hadoop103 canal-server]$ vim conf/example/instance.properties
canal.mq.topic=test  ## topic the data is sent to
canal.mq.partition=0

(2) Edit canal.properties (on both nodes)

[atguigu@hadoop102 canal-server]$ vim conf/canal.properties 
canal.serverMode = kafka
canal.mq.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
canal.mq.retries = 0
canal.mq.batchSize = 16384
canal.mq.maxRequestSize = 1048576
canal.mq.lingerMs = 100
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all
#canal.mq.properties. =
canal.mq.producerGroup = test




[atguigu@hadoop103 canal-server]$ vim conf/canal.properties 
canal.serverMode = kafka
canal.mq.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
canal.mq.retries = 0
canal.mq.batchSize = 16384
canal.mq.maxRequestSize = 1048576
canal.mq.lingerMs = 100
canal.mq.bufferMemory = 33554432
canal.mq.canalBatchSize = 50
canal.mq.canalGetTimeout = 100
canal.mq.flatMessage = true
canal.mq.compressionType = none
canal.mq.acks = all
#canal.mq.properties. =
canal.mq.producerGroup = test
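The canal.mq.* keys above correspond to standard Kafka producer settings (retries, batch.size, linger.ms, buffer.memory, acks, and so on), and the two nodes should carry identical values. A small properties parser makes that easy to verify; a sketch with a fragment inlined for illustration:

```python
# A fragment of conf/canal.properties, inlined for illustration.
props_text = """\
canal.serverMode = kafka
canal.mq.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
canal.mq.retries = 0
canal.mq.acks = all
canal.mq.flatMessage = true
"""

def parse_props(text: str) -> dict:
    """Parse 'key = value' lines, skipping blanks and # comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

cfg = parse_props(props_text)
assert cfg["canal.serverMode"] == "kafka"
print(cfg["canal.mq.servers"].split(","))
```

Parsing both nodes' files and comparing the resulting dicts catches configuration drift before it causes inconsistent failover behavior.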

(3) Start canal

[atguigu@hadoop102 canal-server]$ sh bin/startup.sh
[atguigu@hadoop103 canal-server]$ sh bin/startup.sh

(4) Start a Kafka console consumer listening on topic test

[atguigu@hadoop104 ~]$ /opt/module/kafka_2.11-0.11.0.0/bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic test

(5) Insert test data into the MySQL database

CREATE TABLE aa (
  `name` VARCHAR(55),
  age INT
);

INSERT INTO aa VALUES ('haha', 111);

(6) Check what the topic consumer received

[atguigu@hadoop104 ~]$ /opt/module/kafka_2.11-0.11.0.0/bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic test
{"data":[{"name":"haha","age":"111"}],"database":"test","es":1577524089000,"id":1,"isDdl":false,"mysqlType":{"name":"varchar(55)","age":"int"},"old":null,"pkNames":null,"sql":"","sqlType":{"name":12,"age":4},"table":"aa","ts":1577524413400,"type":"INSERT"}
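With canal.mq.flatMessage = true, each Kafka record is a self-describing JSON document like the one above. A minimal consumer-side sketch; note that column values arrive as strings, so numeric columns must be cast by the consumer (the message body is abbreviated from the output above):

```python
import json

# One flat message as delivered to the Kafka topic (abbreviated from above).
msg = ('{"data":[{"name":"haha","age":"111"}],"database":"test",'
       '"es":1577524089000,"id":1,"isDdl":false,'
       '"table":"aa","ts":1577524413400,"type":"INSERT"}')

event = json.loads(msg)
if event["type"] == "INSERT" and not event["isDdl"]:
    for row in event["data"]:
        # Column values are strings in flat messages; cast as needed.
        print(event["database"], event["table"], row["name"], int(row["age"]))
```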

(7) Update a row

UPDATE aa SET `name`='wawa' WHERE age=111;

(8) The message received on the topic

{"data":[{"name":"wawa","age":"111"}],"database":"test","es":1577524909000,"id":2,"isDdl":false,"mysqlType":{"name":"varchar(55)","age":"int"},"old":[{"name":"haha"}],"pkNames":null,"sql":"","sqlType":{"name":12,"age":4},"table":"aa","ts":1577524909146,"type":"UPDATE"}
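For UPDATE events the old array lists only the columns that changed, paired index by index with the rows in data. A sketch that derives a before/after diff (the message is abbreviated from the topic output above):

```python
import json

# UPDATE flat message (abbreviated from the topic output above).
msg = ('{"data":[{"name":"wawa","age":"111"}],'
       '"old":[{"name":"haha"}],"type":"UPDATE","table":"aa"}')

event = json.loads(msg)
changes = []
for after, before in zip(event["data"], event["old"]):
    # "old" contains only the modified columns, keyed like "data".
    for col, prev in before.items():
        changes.append((col, prev, after[col]))

print(changes)  # [('name', 'haha', 'wawa')]
```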

(9) Verify HA mode: stop the canal-server on hadoop102

[atguigu@hadoop102 canal-server]$ sh bin/stop.sh

(10) Insert data again; the consumer still receives it. Stopping one machine cleanly with stop.sh leaves HA mode fully functional.

INSERT INTO aa VALUES ('bbbbb', 111);
{"data":[{"name":"bbbbb","age":"111"}],"database":"test","es":1577525322000,"id":2,"isDdl":false,"mysqlType":{"name":"varchar(55)","age":"int(11)"},"old":null,"pkNames":null,"sql":"","sqlType":{"name":12,"age":4},"table":"aa","ts":1577525323566,"type":"INSERT"}

(11) Start hadoop102 again, then test the kill path: kill -9 the active canal-server on hadoop103. HA mode still works.

[atguigu@hadoop102 canal-server]$ sh bin/startup.sh 
[atguigu@hadoop103 canal-server]$ jps
39667 QuorumPeerMain
39956 Kafka
40777 Jps
40716 CanalLauncher
[atguigu@hadoop103 canal-server]$ kill -9 40716
INSERT INTO aa VALUES ('cccc', 111);
{"data":[{"name":"cccc","age":"111"}],"database":"test","es":1577525454000,"id":1,"isDdl":false,"mysqlType":{"name":"varchar(55)","age":"int"},"old":null,"pkNames":null,"sql":"","sqlType":{"name":12,"age":4},"table":"aa","ts":1577525477964,"type":"INSERT"}

Using canal-admin

(1) Stop canal-server

[atguigu@hadoop102 canal-server]$ sh bin/stop.sh
[atguigu@hadoop103 canal-server]$ sh bin/stop.sh

(2) Fill in the canal-admin address in the configuration (change both nodes)

[atguigu@hadoop102 canal-server]$ vim conf/canal.properties
canal.admin.manager = hadoop102:8089
# admin auto register
canal.admin.register.auto = true
canal.admin.register.cluster =

(3) Start canal-admin

[atguigu@hadoop102 canal-admin]$ sh bin/startup.sh

(4) Start canal-server

[atguigu@hadoop102 canal-server]$ sh bin/startup.sh 
[atguigu@hadoop103 canal-server]$ sh bin/startup.sh

(5) Log in at hadoop102:8089

(6) Create a cluster

(7) Server management

Configuration fields:

· Cluster: choose standalone or a cluster. A standalone server is generally used for one-off or test jobs.

· Server name: any unique name that is easy to remember.

· Server IP: the machine's IP address.

· Admin port: new in canal 1.1.4, exposes remote management operations on the canal-server; defaults to 11110.

· TCP port: the port on which canal serves Netty-based data subscriptions.

· Metric port: the Prometheus exporter port for monitoring data (monitoring integration is planned).

(8) Related configuration can be viewed on the web UI.

(9) Configuration can also be edited there, and nodes can be started and stopped.

(10) Logs can be viewed on the page.

(11) Instance management: create an instance.

