## LVS + Keepalived High-Availability Architecture
LVS is short for Linux Virtual Server, a virtual server cluster system. Its load capacity is high because its working logic is very simple: it only dispatches requests, and it operates at layer 4 of the network stack without terminating application traffic, so efficiency is rarely a concern. For these reasons it is widely used in enterprises. Because it works at layer 4, LVS can load-balance almost any application, including web servers and databases. LVS provides the load balancing, while keepalived provides health checks and failover; combined, the two greatly improve system availability.
LVS by itself cannot fully detect node failures. For example, under the WLC scheduler, a single node in the cluster that is missing the VIP can render the whole cluster unusable.
Next, we build on LVS's strengths and combine it with keepalived to deploy a high-availability architecture.
All hosts run CentOS 7. The environment is as follows:
node2 test host ens33: 172.25.0.32/24
node3 master ens33: 172.25.0.33/24
node4 backup ens33: 172.25.0.34/24
VIP: 172.25.0.200/32
For simplicity, I use 172.25.0.33/24 and 172.25.0.34/24 directly as the real servers as well.
The topology: node2 acts as the client, while node3 (master) and node4 (backup) each run the LVS director with keepalived plus a local web service, sharing the VIP 172.25.0.200.
I. Environment Setup
1. Enable IP forwarding and install LVS.
- [root@node3 ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
- [root@node4 ~]# echo "1" > /proc/sys/net/ipv4/ip_forward
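These writes take effect immediately but are lost on reboot; to make forwarding persistent, one option (appending to the stock /etc/sysctl.conf) is:
- [root@node3 ~]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
- [root@node3 ~]# sysctl -p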
Install LVS with yum on node3 and node4:
- [root@node3 ~]# yum install -y ipvsadm
- [root@node4 ~]# yum install -y ipvsadm
2. Install keepalived.
We can build keepalived from source on node3 and node4. I demonstrate on node3 here; repeat the same steps on the other node.
First install keepalived's build dependencies:
- [root@node3 ~]# yum -y install libnl libnl-devel libnfnetlink-devel popt-devel gcc make
- [root@node3 ~]# cd /usr/local/src/
# Download the keepalived tarball
- [root@node3 src]# wget http://www.keepalived.org/software/keepalived-1.2.7.tar.gz
- [root@node3 src]# tar zxvf keepalived-1.2.7.tar.gz -C /usr/local
- [root@node3 src]# cd ../keepalived-1.2.7
- [root@node3 keepalived-1.2.7]# ./configure
# A successful configure run ends with:
- Keepalived configuration
- ------------------------
- Keepalived version : 1.2.7
- Compiler : gcc
- Compiler flags : -g -O2
- Extra Lib : -lpopt -lssl -lcrypto -lnl
- Use IPVS Framework : Yes
- IPVS sync daemon support : Yes
- IPVS use libnl : Yes
- Use VRRP Framework : Yes
- Use VRRP VMAC : Yes
- SNMP support : No
- Use Debug flags : No
# Then continue with:
- [root@node3 keepalived-1.2.7]# make && make install
- [root@node3 keepalived-1.2.7]# cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
- [root@node3 keepalived-1.2.7]# cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
- [root@node3 keepalived-1.2.7]# mkdir /etc/keepalived
- [root@node3 keepalived-1.2.7]# cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
- [root@node3 keepalived-1.2.7]# cp /usr/local/sbin/keepalived /usr/sbin/
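A quick way to confirm the build and the copied binary is to print the version (output omitted):
- [root@node3 ~]# keepalived -v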
3. Install and test the web service.
Install httpd on node3 and node4 for testing:
- [root@node3 ~]# yum install -y httpd
- [root@node4 ~]# yum install -y httpd
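The original steps do not show it, but the curl tests below assume each node serves a page containing its own hostname. A minimal sketch using httpd's default document root:
- [root@node3 ~]# echo node3 > /var/www/html/index.html
- [root@node3 ~]# systemctl enable --now httpd
- [root@node4 ~]# echo node4 > /var/www/html/index.html
- [root@node4 ~]# systemctl enable --now httpd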
# After configuring httpd, verify that the local web services respond:
- [root@node3 ~]# curl 172.25.0.33
- node3
- [root@node4 ~]# curl 172.25.0.34
- node4
II. LVS and keepalived Configuration
1. Requirements.
Let's first get a clear picture of the whole architecture's configuration:
Master: 172.25.0.33
Backup: 172.25.0.34
VIP: 172.25.0.200
Testing uses the local web services directly.
2. Real server configuration
In LVS DR and TUN modes, a client request that reaches a real server is answered directly to the client, without passing back through the front-end Director Server. Each real server node therefore needs the VIP configured locally, so that reply traffic can return to the client correctly. The VIP can be added with the following commands.
Bind the VIP on each host (run on both):
- [root@node3 ~]# ifconfig lo:0 172.25.0.200 broadcast 172.25.0.200 netmask 255.255.255.255 up
- [root@node4 ~]# ifconfig lo:0 172.25.0.200 broadcast 172.25.0.200 netmask 255.255.255.255 up
# The netmask 255.255.255.255 means this "network" contains only this single address
- [root@node3 ~]# route add -host 172.25.0.200 dev lo:0
- [root@node4 ~]# route add -host 172.25.0.200 dev lo:0
# The route above makes replies to requests for 172.25.0.200 use the address configured on lo:0 as their source, so outgoing packets carry the VIP as the src address and are not dropped by the client.
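For reference, the same binding can be expressed with the iproute2 tools that supersede ifconfig/route on CentOS 7; this is an equivalent sketch, not part of the original procedure:
- [root@node3 ~]# ip addr add 172.25.0.200/32 dev lo label lo:0
- [root@node3 ~]# ip route add 172.25.0.200 dev lo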
3. Suppress ARP for the VIP on the real servers. Because every node now holds the VIP locally, the real servers must not answer ARP queries for it; otherwise clients could bypass the director.
On node3:
- [root@node3 ~]# echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
- [root@node3 ~]# echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
- [root@node3 ~]# echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
- [root@node3 ~]# echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
- [root@node3 ~]# sysctl -p
On node4:
- [root@node4 ~]# echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
- [root@node4 ~]# echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
- [root@node4 ~]# echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
- [root@node4 ~]# echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
- [root@node4 ~]# sysctl -p
# Of course, the steps above can be rolled into a script, which saves a lot of time; see the sketch below.
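A minimal sketch of such a script, bundling the VIP binding and ARP settings above (the script name is arbitrary and the VIP value is this lab's):
- #!/bin/bash
- # rs_setup.sh - prepare a real server for LVS DR mode
- VIP=172.25.0.200
- ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
- route add -host $VIP dev lo:0
- echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
- echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
- echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
- echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce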
4. Check the VIP status
On node3:
[root@node3 ~]# ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 172.25.0.200/32 brd 172.25.0.200 scope global lo:0
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
On node4:
[root@node4 ~]# ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet 172.25.0.200/32 brd 172.25.0.200 scope global lo:0
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
# The VIP is now configured on both nodes
5. Since keepalived was created for LVS in the first place, we can use keepalived directly to configure the LVS DR model.
Note: when using keepalived in production, it is often recommended to run both nodes as BACKUP, so that frequent preemptive switchovers do not destabilize the whole business system; a sketch of that pattern follows. The configs below keep the classic MASTER/BACKUP layout.
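The usual ingredient for the both-BACKUP pattern is keepalived's nopreempt option; an illustrative fragment of the vrrp_instance block (not what is used below):
- vrrp_instance VI_1 {
- state BACKUP
- nopreempt # only valid together with state BACKUP
- priority 100 # give the peer a lower value, e.g. 90
- ...
- }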
Master configuration:
[root@node3 ~]# cat /etc/keepalived/keepalived.conf
- ! Configuration File for keepalived
- global_defs {
- notification_email {
- root@localhost
- }
- notification_email_from root@localhost
- smtp_server localhost
- smtp_connect_timeout 30
- router_id lvs
- vrrp_mcast_group4 224.0.100.19
- }
- vrrp_instance VI_1 {
- state MASTER
- interface ens33
- virtual_router_id 51
- priority 100
- advert_int 1
- authentication {
- auth_type PASS
- auth_pass 1111
- }
- virtual_ipaddress {
- 172.25.0.200
- }
- }
- virtual_server 172.25.0.200 80 {
- delay_loop 6
- lb_algo rr
- lb_kind DR
- persistence_timeout 0
- protocol TCP
- sorry_server 127.0.0.1 80
- real_server 172.25.0.33 80 { ## backend real server IP
- weight 1
- HTTP_GET {
- url {
- path /index.html
- }
- connect_timeout 3
- nb_get_retry 3
- delay_before_retry 3
- }
- }
- real_server 172.25.0.34 80 { ## backend real server IP
- weight 1
- HTTP_GET {
- url {
- path /index.html
- }
- connect_timeout 3
- nb_get_retry 3
- delay_before_retry 3
- }
- }
- }
Backup configuration:
[root@node4 ~]# cat /etc/keepalived/keepalived.conf
- ! Configuration File for keepalived
- global_defs {
- notification_email {
- root@localhost
- }
- notification_email_from root@localhost
- smtp_server localhost
- smtp_connect_timeout 30
- router_id lvs
- vrrp_mcast_group4 224.0.100.19
- }
- vrrp_instance VI_1 {
- state BACKUP
- interface ens33
- virtual_router_id 51
- priority 95 # lower than the master's 100 so the master wins the VRRP election
- advert_int 1
- authentication {
- auth_type PASS
- auth_pass 1111
- }
- virtual_ipaddress {
- 172.25.0.200
- }
- }
- virtual_server 172.25.0.200 80 {
- delay_loop 6
- lb_algo rr
- lb_kind DR
- persistence_timeout 0
- protocol TCP
- sorry_server 127.0.0.1 80
- real_server 172.25.0.33 80 { ## backend real server IP
- weight 1
- HTTP_GET {
- url {
- path /index.html
- }
- connect_timeout 3
- nb_get_retry 3
- delay_before_retry 3
- }
- }
- real_server 172.25.0.34 80 { ## backend real server IP
- weight 1
- HTTP_GET {
- url {
- path /index.html
- }
- connect_timeout 3
- nb_get_retry 3
- delay_before_retry 3
- }
- }
- }
6. Start the keepalived service
- [root@node3 ~]# service keepalived start
- Unit keepalived.service could not be found.
- Reloading systemd: [ OK ]
- Starting keepalived (via systemctl): [ OK ]
- [root@node4 ~]# service keepalived start
- Unit keepalived.service could not be found.
- Reloading systemd: [ OK ]
- Starting keepalived (via systemctl): [ OK ]
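Before testing, it is worth confirming that keepalived has programmed the LVS rules. On the master, listing the virtual server table should show 172.25.0.200:80 with real servers 172.25.0.33:80 and 172.25.0.34:80 under the rr scheduler (output omitted here):
- [root@node3 ~]# ipvsadm -Ln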
III. High-Availability Testing
1. Check the keepalived state on both nodes.
On the master:
[root@node3 ~]# service keepalived status -l
● keepalived.service - SYSV: Start and stop Keepalived
Loaded: loaded (/etc/rc.d/init.d/keepalived; bad; vendor preset: disabled)
Active: active (running) since Sat 2018-01-06 17:30:53 CST; 3min 25s ago
Docs: man:systemd-sysv-generator(8)
Process: 73275 ExecStart=/etc/rc.d/init.d/keepalived start (code=exited, status=0/SUCCESS)
Main PID: 73278 (keepalived)
CGroup: /system.slice/keepalived.service
├─73278 keepalived -D
├─73280 keepalived -D
└─73281 keepalived -D
Jan 06 17:30:53 node3 Keepalived_vrrp[73281]: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
Jan 06 17:30:53 node3 Keepalived_healthcheckers[73280]: Using LinkWatch kernel netlink reflector...
Jan 06 17:30:53 node3 Keepalived_healthcheckers[73280]: Activating healthchecker for service [172.25.0.33]:80
Jan 06 17:30:53 node3 Keepalived_healthcheckers[73280]: Activating healthchecker for service [172.25.0.34]:80
Jan 06 17:30:54 node3 Keepalived_vrrp[73281]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 06 17:30:55 node3 Keepalived_vrrp[73281]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 06 17:30:55 node3 Keepalived_vrrp[73281]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 06 17:30:55 node3 Keepalived_vrrp[73281]: VRRP_Instance(VI_1) Sending gratuitous ARPs on ens33 for 172.25.0.200
Jan 06 17:30:55 node3 Keepalived_healthcheckers[73280]: Netlink reflector reports IP 172.25.0.200 added
Jan 06 17:31:00 node3 Keepalived_vrrp[73281]: VRRP_Instance(VI_1) Sending gratuitous ARPs on ens33 for 172.25.0.200
Hint: Some lines were ellipsized, use -l to show in full.
On the backup:
[root@node4 ~]# service keepalived status -l
● keepalived.service - SYSV: Start and stop Keepalived
Loaded: loaded (/etc/rc.d/init.d/keepalived; bad; vendor preset: disabled)
Active: active (running) since Sat 2018-01-06 17:31:01 CST; 7min ago
Docs: man:systemd-sysv-generator(8)
Process: 18947 ExecStart=/etc/rc.d/init.d/keepalived start (code=exited, status=0/SUCCESS)
Main PID: 18950 (keepalived)
CGroup: /system.slice/keepalived.service
├─18950 keepalived -D
├─18952 keepalived -D
└─18953 keepalived -D
Jan 06 17:31:01 node4 Keepalived_healthcheckers[18952]: Configuration is using : 16719 Bytes
Jan 06 17:31:01 node4 Keepalived_vrrp[18953]: Configuration is using : 63020 Bytes
Jan 06 17:31:01 node4 Keepalived_vrrp[18953]: Using LinkWatch kernel netlink reflector...
Jan 06 17:31:01 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jan 06 17:31:01 node4 Keepalived_vrrp[18953]: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
Jan 06 17:31:01 node4 Keepalived_healthcheckers[18952]: Using LinkWatch kernel netlink reflector...
Jan 06 17:31:01 node4 Keepalived_healthcheckers[18952]: Activating healthchecker for service [172.25.0.33]:80
Jan 06 17:31:01 node4 Keepalived_healthcheckers[18952]: Activating healthchecker for service [172.25.0.34]:80
Jan 06 17:35:38 node4 Keepalived_vrrp[18953]: Netlink reflector reports IP 192.168.65.135 added
Jan 06 17:35:38 node4 Keepalived_healthcheckers[18952]: Netlink reflector reports IP 192.168.65.135 added
# The log lines above show that the VIP has come up on the master, node3, while node4 entered the BACKUP state
2. Test the web service through the VIP
- [root@node2 ~]# curl 172.25.0.200
- node3
- [root@node2 ~]# curl 172.25.0.200
- node4
Access through the VIP succeeds, with requests alternating between the two real servers under the rr scheduler.
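To watch the round-robin distribution more compactly, a quick illustrative loop from the test host:
- [root@node2 ~]# for i in 1 2 3 4; do curl -s 172.25.0.200; done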
3. Stop keepalived on the master
- [root@node3 ~]# service keepalived stop
- Stopping keepalived (via systemctl): [ OK ]
Check the keepalived state on node4:
[root@node4 ~]# service keepalived status -l
● keepalived.service - SYSV: Start and stop Keepalived
Loaded: loaded (/etc/rc.d/init.d/keepalived; bad; vendor preset: disabled)
Active: active (running) since Sat 2018-01-06 17:31:01 CST; 25min ago
Docs: man:systemd-sysv-generator(8)
Process: 18947 ExecStart=/etc/rc.d/init.d/keepalived start (code=exited, status=0/SUCCESS)
Main PID: 18950 (keepalived)
CGroup: /system.slice/keepalived.service
├─18950 keepalived -D
├─18952 keepalived -D
└─18953 keepalived -D
Jan 06 17:35:38 node4 Keepalived_vrrp[18953]: Netlink reflector reports IP 192.168.65.135 added
Jan 06 17:35:38 node4 Keepalived_healthcheckers[18952]: Netlink reflector reports IP 192.168.65.135 added
Jan 06 17:47:09 node4 Keepalived_healthcheckers[18952]: Netlink reflector reports IP 192.168.65.135 added
Jan 06 17:47:09 node4 Keepalived_vrrp[18953]: Netlink reflector reports IP 192.168.65.135 added
Jan 06 17:55:37 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 06 17:55:38 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 06 17:55:38 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 06 17:55:38 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) Sending gratuitous ARPs on ens33 for 172.25.0.200
Jan 06 17:55:38 node4 Keepalived_healthcheckers[18952]: Netlink reflector reports IP 172.25.0.200 added
Jan 06 17:55:43 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) Sending gratuitous ARPs on ens33 for 172.25.0.200
Hint: Some lines were ellipsized, use -l to show in full.
The VIP has failed over to node4. Keep accessing the VIP:
- [root@node2 ~]# curl 172.25.0.200
- node4
- [root@node2 ~]# curl 172.25.0.200
- node3
- [root@node2 ~]# curl 172.25.0.200
- node4
- [root@node2 ~]# curl 172.25.0.200
- node3
The service is still reachable. Note that node3 still appears in the rotation: only its keepalived was stopped, so its httpd keeps serving as a real server behind node4's director.
At this point we have completed the whole LVS + keepalived high-availability setup.
Summary: the essence of the exercise is to run the VIP on LVS in DR mode and let keepalived hold the VIP, health-check the backend real servers, and schedule traffic to them.
!! That's my implementation process; I hope it helps everyone !!