
LVS+Keepalived High-Availability Architecture



LVS stands for Linux Virtual Server, a virtual server cluster system. LVS has a very high load capacity because its forwarding logic is simple: it only distributes requests, operating at layer 4 of the network stack without terminating the traffic itself, so its efficiency is rarely a concern. This is why it is so widely used in industry. Because it works at layer 4, LVS can load-balance almost any application, including web servers and databases. LVS provides the load balancing; Keepalived adds health checking and failover. Combined, they greatly improve system availability.

LVS by itself cannot fully detect node failures. For example, under the WLC scheduler, if one node in the cluster is missing its VIP configuration, the whole cluster can become unusable.

Below we build on LVS's strengths and combine it with Keepalived to deploy a high-availability architecture.

All hosts run CentOS 7. The environment is as follows:

node2  test host  ens33: 172.25.0.32/24

node3  master     ens33: 172.25.0.33/24

node4  backup     ens33: 172.25.0.34/24

VIP: 172.25.0.200/32

For simplicity, 172.25.0.33 and 172.25.0.34 also double as the real servers.

Topology: node3 (master) and node4 (backup) each run the LVS director and a web real server; the VIP 172.25.0.200 floats between them, and node2 acts as the client. (The original topology diagram is not reproduced here.)

I. Environment installation

1. Enable IP forwarding and install LVS.

[root@node3 ~]# echo "1" >/proc/sys/net/ipv4/ip_forward 
[root@node4 ~]# echo "1" >/proc/sys/net/ipv4/ip_forward
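Writing to /proc takes effect immediately but does not survive a reboot. To persist the setting, one would typically also record it in sysctl configuration; a sketch, assuming /etc/sysctl.conf is in use:

```shell
# Persist IP forwarding across reboots (run on node3 and node4).
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p   # reload sysctl.conf so the setting applies now
```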

Install LVS (the ipvsadm userspace tool) on node3 and node4 with yum:

[root@node3 ~]# yum install -y  ipvsadm  
[root@node4 ~]# yum install -y  ipvsadm

2. Installing Keepalived.

We install Keepalived from source on node3 and node4. The steps below are shown on node3; repeat them on the other node.

First install the build dependencies:

[root@node3 ~]# yum -y install libnl libnl-devel libnfnetlink-devel popt-devel openssl-devel gcc make
[root@node3 ~]# cd /usr/local/src/	

# Download the keepalived tarball

[root@node3 src]# wget http://www.keepalived.org/software/keepalived-1.2.7.tar.gz 
[root@node3 src]#tar zxvf keepalived-1.2.7.tar.gz  -C  /usr/local
[root@node3 src]#cd ../keepalived-1.2.7
[root@node3 keepalived-1.2.7]#./configure

# A successful configure run ends with a summary like this:

Keepalived configuration 
------------------------ 
Keepalived version       : 1.2.7 
Compiler                 : gcc 
Compiler flags           : -g -O2 
Extra Lib                : -lpopt -lssl -lcrypto  -lnl 
Use IPVS Framework       : Yes 
IPVS sync daemon support : Yes 
IPVS use libnl           : Yes 
Use VRRP Framework       : Yes 
Use VRRP VMAC            : Yes 
SNMP support             : No 
Use Debug flags          : No

# Then build and install:

[root@node3 keepalived-1.2.7]# make && make install
[root@node3 keepalived-1.2.7]#cp /usr/local/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/
[root@node3 keepalived-1.2.7]#cp /usr/local/etc/sysconfig/keepalived /etc/sysconfig/
[root@node3 keepalived-1.2.7]#mkdir /etc/keepalived
[root@node3 keepalived-1.2.7]#cp /usr/local/etc/keepalived/keepalived.conf /etc/keepalived/
[root@node3 keepalived-1.2.7]#cp /usr/local/sbin/keepalived /usr/sbin/


3. Installing and testing the web service.

Install a web server on node3 and node4 for testing:


[root@node3 ~]# yum install -y httpd
[root@node4 ~]# yum install -y httpd
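The curl checks below imply that each node serves a page containing its own hostname. A minimal sketch, assuming httpd's default DocumentRoot of /var/www/html:

```shell
# On each real server: publish a page identifying the node, then start httpd.
echo "$(hostname -s)" > /var/www/html/index.html
systemctl start httpd
systemctl enable httpd   # start on boot as well
```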


# After httpd is set up, verify that each node's local web service responds:

[root@node3 ~]# curl 172.25.0.33 
node3 
[root@node4 ~]# curl 172.25.0.34
node4


II. LVS and Keepalived configuration

1. Configuration overview.

Before configuring, keep the overall layout in mind:

Master: 172.25.0.33

Backup: 172.25.0.34

VIP: 172.25.0.200

Testing uses the local web services set up above.

2. Real server configuration

In LVS DR and TUN modes, a request that reaches a real server is returned directly to the client rather than passing back through the Director Server. Each real server therefore needs the VIP configured locally so that its replies can carry the VIP as their source address. The VIP can be added with the following commands:

Bind the VIP on each host (run on both):

[root@node3 ~]#ifconfig lo:0 172.25.0.200 broadcast 172.25.0.200 netmask 255.255.255.255 up
[root@node4 ~]#ifconfig lo:0 172.25.0.200 broadcast 172.25.0.200 netmask 255.255.255.255 up

# The 255.255.255.255 netmask makes this a host address: the VIP is the only address in its "network".

[root@node3 ~]# route add -host 172.25.0.200 dev lo:0
[root@node4 ~]# route add -host 172.25.0.200 dev lo:0

# With this host route, replies to requests addressed to 172.25.0.200 use the address on lo:0 as their source, so outgoing packets carry the VIP and are not dropped by the client.

3. Suppressing ARP on the real servers

On node3:

[root@node3 ~]# echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
[root@node3 ~]# echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
[root@node3 ~]# echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
[root@node3 ~]# echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
[root@node3 ~]# sysctl -p

On node4:

[root@node4 ~]# echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
[root@node4 ~]# echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
[root@node4 ~]# echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
[root@node4 ~]# echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
[root@node4 ~]# sysctl -p

# Of course, all of the above can be put into a script to save time.
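As the comment above suggests, the VIP binding, host route, and ARP settings can be collected into one script to run on every real server. A sketch (the script name lvs_rs.sh and the VIP variable are illustrative, not from the original):

```shell
#!/bin/sh
# lvs_rs.sh: prepare a real server for LVS/DR. Run as root on node3 and node4.
VIP=172.25.0.200

# Bind the VIP on a loopback alias and add a host route for it.
ifconfig lo:0 "$VIP" broadcast "$VIP" netmask 255.255.255.255 up
route add -host "$VIP" dev lo:0

# Suppress ARP for the VIP so that only the director answers ARP requests.
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
```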

4. Checking the VIP status

Check the VIP on node3:

[root@node3 ~]# ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.25.0.200/32 brd 172.25.0.200 scope global lo:0
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

Check the VIP on node4:

[root@node4 ~]# ip addr show lo
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet 172.25.0.200/32 brd 172.25.0.200 scope global lo:0
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

# The VIP is now configured on both real servers' loopback interfaces.


5. Since Keepalived was designed with LVS in mind, the LVS DR-mode virtual server can be configured directly in keepalived.conf.

Note: in production it is often recommended to run both nodes in BACKUP state (with nopreempt), so that a recovering node does not trigger an extra failback and disrupt a busy service.
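A non-preemptive variant of the VRRP instance might look like the sketch below: both nodes declare state BACKUP, differ only in priority, and nopreempt keeps a recovered node from snatching the VIP back.

```
vrrp_instance VI_1 {
    state BACKUP        # both nodes start as BACKUP
    nopreempt           # a recovered node does not reclaim the VIP
    interface ens33
    virtual_router_id 51
    priority 100        # use a lower value, e.g. 90, on the peer
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.0.200
    }
}
```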

Master configuration:

[root@node3 ~]# cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {  
   notification_email {  
     root@localhost  
   }  
   notification_email_from root@localhost  
   smtp_server localhost  
   smtp_connect_timeout 30  
   router_id  lvs 
   vrrp_mcast_group4 224.0.100.19
} 
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.0.200
    }
}
virtual_server 172.25.0.200 80 {  
   delay_loop 6                  
   lb_algo rr                     
   lb_kind DR                            
   persistence_timeout 0           
   protocol TCP            
   sorry_server 127.0.0.1 80  
   real_server 172.25.0.33 80 {    ## backend real server IP
       weight 1                      
        HTTP_GET {
            url { 
              path /index.html
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }       
   }
   real_server 172.25.0.34 80 {     ## backend real server IP
       weight 1
        HTTP_GET {
            url { 
              path /index.html 
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }        
   }
}

Backup configuration:

[root@node4 ~]# cat /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {  
   notification_email {  
     root@localhost  
   }  
   notification_email_from root@localhost  
   smtp_server localhost  
   smtp_connect_timeout 30  
   router_id  lvs 
   vrrp_mcast_group4 224.0.100.19
} 
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 90        # lower than the master's 100 so node3 is elected master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.0.200
    }
}
virtual_server 172.25.0.200 80 { 
   delay_loop 6                  
   lb_algo rr                     
   lb_kind DR                            
   persistence_timeout 0           
   protocol TCP            
   sorry_server 127.0.0.1 80  
   real_server 172.25.0.33 80 {       ## backend real server IP
       weight 1                      
        HTTP_GET {
            url { 
              path /index.html
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }       
   }
   real_server 172.25.0.34 80 {     ## backend real server IP
       weight 1
        HTTP_GET {
            url { 
              path /index.html 
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }        
   }
}

6. Start the keepalived service. (The "Unit keepalived.service could not be found" message below simply means systemd had not yet generated a unit for the newly copied SysV init script; the reload takes care of it.)

[root@node3 ~]# service keepalived start 
Unit keepalived.service could not be found. 
Reloading systemd:                                         [  OK  ] 
Starting keepalived (via systemctl):                       [  OK  ] 
[root@node4 ~]# service keepalived start
Unit keepalived.service could not be found.
Reloading systemd:                                         [  OK  ]
Starting keepalived (via systemctl):                       [  OK  ]
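Once keepalived is running on the master, the LVS rules it created can be inspected with ipvsadm (output shape sketched from this setup; counters and details will vary):

```shell
# On the current master: list the IPVS virtual service and its real servers.
ipvsadm -Ln
# TCP  172.25.0.200:80 rr
#   -> 172.25.0.33:80    Route   1   ...
#   -> 172.25.0.34:80    Route   1   ...
```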

III. High-availability testing

1. Check the keepalived state on both nodes

Keepalived status on the master:

[root@node3 ~]# service keepalived status -l
● keepalived.service - SYSV: Start and stop Keepalived
   Loaded: loaded (/etc/rc.d/init.d/keepalived; bad; vendor preset: disabled)
   Active: active (running) since Sat 2018-01-06 17:30:53 CST; 3min 25s ago
     Docs: man:systemd-sysv-generator(8)
  Process: 73275 ExecStart=/etc/rc.d/init.d/keepalived start (code=exited, status=0/SUCCESS)
 Main PID: 73278 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─73278 keepalived -D
           ├─73280 keepalived -D
           └─73281 keepalived -D

Jan 06 17:30:53 node3 Keepalived_vrrp[73281]: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
Jan 06 17:30:53 node3 Keepalived_healthcheckers[73280]: Using LinkWatch kernel netlink reflector...
Jan 06 17:30:53 node3 Keepalived_healthcheckers[73280]: Activating healthchecker for service [172.25.0.33]:80
Jan 06 17:30:53 node3 Keepalived_healthcheckers[73280]: Activating healthchecker for service [172.25.0.34]:80
Jan 06 17:30:54 node3 Keepalived_vrrp[73281]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 06 17:30:55 node3 Keepalived_vrrp[73281]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 06 17:30:55 node3 Keepalived_vrrp[73281]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 06 17:30:55 node3 Keepalived_vrrp[73281]: VRRP_Instance(VI_1) Sending gratuitous ARPs on ens33 for 172.25.0.200
Jan 06 17:30:55 node3 Keepalived_healthcheckers[73280]: Netlink reflector reports IP 172.25.0.200 added
Jan 06 17:31:00 node3 Keepalived_vrrp[73281]: VRRP_Instance(VI_1) Sending gratuitous ARPs on ens33 for 172.25.0.200
Hint: Some lines were ellipsized, use -l to show in full.

Keepalived status on the backup:

[root@node4 ~]# service keepalived status -l
● keepalived.service - SYSV: Start and stop Keepalived
   Loaded: loaded (/etc/rc.d/init.d/keepalived; bad; vendor preset: disabled)
   Active: active (running) since Sat 2018-01-06 17:31:01 CST; 7min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 18947 ExecStart=/etc/rc.d/init.d/keepalived start (code=exited, status=0/SUCCESS)
 Main PID: 18950 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─18950 keepalived -D
           ├─18952 keepalived -D
           └─18953 keepalived -D

Jan 06 17:31:01 node4 Keepalived_healthcheckers[18952]: Configuration is using : 16719 Bytes
Jan 06 17:31:01 node4 Keepalived_vrrp[18953]: Configuration is using : 63020 Bytes
Jan 06 17:31:01 node4 Keepalived_vrrp[18953]: Using LinkWatch kernel netlink reflector...
Jan 06 17:31:01 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jan 06 17:31:01 node4 Keepalived_vrrp[18953]: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
Jan 06 17:31:01 node4 Keepalived_healthcheckers[18952]: Using LinkWatch kernel netlink reflector...
Jan 06 17:31:01 node4 Keepalived_healthcheckers[18952]: Activating healthchecker for service [172.25.0.33]:80
Jan 06 17:31:01 node4 Keepalived_healthcheckers[18952]: Activating healthchecker for service [172.25.0.34]:80
Jan 06 17:35:38 node4 Keepalived_vrrp[18953]: Netlink reflector reports IP 192.168.65.135 added
Jan 06 17:35:38 node4 Keepalived_healthcheckers[18952]: Netlink reflector reports IP 192.168.65.135 added


# The "Entering MASTER STATE" and gratuitous-ARP lines show the VIP has come up on the master, node3, while node4 has entered BACKUP state.

2. Testing the web service through the VIP

[root@node2 ~]# curl 172.25.0.200
node3
[root@node2 ~]# curl 172.25.0.200
node4

Requests to the VIP succeed, alternating between the two real servers under round-robin scheduling.
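The round-robin behaviour can also be observed with a quick loop from the test host (a sketch):

```shell
# From node2: request the VIP several times; with the rr scheduler the
# responses should alternate between node3 and node4.
for i in 1 2 3 4; do
    curl -s http://172.25.0.200/
done
```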

3. Stop keepalived on the master

[root@node3 ~]# service keepalived stop 
Stopping keepalived (via systemctl):                       [  OK  ]

Check the keepalived status on node4:

[root@node4 ~]# service keepalived status -l
● keepalived.service - SYSV: Start and stop Keepalived
   Loaded: loaded (/etc/rc.d/init.d/keepalived; bad; vendor preset: disabled)
   Active: active (running) since Sat 2018-01-06 17:31:01 CST; 25min ago
     Docs: man:systemd-sysv-generator(8)
  Process: 18947 ExecStart=/etc/rc.d/init.d/keepalived start (code=exited, status=0/SUCCESS)
 Main PID: 18950 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─18950 keepalived -D
           ├─18952 keepalived -D
           └─18953 keepalived -D

Jan 06 17:35:38 node4 Keepalived_vrrp[18953]: Netlink reflector reports IP 192.168.65.135 added
Jan 06 17:35:38 node4 Keepalived_healthcheckers[18952]: Netlink reflector reports IP 192.168.65.135 added
Jan 06 17:47:09 node4 Keepalived_healthcheckers[18952]: Netlink reflector reports IP 192.168.65.135 added
Jan 06 17:47:09 node4 Keepalived_vrrp[18953]: Netlink reflector reports IP 192.168.65.135 added
Jan 06 17:55:37 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jan 06 17:55:38 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) Entering MASTER STATE
Jan 06 17:55:38 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) setting protocol VIPs.
Jan 06 17:55:38 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) Sending gratuitous ARPs on ens33 for 172.25.0.200
Jan 06 17:55:38 node4 Keepalived_healthcheckers[18952]: Netlink reflector reports IP 172.25.0.200 added
Jan 06 17:55:43 node4 Keepalived_vrrp[18953]: VRRP_Instance(VI_1) Sending gratuitous ARPs on ens33 for 172.25.0.200
Hint: Some lines were ellipsized, use -l to show in full.

Node4 has taken over as master. Continue accessing the VIP:

[root@node2 ~]# curl 172.25.0.200
node4
[root@node2 ~]# curl 172.25.0.200
node3
[root@node2 ~]# curl 172.25.0.200
node4
[root@node2 ~]# curl 172.25.0.200
node3

The service remains reachable. Note that node3's httpd is still running, so round-robin continues across both real servers; only the director role moved to node4.

At this point the whole LVS + Keepalived high-availability setup is complete.


Summary: the VIP is served by LVS in DR mode, while Keepalived manages director failover and health-checks the backend real servers.


That is the whole procedure. I hope it helps!