
Advanced Network Configuration in Linux: Configuring bond and team Networks

Advanced Network Configuration

1. Bond Networks

    Red Hat Enterprise Linux allows administrators to bind multiple network interfaces together into a single channel using the bonding kernel module and a special network interface known as a channel bonding interface. Depending on the bonding mode selected, channel bonding lets two or more network interfaces act as one, increasing bandwidth and/or providing redundancy.
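
The bonding kernel module is normally loaded automatically when the first bond interface is created, but it can be checked by hand with the usual module tools (a quick sanity check, not required for the steps below):

[root@localhost ~]# modprobe bonding              # load the bonding kernel module (usually done automatically)
[root@localhost ~]# lsmod | grep bonding          # confirm the module is loaded
[root@localhost ~]# modinfo bonding | head -n 3   # show the driver's name and version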

 

Choosing a Linux Ethernet bonding mode

Mode 0 (balance-rr) - Round-robin policy: packets are transmitted across all slave interfaces in turn; any slave can receive.

Mode 1 (active-backup) - Fault tolerance. Only one slave interface is active at a time; if it fails, another slave takes over.

Mode 3 (broadcast) - Fault tolerance. All packets are broadcast on every slave interface.
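
With nmcli the mode is chosen when the bond connection is created; the three modes above would be requested roughly like this (only one of these connections would exist at a time, and the address is simply the one used in the experiment below):

[root@localhost ~]# nmcli connection add con-name bond0 ifname bond0 type bond mode balance-rr    ip4 172.25.254.126/24   # mode 0, round-robin
[root@localhost ~]# nmcli connection add con-name bond0 ifname bond0 type bond mode active-backup ip4 172.25.254.126/24   # mode 1, active-backup (used below)
[root@localhost ~]# nmcli connection add con-name bond0 ifname bond0 type bond mode broadcast     ip4 172.25.254.126/24   # mode 3, broadcast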

Experiment 1: bonding mode 1 (active-backup)

1. Add a network adapter (NIC) to the virtual machine.

2.[root@localhost ~]# nmcli connection delete eth0  # delete the existing connection profile for the NIC
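
If you are not sure which connection profiles already exist, list them before deleting anything (output will differ per machine):

[root@localhost ~]# nmcli connection show   # list the existing connection profiles
[root@localhost ~]# nmcli device            # list the network devices and their current state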

3.[root@localhost ~]# nmcli connection add con-name bond0 ifname bond0 type bond mode active-backup ip4 172.25.254.126/24  # create a new bond interface

4.[root@localhost ~]# watch -n 1 cat /proc/net/bonding/bond0  # monitor the bond status in real time
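
Once the slave interfaces from steps 6 and 7 have been attached, the monitored file typically contains something along these lines (the exact text varies with kernel and driver version; this is only an illustrative excerpt):

Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up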

5.[root@localhost ~]# ping 172.25.254.26

  PING 172.25.254.26 (172.25.254.26) 56(84) bytes of data.   

 # fails, because the slave NICs have not been configured yet

6.[root@localhost ~]# nmcli connection add con-name eth0 ifname eth0 type bond-slave master bond0

7.[root@localhost ~]# nmcli connection add con-name eth1 ifname eth1 type bond-slave master bond0     # add both NICs as slaves working for the bond
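
To double-check that both slave profiles are active and see which slave is currently carrying traffic, these standard commands can be used (output will differ per machine):

[root@localhost ~]# nmcli connection show --active                          # bond0, eth0 and eth1 should all be listed
[root@localhost ~]# grep "Currently Active Slave" /proc/net/bonding/bond0   # shows the slave that is carrying traffic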

8.[root@localhost ~]# ping 172.25.254.26   # succeeds now that the slaves are attached

PING 172.25.254.26 (172.25.254.26) 56(84) bytes of data.

64 bytes from 172.25.254.26: icmp_seq=1 ttl=64 time=0.099 ms

64 bytes from 172.25.254.26: icmp_seq=2 ttl=64 time=0.105 ms

64 bytes from 172.25.254.26: icmp_seq=3 ttl=64 time=0.126 ms
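
One way to exercise the active-backup behaviour is to take the active slave down and confirm that traffic keeps flowing over the backup:

[root@localhost ~]# ifconfig eth0 down                                      # simulate a failure of the active slave
[root@localhost ~]# grep "Currently Active Slave" /proc/net/bonding/bond0   # should now report eth1
[root@localhost ~]# ping -c 3 172.25.254.26                                 # connectivity is kept through the backup slave
[root@localhost ~]# ifconfig eth0 up                                        # restore the interface afterwards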

2. Team Network Interfaces

Team is functionally similar to bonding (bond0).

Team does not require manually loading the corresponding kernel module.

Team is more extensible.

Team supports up to 8 NICs; this capability is not available before RHEL 7.

Round-robin balancing is fairly mechanical: it simply hands packets to each port in turn, while load balancing distributes traffic according to the actual load (the available teamd runners are sketched below).
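
teamd chooses its behaviour through a JSON "runner" setting; besides the activebackup runner used below, runners such as roundrobin, loadbalance, broadcast and lacp are selected the same way when the team connection is created (only one team0 connection would exist at a time, and the address is the one used in this example):

[root@localhost Desktop]# nmcli connection add con-name team0 ifname team0 type team config '{"runner":{"name":"roundrobin"}}'  ip4 172.25.254.126/24   # round-robin runner
[root@localhost Desktop]# nmcli connection add con-name team0 ifname team0 type team config '{"runner":{"name":"loadbalance"}}' ip4 172.25.254.126/24   # load-balancing runner
[root@localhost Desktop]# nmcli connection add con-name team0 ifname team0 type team config '{"runner":{"name":"lacp"}}'        ip4 172.25.254.126/24   # 802.3ad LACP runner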

[root@localhost Desktop]# watch -n 1 'teamdctl team0 stat'  # monitor the team status in real time

[root@localhost Desktop]# nmcli connection add con-name team0 ifname team0 type team config '{"runner":{"name":"activebackup"}}' ip4 172.25.254.126/24   # create a team interface with the active-backup runner

Connection 'team0' (825e8e72-f445-4cb5-aa68-7aa593d51497) successfully added.

[root@localhost Desktop]# ifconfig

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        ether 52:54:00:00:47:0a  txqueuelen 1000  (Ethernet)

        RX packets 79  bytes 5489 (5.3 KiB)

        RX errors 0  dropped 37  overruns 0  frame 0

        TX packets 442  bytes 21223 (20.7 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        ether 52:54:00:48:56:5a  txqueuelen 1000  (Ethernet)

        RX packets 473  bytes 23963 (23.4 KiB)

        RX errors 0  dropped 423  overruns 0  frame 0

        TX packets 36  bytes 1624 (1.5 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536

        inet 127.0.0.1  netmask 255.0.0.0

        inet6 ::1  prefixlen 128  scopeid 0x10<host>

        loop  txqueuelen 0  (Local Loopback)

        RX packets 348  bytes 34336 (33.5 KiB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 348  bytes 34336 (33.5 KiB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

team0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500

        inet 172.25.254.126  netmask 255.255.255.0  broadcast 172.25.254.255

        ether aa:5a:48:1d:7e:65  txqueuelen 0  (Ethernet)

        RX packets 0  bytes 0 (0.0 B)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 0  bytes 0 (0.0 B)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
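
Note that team0 is UP but not yet RUNNING and shows no traffic, because no ports have been attached; the two team-slave connections created next fix that. The still-empty port list can also be confirmed with teamdctl:

[root@localhost Desktop]# teamdctl team0 state   # the "ports:" section stays empty until eth0 and eth1 are added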

[root@localhost Desktop]# nmcli connection add con-name eth0 ifname eth0 type  team-slave master team0

Connection 'eth0' (4c2668fa-f942-4c9a-83a0-7a0c776b1722) successfully added.   # add eth0 as a port of team0

[root@localhost Desktop]# nmcli connection add con-name eth1 ifname eth1 type  team-slave master team0

Connection 'eth1' (bc45902b-a912-4d96-92b0-cb827567e9d6) successfully added.   # add eth1 as a port of team0
## Monitoring ##

Every 1.0s: teamdctl team0 stat                        Wed May 23 15:20:45 2018

 

setup:

  runner: activebackup

ports:

  eth0

    link watches:

      link summary: up

      instance[link_watch_0]:

        name: ethtool

        link: up

  eth1

    link watches:

      link summary: up

      instance[link_watch_0]:

        name: ethtool

        link: up

runner:

  active port: eth0
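
As with the bond, the active-backup runner can be exercised by taking the active port down and watching the monitor output switch over:

[root@localhost Desktop]# ifconfig eth0 down                        # simulate a failure of the active port
[root@localhost Desktop]# teamdctl team0 stat | grep "active port"  # should now report eth1
[root@localhost Desktop]# ping -c 3 172.25.254.26                   # connectivity is kept through eth1
[root@localhost Desktop]# ifconfig eth0 up                          # restore the port afterwards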