
NIC Bonding on CentOS 7


I. The traditional bonding method (verified)

(1) Overview of bonding modes

mode 0: load balancing (round-robin); requires switch-side support; provides per-port load balancing and port redundancy; all slave interfaces share the same MAC address

mode 1: active-backup; one port is active while the other(s) stand by, so only one NIC carries traffic at a time; no preemption

mode 4: dynamic link aggregation per IEEE 802.3ad; requires the switch to enable LACP and to be configured in active mode

mode 5 (balance-tlb) and mode 6 (balance-alb) provide adaptive load balancing without switch support; they are not commonly used
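Whichever mode is chosen, the one actually in effect can be read back from the bonding driver's /proc interface once the bond is up. The sketch below parses a hypothetical sample of that output (field names follow the standard bonding driver format); on a live host, read /proc/net/bonding/bond0 instead of the sample variable:

```shell
# Hypothetical sample of /proc/net/bonding/bond0 (a mode 0 bond in this example)
sample='Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100'

# Extract the active mode; on a real system:
#   grep '^Bonding Mode:' /proc/net/bonding/bond0
printf '%s\n' "$sample" | grep '^Bonding Mode:'
```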

(2) Bonding configuration

Stop and disable the NetworkManager service:

# systemctl stop NetworkManager

# systemctl disable NetworkManager

Check whether the kernel has loaded the bonding module:

# lsmod | grep bonding   (if it is not loaded, run # modprobe --first-time bonding. Note: this only loads the module for the current boot and is lost on reboot; for a permanent setup, create the configuration file below.)

Configure the bonding driver:

# vi /etc/modprobe.d/bond.conf (create it if it does not exist) and add the following:

alias bond0 bonding

options bond0 miimon=100 mode=0 // miimon enables MII link monitoring; the value is the polling interval in milliseconds

Note: in the NIC configuration files, the keys to the left of "=" are uppercase and the values to the right are lowercase; if bond0 fails to come up, check the configuration files carefully.
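As a sanity check, the two driver-option lines can be written out and verified with standard tools. This is a minimal sketch that writes to a temporary file instead of /etc/modprobe.d/bond.conf so it can run without root:

```shell
# Stand-in for /etc/modprobe.d/bond.conf (temp file so no root is needed)
conf=$(mktemp)
cat > "$conf" <<'EOF'
alias bond0 bonding
options bond0 miimon=100 mode=0
EOF

# Both the alias line and the options line should mention bond0
grep -c 'bond0' "$conf"
```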

(3) Configure the bond interface

# vi /etc/sysconfig/network-scripts/ifcfg-bond0 (create it with the following content)

TYPE=Bond

BOOTPROTO=none

ONBOOT=yes

USERCTL=no // whether non-root users may control this device

DEVICE=bond0

IPADDR=192.168.0.111

PREFIX=24

NM_CONTROLLED=no // tells NetworkManager not to manage this device (it is handled by the network service instead)

BONDING_MASTER=yes

My actual configuration file:

[root@cnbgdphapwanp01 network-scripts]# cat ifcfg-bond0

DEVICE="bond0"

BOOTPROTO=none

ONBOOT="yes"

IPADDR=10.11.1.137

NETMASK=255.255.255.0

GATEWAY=10.11.1.1

TYPE=Ethernet

(4) Configure the slave interfaces

# vi /etc/sysconfig/network-scripts/ifcfg-ens33

TYPE=Ethernet

BOOTPROTO=none

NAME=ens33

DEVICE=ens33

ONBOOT=yes

MASTER=bond0

SLAVE=yes

USERCTL=no

The other slave NICs use the same configuration, changing only NAME and DEVICE.
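Since the slave files differ only in NAME and DEVICE, they can be generated in a loop. A minimal sketch, writing to a temporary directory instead of /etc/sysconfig/network-scripts (the interface names ens33/ens34 are placeholders for your real slave NICs):

```shell
dir=$(mktemp -d)   # stand-in for /etc/sysconfig/network-scripts
# ens33/ens34 are assumed interface names; substitute your own
for nic in ens33 ens34; do
  cat > "$dir/ifcfg-$nic" <<EOF
TYPE=Ethernet
BOOTPROTO=none
NAME=$nic
DEVICE=$nic
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
EOF
done
ls "$dir"
```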

My actual configuration file:

[root@cnbgdphapwanp01 network-scripts]# cat ifcfg-bond1

DEVICE="bond1"

BOOTPROTO=none

ONBOOT="yes"

IPADDR=221.99.229.226

NETMASK=255.255.255.0

GATEWAY=211.99.229.21

TYPE=Ethernet

(5) Restart the network service and verify:

#systemctl restart network

#cat /proc/net/bonding/bond0
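Beyond the bonding mode, the /proc file lists each enslaved NIC with its own MII status, so counting the slave entries is a quick sanity check. A sketch against a hypothetical two-slave excerpt (on a live host, read /proc/net/bonding/bond0 instead of the sample):

```shell
# Hypothetical excerpt of /proc/net/bonding/bond0 with two healthy slaves
sample='Bonding Mode: load balancing (round-robin)
MII Status: up
Slave Interface: ens33
MII Status: up
Slave Interface: ens34
MII Status: up'

# Number of enslaved NICs (the first "MII Status" line belongs to bond0 itself)
printf '%s\n' "$sample" | grep -c '^Slave Interface:'
```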

II. The nmcli method via the NetworkManager service (not yet tested)

See: http://www.bubuko.com/infodetail-2296969.html

(1) Check network device status

[root@compute ~]# nmcli dev
DEVICE TYPE STATE CONNECTION
eth0 ethernet connected eth0
eth1 ethernet connected Wired connection 1
lo loopback unmanaged --
[root@compute ~]#

(2) Check network connection status

[root@compute ~]# nmcli con sh
NAME UUID TYPE DEVICE
Wired connection 1 d75d7715-1098-353e-bb11-4b718e51ff38 802-3-ethernet eth1
eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 802-3-ethernet eth0
[root@compute ~]#

(3) Create team0 (the equivalent of a bond interface)

Use nmcli with the following syntax to create a connection for the team interface:

# nmcli con add type team con-name CNAME ifname INAME [config JSON]

CNAME is the connection name, INAME is the interface name, and JSON (JavaScript Object Notation) specifies the runner to use. The JSON has the form:

'{"runner":{"name":"METHOD"}}'

METHOD is one of: broadcast, activebackup, roundrobin, loadbalance, or lacp
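The JSON only selects the teamd runner, so switching methods means changing one string; for example, an active-backup team would pass config '{"runner":{"name":"activebackup"}}' instead. A small shell sketch that extracts the runner name from such a JSON string and checks it against the valid methods:

```shell
# Check that a runner string names one of the methods teamd accepts
json='{"runner":{"name":"activebackup"}}'
runner=$(printf '%s' "$json" | sed 's/.*"name":"\([a-z]*\)".*/\1/')
case "$runner" in
  broadcast|activebackup|roundrobin|loadbalance|lacp) echo "ok: $runner" ;;
  *) echo "unknown runner: $runner" ;;
esac
```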

The example below uses roundrobin:

[root@compute ~]# nmcli con add type team con-name team0 ifname team0 config '{"runner":{"name":"roundrobin"}}'
Connection 'team0' (64021ca5-85c3-429d-b930-56802dc0ccc4) successfully added.
[root@compute ~]#

Set team0's IP address, gateway, and DNS:

[root@compute ~]# nmcli con modify team0 ipv4.address "192.168.0.222/16" ipv4.gateway "192.168.0.1"
[root@compute ~]# nmcli con modify team0 ipv4.dns "223.5.5.5"
[root@compute ~]#

Set team0's IPv4 method to manual:

[root@compute ~]# nmcli con modify team0 ipv4.method manual

Add the slave NICs:

[root@compute ~]# nmcli con add type team-slave con-name team-port2 ifname eth1 master team0
Connection 'team-port2' (df74a4c7-f8ff-4ae3-b04f-3dd1210598cd) successfully added.
[root@compute ~]# nmcli con add type team-slave con-name team-port1 ifname eth0 master team0
Connection 'team-port1' (757648c4-114f-439f-b022-5bcf63ae0cb3) successfully added.
[root@compute ~]#

Bring up the team0 interface and check:

[root@compute ~]# nmcli con up team0
Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/3)
[root@compute ~]# teamdctl team0 sta
setup:
runner: roundrobin
ports:
eth0
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
eth1
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
[root@compute ~]#

III. Removing a bond (not tested)

1. # ifconfig bond0 down

2. Remove the corresponding configuration files.

3. # lsmod | grep bonding — if the bonding module is still loaded, remove it with rmmod bonding; then re-enable NetworkManager at boot and reboot the system.
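The three steps above can be sketched as a small script. It defaults to dry-run mode (printing what it would do) because the real commands need root and a live bond0; unset DRY_RUN to actually execute them:

```shell
BOND=bond0     # assumed bond interface name
DRY_RUN=1      # unset to really run the commands (requires root)

run() {
  if [ -n "${DRY_RUN:-}" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run ip link set "$BOND" down                              # step 1 (same effect as ifconfig bond0 down)
run rm -f "/etc/sysconfig/network-scripts/ifcfg-$BOND"    # step 2: remove its config file
run rmmod bonding                                         # step 3: unload the driver
run systemctl enable NetworkManager                       # re-enable NetworkManager at boot
```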

Check NIC speed: ethtool <interface> (e.g. ethtool eth0)
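ethtool's output includes the negotiated link speed, which can be extracted with awk. A sketch against a hypothetical gigabit NIC's output (on a live host, pipe the real ethtool eth0 output instead of the sample):

```shell
# Hypothetical excerpt of "ethtool eth0" output for a 1 Gb/s link
sample='Settings for eth0:
  Supported ports: [ TP ]
  Speed: 1000Mb/s
  Duplex: Full
  Link detected: yes'

# Print just the speed value
printf '%s\n' "$sample" | awk -F': ' '/Speed:/ {print $2}'
```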

Common problems:

After bringing team0 up, the interface still shows as down:

[root@compute network-scripts]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.222/16 brd 192.168.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe4f:fd82/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:4f:fd:8c brd ff:ff:ff:ff:ff:ff
inet 192.168.0.160/16 brd 192.168.255.255 scope global dynamic eth1
valid_lft 7191sec preferred_lft 7191sec
inet 192.168.0.159/16 brd 192.168.255.255 scope global secondary dynamic eth1
valid_lft 6300sec preferred_lft 6300sec
inet6 fe80::86d1:12d7:5a7c:2d88/64 scope link
valid_lft forever preferred_lft forever
4: team0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN

Troubleshooting:

1. Checking the connection status shows that team-port1, team-port2, and team0 are not attached to any device:

[root@compute network-scripts]# nmcli con sh
NAME UUID TYPE DEVICE
eth0 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 802-3-ethernet eth0
eth1 22a287d8-6206-4d10-bdd9-5299b063300e 802-3-ethernet eth1
team-port1 757648c4-114f-439f-b022-5bcf63ae0cb3 802-3-ethernet --
team-port2 df74a4c7-f8ff-4ae3-b04f-3dd1210598cd 802-3-ethernet --
team0 64021ca5-85c3-429d-b930-56802dc0ccc4 team --

2. Delete the eth0 and eth1 connections:

[root@compute network-scripts]# nmcli con del eth0 eth1

3. Checking again shows that team0 and its slave connections are now attached to devices:

[root@compute ~]# nmcli con sh
NAME UUID TYPE DEVICE
team-port1 757648c4-114f-439f-b022-5bcf63ae0cb3 802-3-ethernet eth0
team-port2 df74a4c7-f8ff-4ae3-b04f-3dd1210598cd 802-3-ethernet eth1
team0 64021ca5-85c3-429d-b930-56802dc0ccc4 team team0

4. Check the team0 interface state and test connectivity:

[root@compute ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master team0 state UP qlen 1000
link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master team0 state UP qlen 1000
link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
4: team0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
link/ether 00:0c:29:4f:fd:82 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.222/16 brd 192.168.255.255 scope global team0
valid_lft forever preferred_lft forever
inet6 fe80::ac47:e724:cd16:c5ca/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::acce:9394:eafe:57bb/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
inet6 fe80::e1a2:77fd:6148:c7c6/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
[root@compute ~]# ping baidu.com
PING baidu.com (111.13.101.208) 56(84) bytes of data.
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=30.6 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=30.7 ms (DUP!)
^C
--- baidu.com ping statistics ---
1 packets transmitted, 1 received, +1 duplicates, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 30.684/30.696/30.708/0.012 ms
[root@compute ~]#

Note: duplicate replies (DUP!) like the following during testing mean the switch side has not been configured for port aggregation:

[root@compute ~]# ping baidu.com
PING baidu.com (111.13.101.208) 56(84) bytes of data.
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=28.2 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=1 ttl=52 time=28.2 ms (DUP!)
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=2 ttl=52 time=29.2 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=2 ttl=52 time=29.2 ms (DUP!)
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=3 ttl=52 time=29.8 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=3 ttl=52 time=29.9 ms (DUP!)
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=4 ttl=52 time=27.7 ms
64 bytes from 111.13.101.208 (111.13.101.208): icmp_seq=4 ttl=52 time=27.7 ms (DUP

Copyright belongs to the author: this is an original work by 51CTO blog author BigManer; please credit the source when reposting.
