Cisco's Network Plugin Contiv (Part 1) -- Environment Deployment
What Is Contiv
Contiv (official site) is an open-source container network fabric for heterogeneous container deployments across virtual machines, bare metal, and public or private clouds. Billed as one of the most capable container network fabrics in the industry, Contiv offers L2, L3, overlay, and ACI modes, integrates natively with Cisco infrastructure, and maps application intent onto infrastructure capabilities using rich network and security policies.
Contiv is a cross-host container network fabric, so this article uses two virtual machines as container hosts and verifies connectivity between containers running on them.
Contiv Network Architecture
The figure above shows Contiv's network model. It consists of two main components: the Master and the Host Agent. The Master manages all network resources (IP address allocation, tenant management, policy management, and so on) and can run as multiple instances on multiple nodes for high availability. The Host Agent runs on every node and acts as the plugin for the local container runtime (e.g. Docker). All network state is shared between nodes through a distributed KV store (e.g. etcd). The Master also runs a REST server for external network control (such as creating and deleting networks); the netctl tool connects to this server to perform those operations.
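Since netctl is only a thin client for this REST server, the same API can in principle be queried directly over HTTP. A minimal sketch, assuming the default listen address seen later in this article (port 9999) and a `/api/v1/networks/` route; treat the exact path as an assumption and verify it against your Contiv version:

```shell
# List networks via netmaster's REST API instead of netctl.
# The /api/v1/networks/ path is an assumption -- check your Contiv release.
curl -s http://172.16.112.128:9999/api/v1/networks/
```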
Environment Deployment
host1: Ubuntu 16.04 + Docker 18.06 + etcd 3.2.4 + OVS 2.5.4 + Contiv 1.2.0
host2: Ubuntu 16.04 + Docker 18.06 + OVS 2.5.4 + Contiv 1.2.0 (netplugin + netctl)
For downloading and installing etcd, see the earlier Flannel environment setup article.
OVS can be installed with apt-get (on Ubuntu the package is openvswitch-switch).
For Contiv, download the netplugin-1.2.0.tar.bz2 binary release from GitHub. Unpacking it yields several executables; copy netmaster, netplugin, and netctl into a directory on the system PATH (e.g. /usr/local/bin). netmaster and netplugin are the implementations of the Master and Host Agent described in the previous section.
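The download and installation steps can be sketched as follows; the release URL and archive layout are assumptions, so adjust them against the actual contiv/netplugin GitHub releases page:

```shell
# Install Open vSwitch from the Ubuntu archive
apt-get install -y openvswitch-switch

# Fetch and unpack the Contiv 1.2.0 binary release
# (download URL is an assumption -- check the contiv/netplugin releases page)
wget https://github.com/contiv/netplugin/releases/download/1.2.0/netplugin-1.2.0.tar.bz2
tar xjf netplugin-1.2.0.tar.bz2

# Put the three binaries on the system PATH
cp netmaster netplugin netctl /usr/local/bin/
```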
The IP address of host1's external NIC is 172.16.112.128.
The IP address of host2's external NIC is 172.16.112.133.
Running etcd on host1
root@node-1:~# systemctl start etcd
root@node-1:~# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/lib/systemd/system/etcd.service; disabled; vendor preset: enabled)
Active: active (running) since Fri 2018-09-21 01:34:56 PDT; 2h 10min ago
Docs: https://github.com/coreos/etcd
Main PID: 37808 (etcd)
Tasks: 8
Memory: 7.6M
CPU: 24.690s
CGroup: /system.slice/etcd.service
└─37808 /usr/local/bin/etcd
Starting netmaster on host1
root@node-1:~# netmaster --etcd-endpoints http://172.16.112.128:2379 --mode docker --netmode vlan --fwdmode bridge
INFO[0000] Using netmaster log level: info
INFO[0000] Using netmaster syslog config: nil
INFO[Sep 21 03:59:14.020675885] Using netmaster log format: text
INFO[Sep 21 03:59:14.032729154] Using netmaster mode: docker
INFO[Sep 21 03:59:14.033005583] Using netmaster network mode: vlan
INFO[Sep 21 03:59:14.033087927] Using netmaster forwarding mode: bridge
INFO[Sep 21 03:59:14.033171366] Using netmaster state db endpoints: etcd: http://172.16.112.128:2379
INFO[Sep 21 03:59:14.033304358] Using netmaster docker v2 plugin name: netplugin
INFO[Sep 21 03:59:14.033397300] docker v2plugin (netplugin) updated to netplugin and ipam (netplugin) updated to netplugin
INFO[Sep 21 03:59:14.033481243] Using netmaster external-address: 0.0.0.0:9999
INFO[Sep 21 03:59:14.068173614] Using netmaster internal-address: 172.16.112.128:9999
INFO[Sep 21 03:59:14.068345036] Using netmaster infra type: default
INFO[Sep 21 03:59:14.098425337] RPC Server is listening on [::]:9001
......
INFO[Sep 21 03:59:14.211196969] Creating default tenant
INFO[Sep 21 03:59:14.211403602] Received TenantCreate: &{Key:default DefaultNetwork: TenantName:default LinkSets:{AppProfiles:map[] EndpointGroups:map[] NetProfiles:map[] Networks:map[] Policies:map[] Servicelbs:map[] VolumeProfiles:map[] Volumes:map[]}}
INFO[Sep 21 03:59:14.212270603] Restoring ProviderDb and ServiceDB cache
INFO[Sep 21 03:59:14.214864274] Registering service key: /contiv.io/service/netmaster/172.16.112.128:9999, value: {ServiceName:netmaster Role:leader Version: TTL:10 HostAddr:172.16.112.128 Port:9999 Hostname:}
INFO[Sep 21 03:59:14.215062578] Registering service key: /contiv.io/service/netmaster.rpc/172.16.112.128:9001, value: {ServiceName:netmaster.rpc Role:leader Version: TTL:10 HostAddr:172.16.112.128 Port:9001 Hostname:}
INFO[Sep 21 03:59:14.215215731] Registered netmaster service with registry
INFO[Sep 21 03:59:14.215577345] Stop refreshing key: /contiv.io/service/netmaster/172.16.112.128:9999
INFO[Sep 21 03:59:14.216141640] Stop refreshing key: /contiv.io/service/netmaster.rpc/172.16.112.128:9001
INFO[Sep 21 03:59:14.217693322] Netmaster listening on 0.0.0.0:9999
INFO[Sep 21 03:59:14.218048101] Ignore creating API listener on "172.16.112.128:9999" because "0.0.0.0:9999" covers it
Starting netplugin on host1
root@node-1:~# netplugin --etcd-endpoints http://172.16.112.128:2379 --mode docker --netmode vlan --fwdmode bridge
INFO[0000] Using netplugin log level: info
INFO[0000] Using netplugin syslog config: nil
INFO[Sep 21 04:14:25.928104731] Using netplugin log format: text
INFO[Sep 21 04:14:25.928303507] Using netplugin mode: docker
INFO[Sep 21 04:14:25.928597302] Using netplugin network mode: vlan
INFO[Sep 21 04:14:25.928684356] Using netplugin forwarding mode: bridge
INFO[Sep 21 04:14:25.928784788] Using netplugin state db endpoints: etcd: http://172.16.112.128:2379
INFO[Sep 21 04:14:25.928946543] Using netplugin host: node-1
INFO[Sep 21 04:14:25.929640584] Using netplugin control IP: 172.16.112.128
INFO[Sep 21 04:14:25.930269612] Using netplugin VTEP IP: 172.16.112.128
INFO[Sep 21 04:14:25.930463923] Using netplugin vlan uplinks: []
INFO[Sep 21 04:14:25.930549976] Using netplugin vxlan port: 4789
INFO[Sep 21 04:14:25.959240581] Got global forwarding mode: bridge
INFO[Sep 21 04:14:25.959429869] Got global private subnet: 172.19.0.0/16
INFO[Sep 21 04:14:25.959481773] Using forwarding mode: bridge
INFO[Sep 21 04:14:25.959529880] Using host private subnet: 172.19.0.0/16
INFO[Sep 21 04:14:25.964467700] Initializing ovsdriver
INFO[Sep 21 04:14:25.964713009] Received request to create new ovs switch bridge:contivVxlanBridge, localIP:172.16.112.128, fwdMode:bridge
INFO[Sep 21 04:14:26.075378465] Creating new ofnet agent for contivVxlanBridge,vxlan,[0 0 0 0 0 0 0 0 0 0 255 255 172 16 112 128],9002,6633
INFO[Sep 21 04:14:26.076282704] RPC Server is listening on [::]:9002
......
At this point the socket file created by netplugin for Docker is visible at /run/docker/plugins/netplugin.sock:
root@node-1:~# ls -l /run/docker/plugins/netplugin.sock
srwxr-xr-x 1 root root 0 Sep 21 04:14 /run/docker/plugins/netplugin.sock
Starting netplugin on host2
Started in the same way as on host1 (omitted).
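For concreteness, the invocation on host2 is identical; as the host1 startup logs show, netplugin auto-detects the local control and VTEP IPs, so no host-specific flags are needed. A sketch:

```shell
# On host2: point netplugin at the etcd instance running on host1.
# The local control/VTEP IP (172.16.112.133) is auto-detected by netplugin.
netplugin --etcd-endpoints http://172.16.112.128:2379 \
          --mode docker --netmode vlan --fwdmode bridge
```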
Creating a network on host1
root@node-1:~# netctl --netmaster http://172.16.112.128:9999 network create --subnet 10.103.1.0/24 test-net
Creating network default:test-net
root@node-1:~# netctl --netmaster http://172.16.112.128:9999 network ls
Tenant Network Nw Type Encap type Packet tag Subnet Gateway IPv6Subnet IPv6Gateway Cfgd Tag
------ ------- ------- ---------- ---------- ------- ------ ---------- ----------- ---------
default test-net data vxlan 0 10.103.1.0/24
root@node-1:~# docker network ls
NETWORK ID NAME DRIVER SCOPE
e600bbb6ba21 bridge bridge local
47b4619271d1 docker_gwbridge bridge local
193acc695266 host host local
af251471aaa4 none null local
32a8f6609407 test-net netplugin global
Here netctl creates the private network test-net (subnet 10.103.1.0/24). Since no tenant was specified (the --tenant flag), the network belongs to the default tenant. docker network ls also lists test-net: its driver is netplugin, and its scope is global, indicating a cross-host network.
The same network is visible on host2 as well:
root@node-2:~# netctl --netmaster http://172.16.112.128:9999 network ls
Tenant Network Nw Type Encap type Packet tag Subnet Gateway IPv6Subnet IPv6Gateway Cfgd Tag
------ ------- ------- ---------- ---------- ------- ------ ---------- ----------- ---------
default test-net data vxlan 0 10.103.1.0/24
root@node-2:~# docker network ls
NETWORK ID NAME DRIVER SCOPE
c42850c2d8c4 bridge bridge local
d925b7303139 docker_gwbridge bridge local
390b696af601 host host local
d1016e4ad403 none null local
32a8f6609407 test-net netplugin global
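Beyond docker network ls, the driver and IPAM backing test-net can be confirmed on either host with the standard docker network inspect command:

```shell
# Print the network and IPAM drivers for the Contiv-managed network;
# both should report "netplugin"
docker network inspect -f '{{.Driver}} / {{.IPAM.Driver}}' test-net
```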
Starting containers on host1 and host2
Start busybox containers on host1, attached to the test-net network:
root@node-1:~# docker run --net test-net --name contiv-bbox -tid busybox
420722e232a2bf24700976b514e405915afef56697c15aa630f37b1683714d59
root@node-1:~# docker run --net test-net --name cbox1 -tid busybox
cbd3cf2ecfbae8f88e49f45db6a64c841cfb8f228e8c556af8d3919762f61a65
root@node-1:~# docker exec cbox1 ifconfig
eth0 Link encap:Ethernet HWaddr 02:02:0A:67:01:03
inet addr:10.103.1.3 Bcast:10.103.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:21 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2619 (2.5 KiB) TX bytes:0 (0.0 B)
......
The IP address 10.103.1.3 assigned to the container's eth0 belongs to the Contiv network created earlier.
Performing the same operation on host2, eth0 there is assigned 10.103.1.4:
root@node-2:~# docker exec cbox2 ifconfig
eth0 Link encap:Ethernet HWaddr 02:02:0A:67:01:04
inet addr:10.103.1.4 Bcast:10.103.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:21 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:2619 (2.5 KiB) TX bytes:0 (0.0 B)
......
Container Connectivity
Pinging container cbox2 on host2 from container cbox1 on host1 shows that the two can communicate:
root@node-1:~# docker exec cbox1 ping -c 4 cbox2
PING cbox2 (10.103.1.4): 56 data bytes
64 bytes from 10.103.1.4: seq=0 ttl=64 time=10.661 ms
64 bytes from 10.103.1.4: seq=1 ttl=64 time=0.628 ms
64 bytes from 10.103.1.4: seq=2 ttl=64 time=0.450 ms
64 bytes from 10.103.1.4: seq=3 ttl=64 time=0.604 ms
--- cbox2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.450/3.085/10.661 ms
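When the experiment is finished, the test resources can be torn down. A sketch, assuming `network rm` is the delete subcommand in this netctl version (verify with `netctl network --help`):

```shell
# Remove the test containers (cbox1 and contiv-bbox on host1, cbox2 on host2)
docker rm -f cbox1 contiv-bbox   # on host1
docker rm -f cbox2               # on host2

# Delete the Contiv network via netmaster
# ("network rm" subcommand name is an assumption for this version)
netctl --netmaster http://172.16.112.128:9999 network rm test-net
```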