Docker Container Technology: An Overview of Virtualized Networking (Part 4)
Previous article in this series: Docker Container Technology: Image Management Basics (Part 3)
Contents
I. Introduction to Docker networking
The Linux kernel now has built-in support for six namespaces:
- UTS: hostname and domain name
- User: users and groups
- Mount: mounted filesystems
- IPC: inter-process communication
- PID: process IDs
- Net: networking
As one of the six namespaces Docker relies on for containerization, the network namespace is indispensable. It has been supported since the Linux 2.6 kernel series.
Network namespaces isolate network devices and the network protocol stack.
For example, if a Docker host has 4 NICs and you assign one of them to a namespace when creating a container, no other namespace can see that NIC; a device can belong to only one namespace at a time. If every namespace needs its own physical NIC bound to it to communicate with the outside world, then 4 NICs limit us to 4 namespaces. What do we do when we need more namespaces than we have physical NICs?
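The answer is software-emulated devices. As a minimal sketch (the namespace names here are made up for illustration, and root privileges are required), the kernel lets us create network namespaces without consuming any hardware at all:

```shell
# Create two network namespaces -- no physical NIC is consumed.
ip netns add ns1
ip netns add ns2

# List them; neither owns any hardware yet, each has only an
# isolated (down) loopback interface of its own.
ip netns list
```

How these hardware-free namespaces actually reach each other is what the next sections cover.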
1. Three approaches to virtual network communication
1.1 Bridged networking
In KVM virtual networking we use virtual NIC devices (a set of devices emulated purely in software), and Docker is no exception.
What is a MAC address?
A MAC address (Media Access Control address), also called a LAN address, Ethernet address, or physical address, identifies a device's location on a network. In the OSI model, layer 3 (the network layer) handles IP addresses, while layer 2 (the data link layer) handles MAC addresses. A MAC address uniquely identifies a NIC on the network; if a device has one or more NICs, each NIC gets its own unique MAC address.
At the kernel level, Linux supports emulating devices at two layers: layer-2 devices (components that work at the data link layer, encapsulating frames and forwarding them between network devices) and layer-3 devices. Here we are concerned with layer 2. Using the kernel's support for layer-2 virtual devices, we can create virtual NIC interfaces, and these interfaces are quite special: they always come in pairs, modeling the two ends of a network cable. One end can be "plugged into" a host and the other into a switch, which amounts to connecting that host to the switch. The Linux kernel also natively supports layer-2 virtual bridge devices, i.e. a switch built in software.
For example, with two namespaces, we create such a pair of interfaces for each, plug one end into the namespace and the other into the virtual bridge, and configure both namespaces on the same subnet; the containers can then communicate. But if this bridging approach is used in a network with many containers, all of them bridged onto the same virtual bridge device, every broadcast floods everyone, inviting broadcast storms, and isolation becomes very difficult. At scale, plain bridging is asking for trouble; only small setups should bridge directly.
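The mechanism described above can be sketched with the `ip` command (a hedged illustration, not Docker's exact implementation; all device and namespace names are made up, and root is required):

```shell
# Create a software bridge (the kernel's layer-2 virtual switch).
ip link add br0 type bridge
ip link set br0 up

# Create a veth pair -- two interfaces joined like the ends of a cable.
ip link add veth-a type veth peer name veth-b

# Move one end into a namespace, plug the other into the bridge.
ip netns add ns1
ip link set veth-a netns ns1
ip link set veth-b master br0
ip link set veth-b up

# Address the namespaced end; a second namespace wired the same way
# on the same 10.0.0.0/24 subnet could now reach it through br0.
ip netns exec ns1 ip addr add 10.0.0.1/24 dev veth-a
ip netns exec ns1 ip link set veth-a up
```

This is, in essence, what Docker does automatically with docker0, as shown later in this article.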
1.2 NAT networking
If we don't bridge but still want outside connectivity, we use NAT. NAT (Network Address Translation) rewrites the address information in the IP header, replacing internal network addresses with the egress address so that different subnets can communicate. For example: two containers on different hosts each get a private address and a virtual bridge (virtual switch), with each container's gateway pointed at its bridge's IP; IP forwarding is then enabled in the docker host's kernel. When container 1 sends to container 2, the packet first goes to its virtual bridge and into the kernel; the kernel sees the destination IP is not its own, consults the routing table, and hands the packet to the physical NIC. On the way out, the packet's source address is replaced with the host's own IP (this operation is SNAT). When the packet arrives at container 2's host, the destination address is rewritten to container 2's private address (this operation is DNAT), and the packet travels through that host's virtual switch to container 2. The reply from container 2 goes through the same pair of rewrites (SNAT leaving, DNAT arriving) to get back to container 1. So for two containers on different physical hosts to communicate, every packet must cross two translations (SNAT and DNAT), which makes communication inefficient; this approach also suits many-container scenarios poorly.
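The two translations can be sketched as iptables rules (a hedged illustration; the addresses, the port 8080, and the container IP 172.17.0.2 are examples, not taken from this article's host):

```shell
# SNAT on egress: rewrite the container's private source address to
# the host's outbound address (MASQUERADE picks the address
# automatically from the egress interface).
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE

# DNAT on ingress: traffic arriving at the host's port 8080 has its
# destination rewritten to the container's private address and port.
iptables -t nat -A PREROUTING -p tcp --dport 8080 \
    -j DNAT --to-destination 172.17.0.2:80
```

As shown later, Docker installs a MASQUERADE rule of exactly this shape for the docker0 subnet.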
1.3 Overlay networks
In an overlay network, containers on different hosts communicate with the help of a virtual bridge: each host's containers attach to a local virtual bridge, and when they communicate, the physical network is used to tunnel the traffic between hosts, so a container can "see" containers on other hosts directly and talk to them. For example, when container 1 wants to reach container 2 on another host, it sends the packet to its virtual bridge; the bridge finds that the destination IP is not on the local physical server, so the packet leaves through the physical NIC, but instead of doing SNAT first, an extra IP header is added: the outer source address is the physical NIC address of container 1's host, and the outer destination is the physical NIC address of container 2's host. When the packet arrives, the receiving host strips the outer header, finds another header inside whose IP address belongs to one of its local containers, passes the packet to its virtual bridge, and finally delivers it to container 2. Carrying one IP packet inside another like this is called tunneling.
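One common kernel-native implementation of such a tunnel is VXLAN. A minimal sketch (the device name, VNI 42, multicast group, and the assumption that ens33 is the physical NIC are all illustrative):

```shell
# Create a VXLAN device: inner layer-2 frames are wrapped in an outer
# UDP/IP header carried between the hosts' physical NICs (here ens33).
ip link add vxlan0 type vxlan id 42 group 239.1.1.1 dev ens33 dstport 4789
ip link set vxlan0 up

# Attach the tunnel endpoint to the local bridge, so containers on
# this host reach containers behind the remote host's bridge.
ip link set vxlan0 master docker0
```

Overlay solutions such as Docker's overlay driver or flannel build on this idea, though their exact wiring differs.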
2. The four network models Docker supports
2.1 Closed container: only a loopback interface; this is the null (none) type
2.2 Bridged container: the bridged type; the container's network attaches to the docker0 network
2.3 Joined container: a "federated" network; two containers keep part of their namespaces isolated (User, Mount, PID) while sharing the same network interfaces and network protocol stack
2.4 Open container: an open network; the container directly shares three of the physical host's namespaces (UTS, IPC, Net), communicates using the physical host's NICs, and is granted the privilege to manage the host's network
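The four models above map directly onto `docker container run --network` (a hedged sketch; the busybox/nginx images and container names are just examples, and a running Docker daemon is assumed):

```shell
# Closed: loopback only
docker run --rm --network none busybox ip addr

# Bridged (the default): attached to docker0
docker run --rm --network bridge busybox ip addr

# Joined: share another container's network namespace
docker run -d --name web nginx:stable
docker run --rm --network container:web busybox ip addr

# Open: share the host's network namespace outright
docker run --rm --network host busybox ip addr
```

Comparing the `ip addr` output of the four runs makes the differences between the models visible at a glance.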
II. Specifying a Docker network
1. The bridge network (NAT)
After installation Docker automatically provides three networks and defaults to bridge (a NAT bridge); if you start a container without specifying --network, it gets the bridge network. docker network ls lists all three:
[[email protected] ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
ea9de27d788c bridge bridge local
126249d6b177 host host local
4ad67e37d383 none null local
After installation, Docker automatically creates a software switch on the host (docker0), which can act both as a layer-2 switch device and as a layer-2 NIC device.
[[email protected] ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:2f:51:41:2d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
When we create a container, Docker automatically creates a pair of virtual NICs in software, one end attached to the container and the other to the docker0 switch, so the container behaves as if it were plugged into the switch.
This is the host's network information before I start any containers:
[[email protected] ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:2fff:fe51:412d prefixlen 64 scopeid 0x20<link>
ether 02:42:2f:51:41:2d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 14 bytes 1758 (1.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.31.186 netmask 255.255.255.0 broadcast 192.168.31.255
inet6 fe80::a3fa:7451:4298:fe76 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:fb:f6:a1 txqueuelen 1000 (Ethernet)
RX packets 2951 bytes 188252 (183.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 295 bytes 36370 (35.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 96 bytes 10896 (10.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 96 bytes 10896 (10.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:1a:be:ae txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[[email protected] ~]#
Below I start two containers and look at how the network information changes: two new veth virtual NICs appear.
Each is one half of a veth pair Docker created when starting a container.
[[email protected] ~]# docker container run --name=nginx1 -d nginx:stable
11b031f93d019640b1cd636a48fb9448ed0a7fc6103aa509cd053cbbf8605e6e
[[email protected] ~]# docker container run --name=redis1 -d redis:4-alpine
fca571d7225f6ce94ccf6aa0d832bad9b8264624e41cdf9b18a4a8f72c9a0d33
[[email protected] ~]# ifconfig
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:2fff:fe51:412d prefixlen 64 scopeid 0x20<link>
ether 02:42:2f:51:41:2d txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 14 bytes 1758 (1.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.31.186 netmask 255.255.255.0 broadcast 192.168.31.255
inet6 fe80::a3fa:7451:4298:fe76 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:fb:f6:a1 txqueuelen 1000 (Ethernet)
RX packets 2951 bytes 188252 (183.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 295 bytes 36370 (35.5 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 96 bytes 10896 (10.6 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 96 bytes 10896 (10.6 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth0a95d3a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::cc12:e7ff:fe27:2c7f prefixlen 64 scopeid 0x20<link>
ether ce:12:e7:27:2c:7f txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 8 bytes 648 (648.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethf618ec3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::882a:aeff:fe73:f6df prefixlen 64 scopeid 0x20<link>
ether 8a:2a:ae:73:f6:df txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 22 bytes 2406 (2.3 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 192.168.122.1 netmask 255.255.255.0 broadcast 192.168.122.255
ether 52:54:00:1a:be:ae txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[[email protected] ~]#
The other half of each pair lives inside a container.
[[email protected] ~]# docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fca571d7225f redis:4-alpine "docker-entrypoint.s…" About a minute ago Up About a minute 6379/tcp redis1
11b031f93d01 nginx:stable "nginx -g 'daemon of…" 10 minutes ago Up 10 minutes 80/tcp nginx1
Both veth interfaces are attached to the docker0 virtual switch, which brctl and ip link show confirm.
Install the bridge utilities first: yum -y install bridge-utils
[[email protected] ~]# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.02422f51412d no veth0a95d3a
vethf618ec3
[[email protected] ~]# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:fb:f6:a1 brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:1a:be:ae brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT group default qlen 1000
link/ether 52:54:00:1a:be:ae brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:2f:51:41:2d brd ff:ff:ff:ff:ff:ff
7: vethf618ec3@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 8a:2a:ae:73:f6:df brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth0a95d3a@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether ce:12:e7:27:2c:7f brd ff:ff:ff:ff:ff:ff link-netnsid 1
Notice the "@if6" and "@if8" suffixes after the veth interfaces; they point to the other halves of the pairs, the virtual NICs inside the containers.
docker0 is a NAT bridge, so after starting a container, Docker also automatically generates iptables rules for it:
[[email protected] ~]# iptables -t nat -vnL
Chain PREROUTING (policy ACCEPT 43 packets, 3185 bytes)
pkts bytes target prot opt in out source destination
53 4066 PREROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
53 4066 PREROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
53 4066 PREROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 3 packets, 474 bytes)
pkts bytes target prot opt in out source destination
24 2277 OUTPUT_direct all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 3 packets, 474 bytes)
pkts bytes target prot opt in out source destination
0 0 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
2 267 RETURN all -- * * 192.168.122.0/24 224.0.0.0/24
0 0 RETURN all -- * * 192.168.122.0/24 255.255.255.255
0 0 MASQUERADE tcp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE udp -- * * 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE all -- * * 192.168.122.0/24 !192.168.122.0/24
22 2010 POSTROUTING_direct all -- * * 0.0.0.0/0 0.0.0.0/0
22 2010 POSTROUTING_ZONES_SOURCE all -- * * 0.0.0.0/0 0.0.0.0/0
22 2010 POSTROUTING_ZONES all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT_direct (1 references)
pkts bytes target prot opt in out source destination
Chain POSTROUTING_ZONES (1 references)
pkts bytes target prot opt in out source destination
12 953 POST_public all -- * ens33 0.0.0.0/0 0.0.0.0/0 [goto]
10 1057 POST_public all -- * + 0.0.0.0/0 0.0.0.0/0 [goto]
Chain POSTROUTING_ZONES_SOURCE (1 references)
pkts bytes target prot opt in out source destination
Chain POSTROUTING_direct (1 references)
pkts bytes target prot opt in out source destination
Chain POST_public (2 references)
pkts bytes target prot opt in out source destination
22 2010 POST_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
22 2010 POST_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
22 2010 POST_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
Chain POST_public_allow (1 references)
pkts bytes target prot opt in out source destination
Chain POST_public_deny (1 references)
pkts bytes target prot opt in out source destination
Chain POST_public_log (1 references)
pkts bytes target prot opt in out source destination
Chain PREROUTING_ZONES (1 references)
pkts bytes target prot opt in out source destination
53 4066 PRE_public all -- ens33 * 0.0.0.0/0 0.0.0.0/0 [goto]
0 0 PRE_public all -- + * 0.0.0.0/0 0.0.0.0/0 [goto]
Chain PREROUTING_ZONES_SOURCE (1 references)
pkts bytes target prot opt in out source destination
Chain PREROUTING_direct (1 references)
pkts bytes target prot opt in out source destination
Chain PRE_public (2 references)
pkts bytes target prot opt in out source destination
53 4066 PRE_public_log all -- * * 0.0.0.0/0 0.0.0.0/0
53 4066 PRE_public_deny all -- * * 0.0.0.0/0 0.0.0.0/0
53 4066 PRE_public_allow all -- * * 0.0.0.0/0 0.0.0.0/0
Chain PRE_public_allow (1 references)
pkts bytes target prot opt in out source destination
Chain PRE_public_deny (1 references)
pkts bytes target prot opt in out source destination
Chain PRE_public_log (1 references)
pkts bytes target prot opt in out source destination
Note the MASQUERADE rule in the POSTROUTING chain: traffic entering from any interface, not leaving through docker0, with a source address in the 172.17.0.0/16 subnet and any destination, gets its source address translated (SNAT).
As mentioned above, with Docker's NAT network only the docker host itself and the containers on that host can reach the containers directly. For containers on different hosts to communicate, DNAT (port mapping) is required, and a given host port can be mapped to only one service. So if a docker host runs several web services, only one of them can be mapped to port 80; the rest must be published on non-default ports, which is a significant limitation.
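A hedged illustration of that constraint (the image and container names are examples; a running Docker daemon is assumed): two web services cannot both claim host port 80, so the second must publish on a different host port.

```shell
# The first web service claims host port 80.
docker run -d --name web1 -p 80:80 nginx:stable

# A second web service must be published on another host port,
# e.g. 8080; outside clients then have to know the non-standard port.
docker run -d --name web2 -p 8080:80 nginx:stable
```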
2. The host network
Start a new container, this time specifying --network=host:
[[email protected] ~]# docker container run --name=myhttpd --network=host -d httpd:1.1
17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769
[[email protected] ~]#
Connect to the container interactively and look at its network information.
As you can see, the container's network is identical to the physical host's. Note: changing network settings inside this container is equivalent to changing them on the physical host itself.
[[email protected] ~]# docker container exec -it myhttpd /bin/sh
sh-4.1#
sh-4.1# ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:2F:51:41:2D
inet addr:172.17.0.1 Bcast:172.17.255.255 Mask:255.255.0.0
inet6 addr: fe80::42:2fff:fe51:412d/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:14 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:1758 (1.7 KiB)
ens33 Link encap:Ethernet HWaddr 00:0C:29:FB:F6:A1
inet addr:192.168.31.186 Bcast:192.168.31.255 Mask:255.255.255.0
inet6 addr: fe80::a3fa:7451:4298:fe76/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:30112 errors:0 dropped:0 overruns:0 frame:0
TX packets:2431 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1927060 (1.8 MiB) TX bytes:299534 (292.5 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:96 errors:0 dropped:0 overruns:0 frame:0
TX packets:96 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:10896 (10.6 KiB) TX bytes:10896 (10.6 KiB)
veth0a95d3a Link encap:Ethernet HWaddr CE:12:E7:27:2C:7F
inet6 addr: fe80::cc12:e7ff:fe27:2c7f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 b) TX bytes:648 (648.0 b)
virbr0 Link encap:Ethernet HWaddr 52:54:00:1A:BE:AE
inet addr:192.168.122.1 Bcast:192.168.122.255 Mask:255.255.255.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
sh-4.1# ping www.baidu.com
PING www.a.shifen.com (61.135.169.125) 56(84) bytes of data.
64 bytes from 61.135.169.125: icmp_seq=1 ttl=46 time=6.19 ms
64 bytes from 61.135.169.125: icmp_seq=2 ttl=46 time=6.17 ms
64 bytes from 61.135.169.125: icmp_seq=3 ttl=46 time=6.11 ms
inspect also confirms that this container's network mode is host:
sh-4.1# exit
exit
[[email protected] ~]# docker container inspect myhttpd
[
{
"Id": "17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769",
"Created": "2018-11-03T13:29:08.34016135Z",
"Path": "/usr/sbin/apachectl",
"Args": [
" -D",
"FOREGROUND"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 4015,
"ExitCode": 0,
"Error": "",
"StartedAt": "2018-11-03T13:29:08.528631643Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:bbffcf779dd42e070d52a4661dcd3eaba2bed898bed8bbfe41768506f063ad32",
"ResolvConfPath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/resolv.conf",
"HostnamePath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/hostname",
"HostsPath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/hosts",
"LogPath": "/var/lib/docker/containers/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769/17e26c2869f88d8334ee98ea3b3d26e6abe9add5169d1812ffa0a4588935f769-json.log",
"Name": "/myhttpd",
"RestartCount": 0,
"Driver": "overlay2",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "host",
"PortBindings": {},
"RestartPolicy": {
"Name": "no",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": null,
"CapAdd": null,
"CapDrop": null,
"Dns": [],
"DnsOptions": [],
"DnsSearch": [],
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": [],
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": [],
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0,
"MaskedPaths": [
"/proc/acpi",
"/proc/kcore",
"/proc/keys",
"/proc/latency_stats",
"/proc/timer_list",
"/proc/timer_stats",
"/proc/sched_debug",
"/proc/scsi",
"/sys/firmware"