
Docker's Network Modes

This article explains how Docker's several network modes are implemented under the hood. It covers only the mechanics and does not involve any Docker commands.

The environment used throughout this article:

root@debian:~# hostnamectl 
 Static hostname: windows11pro
       Icon name: windowsBooks
         Chassis: laptop 💻
Operating System: Debian GNU/Linux 12 (bookworm)  
          Kernel: Linux 6.1.0-26-amd64
    Architecture: x86-64
 Hardware Vendor: ASUSTeK COMPUTER INC.
  Hardware Model: X441UVK
Firmware Version: X441UVK.314
root@debian:~#

Docker's network modes fall into four categories:

  • bridge mode: the default. Docker creates a virtual bridge and veth pairs, combined with iptables NAT rules for forwarding.
  • Host mode: the container uses the host's network configuration directly.
  • Container mode: the container uses another container's network configuration.
  • None mode: the container has no networking at all.

None mode

None mode is the simplest: when the process is created it is given its own net namespace, but no IP address is configured inside it, so it has no network connectivity whatsoever.

Consider the following example:

[root@debian]:[~][tty:2]# unshare --mount --ipc --pid --net --uts --fork /bin/bash
(unshare) [root@debian]:[~][tty:2]# mount -t proc proc /proc
(unshare) [root@debian]:[~][tty:2]# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.1   7196  3780 pts/2    S    22:06   0:00 /bin/bash
root           3  0.0  0.2  11084  4456 pts/2    R+   22:06   0:00 ps aux
(unshare) [root@debian]:[~][tty:2]#
(unshare) [root@debian]:[~][tty:2]# echo $$
1
(unshare) [root@debian]:[~][tty:2]#
(unshare) [root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
(unshare) [root@debian]:[~][tty:2]#
(unshare) [root@debian]:[~][tty:2]# ping 127.0.0.1 -c 4
ping: connect: Network is unreachable
(unshare) [root@debian]:[~][tty:2]#

The commands above use unshare to run /bin/bash inside a new set of namespaces, among them a net namespace. No IP address is assigned in that namespace, so nothing network-related works in this "container". By default even the loopback interface is down, so not even 127.0.0.1 answers. That is the none mode.

A quick recap of the unshare flags:

  • --mount: create a new mount namespace
  • --ipc: create a new ipc namespace
  • --pid: create a new pid namespace
  • --net: create a new net namespace
  • --uts: create a new uts namespace
  • --fork: unshare forks a child process to run the command
  • /bin/bash: the command to run

For more background on namespaces, see this short introduction:

https://wangli2025.github.io/2024/11/05/Linux-Namespace.html

Host mode

In host mode, the process is created without a net namespace: there is no network isolation, so it shares the host's network configuration.

Consider the following example.

First, check the host's network configuration.

[root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:36:bb:82 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.11.135/24 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe36:bb82/64 scope link
       valid_lft forever preferred_lft forever
[root@debian]:[~][tty:2]#
[root@debian]:[~][tty:2]# ping 8.8.8.8 -c 2
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=34.0 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=34.5 ms

--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1003ms
rtt min/avg/max/mdev = 33.992/34.226/34.461/0.234 ms
[root@debian]:[~][tty:2]#

Next, start bash with unshare and the relevant namespaces, but without a net namespace, since no network isolation is wanted.

[root@debian]:[~][tty:2]# unshare --mount --ipc --pid --uts --fork /bin/bash
(unshare) [root@debian]:[~][tty:2]# mount -t proc proc /proc
(unshare) [root@debian]:[~][tty:2]# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.1   7196  3868 pts/2    S    22:19   0:00 /bin/bash
root           3  0.0  0.2  11084  4412 pts/2    R+   22:20   0:00 ps aux
(unshare) [root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:36:bb:82 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.11.135/24 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe36:bb82/64 scope link
       valid_lft forever preferred_lft forever
(unshare) [root@debian]:[~][tty:2]# ping -c 2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=33.8 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=34.5 ms

--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 33.767/34.149/34.531/0.382 ms
(unshare) [root@debian]:[~][tty:2]#

Once the container is up, mount proc and run ps aux to confirm the namespaces took effect; then inspect the network: the interfaces inside the container are identical to the host's.

bridge mode

bridge is Docker's default mode. Underneath it is a virtual bridge combined with veth pairs, plus NAT rules on the host: as long as one end of each container's veth pair is attached to the same bridge, those containers can reach one another.

bridge mode is the most involved of the four. Start by creating a virtual bridge on the host:

[root@debian]:[~][tty:2]# ip link add name br0 type bridge
[root@debian]:[~][tty:2]#

After creation, inspect the device.

[root@debian]:[~][tty:2]# ip addr show br0
3: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ee:8c:54:c2:94:34 brd ff:ff:ff:ff:ff:ff
[root@debian]:[~][tty:2]#

Then create one veth pair per container and attach one end to the bridge:

[root@debian]:[~][tty:2]# ip link add name veth0 type veth peer name veth1
[root@debian]:[~][tty:2]# ip link set veth0 master br0
[root@debian]:[~][tty:2]#

Inspect both ends of the pair.

[root@debian]:[~][tty:2]# ip addr show veth0
5: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop master br0 state DOWN group default qlen 1000
    link/ether 06:71:18:de:a8:97 brd ff:ff:ff:ff:ff:ff
[root@debian]:[~][tty:2]# ip addr show veth1
4: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff
[root@debian]:[~][tty:2]#

Create a new container, into which veth1 will be moved.

[root@debian]:[~][tty:2]# unshare --mount --uts --ipc --net --fork /bin/bash
(unshare) [root@debian]:[~][tty:2]# echo $$
2945
(unshare) [root@debian]:[~][tty:2]#

After creating the container, check its default interfaces.

(unshare) [root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
(unshare) [root@debian]:[~][tty:2]#

On the host, move veth1 into the namespace of process 2945.

[root@debian]:[~][tty:3]# ip link set veth1 netns 2945
[root@debian]:[~][tty:3]#

Once moved, the interface is visible inside the container.

(unshare) [root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: veth1@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
(unshare) [root@debian]:[~][tty:2]#

With the link in place, assign IP addresses, set a default route inside the container, enable IP forwarding, and so on.

Assign br0's address on the host.

[root@debian]:[~][tty:3]# ip addr add 10.0.3.2/24 dev br0
[root@debian]:[~][tty:3]# ip addr show br0
3: br0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ee:8c:54:c2:94:34 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.2/24 scope global br0
       valid_lft forever preferred_lft forever
[root@debian]:[~][tty:3]#

Inside the container, assign veth1's address.

(unshare) [root@debian]:[~][tty:2]# ip addr add 10.0.3.3/24 dev veth1
(unshare) [root@debian]:[~][tty:2]# ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: veth1@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.3/24 scope global veth1
       valid_lft forever preferred_lft forever
(unshare) [root@debian]:[~][tty:2]#

Bring up the br0, veth0, and veth1 interfaces.

On the host, bring up br0 and veth0.

[root@debian]:[~][tty:3]# ip link set br0 up
[root@debian]:[~][tty:3]# ip link set veth0 up
[root@debian]:[~][tty:3]#
[root@debian]:[~][tty:3]# ip addr show br0
3: br0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether ee:8c:54:c2:94:34 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.2/24 scope global br0
       valid_lft forever preferred_lft forever
[root@debian]:[~][tty:3]#
[root@debian]:[~][tty:3]# ip addr show veth0
5: veth0@if4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master br0 state LOWERLAYERDOWN group default qlen 1000
    link/ether 06:71:18:de:a8:97 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@debian]:[~][tty:3]#
[root@debian]:[~][tty:3]#

Note that veth0 is attached to the bridge br0, so it needs no IP address of its own.

Inside the container, bring up veth1.

(unshare) [root@debian]:[~][tty:2]# ip link set veth1 up
(unshare) [root@debian]:[~][tty:2]# ip addr
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
4: veth1@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.3/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::cc57:9dff:fe8f:3d49/64 scope link
       valid_lft forever preferred_lft forever
(unshare) [root@debian]:[~][tty:2]#

Inside the container, add a default route.

(unshare) [root@debian]:[~][tty:2]# ip route
10.0.3.0/24 dev veth1 proto kernel scope link src 10.0.3.3
(unshare) [root@debian]:[~][tty:2]# ip route add default via 10.0.3.2
(unshare) [root@debian]:[~][tty:2]# ip route
default via 10.0.3.2 dev veth1
10.0.3.0/24 dev veth1 proto kernel scope link src 10.0.3.3
(unshare) [root@debian]:[~][tty:2]#

Ping between container and host.

From the host, ping the container.

[root@debian]:[~][tty:3]# ping 10.0.3.3 -c 3
PING 10.0.3.3 (10.0.3.3) 56(84) bytes of data.
64 bytes from 10.0.3.3: icmp_seq=1 ttl=64 time=0.035 ms
64 bytes from 10.0.3.3: icmp_seq=2 ttl=64 time=0.049 ms
64 bytes from 10.0.3.3: icmp_seq=3 ttl=64 time=0.043 ms

--- 10.0.3.3 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2036ms
rtt min/avg/max/mdev = 0.035/0.042/0.049/0.005 ms
[root@debian]:[~][tty:3]#

From the container, ping the host.

(unshare) [root@debian]:[~][tty:2]# ping -c 3 10.0.3.2
PING 10.0.3.2 (10.0.3.2) 56(84) bytes of data.
64 bytes from 10.0.3.2: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 10.0.3.2: icmp_seq=2 ttl=64 time=0.031 ms
64 bytes from 10.0.3.2: icmp_seq=3 ttl=64 time=0.044 ms

--- 10.0.3.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2045ms
rtt min/avg/max/mdev = 0.019/0.031/0.044/0.010 ms
(unshare) [root@debian]:[~][tty:2]#

At this point the container still cannot reach the internet; IP forwarding must first be enabled on the host.

(unshare) [root@debian]:[~][tty:2]# ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2041ms

(unshare) [root@debian]:[~][tty:2]#

Enable IP forwarding on the host.

[root@debian]:[~][tty:3]# /usr/sbin/sysctl net.ipv4.conf.all.forwarding=1
net.ipv4.conf.all.forwarding = 1
[root@debian]:[~][tty:3]#

And add an iptables NAT (masquerade) rule.

[root@debian]:[~][tty:3]# /usr/sbin/iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -j MASQUERADE
[root@debian]:[~][tty:3]#

The container can now reach the outside world.

(unshare) [root@debian]:[~][tty:2]# ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=34.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=33.9 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=34.3 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2006ms
rtt min/avg/max/mdev = 33.945/34.125/34.321/0.153 ms
(unshare) [root@debian]:[~][tty:2]#

Container mode

Container mode simply means that two containers share one and the same net namespace.

Let's walk through it hands-on.

First, create a veth pair on the host.

[root@debian]:[~][tty:2]# ip link add name veth0 type veth peer name veth1
[root@debian]:[~][tty:2]# ip addr show veth0
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 06:71:18:de:a8:97 brd ff:ff:ff:ff:ff:ff
[root@debian]:[~][tty:2]#
[root@debian]:[~][tty:2]# ip addr show veth1
3: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff
[root@debian]:[~][tty:2]#
[root@debian]:[~][tty:2]#

Next, create a container with unshare.

[root@debian]:[~][tty:2]# unshare --mount --ipc --uts --net --fork /bin/bash
(unshare) [root@debian]:[~][tty:2]#
(unshare) [root@debian]:[~][tty:2]# echo $$
2942
(unshare) [root@debian]:[~][tty:2]#
(unshare) [root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
(unshare) [root@debian]:[~][tty:2]#

Note that no pid namespace was requested this time, so that the shell's real PID is easy to read off.

On the host, move veth1 into the container.

[root@debian]:[~][tty:3]# ip link set veth1 netns 2942
[root@debian]:[~][tty:3]#

Back in the container, check its addresses.

(unshare) [root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth1@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
(unshare) [root@debian]:[~][tty:2]#

Inside the container, assign veth1's address.

(unshare) [root@debian]:[~][tty:2]# ip addr add 10.0.3.2/24 dev veth1
(unshare) [root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth1@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.2/24 scope global veth1
       valid_lft forever preferred_lft forever
(unshare) [root@debian]:[~][tty:2]#

On the host, assign veth0's address.

[root@debian]:[~][tty:3]# ip addr add 10.0.3.3/24 dev veth0
[root@debian]:[~][tty:3]# ip addr show veth0
4: veth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 06:71:18:de:a8:97 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.3/24 scope global veth0
       valid_lft forever preferred_lft forever
[root@debian]:[~][tty:3]#

Bring up both interfaces.

(unshare) [root@debian]:[~][tty:2]# ip link set veth1 up
(unshare) [root@debian]:[~][tty:2]#
[root@debian]:[~][tty:3]# ip link set veth0 up
[root@debian]:[~][tty:3]#

Try pinging in both directions.

Host pings container.

[root@debian]:[~][tty:3]# ping -c 2 10.0.3.2
PING 10.0.3.2 (10.0.3.2) 56(84) bytes of data.
64 bytes from 10.0.3.2: icmp_seq=1 ttl=64 time=0.029 ms
64 bytes from 10.0.3.2: icmp_seq=2 ttl=64 time=0.042 ms

--- 10.0.3.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1021ms
rtt min/avg/max/mdev = 0.029/0.035/0.042/0.006 ms
[root@debian]:[~][tty:3]#

Container pings host.

(unshare) [root@debian]:[~][tty:2]# ping -c 2 10.0.3.3
PING 10.0.3.3 (10.0.3.3) 56(84) bytes of data.
64 bytes from 10.0.3.3: icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from 10.0.3.3: icmp_seq=2 ttl=64 time=0.076 ms

--- 10.0.3.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1024ms
rtt min/avg/max/mdev = 0.015/0.045/0.076/0.030 ms
(unshare) [root@debian]:[~][tty:2]#

The base environment is now ready. Next we start one more process whose net namespace, and only its net namespace, is taken from process 2942 above.

For that, I've written a small C program:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sched.h>
#include <fcntl.h>
#include <unistd.h>

// Enter the net namespace of the given PID.
int enterNetworkNamespace(char *pid) {
    char buffer[1024];

    // Path of the net namespace file.
    snprintf(buffer, sizeof buffer, "/proc/%s/ns/net", pid);

    int fd = open(buffer, O_RDONLY);
    if (-1 == fd) {
        printf("open net namespace fd error\n");
        fflush(stdout);
        return -1;
    }

    // Join the namespace with setns.
    if (-1 == setns(fd, 0)) {
        printf("enter net namespace error\n");
        fflush(stdout);
        close(fd);
        return -1;
    }

    printf("enter net namespace successful...\n");
    fflush(stdout);
    close(fd);
    return 0;
}

int main() {

    // New mount, ipc, pid, and uts namespaces.
    int cloneFlags = CLONE_NEWNS | CLONE_NEWIPC | CLONE_NEWPID | CLONE_NEWUTS;

    if (unshare(cloneFlags) < 0) {
        perror("unshare");
        exit(EXIT_FAILURE);
    }

    // Join the existing net namespace of PID 2942.
    if (-1 == enterNetworkNamespace("2942")) {
        printf("enter namespace error\n");
    }

    system("/bin/bash");

    return 0;
}

The code is straightforward. It first calls unshare to create new mount, ipc, pid, and uts namespaces, then opens the net namespace descriptor of process 2942 and joins it with setns.

Compile the program, run it, and check its IP address.

[root@debian]:[~][tty:3]# gcc testNetNamespace.c
[root@debian]:[~][tty:3]#
[root@debian]:[~][tty:3]# ./a.out
enter net namespace successful...
[root@debian]:[~][tty:3]#
[root@debian]:[~][tty:3]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::cc57:9dff:fe8f:3d49/64 scope link
       valid_lft forever preferred_lft forever
[root@debian]:[~][tty:3]#
[root@debian]:[~][tty:3]# mount -t proc proc /proc
[root@debian]:[~][tty:3]#
[root@debian]:[~][tty:3]# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0   2576   900 pts/3    S    02:36   0:00 sh -c /bin/bash
root           2  0.0  0.1   7196  3896 pts/3    S    02:36   0:00 /bin/bash
root           5  0.0  0.2  11084  4408 pts/3    R+   02:37   0:00 ps aux
[root@debian]:[~][tty:3]#

The two containers now share one net namespace. If both of them listen on port 80, will both receive an incoming request?

Listen on TCP port 80 with nc in both containers.

In the container started with unshare:

(unshare) [root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::cc57:9dff:fe8f:3d49/64 scope link
       valid_lft forever preferred_lft forever
(unshare) [root@debian]:[~][tty:2]# nc -lp 80

In the container started by the C program:

[root@debian]:[~][tty:3]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::cc57:9dff:fe8f:3d49/64 scope link
       valid_lft forever preferred_lft forever
[root@debian]:[~][tty:3]# nc -lp 80

Now open another terminal and run telnet 10.0.3.2 80.

[root@debian]:[~][tty:4]# telnet 10.0.3.2 80
Trying 10.0.3.2...
Connected to 10.0.3.2.
Escape character is '^]'.
hello namespaces;
^]
telnet> quit
Connection closed.
[root@debian]:[~][tty:4]#

Checking both containers afterwards, only one of them received the data.

The container started by the C program:

[root@debian]:[~][tty:3]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::cc57:9dff:fe8f:3d49/64 scope link
       valid_lft forever preferred_lft forever
[root@debian]:[~][tty:3]# nc -lp 80
------------------------------------------
hello namespaces;
[root@debian]:[~][tty:3]#

The container started with unshare:

(unshare) [root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::cc57:9dff:fe8f:3d49/64 scope link
       valid_lft forever preferred_lft forever
(unshare) [root@debian]:[~][tty:2]# nc -lp 80

This is because a TCP connection is handled by whichever listener accepted it; once consumed, it is gone. If we connect again, the traffic necessarily lands on the unshare-started process, because the nc in the C-program container already received its data and exited.

[root@debian]:[~][tty:4]# telnet 10.0.3.2 80
Trying 10.0.3.2...
Connected to 10.0.3.2.
Escape character is '^]'.
hello ;
^]
telnet> quit
Connection closed.
[root@debian]:[~][tty:4]#

This time the traffic shows up in the unshare-started container.

(unshare) [root@debian]:[~][tty:2]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ce:57:9d:8f:3d:49 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.2/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::cc57:9dff:fe8f:3d49/64 scope link
       valid_lft forever preferred_lft forever
(unshare) [root@debian]:[~][tty:2]# nc -lp 80
hello ;
(unshare) [root@debian]:[~][tty:2]#

Summary

Docker's so-called network modes are, at bottom, just different ways of playing with net namespaces.

Host mode: the container is created without its own net namespace, so it shares the host's.

bridge mode: a virtual bridge is created on the host, a veth pair connects each container to that bridge, and iptables NAT lets containers reach the outside world.

Container mode: two containers share a single net namespace while every other namespace stays per-container, so only their networking is shared; everything else remains isolated.

Finally, None mode is the simplest of all: the container gets a net namespace, but no interfaces are handed to it, so it has no connectivity; its network is fully isolated.

Once namespaces click, you realize how simple the underlying logic of all these modes and mechanisms really is.




Original article: https://wangli2025.github.io/2024/11/25/docker_network.html

All posts on this site are original, licensed under CC BY-NC-ND 4.0. Please credit the source when republishing; commercial use is not permitted.