
Networking in Containers: veth

This article gives a brief introduction to network communication in containers.

The environment used in this article:

[root@ubuntu]:[~][tty:0]# hostnamectl
 Static hostname: ubuntu
       Icon name: computer-vm
         Chassis: vm 🖴
    Product UUID: d7a79327-c7b6-4c40-a8d2-abf3244c3abc
  Virtualization: oracle
Operating System: Ubuntu 24.10
          Kernel: Linux 6.11.0-9-generic
    Architecture: x86-64
 Hardware Vendor: innotek GmbH
  Hardware Model: VirtualBox
 Hardware Serial: 0
Firmware Version: VirtualBox
   Firmware Date: Fri 2006-12-01
    Firmware Age: 17y 11month 2w
[root@ubuntu]:[~][tty:0]#

Background

The previous article covered how a container mounts its rootfs; this one looks at how networking works inside a container.

Network isolation in containers is provided by the network namespace; consider the following unshare example:

[root@ubuntu]:[~][tty:0]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:25:86:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.141/24 metric 100 brd 192.168.1.255 scope global dynamic enp0s3
       valid_lft 7178sec preferred_lft 7178sec
    inet6 fe80::a00:27ff:fe24:8657/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
[root@ubuntu]:[~][tty:0]#
[root@ubuntu]:[~][tty:0]# unshare -n --fork /bin/bash
(unshare) [root@ubuntu]:[~][tty:0]#
(unshare) [root@ubuntu]:[~][tty:0]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
(unshare) [root@ubuntu]:[~][tty:0]#
(unshare) [root@ubuntu]:[~][tty:0]#

The machine actually has two interfaces, lo and enp0s3, with IP addresses assigned. But once unshare places bash into its own network namespace, the network is isolated: only a loopback interface is visible, and it is still down (DOWN).
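The isolation can also be observed directly: each network namespace has its own identifier under /proc/<pid>/ns/net. A small sketch (requires root; the inode number shown in the comment is illustrative):

```shell
# Compare namespace identifiers: different inodes mean different namespaces.
readlink /proc/$$/ns/net               # e.g. net:[4026531840] on the host
unshare -n readlink /proc/self/ns/net  # a fresh namespace, different inode
```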

Enabling networking in the container

Bringing up the lo interface

Inside a container with its own network namespace there is exactly one interface, the loopback device lo. Let's bring it up and try to ping the local loopback address.

[root@ubuntu]:[~][tty:0]# unshare -m -p -i -u -n --fork /bin/bash
(unshare) [root@ubuntu]:[~][tty:0]# ip a
1: lo: <LOOPBACK> mtu 65536 qdisc noop state DOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
(unshare) [root@ubuntu]:[~][tty:0]#
(unshare) [root@ubuntu]:[~][tty:0]# ping 127.0.0.1
ping: connect: Network is unreachable
(unshare) [root@ubuntu]:[~][tty:0]#
(unshare) [root@ubuntu]:[~][tty:0]#

By default, the lo interface is down, so pinging the loopback address fails; to make the ping succeed, the interface first has to be brought up.

(unshare) [root@ubuntu]:[~][tty:0]# ip link set lo up
(unshare) [root@ubuntu]:[~][tty:0]#
(unshare) [root@ubuntu]:[~][tty:0]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel_lo
       valid_lft forever preferred_lft forever
(unshare) [root@ubuntu]:[~][tty:0]#
(unshare) [root@ubuntu]:[~][tty:0]# ping 127.0.0.1 -c 4
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.016 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.024 ms
64 bytes from 127.0.0.1: icmp_seq=3 ttl=64 time=0.025 ms
64 bytes from 127.0.0.1: icmp_seq=4 ttl=64 time=0.024 ms

--- 127.0.0.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3052ms
rtt min/avg/max/mdev = 0.016/0.022/0.0

ip link set lo up brings the interface up. Afterwards its state changes from DOWN to UNKNOWN, its IP address is visible, and ping 127.0.0.1 succeeds.

Adding a veth virtual interface

What is a veth device

veth is a virtual network device in Linux (Virtual Ethernet Device). veth devices always come in pairs: when the two ends of a pair connect two devices, a packet transmitted on one end is immediately received on the other. The main use of veth is as a tunnel between network namespaces.
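As an aside, the same tunnel can be sketched with named network namespaces via ip netns, an alternative to the unshare approach used in this article (requires root; the names ns1/ns2, v1/v2 and the 10.0.4.0/24 addresses are made up for this example):

```shell
# Create two named namespaces and connect them with a veth pair.
ip netns add ns1
ip netns add ns2
ip link add v1 type veth peer name v2
ip link set v1 netns ns1
ip link set v2 netns ns2
ip -n ns1 addr add 10.0.4.1/24 dev v1
ip -n ns2 addr add 10.0.4.2/24 dev v2
ip -n ns1 link set v1 up
ip -n ns2 link set v2 up
ip netns exec ns1 ping -c 1 10.0.4.2   # one end reaches the other
```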

A veth pair is created with:

# ip link add <p1-name> type veth peer name <p2-name>

where p1-name and p2-name are the names of the two ends of the veth pair.

Create a veth pair on the host:

[root@ubuntu]:[~][tty:1]# ip link add veth0 type veth peer name veth1
[root@ubuntu]:[~][tty:1]#
[root@ubuntu]:[~][tty:1]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:25:86:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.141/24 metric 100 brd 192.168.1.255 scope global dynamic enp0s3
       valid_lft 7178sec preferred_lft 7178sec
    inet6 fe80::a00:27ff:fe24:8657/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
3: veth1@veth0: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ae:dd:f7:9d:9b:f8 brd ff:ff:ff:ff:ff:ff
4: veth0@veth1: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 66:77:10:54:81:0e brd ff:ff:ff:ff:ff:ff
[root@ubuntu]:[~][tty:1]#

The command ip link add veth0 type veth peer name veth1 created a veth pair named veth0 and veth1. Listing the interfaces again shows both veth0 and veth1, although both start out down (DOWN).

The two interfaces now exist on the host, but not in the container, so one end of the veth pair has to be moved into the container. First, we need the real pid of the target container's process (relevant because it uses a pid namespace).

Moving the virtual interface into the container

Use the following command to find the pid of the child process started by unshare:

[root@ubuntu]:[~][tty:2]# ps axjf | head -n 1 ;ps axjf | grep unshare -A 1 | grep -v grep
   PPID     PID    PGID     SID TTY        TPGID STAT   UID   TIME COMMAND
   1083    1128    1128    1071 pts/0       1129 S        0   0:00  |                   \_ unshare -m -p -i -u -n --fork /bin/bash
   1128    1129    1129    1071 pts/0       1129 S+       0   0:00  |                       \_ /bin/bash
--
[root@ubuntu]:[~][tty:2]#

As shown, the bash process created by unshare has pid 1129.
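The same pid can also be captured non-interactively; a small convenience sketch (assuming exactly one unshare instance is running):

```shell
# Pid of the bash child forked by unshare; its network namespace
# is the one the veth end will be moved into.
CPID=$(pgrep -P "$(pgrep -x unshare)")
echo "$CPID"
```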

The following command moves veth0 into the network namespace of pid 1129, i.e. into the container.

[root@ubuntu]:[~][tty:2]# ip link set veth0 netns 1129
[root@ubuntu]:[~][tty:2]#

Listing the host's interfaces again shows that veth0 is gone; only veth1 remains.

[root@ubuntu]:[~][tty:2]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host noprefixroute
       valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:25:86:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.141/24 metric 100 brd 192.168.1.255 scope global dynamic enp0s3
       valid_lft 7178sec preferred_lft 7178sec
    inet6 fe80::a00:27ff:fe24:8657/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
3: veth1@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ae:dd:f7:9d:9b:f8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
[root@ubuntu]:[~][tty:2]#

Back in the container, list the interfaces.

(unshare) [root@ubuntu]:[~][tty:0]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host proto kernel_lo
       valid_lft forever preferred_lft forever
4: veth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 66:77:10:54:81:0e brd ff:ff:ff:ff:ff:ff link-netnsid 0
(unshare) [root@ubuntu]:[~][tty:0]#

veth0 has been moved into the container. However, neither interface has an IP address yet, so they cannot communicate; once we assign addresses, they will be able to.

Assigning IP addresses to the virtual interfaces

The plan is as follows:

  • Host veth1: 10.0.3.1/24
  • Container veth0: 10.0.3.3/24

On the host, assign veth1's IP address.

[root@ubuntu]:[~][tty:2]# ip addr add 10.0.3.1/24 dev veth1
[root@ubuntu]:[~][tty:2]#

Check the interface.

[root@ubuntu]:[~][tty:2]# ip a show veth1
3: veth1@if4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ae:dd:f7:9d:9b:f8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.1/24 scope global veth1
       valid_lft forever preferred_lft forever
[root@ubuntu]:[~][tty:2]#

The interface is still down at this point and needs to be brought up.

[root@ubuntu]:[~][tty:2]# ip link set veth1 up
[root@ubuntu]:[~][tty:2]# ip a show veth1
3: veth1@if4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether ae:dd:f7:9d:9b:f8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.1/24 scope global veth1
       valid_lft forever preferred_lft forever
[root@ubuntu]:[~][tty:2]#

The state changed from DOWN to LOWERLAYERDOWN, because veth0 in the container has not been brought up yet.

Next, do the same for veth0 inside the container.

In the container, assign veth0's IP address.

(unshare) [root@ubuntu]:[~][tty:0]# ip addr add 10.0.3.3/24 dev veth0
(unshare) [root@ubuntu]:[~][tty:0]#

Check the interface.

(unshare) [root@ubuntu]:[~][tty:0]# ip a show veth0
4: veth0@if3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 66:77:10:54:81:0e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.3/24 scope global veth0
       valid_lft forever preferred_lft forever
(unshare) [root@ubuntu]:[~][tty:0]#

Again, bring the interface up.

(unshare) [root@ubuntu]:[~][tty:0]# ip link set veth0 up
(unshare) [root@ubuntu]:[~][tty:0]#
(unshare) [root@ubuntu]:[~][tty:0]# ip a show veth0
4: veth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 66:77:10:54:81:0e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.3/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6477:10ff:fe54:810e/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
(unshare) [root@ubuntu]:[~][tty:0]#

With veth0 brought up, its state is now UP. Checking veth1 on the host again, its state should also be UP.

[root@ubuntu]:[~][tty:2]# ip a show veth1
3: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:dd:f7:9d:9b:f8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.1/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::acdd:f7ff:fe9d:9bf8/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
[root@ubuntu]:[~][tty:2]#

Verifying connectivity

With both veth interfaces assigned IP addresses and brought up (state UP), the link is ready; try pinging each side from the other.

From the host, ping the container's veth0 address:

[root@ubuntu]:[~][tty:2]# ip a show veth1
3: veth1@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ae:dd:f7:9d:9b:f8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.1/24 scope global veth1
       valid_lft forever preferred_lft forever
    inet6 fe80::acdd:f7ff:fe9d:9bf8/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
[root@ubuntu]:[~][tty:2]#
[root@ubuntu]:[~][tty:2]# ping -c 4 10.0.3.3
PING 10.0.3.3 (10.0.3.3) 56(84) bytes of data.
64 bytes from 10.0.3.3: icmp_seq=1 ttl=64 time=0.030 ms
64 bytes from 10.0.3.3: icmp_seq=2 ttl=64 time=0.028 ms
64 bytes from 10.0.3.3: icmp_seq=3 ttl=64 time=0.028 ms
64 bytes from 10.0.3.3: icmp_seq=4 ttl=64 time=0.032 ms

--- 10.0.3.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3057ms
rtt min/avg/max/mdev = 0.028/0.029/0.032/0.001 ms
[root@ubuntu]:[~][tty:2]#

From the container, ping the host's veth1 address:

(unshare) [root@ubuntu]:[~][tty:0]# ip a show veth0
4: veth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 66:77:10:54:81:0e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.3.3/24 scope global veth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6477:10ff:fe54:810e/64 scope link proto kernel_ll
       valid_lft forever preferred_lft forever
(unshare) [root@ubuntu]:[~][tty:0]#
(unshare) [root@ubuntu]:[~][tty:0]# ping -c 4 10.0.3.1
PING 10.0.3.1 (10.0.3.1) 56(84) bytes of data.
64 bytes from 10.0.3.1: icmp_seq=1 ttl=64 time=0.019 ms
64 bytes from 10.0.3.1: icmp_seq=2 ttl=64 time=0.029 ms
64 bytes from 10.0.3.1: icmp_seq=3 ttl=64 time=0.032 ms
64 bytes from 10.0.3.1: icmp_seq=4 ttl=64 time=0.033 ms

--- 10.0.3.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3099ms
rtt min/avg/max/mdev = 0.019/0.028/0.033/0.005 ms
(unshare) [root@ubuntu]:[~][tty:0]#

If both sides can ping each other as above, the veth pair has been created and configured correctly.

Connecting the container to the Internet

Simple Internet access via veth

The host and the container can now talk to each other, but the container still has no Internet access; try ping 8.8.8.8, for example:

(unshare) [root@ubuntu]:[~][tty:0]# ping 8.8.8.8
ping: connect: Network is unreachable
(unshare) [root@ubuntu]:[~][tty:0]#

The ping fails, while the same ping works fine on the host:

[root@ubuntu]:[~][tty:1]# ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=55 time=34.3 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=55 time=33.8 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=55 time=45.7 ms
64 bytes from 8.8.8.8: icmp_seq=4 ttl=55 time=33.8 ms

--- 8.8.8.8 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 33.774/36.884/45.674/5.078 ms
[root@ubuntu]:[~][tty:1]#

Since veth1 on the host can reach the Internet, the container could get online by first sending its traffic to veth1, letting the host forward it out, and returning incoming replies to veth0 in the container.

With that in mind, let's run the following experiment.

First, look at the container's routing table.

(unshare) [root@ubuntu]:[~][tty:0]# ip route
10.0.3.0/24 dev veth0 proto kernel scope link src 10.0.3.3
(unshare) [root@ubuntu]:[~][tty:0]#

There is only this one route for the internal network, which is why ping 8.8.8.8 reports that the network is unreachable.

Add a default route pointing at the host's veth1, i.e. 10.0.3.1:

(unshare) [root@ubuntu]:[~][tty:0]# ip route add default via 10.0.3.1 dev veth0
(unshare) [root@ubuntu]:[~][tty:0]#
(unshare) [root@ubuntu]:[~][tty:0]# ip route
default via 10.0.3.1 dev veth0
10.0.3.0/24 dev veth0 proto kernel scope link src 10.0.3.3
(unshare) [root@ubuntu]:[~][tty:0]#

With the route in place, try pinging 10.0.3.1 and 8.8.8.8 from the container:

(unshare) [root@ubuntu]:[~][tty:0]# ping 10.0.3.1 -c 4
PING 10.0.3.1 (10.0.3.1) 56(84) bytes of data.
64 bytes from 10.0.3.1: icmp_seq=1 ttl=64 time=0.018 ms
64 bytes from 10.0.3.1: icmp_seq=2 ttl=64 time=0.030 ms
64 bytes from 10.0.3.1: icmp_seq=3 ttl=64 time=0.053 ms
64 bytes from 10.0.3.1: icmp_seq=4 ttl=64 time=0.033 ms

--- 10.0.3.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3079ms
rtt min/avg/max/mdev = 0.018/0.033/0.053/0.012 ms
(unshare) [root@ubuntu]:[~][tty:0]# ping 8.8.8.8 -c 3
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2082ms

(unshare) [root@ubuntu]:[~][tty:0]#

This time ping 8.8.8.8 no longer reports an unreachable network; echo requests are being sent, but no replies come back.

Capturing packets on veth1 on the host at this point shows only outgoing packets and no replies:

[root@ubuntu]:[~][tty:1]# tcpdump -i veth1 -S -s0
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on veth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
07:50:05.432638 IP 10.0.3.3 > dns.google: ICMP echo request, id 33, seq 1, length 64
07:50:06.445881 IP 10.0.3.3 > dns.google: ICMP echo request, id 33, seq 2, length 64
07:50:07.469864 IP 10.0.3.3 > dns.google: ICMP echo request, id 33, seq 3, length 64

This is because the host has not enabled forwarding yet; it needs to be turned on.

On the host, first allow the kernel to forward IP traffic:

[root@ubuntu]:[~][tty:2]# sysctl net.ipv4.conf.all.forwarding=1
net.ipv4.conf.all.forwarding = 1
[root@ubuntu]:[~][tty:2]#

Next, define a NAT rule that rewrites packets coming from the container to use the host's own address on the way out, and translates replies back and forwards them to the container:

[root@ubuntu]:[~][tty:2]# iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -j MASQUERADE
[root@ubuntu]:[~][tty:2]#

This rule masquerades (source-NATs) outgoing traffic whose source address falls in 10.0.3.0/24.
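For reference, on hosts that use nftables instead of iptables, an equivalent rule might be expressed as follows (a sketch assuming no nat table exists yet; priority 100 corresponds to the standard srcnat hook priority):

```shell
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority 100 ; }'
nft add rule ip nat postrouting ip saddr 10.0.3.0/24 masquerade
```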

Once added, the rule can be viewed with the following command.

[root@ubuntu]:[~][tty:2]# iptables -t nat -L -n
Chain PREROUTING (policy ACCEPT)
target     prot opt source               destination

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination
MASQUERADE  0    --  10.0.3.0/24          0.0.0.0/0
[root@ubuntu]:[~][tty:2]#

Switch back to the container and try ping 8.8.8.8 again:

(unshare) [root@ubuntu]:[~][tty:0]# ping 8.8.8.8 -c 3
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=54 time=34.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=54 time=33.6 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=54 time=33.7 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 33.565/33.763/34.075/0.222 ms
(unshare) [root@ubuntu]:[~][tty:0]#

Capturing on veth1 on the host now shows both requests and replies:

[root@ubuntu]:[~][tty:1]# tcpdump -i veth1 -S -s0
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on veth1, link-type EN10MB (Ethernet), snapshot length 262144 bytes
08:14:59.339542 IP 10.0.3.3 > dns.google: ICMP echo request, id 34, seq 1, length 64
08:14:59.373604 IP dns.google > 10.0.3.3: ICMP echo reply, id 34, seq 1, length 64
08:15:00.340879 IP 10.0.3.3 > dns.google: ICMP echo request, id 34, seq 2, length 64
08:15:00.374420 IP dns.google > 10.0.3.3: ICMP echo reply, id 34, seq 2, length 64
08:15:01.341874 IP 10.0.3.3 > dns.google: ICMP echo request, id 34, seq 3, length 64
08:15:01.375502 IP dns.google > 10.0.3.3: ICMP echo reply, id 34, seq 3, length 64

With that, the container is online.
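The host-side steps above can be collected into a single sketch (run as root; the interface names, addresses, and the container pid are the ones used in this article and would differ on another machine):

```shell
#!/bin/sh
# Host side: create the pair, move one end into the container's netns,
# configure the host end, and enable forwarding + NAT.
CPID=1129                                   # pid of the container's bash
ip link add veth0 type veth peer name veth1
ip link set veth0 netns "$CPID"
ip addr add 10.0.3.1/24 dev veth1
ip link set veth1 up
sysctl -w net.ipv4.conf.all.forwarding=1
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -j MASQUERADE

# Container side (run inside the unshare'd bash):
#   ip link set lo up
#   ip addr add 10.0.3.3/24 dev veth0
#   ip link set veth0 up
#   ip route add default via 10.0.3.1 dev veth0
```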

Summary

veth is a virtual network device in Linux that always comes in pairs; think of it as a network cable. Plug the two ends into two containers and they can talk to each other. Alternatively, keep one end on the host and put the other into a container, set up NAT forwarding on the host, and the container can communicate with external networks.

veth has its downsides, too: it can only connect two endpoints. When a network of many devices is needed, veth alone becomes very unwieldy. Fortunately Linux provides the virtual bridge (bridge), but that is a story for later.

I had wanted to continue with bridge, but it is far too late tonight; that will have to wait for another time.





https://wangli2025.github.io/2024/11/14/container-network.html

All articles on this site are original and licensed under CC BY-NC-ND 4.0. Please credit the source when reposting; commercial use is not permitted.
