Computer Networks (1)

Author: Ethan

Introduction

The hardest thing to learn is what you have already learned. These notes are mainly based on EECS 489: Computer Networks.

Mininet

Mininet is a network emulation tool that lets you create custom topologies and test code on a single machine.

How It Works

Three mechanisms underpin it:

  1. Network namespaces

    • Each host corresponds to its own network namespace.
    • Each namespace has its own network stack: routing table, ARP cache, interfaces, etc.
  2. Veth pairs: traffic between namespaces travels over a veth pair.

    • Packets that enter veth0 come out of veth1, and vice versa.
    • Mininet uses this mechanism to connect each virtual host to a switch.
  3. Open vSwitch (OVS) / Linux Bridge: Mininet supports two kinds of switch.

    • OVS: the more common choice; it speaks OpenFlow, allows precise flow-table control, and is convenient for SDN research.
    • Linux Bridge: more basic; plain layer-2 forwarding only. When a topology starts, Mininet creates one OVS/Linux Bridge instance per switch and attaches each host's veth interface to a switch port.

OVS

OVS is a virtual switch that runs partly in the kernel and partly in user space. It supports standard protocols and is widely used in cloud platforms, virtualization, and SDN environments. It provides:

  • High-performance forwarding
  • Flow-table rule control
  • Easy extensibility

Flow Tables

A flow table is the rule table in OVS that defines how traffic is matched and forwarded. Each row is a flow entry, consisting of:

  1. Match fields (IP, port, MAC, VLAN, etc.)
  2. Actions (output to a port, rewrite headers, drop, etc.)
  3. Counters (traffic statistics)
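
As a sketch, flow-table lookup can be modeled in a few lines of Python. The class and field names here are invented for illustration; a real OVS table also has priorities, timeouts, and many more match fields:

```python
# Toy model of a flow table: each entry has match fields, an action,
# and a per-flow packet counter, mirroring the three parts listed above.
class FlowEntry:
    def __init__(self, match, action):
        self.match = match      # dict of field -> required value
        self.action = action    # what to do with a matching packet
        self.packets = 0        # counter (traffic statistics)

class FlowTable:
    def __init__(self):
        self.entries = []

    def add_flow(self, match, action):
        self.entries.append(FlowEntry(match, action))

    def lookup(self, packet):
        # the first entry whose match fields are all satisfied wins
        for e in self.entries:
            if all(packet.get(f) == v for f, v in e.match.items()):
                e.packets += 1
                return e.action
        return "send to controller"   # table miss

table = FlowTable()
table.add_flow({"dst_ip": "10.0.0.2", "tcp_dport": 80}, "output:2")
table.add_flow({"dst_ip": "10.0.0.1"}, "output:1")

print(table.lookup({"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
                    "tcp_dport": 80}))          # output:2
print(table.lookup({"dst_ip": "10.0.0.9"}))     # send to controller
```

A packet that matches no entry produces a table miss, which in OpenFlow is typically reported to the controller.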

The OpenFlow Protocol

A southbound interface protocol used for communication between an SDN controller and switches.

  • Controller: holds the centralized control logic and generates policy. It issues commands to switches over the protocol ("when you see traffic like this, forward it like that").
  • Switch: executes the flow entries pushed down by the controller.

SDN (Software-Defined Networking)

An architecture that separates the network's control plane from its data plane.

  • Control plane: centralized in a controller (ONOS, Ryu)

  • Data plane: the switches, which perform the actual forwarding

  • Traffic can be described by match fields, for example:

    match:
    - src IP = 10.0.0.1
    - dst IP = 10.0.0.2
    - protocol = TCP
    - dst port = 80
    

Switch Instances

A network process or kernel module that implements layer-2 (or multilayer) forwarding logic.

An OVS switch instance consists of:

  • Kernel module (datapath): fast-path packet forwarding
  • User-space daemon (ovs-vswitchd): flow-table updates, management, and control interfaces
  • Database server (ovsdb-server): stores the switch configuration

Basic Usage

$ sudo mn # start the minimal topology
*** No default OpenFlow controller found for default switch!
*** Falling back to OVS Bridge
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1)
*** Configuring hosts
h1 h2
*** Starting controller

*** Starting 1 switches
s1 ...
*** Starting CLI:
mininet>

# list nodes
mininet> nodes
available nodes are:
h1 h2 s1

# show links
mininet> net
h1 h1-eth0:s1-eth1
h2 h2-eth0:s1-eth2
s1 lo:  s1-eth1:h1-eth0 s1-eth2:h2-eth0

# dump information about all nodes
mininet> dump
<Host h1: h1-eth0:10.0.0.1 pid=99286>
<Host h2: h2-eth0:10.0.0.2 pid=99288>
<OVSBridge s1: lo:127.0.0.1,s1-eth1:None,s1-eth2:None pid=99293>

# if the first string typed is a host/switch/controller name, the command runs on that node
mininet> h1 ifconfig -a
# this interface is not visible from the root system, because it lives in the host process's namespace
# the switch runs in the root namespace, so commands on it behave the same as in a regular terminal
h1-eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.0.1  netmask 255.0.0.0  broadcast 10.255.255.255
        inet6 fe80::ec67:bbff:fe64:b561  prefixlen 64  scopeid 0x20<link>
        ether ee:67:bb:64:b5:61  txqueuelen 1000  (Ethernet)
        RX packets 29  bytes 3144 (3.1 KB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11  bytes 866 (866.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# only the network is isolated
mininet> h1 ps -a
    PID TTY          TIME CMD
    913 pts/1    00:00:00 bash
   2172 pts/2    00:00:00 sh
   2177 pts/2    00:00:00 sh
   2181 pts/2    00:00:25 node
    # ......

# verify connectivity
# if a string in the command matches a node name, it is replaced by that node's IP address
# h1 ARPs for h2's MAC by broadcasting an ARP request
# s1 has no flow entry installed and doesn't know what to do, so it sends a packet_in message to the controller
# the controller answers with a packet_out, telling s1 to flood the broadcast out its other ports
# h2 sees the ARP and replies; the reply reaches h1 (again via the controller), which also pushes a flow entry (if dst IP is 10.0.0.2, output on a given port)
# h1 now knows the MAC and sends the ICMP echo request to h2; the first ICMP packet again goes via the controller and triggers another flow entry push
mininet> h1 ping -c 1 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=1.89 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.888/1.888/1.888/0.000 ms

# ping again
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.853 ms

--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.853/0.853/0.853/0.000 ms
# the time dropped: the switch already had a flow entry covering the ICMP traffic, so no control traffic was generated and the packets went straight through the switch

# start a python web server on h1
mininet> h1 python -m http.server 80 &
mininet> h2 wget -O - h1
--2025-08-30 09:43:53--  http://10.0.0.1/
Connecting to 10.0.0.1:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 653 [text/html]
Saving to: ‘STDOUT’

-                     0%[                    ]       0  --.-KB/s               <!DOCTYPE HTML>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href=".git/">.git/</a></li>
<li><a href=".gitignore">.gitignore</a></li>
<li><a href=".python-version">.python-version</a></li>
<li><a href="assignment1_topology.png">assignment1_topology.png</a></li>
<li><a href="cpp/">cpp/</a></li>
<li><a href="measurement/">measurement/</a></li>
<li><a href="README.md">README.md</a></li>
<li><a href="run-mn">run-mn</a></li>
<li><a href="util/">util/</a></li>
<li><a href="VM_Setup_Guide.pdf">VM_Setup_Guide.pdf</a></li>
</ul>
<hr>
</body>
</html>
-                   100%[===================>]     653  --.-KB/s    in 0s

2025-08-30 09:43:53 (171 MB/s) - written to stdout [653/653]

mininet> h1 kill %python
Serving HTTP on 0.0.0.0 port 80 (http://0.0.0.0:80/) ...
10.0.0.2 - - [30/Aug/2025 09:43:53] "GET / HTTP/1.1" 200 -

# exit
mininet> exit
*** Stopping 0 controllers

*** Stopping 2 links
..
*** Stopping 1 switches
s1
*** Stopping 2 hosts
h1 h2
*** Done
completed in 2246.686 seconds
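
The reactive pattern described in the ping example (packet_in on a table miss, packet_out to flood, then a pushed flow entry so later packets take the fast path) can be sketched as a toy simulation. All class and field names here are invented; a real controller speaks OpenFlow to the switch:

```python
# Toy reactive learning switch: table misses go to the controller, which
# learns source locations, floods unknown destinations, and installs
# flow entries so subsequent packets bypass the controller entirely.
class Switch:
    def __init__(self, controller):
        self.flow_table = {}        # dst -> out_port
        self.controller = controller
        self.packet_ins = 0

    def receive(self, pkt, in_port):
        if pkt["dst"] in self.flow_table:          # fast path: flow entry hit
            return f"output:{self.flow_table[pkt['dst']]}"
        self.packet_ins += 1                       # table miss -> packet_in
        return self.controller.packet_in(self, pkt, in_port)

class Controller:
    def __init__(self):
        self.mac_table = {}                        # learned host -> port

    def packet_in(self, switch, pkt, in_port):
        self.mac_table[pkt["src"]] = in_port       # learn where src lives
        if pkt["dst"] in self.mac_table:
            out = self.mac_table[pkt["dst"]]
            switch.flow_table[pkt["dst"]] = out    # push a flow entry
            return f"output:{out}"
        return "flood"                             # unknown dst: packet_out flood

ctrl = Controller()
s1 = Switch(ctrl)
print(s1.receive({"src": "h1", "dst": "h2"}, in_port=1))  # flood (like the ARP request)
print(s1.receive({"src": "h2", "dst": "h1"}, in_port=2))  # output:1, flow installed
print(s1.receive({"src": "h1", "dst": "h2"}, in_port=1))  # output:2, flow installed
print(s1.receive({"src": "h1", "dst": "h2"}, in_port=1))  # fast path, no packet_in
print(s1.packet_ins)                                      # 3
```

After the third packet both directions are installed, which mirrors why the second ping in the transcript was faster than the first.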

Advanced Startup Options

Regression Tests

# Regression test
# creates a minimal topology, starts an OpenFlow controller, runs an all-pairs ping test, then tears down the topology and controller
$ sudo -E mn --test pingpair
*** No default OpenFlow controller found for default switch!
*** Falling back to OVS Bridge
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1)
*** Configuring hosts
h1 h2
*** Starting controller

*** Starting 1 switches
s1 ...
*** Waiting for switches to connect
s1
h1 -> h2
h2 -> h1
*** Results: 0% dropped (2/2 received)
*** Stopping 0 controllers

*** Stopping 2 links
..
*** Stopping 1 switches
s1
*** Stopping 2 hosts
h1 h2
*** Done
completed in 0.347 seconds

# another test
# creates the same Mininet, runs an iperf server on one host and an iperf client on the other, and parses the achieved bandwidth
$ sudo -E mn --test iperf
*** No default OpenFlow controller found for default switch!
*** Falling back to OVS Bridge
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1)
*** Configuring hosts
h1 h2
*** Starting controller

*** Starting 1 switches
s1 ...
*** Waiting for switches to connect
s1
*** Iperf: testing TCP bandwidth between h1 and h2
*** Results: ['65.6 Gbits/sec', '65.5 Gbits/sec']
*** Stopping 0 controllers

*** Stopping 2 links
..
*** Stopping 1 switches
s1
*** Stopping 2 hosts
h1 h2
*** Done
completed in 6.367 seconds

Changing the Topology

Change the topology with --topo, passing its creation parameters.

# one switch, 3 hosts
$ sudo mn --test pingall --topo single,3
*** No default OpenFlow controller found for default switch!
*** Falling back to OVS Bridge
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3
*** Adding switches:
s1
*** Adding links:
(h1, s1) (h2, s1) (h3, s1)
*** Configuring hosts
h1 h2 h3
*** Starting controller

*** Starting 1 switches
s1 ...
*** Waiting for switches to connect
s1
*** Ping: testing ping reachability
h1 -> h2 h3
h2 -> h1 h3
h3 -> h1 h2
*** Results: 0% dropped (6/6 received)
*** Stopping 0 controllers

*** Stopping 3 links
...
*** Stopping 1 switches
s1
*** Stopping 3 hosts
h1 h2 h3
*** Done
completed in 0.374 seconds

# linear topology (one host per switch, switches connected in a line)
$ sudo mn --test pingall --topo linear,4
*** No default OpenFlow controller found for default switch!
*** Falling back to OVS Bridge
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 h3 h4
*** Adding switches:
s1 s2 s3 s4
*** Adding links:
(h1, s1) (h2, s2) (h3, s3) (h4, s4) (s2, s1) (s3, s2) (s4, s3)
*** Configuring hosts
h1 h2 h3 h4
*** Starting controller

*** Starting 4 switches
s1 s2 s3 s4 ...
*** Waiting for switches to connect
s1 s2 s3 s4
*** Ping: testing ping reachability
h1 -> h2 h3 h4
h2 -> h1 h3 h4
h3 -> h1 h2 h4
h4 -> h1 h2 h3
*** Results: 0% dropped (12/12 received)
*** Stopping 0 controllers

*** Stopping 7 links
.......
*** Stopping 4 switches
s1 s2 s3 s4
*** Stopping 4 hosts
h1 h2 h3 h4
*** Done
completed in 0.771 seconds

Link Variations

Set link parameters.

# each link gets 10 ms of delay; the RTT should be about 40 ms, since an ICMP request traverses two links (one to the switch, one to the destination) and the reply traverses the same two back
$ sudo -E mn --link tc,bw=10,delay=10ms
*** No default OpenFlow controller found for default switch!
*** Falling back to OVS Bridge
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2
*** Adding switches:
s1
*** Adding links:
(10.00Mbit 10ms delay) (10.00Mbit 10ms delay) (h1, s1) (10.00Mbit 10ms delay) (10.00Mbit 10ms delay) (h2, s1)
*** Configuring hosts
h1 h2
*** Starting controller

*** Starting 1 switches
s1 ...(10.00Mbit 10ms delay) (10.00Mbit 10ms delay)
*** Starting CLI:
mininet> iperf
*** Iperf: testing TCP bandwidth between h1 and h2
*** Results: ['9.50 Mbits/sec', '9.44 Mbits/sec']
mininet> h1 ping -c10 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=40.4 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=40.2 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=40.2 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=40.2 ms
64 bytes from 10.0.0.2: icmp_seq=5 ttl=64 time=40.3 ms
64 bytes from 10.0.0.2: icmp_seq=6 ttl=64 time=40.5 ms
64 bytes from 10.0.0.2: icmp_seq=7 ttl=64 time=43.2 ms
64 bytes from 10.0.0.2: icmp_seq=8 ttl=64 time=40.3 ms
64 bytes from 10.0.0.2: icmp_seq=9 ttl=64 time=40.4 ms
64 bytes from 10.0.0.2: icmp_seq=10 ttl=64 time=40.3 ms

--- 10.0.0.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 9966ms
rtt min/avg/max/mdev = 40.201/40.595/43.214/0.876 ms

Adjustable Verbosity

The default verbosity level is info, which prints what Mininet does during startup and teardown. Compare that with the full debug output enabled via the -v flag.

$ sudo -E mn -v debug
*** errRun: ['which', 'controller']
  1*** errRun: ['which', 'ovs-controller']
  1*** errRun: ['which', 'test-controller']
  1*** errRun: ['which', 'ovs-testcontroller']
  1*** No default OpenFlow controller found for default switch!
*** Falling back to OVS Bridge
*** errRun: ['grep', '-c', 'processor', '/proc/cpuinfo']
16
  0*** Setting resource limits
*** Creating network
*** Adding controller
*** Adding hosts:
*** errRun: ['which', 'mnexec']
/usr/bin/mnexec
  0*** errRun: ['which', 'ifconfig']
/usr/sbin/ifconfig
  0_popen ['mnexec', '-cdn', 'env', 'PS1=\x7f', 'bash', '--norc', '--noediting', '-is', 'mininet:h1'] 125449*** h1 : ('unset HISTFILE; stty -echo; set +m',)
unset HISTFILE; stty -echo; set +m
h1 _popen ['mnexec', '-cdn', 'env', 'PS1=\x7f', 'bash', '--norc', '--noediting', '-is', 'mininet:h2'] 125451*** h2 : ('unset HISTFILE; stty -echo; set +m',)
unset HISTFILE; stty -echo; set +m
h2
*** Adding switches:
*** errRun: ['which', 'ovs-vsctl']
/usr/bin/ovs-vsctl
  0*** errRun: ['ovs-vsctl', '-t', '1', 'show']
36705980-50e7-4b6d-866d-7539164c5cc2
    ovs_version: "2.17.9"
  0*** errRun: ['ovs-vsctl', '--version']
ovs-vsctl (Open vSwitch) 2.17.9
DB Schema 8.3.0
  0_popen ['mnexec', '-cd', 'env', 'PS1=\x7f', 'bash', '--norc', '--noediting', '-is', 'mininet:s1'] 125456*** s1 : ('unset HISTFILE; stty -echo; set +m',)
unset HISTFILE; stty -echo; set +m

added intf lo (0) to node s1
*** s1 : ('ifconfig', 'lo', 'up')
s1
*** Adding links:
*** h1 : ('ip link add name h1-eth0 address 8a:be:90:96:d1:de type veth peer name s1-eth1 address 92:ee:f7:2f:b6:25 netns 125456',)

added intf h1-eth0 (0) to node h1
moving h1-eth0 into namespace for h1
*** h1 : ('ifconfig', 'h1-eth0', 'up')

added intf s1-eth1 (1) to node s1
*** s1 : ('ifconfig', 's1-eth1', 'up')
(h1, s1) *** h2 : ('ip link add name h2-eth0 address c2:7a:d2:9e:42:74 type veth peer name s1-eth2 address 96:27:d7:64:99:ad netns 125456',)

added intf h2-eth0 (0) to node h2
moving h2-eth0 into namespace for h2
*** h2 : ('ifconfig', 'h2-eth0', 'up')

added intf s1-eth2 (2) to node s1
*** s1 : ('ifconfig', 's1-eth2', 'up')
(h2, s1)
*** Configuring hosts
h1 *** h1 : ('ifconfig', 'h1-eth0', '10.0.0.1/8', 'up')
*** h1 : ('ifconfig lo up',)
h2 *** h2 : ('ifconfig', 'h2-eth0', '10.0.0.2/8', 'up')
*** h2 : ('ifconfig lo up',)

*** Starting controller

*** Starting 1 switches
s1 ...*** errRun: ovs-vsctl -- --id=@s1-listen create Controller target=\"ptcp:6654\" max_backoff=1000 -- --if-exists del-br s1 -- add-br s1 -- set bridge s1 controller=[@s1-listen] other_config:datapath-id=0000000000000001 fail_mode=standalone other-config:disable-in-band=true other-config:dp-desc=s1 -- add-port s1 s1-eth1 -- set Interface s1-eth1 ofport_request=1 -- add-port s1 s1-eth2 -- set Interface s1-eth2 ofport_request=2
e7c1f8a1-7f22-4981-a3e6-7b1cfdc4b925
  0
*** Starting CLI:
*** errRun: ['stty', 'echo', 'sane', 'intr', '^C']

With output, only a minimal amount is printed:

$ sudo -E mn -v output
mininet>

CLI Commands

If the first word on the Mininet command line is py, the rest of the line is executed as Python. Every host, switch, and controller has an associated Node object.

mininet> py 'hello ' + 'world'
hello world

Local variables are accessible:

mininet> py locals()
{'net': <mininet.net.Mininet object at 0x776d521024e0>, 'h1': <Host h1: h1-eth0:10.0.0.1 pid=126252> , 'h2': <Host h2: h2-eth0:10.0.0.2 pid=126254> , 's1': <OVSBridge s1: lo:127.0.0.1,s1-eth1:None,s1-eth2:None pid=126259> }

List a node's available methods and attributes:

mininet> py dir(s1)
['IP', 'MAC', 'OVSVersion', 'TCReapply', '__class__', '__delattr__', '__dict__', '__dir__', '__doc__', '__eq__', '__format__', '__ge__', '__getattribute__', '__getstate__', '__gt__', '__hash__', '__init__', '__init_subclass__', '__le__', '__lt__', '__module__', '__ne__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_popen', '_uuids', 'addIntf', 'argmax', 'attach', 'batch', 'batchShutdown', 'batchStartup', 'bridgeOpts', 'checkSetup', 'cleanup', 'cmd', 'cmdPrint', 'cmds', 'commands', 'config', 'configDefault', 'connected', 'connectionsTo', 'controlIntf', 'controllerUUIDs', 'datapath', 'decoder', 'defaultDpid', 'defaultIntf', 'delIntf', 'deleteIntfs', 'detach', 'dpctl', 'dpid', 'dpidLen', 'execed', 'failMode', 'fdToNode', 'inNamespace', 'inToNode', 'inband', 'intf', 'intfIsUp', 'intfList', 'intfNames', 'intfOpts', 'intfs', 'isOldOVS', 'isSetup', 'lastCmd', 'lastPid', 'linkTo', 'listenPort', 'master', 'monitor', 'mountPrivateDirs', 'name', 'nameToIntf', 'newPort', 'opts', 'outToNode', 'params', 'pexec', 'pid', 'pollOut', 'popen', 'portBase', 'ports', 'privateDirs', 'protocols', 'read', 'readbuf', 'readline', 'reconnectms', 'sendCmd', 'sendInt', 'setARP', 'setDefaultRoute', 'setHostRoute', 'setIP', 'setMAC', 'setParam', 'setup', 'shell', 'slave', 'start', 'startShell', 'stdin', 'stdout', 'stop', 'stp', 'terminate', 'unmountPrivateDirs', 'vsctl', 'waitExited', 'waitOutput', 'waitReadable', 'waiting', 'write']

Evaluate node methods:

mininet> py h1.IP()
10.0.0.1

Toggling links:

# bring down both ends of the veth pair
mininet> link s1 h1 down
# bring them back up
mininet> link s1 h1 up

Useful Command Summary

mininet> nodes       # list all current nodes in the network
mininet> dump        # show all info about the current topology
mininet> net         # show links and network interfaces
mininet> h1 ping h2  # run the "ping h2" command on h1
mininet> h1 bash     # open a shell inside h1

Note that inside a host's terminal (e.g. via h1 bash), other host names are not substituted with IP addresses.

Measuring Bandwidth

When a packet travels from A to B, the total time it takes = propagation delay + transmission delay.

  • Propagation delay (latency): the time for a signal to traverse the link.
  • Transmission delay: the time to push all of the packet's bytes onto the link (determined by the bandwidth).

Transmission Delay = Data Size / Bandwidth

Although transmission delay cannot be measured directly, we can measure the total time to send one large packet and receive a small ACK (the ACK's transmission delay is negligible).

Total Time = Transmission Delay + Forward Propagation Delay + Backward Propagation Delay

The sum of the two propagation delays is called the round-trip time (RTT).
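
The timing model above can be checked numerically. A small sketch with made-up numbers (1 MB of data over a 10 Mbit/s link with 10 ms one-way propagation delay):

```python
# Total Time = Transmission Delay + forward + backward propagation delay,
# matching the formulas above.
def transmission_delay(size_bits, bandwidth_bps):
    return size_bits / bandwidth_bps

def total_time(size_bits, bandwidth_bps, prop_delay_s):
    return transmission_delay(size_bits, bandwidth_bps) + 2 * prop_delay_s

size = 1_000_000 * 8   # 1 MB expressed in bits
bw = 10_000_000        # 10 Mbit/s
prop = 0.010           # 10 ms one-way propagation delay
rtt = 2 * prop         # RTT = sum of the two propagation delays

print(transmission_delay(size, bw))  # 0.8 s
print(total_time(size, bw, prop))    # ~0.82 s
```

Since the 0.8 s transmission delay dominates here, dividing the data size by the measured total time (minus the RTT) recovers the bandwidth.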

CMake

Projects are built with CMake. The CMakeLists.txt should contain:

  • cmake_minimum_required(VERSION 3.16): specifies the minimum CMake version
  • project(myproject): the project name; also sets the PROJECT_NAME variable

Variables are referenced with ${VAR_NAME}.

Targets:

  • add_executable(target main.cpp): builds an executable target

  • add_library(target lib.cpp): builds a library target

    • add_library(target STATIC lib.cpp): static library (the default), produces a .a
    • add_library(target SHARED lib.cpp): shared library, produces a .so

Linking:

  • target_link_libraries(target lib): tells the linker which libraries the target needs at link time
  • if lib lives in another directory (e.g. lib/), use add_subdirectory(lib) and put a CMakeLists.txt inside that directory
  • target_include_directories(target PUBLIC/PRIVATE/INTERFACE ${CMAKE_CURRENT_SOURCE_DIR}): sets the header search paths the target needs at compile time, along with a visibility scope
    • PRIVATE: used by this target only, not by its dependents
    • PUBLIC: used by both this target and its dependents
    • INTERFACE: not used by this target itself, only exposed to dependents
  • find_package(fmt) followed by target_link_libraries(${PROJECT_NAME} lib fmt::fmt): links an installed library
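
Putting the commands above together, a minimal top-level CMakeLists.txt might look like this (the project, target, and directory names are invented for illustration):

```cmake
cmake_minimum_required(VERSION 3.16)
project(myproject)

# library lives in lib/, which has its own CMakeLists.txt
# (e.g. add_library(mylib STATIC lib.cpp) plus its include dirs)
add_subdirectory(lib)

add_executable(${PROJECT_NAME} main.cpp)
target_include_directories(${PROJECT_NAME} PRIVATE ${CMAKE_CURRENT_SOURCE_DIR})

# link the local library and an installed one (fmt as an example)
find_package(fmt REQUIRED)
target_link_libraries(${PROJECT_NAME} PRIVATE mylib fmt::fmt)
```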