pktgen-dpdk: a 10GbE line-rate DDoS load-testing tool
Introduction

DPDK is a high-speed packet-processing framework initiated by Intel. It bypasses the Linux kernel, taking over the CPU, memory, and NIC queues in user space for highly parallel packet transmission and reception. pktgen-dpdk is a DPDK-accelerated counterpart to the Linux kernel's pktgen packet generator. This article shows how to use it to generate more than 100 Gbps of DDoS packet-flood load from a single server.
Installation

Installation involves: configuring hugepages for DPDK, building and loading the DPDK kernel modules and binding the NICs to them, building pktgen-dpdk, and then running pktgen-dpdk to inspect the NIC-to-CPU layout so cores can be assigned correctly.

You will need Ubuntu 18.04 or later and a DPDK-capable NIC; the DPDK website lists which NICs are supported.
Enable hugepage support in the kernel

Edit the kernel boot parameters and reserve an amount of HugePages appropriate for your physical memory. The page size can be either 2 MB or 1 GB; here we reserve 8 × 1 GB.
(Note: the DPDK kernel modules must be recompiled after every kernel update.)
vi /etc/default/grub
# set: GRUB_CMDLINE_LINUX="default_hugepagesz=1g hugepagesz=1g hugepages=8"
ls /boot/grub/ && update-grub
reboot
After rebooting, you should see something like:

tail /proc/meminfo
CmaFree:               0 kB
HugePages_Total:       8
HugePages_Free:        8
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:        67108864 kB
DirectMap4k:      305912 kB
DirectMap2M:     6938624 kB
DirectMap1G:   128974848 kB
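Depending on the distribution, the reserved 1 GB pages may also need to be exposed through a hugetlbfs mount before DPDK can allocate them. A minimal sketch (the mount point /mnt/huge is an arbitrary choice; many systems already mount /dev/hugepages automatically, in which case this step can be skipped):

```shell
# Mount hugetlbfs so DPDK can allocate the reserved 1 GB pages.
mkdir -p /mnt/huge
mount -t hugetlbfs -o pagesize=1G nodev /mnt/huge
# Optionally make the mount persistent across reboots:
echo "nodev /mnt/huge hugetlbfs pagesize=1G 0 0" >> /etc/fstab
```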
Build DPDK

Download and unpack DPDK (version 18.11.5 in this example), then build it as follows. The first two export lines should be added to your environment (e.g. ~/.bashrc), since they are needed again at run time.
export RTE_SDK=/path/to/dpdk
export RTE_TARGET=x86_64-native-linuxapp-gcc
cd ${RTE_SDK}
apt install make make-guile gcc libnuma-dev
make install T=${RTE_TARGET} -j4
modprobe uio   # igb_uio depends on the in-tree uio module
insmod ${RTE_SDK}/${RTE_TARGET}/kmod/igb_uio.ko
insmod ${RTE_SDK}/${RTE_TARGET}/kmod/rte_kni.ko
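Before going further it is worth confirming that the modules actually loaded and that the hugepage reservation is visible; a quick sanity check:

```shell
# Both DPDK modules should appear (igb_uio pulls in the in-tree uio module):
lsmod | grep -E 'igb_uio|rte_kni|^uio'
# The hugepage reservation from the boot parameters should also be visible:
grep -i huge /proc/meminfo
```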
To quickly bind the NICs to DPDK, or unbind them back to the Linux kernel, create a script dpdk_bind_ports.sh:

#!/bin/bash
ports=(0 1)  # dpdk nic number, start from 0
# NIC's interface name like eth1/enp130s0f0/eno1
nic=(enp130s0f0 enp130s0f1)
# NIC's PCI-ID
ids=(82:00.0 82:00.1)
# Linux kernel NIC driver, like i40e/ixgbe/tg3
drv="i40e"

cd ${RTE_SDK}
if [ "$1" = "-u" ]; then
    for i in ${ports[@]}; do
        ./usertools/dpdk-devbind.py -u ${ids[$i]}
        ./usertools/dpdk-devbind.py -b $drv ${ids[$i]}
    done
else
    insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko 2>/dev/null
    insmod ./x86_64-native-linuxapp-gcc/kmod/rte_kni.ko 2>/dev/null
    for i in ${ports[@]}; do
        ifconfig ${nic[$i]} down
        ./usertools/dpdk-devbind.py -u ${ids[$i]}
        ./usertools/dpdk-devbind.py -b igb_uio ${ids[$i]}
    done
fi
./usertools/dpdk-devbind.py --status-dev net
Run ./dpdk_bind_ports.sh to bind the two ports listed above to DPDK; you should see output like:

Network devices using DPDK-compatible driver
============================================
0000:82:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e
0000:82:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=igb_uio unused=i40e

Network devices using kernel driver
===================================
0000:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 unused=igb_uio *Active*
Edit the script to match your own NICs: ports holds sequential numbers starting at 0, and the nic and ids arrays must have the same number of entries.

If you do not know the interface names, first fill in the correct driver name and PCI IDs (using placeholders for the unknown interface names), then run ./dpdk_bind_ports.sh -u to hand the NICs back to the Linux kernel; the script's output, or ifconfig -a, will then show the interface names. Finally, run ./dpdk_bind_ports.sh again to bind the NICs to DPDK.
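If you are also unsure which PCI IDs or kernel driver to put into the script, both can be read off the running system; for example (the interface name below is illustrative, substitute your own):

```shell
# List Ethernet devices with their full PCI addresses (the ids= values):
lspci -D | grep -i ethernet
# Show the kernel driver and bus address of a known interface (the drv= value):
ethtool -i enp130s0f0 | grep -E 'driver|bus-info'
```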
Build pktgen-dpdk

Download and unpack pktgen-dpdk, then build it:

cd /path/to/pktgen-dpdk
apt install liblua5.3-dev libpcap-dev
make -j4
Run ./app/x86_64-native-linuxapp-gcc/pktgen to enter the interactive console, then type page config to get the NIC/CPU layout, which is needed to build the -m core-to-port mapping.

Socket   :        0        1     Port description
Core  0  :  [ 0,20]  [ 1,21]    0000:01:00.0 : Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
Core  1  :  [ 2,22]  [ 3,23]    0000:01:00.1 : Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
Core  2  :  [ 4,24]  [ 5,25]    0000:02:00.0 : Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
Core  3  :  [ 6,26]  [ 7,27]    0000:02:00.1 : Broadcom Inc. and subsidiaries NetXtreme BCM5720 Gigabit Ethernet PCIe
Core  4  :  [ 8,28]  [ 9,29]    0000:82:00.0 : Intel Corporation X710 for 10GbE SFP+ (rev 01)
Core  5  :  [10,30]  [11,31]    0000:82:00.1 : Intel Corporation X710 for 10GbE SFP+ (rev 01)
Core  6  :  [12,32]  [13,33]
Core  7  :  [14,34]  [15,35]
Core  8  :  [16,36]  [17,37]
Core  9  :  [18,38]  [19,39]
This shows two physical CPUs (sockets 0 and 1) with 10 cores each; the operating system numbers them 0-19, or 0-39 with hyper-threading enabled.

The two DPDK-bound ports live on the same NIC (their PCI IDs are consecutive), and a single NIC is physically wired to the PCI Express lanes of only one physical CPU. So these two ports (DPDK ports 0/1) must be bound to cores from either the left-hand column of brackets, [ 8,28]/[10,30], or the right-hand column, [ 9,29]/[11,31] (the number after the comma inside each bracket is the hyper-thread sibling's core ID).
Core  4  :  [ 8,28]  [ 9,29]    0000:82:00.0 : Intel Corporation X710 for 10GbE SFP+ (rev 01)
Core  5  :  [10,30]  [11,31]    0000:82:00.1 : Intel Corporation X710 for 10GbE SFP+ (rev 01)
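The same socket/core layout can be cross-checked outside pktgen with lscpu, and the NUMA node a NIC is attached to is exposed through sysfs (PCI address 0000:82:00.0 as in the example above):

```shell
# Core-to-socket mapping, e.g. "NUMA node0 CPU(s): 0-9,20-29":
lscpu | grep -i numa
# NUMA node of a PCI device (-1 means the platform does not report it):
cat /sys/bus/pci/devices/0000:82:00.0/numa_node
```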
First, check whether this mapping loads correctly:
./app/x86_64-native-linuxapp-gcc/pktgen -m 8.0 -m 10.1
If you get an error like "port 0 on socket ID 1 has different socket ID for lcore 8 socket ID 0", add 1 to the offending core number and try again:
./app/x86_64-native-linuxapp-gcc/pktgen -m 9.0 -m 10.1
A try or two always lands this pair of ports on the correct physical CPU:

Copyright (c) <2010-2019>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 40 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:82:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:82:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
Lua 5.3.3  Copyright (C) 1994-2016 Lua.org, PUC-Rio

*** Copyright (c) <2010-2019>, Intel Corporation. All rights reserved.
*** Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<

Port: Name     IfIndex Alias NUMA  PCI
  0: net_i40e        0           1  8086:1572/82:00.0
  1: net_i40e        0           1  8086:1572/82:00.1
Testing

Finally, create test.cfg and enter the interactive attack console:
#./app/x86_64-native-linuxapp-gcc/pktgen -- -l ./pktgen.log -PGNT -m 9.0 -m 11.1 -f test.cfg
- start 0 starts the first port and stop 0 stops it; likewise for the second port;
- start all and stop all start and stop all ports;
- page help lists the available commands; the most useful are page stats, page xstats, page rate, and quit.
test.cfg is simply a batch of the same interactive commands; see the official documentation for how to generate packets of various protocols and specify send policies.

Here is a simple example that makes port 0 send 64-byte UDP packets at full speed:
clear 0 stats
reset 0
enable screen
enable 0 range
disable 0 vlan
set 0 size 64
set 0 rate 100
set 0 burst 64
set 0 type ipv4
set 0 proto udp
set 0 dst ip 192.168.0.1/24
set 0 src ip 172.0.0.1/16
set 0 sport 12325
set 0 dport 12325
set 0 dst mac 20:04:0f:34:aa:3d
set 0 src mac f8:f2:1e:1a:d6:00
range 0 proto udp
range 0 src port 10000 10000 60000 1
range 0 dst port 10000 10000 60000 1
set 0 src ip 172.0.0.1/16
range 0 src ip start 172.0.0.1
range 0 src ip min 172.0.0.1
range 0 src ip max 172.0.255.254
range 0 src ip inc 0.0.0.1
set 0 dst ip 192.168.0.1
range 0 dst ip start 192.168.0.1
range 0 dst ip min 192.168.0.1
range 0 dst ip max 192.168.0.1
range 0 dst ip inc 0.0.0.0
disable 0 process
disable 0 bonding
disable 0 mac_from_arp
start 0 arp request
range 0 dst mac start 20:04:0f:34:aa:3d
range 0 dst mac min 20:04:0f:34:aa:3d
range 0 dst mac max 20:04:0f:34:aa:3d
range 0 src mac start f8:f2:1e:1a:d6:00
range 0 src mac min f8:f2:1e:1a:d6:00
range 0 src mac max f8:f2:1e:1a:d6:00
You can also use a bash script, gencfg, to generate the configuration for several ports at once (the closing loop that invokes set_nic per port is required for the script to produce any output):

#!/bin/bash
ports="0 1"
smacs=("f8:f2:1e:1a:d6:00" "f8:f2:1e:1a:d6:02")
dmacs=("20:04:0f:34:aa:3d" "20:04:0f:34:aa:3d")
dips=("192.168.0.1" "192.168.0.1")

set_nic(){
    i=$1
    smac=$2
    dmac=${3:-"20:04:0f:34:aa:3d"}
    dip=${4:-"192.168.0.1"}
    cat <<EOF
clear $i stats
reset $i
enable screen
enable $i range
disable $i vlan
set $i size 64
set $i rate 100
set $i burst 64
set $i type ipv4
set $i proto udp
set $i dst ip $dip/24
set $i src ip 172.$i.0.1/16
set $i sport 12325
set $i dport 12325
set $i dst mac $dmac
set $i src mac $smac
range $i proto udp
range $i src port 10000 10000 60000 1
range $i dst port 10000 10000 60000 1
set $i src ip 172.$i.0.1/16
range $i src ip start 172.$i.0.1
range $i src ip min 172.$i.0.1
range $i src ip max 172.$i.255.254
range $i src ip inc 0.0.0.1
set $i dst ip $dip
range $i dst ip start $dip
range $i dst ip min $dip
range $i dst ip max $dip
range $i dst ip inc 0.0.0.0
disable $i process
disable $i bonding
disable $i mac_from_arp
start $i arp request
range $i dst mac start $dmac
range $i dst mac min $dmac
range $i dst mac max $dmac
range $i src mac start $smac
range $i src mac min $smac
range $i src mac max $smac
#dbg tx_dbg
EOF
}

for i in $ports; do
    set_nic $i ${smacs[$i]} ${dmacs[$i]} ${dips[$i]}
done
Finally, run:
./gencfg > test.cfg
This configuration has been verified in practice: a single Dell R620 (2 × E5-2660 v2 / 64 GB RAM / 8 × 10G Intel ports) easily saturates all 8 × 10 Gbps links with small UDP packets, for a total send rate of 8 × 15 = 120 Mpps. Since the CPU cores and NIC queues were not exhausted, exceeding 100 Gbps with additional NICs would be straightforward.
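The roughly 15 Mpps per-port figure matches the theoretical maximum for 64-byte frames on 10GbE: each frame occupies 64 B plus a 7 B preamble, 1 B start-of-frame delimiter, and 12 B inter-frame gap, i.e. 84 B on the wire. A quick shell calculation:

```shell
# 10 Gbit/s divided by the on-wire size of a 64-byte frame (84 B = 672 bits)
echo $(( 10000000000 / ((64 + 7 + 1 + 12) * 8) ))   # 14880952 pps, ~14.88 Mpps
```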