The DPDK Test Plans¶
The following are the test plans for the DPDK DTS automated test system.
Port Blacklist Tests¶
Prerequisites¶
Board with at least 2 DPDK supported NICs attached.
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Test Case: Testpmd with no blacklisted device¶
Run testpmd in interactive mode and ensure that at least 2 ports are bound and available:
build/testpmd -c 3 -- -i
....
EAL: unbind kernel driver /sys/bus/pci/devices/0000:01:00.0/driver/unbind
EAL: Core 1 is ready (tid=357fc700)
EAL: bind PCI device 0000:01:00.0 to uio driver
EAL: Device bound
EAL: map PCI resource for device 0000:01:00.0
EAL: PCI memory mapped at 0x7fe6b68c7000
EAL: probe driver: 8086:10fb rte_niantic_pmd
EAL: unbind kernel driver /sys/bus/pci/devices/0000:01:00.1/driver/unbind
EAL: bind PCI device 0000:01:00.1 to uio driver
EAL: Device bound
EAL: map PCI resource for device 0000:01:00.1
EAL: PCI memory mapped at 0x7fe6b6847000
EAL: probe driver: 8086:10fb rte_niantic_pmd
EAL: unbind kernel driver /sys/bus/pci/devices/0000:02:00.0/driver/unbind
EAL: bind PCI device 0000:02:00.0 to uio driver
EAL: Device bound
EAL: map PCI resource for device 0000:02:00.0
EAL: PCI memory mapped at 0x7fe6b6580000
EAL: probe driver: 8086:10fb rte_niantic_pmd
EAL: unbind kernel driver /sys/bus/pci/devices/0000:02:00.1/driver/unbind
EAL: bind PCI device 0000:02:00.1 to uio driver
EAL: Device bound
EAL: map PCI resource for device 0000:02:00.1
EAL: PCI memory mapped at 0x7fe6b6500000
Interactive-mode selected
Initializing port 0... done: Link Up - speed 10000 Mbps - full-duplex
Initializing port 1... done: Link Up - speed 10000 Mbps - full-duplex
Initializing port 2... done: Link Up - speed 10000 Mbps - full-duplex
Initializing port 3... done: Link Up - speed 10000 Mbps - full-duplex
Test Case: Testpmd with one port blacklisted¶
Select the first available port to be blacklisted and specify it with the -b option. For the example above:
build/testpmd -c 3 -b 0000:01:00.0 -- -i
Check that the corresponding device is skipped for binding, and only 3 ports are now available:
EAL: probe driver: 8086:10fb rte_niantic_pmd
EAL: unbind kernel driver /sys/bus/pci/devices/0000:01:00.1/driver/unbind
EAL: bind PCI device 0000:01:00.1 to uio driver
EAL: Device bound
EAL: map PCI resource for device 0000:01:00.1
EAL: PCI memory mapped at 0x7f0037912000
EAL: probe driver: 8086:10fb rte_niantic_pmd
EAL: unbind kernel driver /sys/bus/pci/devices/0000:02:00.0/driver/unbind
EAL: bind PCI device 0000:02:00.0 to uio driver
EAL: Device bound
EAL: map PCI resource for device 0000:02:00.0
EAL: PCI memory mapped at 0x7f0037892000
EAL: probe driver: 8086:10fb rte_niantic_pmd
EAL: unbind kernel driver /sys/bus/pci/devices/0000:02:00.1/driver/unbind
EAL: bind PCI device 0000:02:00.1 to uio driver
EAL: Device bound
EAL: map PCI resource for device 0000:02:00.1
EAL: PCI memory mapped at 0x7f0037812000
Interactive-mode selected
Initializing port 0... done: Link Up - speed 10000 Mbps - full-duplex
Initializing port 1... done: Link Up - speed 10000 Mbps - full-duplex
Initializing port 2... done: Link Up - speed 10000 Mbps - full-duplex
Test Case: Testpmd with all but one port blacklisted¶
Blacklist all devices except the last one. For the example above:
build/testpmd -c 3 -b 0000:01:00.0 -b 0000:01:00.1 -b 0000:02:00.0 -- -i
Check that the 3 corresponding devices are skipped for binding, and only 1 port is now available:
EAL: probe driver: 8086:10fb rte_niantic_pmd
EAL: unbind kernel driver /sys/bus/pci/devices/0000:02:00.1/driver/unbind
EAL: bind PCI device 0000:02:00.1 to uio driver
EAL: Device bound
EAL: map PCI resource for device 0000:02:00.1
EAL: PCI memory mapped at 0x7f22e9aeb000
Interactive-mode selected
Initializing port 0... done: Link Up - speed 10000 Mbps - full-duplex
RX/TX Checksum Offload Tests¶
The support of RX/TX L3/L4 checksum offload features by Poll Mode Drivers consists of:
On the RX side:
- Verify IPv4 checksum by hardware for received packets.
- Verify UDP/TCP/SCTP checksum by hardware for received packets.
On the TX side:
- IPv4 checksum insertion by hardware in transmitted packets.
- IPv4/UDP checksum insertion by hardware in transmitted packets.
- IPv4/TCP checksum insertion by hardware in transmitted packets.
- IPv4/SCTP checksum insertion by hardware in transmitted packets (sctp length in 4 bytes).
- IPv6/UDP checksum insertion by hardware in transmitted packets.
- IPv6/TCP checksum insertion by hardware in transmitted packets.
- IPv6/SCTP checksum insertion by hardware in transmitted packets (sctp length in 4 bytes).
On the RX side, the L3/L4 checksum offload by hardware can be enabled with the following command of the testpmd application:
enable-rx-checksum
On the TX side, the insertion of an L3/L4 checksum by hardware can be enabled with the following commands of the testpmd application, running in the dedicated TX checksum forwarding mode:
set fwd csum
tx_checksum set mask port_id
The transmission of packets is done with the start command of the testpmd application, which will receive packets and then transmit them out on all configured ports. mask is used to indicate which hardware checksum offloads are required on port_id. Please check the NIC datasheet for the corresponding hardware limits:
bit 0 - insert ip checksum offload if set
bit 1 - insert udp checksum offload if set
bit 2 - insert tcp checksum offload if set
bit 3 - insert sctp checksum offload if set
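For example, to request insertion of all four checksum types on port 0, set bits 0-3 (mask 0xf); to request only the IP and UDP checksums, set bits 0 and 1 (mask 0x3):
tx_checksum set 0xf 0
tx_checksum set 0x3 0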
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Assuming that ports 0 and 2 are connected to a traffic generator, launch testpmd with the following arguments:
./build/app/testpmd -cffffff -n 1 -- -i --burst=1 --txpt=32 \
--txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x5
enable-rx-checksum
Set the verbose level to 1 to display information for each received packet:
testpmd> set verbose 1
Test Case: Validate checksum on the receive packet¶
Setup the csum forwarding mode:
testpmd> set fwd csum
Set csum packet forwarding mode
Start the packet forwarding:
testpmd> start
csum packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=10
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
Configure the traffic generator to send multiple packets with the following combinations: good/bad IP checksum + good/bad UDP/TCP checksum. Note that the SCTP header + payload length must be a multiple of 4 bytes. The IPv4 + UDP/TCP packet length can range from the minimum length to 1518 bytes.
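A Scapy sketch of such packets is shown below (illustrative only; txItf is assumed to be the tester's transmit interface, as used elsewhere in this document, and the forced chksum values simply make the corresponding checksum invalid):
sendp([Ether()/IP(chksum=0x1234)/UDP()/Raw('x' * 64)], iface=txItf)  # bad IP checksum, good UDP checksum
sendp([Ether()/IP()/UDP(chksum=0x1234)/Raw('x' * 64)], iface=txItf)  # good IP checksum, bad UDP checksum
sendp([Ether()/IP()/TCP(chksum=0x1234)/Raw('x' * 64)], iface=txItf)  # good IP checksum, bad TCP checksum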
Then verify how many packets are reported with Bad-ipcsum or Bad-l4csum:
testpmd> stop
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
Bad-ipcsum: 0 Bad-l4csum: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
Test Case: Insert IPv4/IPv6 UDP/TCP/SCTP checksum on the transmit packet¶
Setup the csum forwarding mode:
testpmd> set fwd csum
Set csum packet forwarding mode
Enable the IPv4/UDP/TCP/SCTP checksum offload on port 0:
testpmd> tx_checksum set 0xf 0
testpmd> start
csum packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=10
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
Configure the traffic generator to send multiple packets for the following combinations: IPv4/UDP, IPv4/TCP, IPv4/SCTP, IPv6/UDP, IPv6/TCP. Note that the SCTP header + payload length must be a multiple of 4 bytes. The IPv4 + UDP/TCP packet length can range from the minimum length to 1518 bytes.
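A Scapy sketch of the protocol mix is shown below (illustrative only; txItf is an assumed interface name and the SCTP payload length is kept to a multiple of 4 bytes):
sendp([Ether()/IP()/UDP()/Raw('x' * 64)], iface=txItf)
sendp([Ether()/IP()/TCP()/Raw('x' * 64)], iface=txItf)
sendp([Ether()/IP()/SCTP()/Raw('x' * 48)], iface=txItf)
sendp([Ether()/IPv6()/UDP()/Raw('x' * 64)], iface=txItf)
sendp([Ether()/IPv6()/TCP()/Raw('x' * 64)], iface=txItf)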
Then verify that the same number of packets are correctly received on the traffic generator side, and that the IPv4 checksum, TCP checksum, UDP checksum and SCTP CRC32c are validated as passing by the IXIA.
The IPv4 source address will not be changed by testpmd.
Test Case: Do not insert IPv4/IPv6 UDP/TCP checksum on the transmit packet¶
Setup the csum forwarding mode:
testpmd> set fwd csum
Set csum packet forwarding mode
Disable the IPv4/UDP/TCP/SCTP checksum offload on port 0:
testpmd> tx_checksum set 0x0 0
testpmd> start
csum packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=10
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
Configure the traffic generator to send multiple packets for the following combinations: IPv4/UDP, IPv4/TCP, IPv6/UDP, IPv6/TCP. The IPv4 + UDP/TCP packet length can range from the minimum length to 1518 bytes.
Then verify that the same number of packets are correctly received on the traffic generator side, and that the IPv4, TCP and UDP checksums are validated as passing by the IXIA.
The first byte of the source IPv4 address will be incremented by testpmd. The checksum is indeed recalculated by software algorithms.
Test Case: Validate RX checksum valid flags on the receive packet¶
Setup the csum forwarding mode:
testpmd> set fwd csum
Set csum packet forwarding mode
Start the packet forwarding:
testpmd> start
csum packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=10
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
Configure the traffic generator to send multiple packets with the following combinations: good/bad IP checksum + good/bad UDP/TCP checksum.
Check that the RX checksum flags reported in the verbose output (e.g. PKT_RX_IP_CKSUM_BAD, PKT_RX_L4_CKSUM_BAD) are consistent with the expected flags.
Cloud filter Support through Ethtool Tests¶
This feature, based on the X710, classifies VxLAN/Geneve packets and puts them into a specified queue in a VF for further processing by the virtual switch.
Prerequisites¶
The cloud filter feature is based on the latest i40e out-of-tree driver. Ethtool and the XL710 firmware should also be updated:
- Ethtool version: 3.18
- i40e driver: i40e-1.5.13_rc1
- Kernel version: 4.2.2
- Xl710 DA2 firmware: 5.02 0x80002282
BIOS setting:
- Enable VT-d and VT-x
Kernel command line:
- Enable Intel IOMMU with below arguments
- intel_iommu=on iommu=pt
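For example, on a GRUB-based host (the file path and update commands below are an assumption, not part of this plan), the arguments can be appended to the kernel command line and applied with a reboot:
# /etc/default/grub (illustrative)
GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"
# regenerate the GRUB configuration (e.g. grub2-mkconfig or update-grub), then reboot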
Create two VFs from kernel driver:
echo 2 > /sys/bus/pci/devices/0000\:82\:00.0/sriov_numvfs
ifconfig $PF_INTF up
Add vxlan network interface based on PF device:
ip li add vxlan0 type vxlan id 1 group 239.1.1.1 local 127.0.0.1 dev $PF_INTF
ifconfig vxlan0 up
Allocate hugepage for dpdk:
echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
Bind the VF devices to the igb_uio driver and start testpmd with multiple queues:
cd dpdk
modprobe uio
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py --bind=igb_uio 82:02.0 82:02.1
./x86_64-native-linuxapp-gcc/app/testpmd -c ffff -n 4 -- -i --rxq=4 --txq=4 --disable-rss
testpmd> set nbcore 8
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
Test case: cloud filter rule(inner ip)¶
Add a cloud filter rule with an inner IP address. Flow type ip4 means this rule only matches the inner destination IP address. The upper 32 bits of the user-def field being all 0xf means the VNI id is not part of the rule. The lower 32 bits being 1 means the packet will be forwarded to VF1. Action 3 means the packet will be redirected to queue 3:
ethtool -N $PF_INTF flow-type ip4 dst-ip 192.168.1.1 user-def 0xffffffff00000001 action 3 loc 1
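For reference, the user-def value used above decomposes as described in the rule explanation:
# user-def 0xffffffff00000001
#   upper 32 bits = 0xffffffff -> VNI id is not part of the match
#   lower 32 bits = 0x00000001 -> matching packets are forwarded to VF1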
Send vxlan packet with inner ip matched rule:
Ether()/IP()/UDP()/Vxlan()/Ether()/IP(dst="192.168.1.1")/TCP()/Raw('x' * 20)
Verify that the packet is received by queue 3 of VF1 and that the packet type is correct:
testpmd> port 1/queue 3: received 1 packets src=00:00:00:00:00:00 - dst=00:00:00:00:09:00 - type=0x0800 - length=106 - nb_segs=1
- (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: Unknown
- Tunnel type: GRENAT - Inner L2 type: ETHER - Inner L3 type: IPV4_EXT_UNKNOWN - Inner L4 type: UDP
- VXLAN packet: packet type =24721, Destination UDP port =8472, VNI = 1 - Receive queue=0x3
Test case: cloud filter rule(inner mac)¶
Add a cloud filter rule with an inner MAC. The dst MAC mask ff:ff:ff:ff:ff:ff means the outer MAC address is not part of the rule. The src MAC mask 00:00:00:00:00:00 means the inner MAC address is part of the rule. The upper 32 bits of the user-def field being all 0xf means the VNI id is not part of the rule. The lower 32 bits being 1 means the packet will be forwarded to VF1. Action 3 means the packet will be redirected to queue 3:
ethtool -N $PF_INTF flow-type ether dst 00:00:00:00:00:00 m \
ff:ff:ff:ff:ff:ff src 00:00:00:00:09:00 m 00:00:00:00:00:00 \
user-def 0xffffffff00000001 action 3 loc 1
Send vxlan packet with inner mac matched rule:
Ether()/IP()/UDP()/Vxlan()/Ether(dst="00:00:00:00:09:00")/IP()/TCP()/Raw('x' * 20)
Verify that the packet is received by queue 3 of VF1 and that the packet type is correct:
testpmd> port 1/queue 3: received 1 packets src=00:00:00:00:00:00 - dst=00:00:00:00:09:00 - type=0x0800 - length=120 - nb_segs=1
- (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: Unknown
- Tunnel type: GRENAT - Inner L2 type: ETHER - Inner L3 type: IPV4_EXT_UNKNOWN - Inner L4 type: TCP
- VXLAN packet: packet type =24721, Destination UDP port =8472, VNI = 0 - Receive queue=0x3
Test case: cloud filter rule(inner mac + outer mac + vni)¶
Add a cloud filter rule with inner MAC + outer MAC + VNI. The dst MAC mask 00:00:00:00:00:00 means the outer MAC address is part of the rule. The src MAC mask 00:00:00:00:00:00 means the inner MAC address is part of the rule. The upper 32 bits of the user-def field being 0x1 mean a VNI match of 1 is part of the rule. The lower 32 bits being 1 mean the packet will be forwarded to VF1. Action 3 means the packet will be redirected to queue 3:
ethtool -N $PF_INTF flow-type ether dst 00:00:00:00:10:00 m \
00:00:00:00:00:00 src 00:00:00:00:09:00 m 00:00:00:00:00:00 \
user-def 0x100000001 action 3 loc 1
Send vxlan packet with inner mac match rule:
Ether(dst="00:00:00:00:10:00")/IP()/UDP()/Vxlan(vni=1)/Ether(dst="00:00:00:00:09:00")/IP()/TCP()/Raw('x' * 20)
Verify that the packet is received by queue 3 of VF1 and that the packet type is correct:
testpmd> port 1/queue 3: received 1 packets src=00:00:00:00:00:00 - dst=00:00:00:00:09:00 - type=0x0800 - length=120 - nb_segs=1
- (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: Unknown
- Tunnel type: GRENAT - Inner L2 type: ETHER - Inner L3 type: IPV4_EXT_UNKNOWN - Inner L4 type: TCP
- VXLAN packet: packet type =24721, Destination UDP port =8472, VNI = 0 - Receive queue=0x3
Test case: cloud filter rule(inner mac + inner vlan + vni)¶
Add a cloud filter rule with inner MAC + inner VLAN + VNI. The dst MAC mask ff:ff:ff:ff:ff:ff means the outer MAC address is not part of the rule. The src MAC mask 00:00:00:00:00:00 means the inner MAC address is part of the rule. Vlan 1 means the VLAN match is part of the rule. The upper 32 bits of the user-def field being 0x1 mean a VNI match of 1 is part of the rule. The lower 32 bits being 1 mean the packet will be forwarded to VF1. Action 3 means the packet will be redirected to queue 3:
ethtool -N $PF_INTF flow-type ether dst 00:00:00:00:00:00 m \
ff:ff:ff:ff:ff:ff src 00:00:00:00:09:00 m 00:00:00:00:00:00 \
vlan 1 user-def 0x100000001 action 3 loc 1
Send vxlan packet with inner mac match rule:
Ether()/IP()/UDP()/Vxlan(vni=1)/Ether(dst="00:00:00:00:09:00")/Dot1Q(vlan=1)/IP()/TCP()/Raw('x' * 20)
Verify that the packet is received by queue 3 of VF1 and that the packet type is correct:
testpmd> port 1/queue 3: received 1 packets src=00:00:00:00:00:00 - dst=00:00:00:00:09:00 - type=0x0800 - length=124 - nb_segs=1
- (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: Unknown
- Tunnel type: GRENAT - Inner L2 type: ETHER_VLAN - Inner L3 type: IPV4_EXT_UNKNOWN - Inner L4 type: TCP
- VXLAN packet: packet type =24721, Destination UDP port =8472, VNI = 1 - Receive queue=0x3
Test case: cloud filter rule(inner mac + inner vlan)¶
Add a cloud filter rule with inner MAC + inner VLAN. The dst MAC mask ff:ff:ff:ff:ff:ff means the outer MAC address is not part of the rule. The src MAC mask 00:00:00:00:00:00 means the inner MAC address is part of the rule. Vlan 1 means the VLAN match is part of the rule. The upper 32 bits of the user-def field being all 0xf mean the VNI id is not part of the rule. The lower 32 bits being 1 mean the packet will be forwarded to VF1. Action 3 means the packet will be redirected to queue 3:
ethtool -N $PF_INTF flow-type ether dst 00:00:00:00:00:00 m \
ff:ff:ff:ff:ff:ff src 00:00:00:00:09:00 m 00:00:00:00:00:00 \
vlan 1 user-def 0xffffffff00000001 action 3 loc 1
Send vxlan packet with inner mac match rule:
Ether()/IP()/UDP()/Vxlan(vni=1)/Ether(dst="00:00:00:00:09:00")/Dot1Q(vlan=1)/IP()/TCP()/Raw('x' * 20)
Verify that the packet is received by queue 3 of VF1 and that the packet type is correct:
testpmd> port 1/queue 3: received 1 packets src=00:00:00:00:00:00 - dst=00:00:00:00:09:00 - type=0x0800 - length=124 - nb_segs=1
- (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: Unknown
- Tunnel type: GRENAT - Inner L2 type: ETHER_VLAN - Inner L3 type: IPV4_EXT_UNKNOWN - Inner L4 type: TCP
- VXLAN packet: packet type =24721, Destination UDP port =8472, VNI = 1 - Receive queue=0x3
Test case: Remove cloud filter rule¶
Remove cloud filter rule in location 1:
ethtool -N $PF_INTF delete 1
Dump the rules and check that no rule is listed:
ethtool -n $PF_INTF
Total 0 rules
Send a packet matching the last rule:
Ether(dst not match PF&VF)/IP()/UDP()/Vxlan(vni=1)/Ether(dst="00:00:00:00:09:00")/Dot1Q(vlan=1)/IP()/TCP()/Raw('x' * 20)
Check packet only received on PF device.
Test case: Multiple cloud filter rules¶
Add a cloud filter rule with inner MAC + inner VLAN. The dst MAC mask ff:ff:ff:ff:ff:ff means the outer MAC address is not part of the rule. The src MAC mask 00:00:00:00:00:00 means the inner MAC address is part of the rule. Vlan 1 means the VLAN match is part of the rule. The upper 32 bits of the user-def field being all 0xf mean the VNI id is not part of the rule. The lower 32 bits being 1 mean the packet will be forwarded to VF1. Action 3 means the packet will be redirected to queue 3:
ethtool -N $PF_INTF flow-type ether dst 00:00:00:00:00:00 m \
ff:ff:ff:ff:ff:ff src 00:00:00:00:09:00 m 00:00:00:00:00:00 \
vlan 1 user-def 0xffffffff00000001 action 3 loc 1
Add another cloud filter rule with inner MAC + inner VLAN. The dst MAC mask ff:ff:ff:ff:ff:ff means the outer MAC address is not part of the rule. The src MAC mask 00:00:00:00:00:00 means the inner MAC address is part of the rule. Vlan 2 means the VLAN match is part of the rule. The upper 32 bits of the user-def field being all 0xf mean the VNI id is not part of the rule. The lower 32 bits being 0 mean the packet will be forwarded to VF0. Action 3 means the packet will be redirected to queue 3. Loc 2 means this rule will be added at index 2:
ethtool -N $PF_INTF flow-type ether dst 00:00:00:00:00:00 m \
ff:ff:ff:ff:ff:ff src 00:00:00:00:10:00 m 00:00:00:00:00:00 \
vlan 2 user-def 0xffffffff00000000 action 3 loc 2
Dump cloud filter rules:
ethtool -n $PF_INTF
64 RX rings available
Total 2 rules
Send a packet matching rule 1:
Ether()/IP()/UDP()/Vxlan(vni=1)/Ether(dst="00:00:00:00:09:00")/Dot1Q(vlan=1)/IP()/TCP()/Raw('x' * 20)
Verify that the packet is received by queue 3 of VF1 and that the packet type is correct:
testpmd> port 1/queue 3: received 1 packets src=00:00:00:00:00:00 - dst=00:00:00:00:09:00 - type=0x0800 - length=124 - nb_segs=1
- (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: Unknown
- Tunnel type: GRENAT - Inner L2 type: ETHER_VLAN - Inner L3 type: IPV4_EXT_UNKNOWN - Inner L4 type: TCP
- VXLAN packet: packet type =24721, Destination UDP port =8472, VNI = 1 - Receive queue=0x3
Send a packet matching rule 2:
Ether()/IP()/UDP()/Vxlan(vni=1)/Ether(dst="00:00:00:00:10:00")/Dot1Q(vlan=2)/IP()/TCP()/Raw('x' * 20)
Verify that the packet is received by queue 3 of VF0 and that the packet type is correct:
testpmd> port 0/queue 3: received 1 packets src=00:00:00:00:00:00 - dst=00:00:00:00:09:00 - type=0x0800 - length=124 - nb_segs=1
- (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: Unknown - Tunnel type: GRENAT
- Inner L2 type: ETHER_VLAN - Inner L3 type: IPV4_EXT_UNKNOWN - Inner L4 type: TCP
- VXLAN packet: packet type =24721, Destination UDP port =8472, VNI = 1 - Receive queue=0x3
Test case: Bifurcated between kernel VF and dpdk VF¶
Add a cloud filter rule with an inner IP address. Flow type ip4 means this rule only matches the inner destination IP address. The upper 32 bits of the user-def field being all 0xf mean the VNI id is not part of the rule. The lower 32 bits being 1 mean the packet will be forwarded to VF1. Action 3 means the packet will be redirected to queue 3:
ethtool -N $PF_INTF flow-type ip4 dst-ip 192.168.1.1 user-def 0xffffffff00000001 action 3 loc 1
Add another cloud filter rule with an inner IP address. Flow type ip4 means this rule only matches the inner destination IP address. The upper 32 bits of the user-def field being all 0xf mean the VNI id is not part of the rule. The lower 32 bits being 0 mean the packet will be forwarded to VF0. Action 0 means the packet will be redirected to queue 0:
ethtool -N $PF_INTF flow-type ip4 dst-ip 192.168.2.1 user-def 0xffffffff00000000 action 0 loc 2
Send a VXLAN packet which matches the first rule:
Ether()/IP()/UDP()/Vxlan()/Ether()/IP(dst="192.168.1.1")/UDP()/Raw('x' * 20)
Verify that the packet is received by queue 3 of VF1 and that the packet type is correct:
testpmd> port 1/queue 3: received 1 packets src=00:00:00:00:00:00 - dst=00:00:00:00:09:00 - type=0x0800 - length=106 - nb_segs=1
- (outer) L2 type: ETHER - (outer) L3 type: IPV4_EXT_UNKNOWN - (outer) L4 type: Unknown
- Tunnel type: GRENAT - Inner L2 type: ETHER - Inner L3 type: IPV4_EXT_UNKNOWN - Inner L4 type: UDP
- VXLAN packet: packet type =24721, Destination UDP port =8472, VNI = 1 - Receive queue=0x3
Send a VXLAN packet which matches the second rule:
Ether()/IP()/UDP()/Vxlan()/Ether()/IP(dst="192.168.2.1")/UDP()/Raw('x' * 20)
Verify that the packet is received by VF0 and that the packet content is correct.
Coremask Tests¶
Prerequisites¶
This test will run on any machine able to run test. No traffic will be sent. There are no extra requirements for ports.
Test Case: individual coremask¶
Launch test once per core, setting the core mask to that core:
./x86_64-default-linuxapp-gcc/app/test -c <One core mask> -n 4
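For instance (illustrative masks, assuming the selected cores exist on the machine), 0x1 selects only core 0 and 0x4 selects only core 2:
./x86_64-default-linuxapp-gcc/app/test -c 0x1 -n 4
./x86_64-default-linuxapp-gcc/app/test -c 0x4 -n 4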
Verify: every time the application is launched the core is properly detected and used.
Stop test.
Test Case: big coremask¶
Launch test with a mask bigger than the available cores:
./x86_64-default-linuxapp-gcc/app/test -c <128 bits mask> -n 4
Verify: the application handles the mask properly and all the available cores are detected and used.
Stop test.
Test Case: all cores¶
Launch test with all the available cores:
./x86_64-default-linuxapp-gcc/app/test -c <All cores mask> -n 4
Verify: all the cores have been detected and used by the application.
Stop test.
Test Case: wrong coremask¶
Launch test with several wrong masks:
./x86_64-default-linuxapp-gcc/app/test -c <Wrong mask> -n 4
Verify: the application complains about the mask and does not start.
Stop test.
Cryptodev Performance Application Tests¶
Description¶
This document provides the test plan for testing Cryptodev performance with the crypto perf application. The crypto perf application is a DPDK application located under the DPDK app folder.
The crypto perf application supports most Cryptodev PMDs (poll mode drivers): the Intel QuickAssist Technology DH895xxC/DH_C62xx hardware accelerator (QAT PMD), AESNI MB PMD, AESNI GCM PMD, NULL PMD, KASUMI PMD, SNOW3G PMD, ZUC PMD and the OPENSSL library PMD.
AESNI MB PMD algorithm table: The table below contains the AESNI MB algorithms supported in crypto perf. Some of the algorithms are not currently supported.
Algorithm | Mode | Detail |
aes | cbc | Encrypt/Decrypt;Key size: 128, 192, 256 bits |
aes | ctr | Encrypt/Decrypt;Key size: 128, 192, 256 bits |
md | md5 | |
sha | sha1, sha2-224, sha2-384, sha2-256, sha2-512 | |
hmac | Support md5 and sha implementations sha1, sha2-224, sha2-256, sha2-384, sha2-512 Key Size versus Block size support: Key Size must be <= block size; Mac Len Supported sha1 10, 12, 16, 20 bytes; Mac Len Supported sha2-256 16, 24, 32 bytes; Mac Len Supported sha2-384 24,32, 40, 48 bytes; Mac Len Supported sha2-512 32, 40, 48, 56, 64 bytes; |
QAT algorithm table: The table below contains the QAT algorithms supported in crypto perf. Some of the algorithms are not currently supported.
Algorithm | Mode | Detail |
aes | cbc | Encrypt/Decrypt;Key size: 128, 192, 256 bits |
aes | ctr | Encrypt/Decrypt;Key size: 128, 192, 256 bits |
3des | cbc | Encrypt/Decrypt;Key size: 128, 192 bits |
3des | ctr | Encrypt/Decrypt;Key size: 128, 192 bits |
md | md5 | |
sha | sha1, sha2-224, sha2-256, sha2-384, sha2-512 | |
hmac | Support md5 and sha implementations sha1, sha2-224, sha2-256, sha2-384, sha2-512 Key Size versus Block size support: Key Size must be <= block size; Mac Len Supported sha1 10, 12, 16, 20 bytes; Mac Len Supported sha2-256 16, 24, 32 bytes; Mac Len Supported sha2-384 24,32, 40, 48 bytes; Mac Len Supported sha2-512 32, 40, 48, 56, 64 bytes; |
aes | gcm | Key Sizes:128, 192, 256 bits; Associated Data Length: 0 ~ 240 bytes; Payload Length: 0 ~ (2^32 -1) bytes; IV source: external; IV Lengths: 96 bits; Tag Lengths: 8, 12, 16 bytes; |
kasumi | f8 | Encrypt/Decrypt; Key size: 128 |
kasumi | f9 | Generate/Verify; Key size: 128 |
snow3g | uea2 | Encrypt/Decrypt; Key size: 128 |
snow3g | uia2 | Generate/Verify; Key size: 128 |
AESNI_GCM algorithm table: The table below contains the AESNI GCM PMD algorithms supported in crypto perf.
Algorithm | Mode | Detail | |
aes | gcm | Encrypt/Decrypt;Key Sizes:128, 256 bits; IV source: external; IV Lengths: 96 bits; Generate/Verify;Key Sizes:128,192,256 bits; Associated Data Length: 0 ~ 240 bytes; Payload Length: 0 ~ (2^32 -1) bytes; Tag Lengths: 8, 12, 16 bytes; |
aes | gmac | Generate/Verify;Key Sizes:128,192,256 bits; Associated Data Length: 0 ~ 240 bytes; Payload Length: 0 ~ (2^32 -1) bytes; Tag Lengths: 8, 12, 16 bytes; |
NULL algorithm table: The table below contains the NULL algorithms supported in crypto perf. Some of the algorithms are not currently supported.
Algorithm | Mode | Detail |
null | null | Encrypt/Decrypt;Key Sizes:0 bits; IV Lengths: 0 bits; Generate/Verify;Key Sizes:0 bits; Associated Data Length: 1 bytes; Payload Length: 0 bytes; Tag Lengths: 0 bytes; |
KASUMI algorithm table: The table below contains the KASUMI algorithms supported in crypto perf.
Algorithm | Mode | Detail |
kasumi | f8 | Encrypt/Decrypt;Key Sizes:128 bits; IV source: external; IV Lengths: 64 bits; |
kasumi | f9 | Generate/Verify;Key Sizes:128 bits; Payload Length: 64 bytes; Tag Lengths: 4 bytes; |
SNOW3G algorithm table: The table below contains the SNOW3G algorithms supported in crypto perf.
Algorithm | Mode | Detail |
snow3g | uea2 | Encrypt/Decrypt;Key Sizes:128 bits; IV source: external; IV Lengths: 128 bits; |
snow3g | uia2 | Generate/Verify;Key Sizes:128 bits; Payload Length: 128 bytes; Tag Lengths: 4 bytes; |
ZUC algorithm table: The table below contains the ZUC algorithms supported in crypto perf.
Algorithm | Mode | Detail |
zuc | eea3 | Encrypt/Decrypt;Key Sizes:128 bits; IV source: external; IV Lengths: 128 bits; |
zuc | eia2 | Generate/Verify;Key Sizes:128 bits; Payload Length: 128 bytes; Tag Lengths: 4 bytes; |
OPENSSL algorithm table: The table below contains the OPENSSL algorithms supported in crypto perf.
Algorithm | Mode | Detail |
aes | cbc | Encrypt/Decrypt;Key size: 128, 192, 256 bits |
aes | ctr | Encrypt/Decrypt;Key size: 128, 192, 256 bits |
3des | cbc | Encrypt/Decrypt;Key size: 128, 192 bits |
3des | ctr | Encrypt/Decrypt;Key size: 128, 192 bits |
md | md5 | |
sha | sha1, sha2-224, sha2-256, sha2-384, sha2-512 | |
hmac | Support md5 and sha implementations sha1, sha2-224, sha2-256, sha2-384, sha2-512 Key Size versus Block size support: Key Size must be <= block size; Mac Len Supported sha1 10, 12, 16, 20 bytes; Mac Len Supported sha2-256 16, 24, 32 bytes; Mac Len Supported sha2-384 24,32, 40, 48 bytes; Mac Len Supported sha2-512 32, 40, 48, 56, 64 bytes; |
aes | gcm | Encrypt/Decrypt;Key Sizes:128 bits; IV source: external; IV Lengths: 96 bits; Associated Data Length: 0 ~ 240 bytes; Generate/Verify; 128, 192,256 bytes; Payload Length: 64,128 bytes; Tag Lengths: 16 bytes; |
aes | gmac | Generate/Verify;Key Sizes:128,192,256 bits; Associated Data Length: 0 ~ 240 bytes; Payload Length: 8 ~ (2^32 -4) bytes; Tag Lengths:16 bytes; |
Prerequisites¶
To test Cryptodev performance, a test application, dpdk-test-crypto-perf, is added to DPDK. Its test command line is shown below:
./build/app/dpdk-test-crypto-perf -c COREMASK --vdev (AESNI_MB|QAT|AESNI_GCM|OPENSSL|SNOW3G|KASUMI|ZUC|NULL) -w (PCI:DEVICE:FUNCTION) -w (PCI:DEVICE:FUNCTION) -- --ptest (throughput|latency) --devtype (crypto_aesni_mb|crypto_qat|crypto_aes_gcm|crypto_openssl|crypto_snow3g|crypto_kasumi|crypto_zuc|crypto_null) --optype (aead|cipher-only|auth-only|cipher-then-auth|auth-then-cipher) --cipher-algo (ALGO) --cipher-op (encrypt|decrypt) --cipher-key-sz (key_size) --cipher-iv-sz (iv_size) --auth-algo (ALGO) --auth-op (generate|verify) --auth-key-sz (key_size) --auth-aad-sz (aad_size) --auth-digest-sz (digest_size) --total-ops (ops_number) --burst-sz (burst_size) --buffer-sz (buffer_size)
Test case: Cryptodev performance test¶
+----------+ +----------+
| | | |
| | --------------> | |
| Tester | | DUT |
| | | |
| | <-------------> | |
+----------+ +----------+
common:
--vdev (AESNI_MB|QAT|AESNI_GCM|OPENSSL|SNOW3G|KASUMI|ZUC|NULL): this value can be set to crypto_aesni_mb_pmd, crypto_aesni_gcm_pmd, crypto_openssl_pmd, crypto_snow3g_pmd, crypto_kasumi_pmd, crypto_zuc_pmd or crypto_null_pmd. If the PMD is QAT, this parameter should not be set.
-w (PCI:DEVICE:FUNCTION): this value is the port whitelist or the QAT device whitelist. If vdev is set and devtype is not crypto_qat, the QAT device whitelist is not needed, but it can still be given on the command line.
--optype (aead|cipher-only|auth-only|cipher-then-auth|auth-then-cipher): if cipher-algo is aes-gcm or gmac, this value must be set to aead; otherwise set it to one of the other values. Please note that the null algorithm only supports the cipher-only test.
For the other parameters, please refer to the tables above.
QAT PMD command line example:
./build/app/dpdk-test-crypto-perf -c 0xf -w 0000:01:00.0 -w 0000:03:3d.0 -- --ptest throughput --devtype crypto_qat --optype cipher-then-auth --cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --cipher-iv-sz 16 --auth-algo sha1-hmac --auth-op generate --auth-key-sz 64 --auth-aad-sz 0 --auth-digest-sz 20 --total-ops 10000000 --burst-sz 32 --buffer-sz 1024
AESNI_MB PMD command line example:
./build/app/dpdk-test-crypto-perf -c 0xf --vdev crypto_aesni_mb_pmd -w 0000:01:00.0 -w 0000:03:3d.0 -- --ptest throughput --devtype crypto_aesni_mb --optype cipher-then-auth --cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --cipher-iv-sz 16 --auth-algo sha1-hmac --auth-op generate --auth-key-sz 64 --auth-aad-sz 0 --auth-digest-sz 20 --total-ops 10000000 --burst-sz 32 --buffer-sz 1024
AESNI_GCM PMD command line example:
./build/app/dpdk-test-crypto-perf -c 0xf --vdev crypto_aesni_gcm_pmd -w 0000:01:00.0 -w 0000:03:3d.0 -- --ptest throughput --devtype crypto_aesni_gcm --optype aead --cipher-algo aes-gcm --cipher-op encrypt --cipher-key-sz 16 --cipher-iv-sz 12 --auth-algo aes-gcm --auth-op generate --auth-key-sz 16 --auth-aad-sz 4 --auth-digest-sz 12 --total-ops 10000000 --burst-sz 32 --buffer-sz 1024
OPENSSL PMD command line example:
./build/app/dpdk-test-crypto-perf -c 0xf --vdev crypto_openssl_pmd -w 0000:01:00.0 -w 0000:03:3d.0 -- --ptest throughput --devtype crypto_openssl --optype cipher-then-auth --cipher-algo aes-cbc --cipher-op encrypt --cipher-key-sz 16 --cipher-iv-sz 16 --auth-algo sha1-hmac --auth-op generate --auth-key-sz 64 --auth-aad-sz 0 --auth-digest-sz 20 --total-ops 10000000 --burst-sz 32 --buffer-sz 64
NULL PMD command line example:
./build/app/dpdk-test-crypto-perf -c 0xf --vdev crypto_null_pmd -w 0000:01:00.0 -w 0000:03:3d.0 -- --ptest throughput --devtype crypto_null --optype cipher-only --cipher-algo null --cipher-op encrypt --cipher-key-sz 0 --cipher-iv-sz 0 --total-ops 10000000 --burst-sz 32 --buffer-sz 1024
KASUMI PMD command line example:
./build/app/dpdk-test-crypto-perf -c 0xf --vdev crypto_kasumi_pmd -w 0000:01:00.0 -w 0000:03:3d.0 -- --ptest throughput --devtype crypto_kasumi --optype cipher-then-auth --cipher-algo kasumi-f8 --cipher-op encrypt --cipher-key-sz 16 --cipher-iv-sz 8 --auth-algo kasumi-f9 --auth-op generate --auth-key-sz 16 --auth-aad-sz 8 --auth-digest-sz 4 --total-ops 10000000 --burst-sz 32 --buffer-sz 1024
SNOW3G PMD command line example:
./build/app/dpdk-test-crypto-perf -c 0xf --vdev crypto_snow3g_pmd -w 0000:01:00.0 -w 0000:03:3d.0 -- --ptest throughput --devtype crypto_snow3g --optype cipher-then-auth --cipher-algo snow3g-uea2 --cipher-op encrypt --cipher-key-sz 16 --cipher-iv-sz 16 --auth-algo snow3g-uia2 --auth-op generate --auth-key-sz 16 --auth-aad-sz 16 --auth-digest-sz 4 --total-ops 10000000 --burst-sz 32 --buffer-sz 1024
ZUC PMD command line example:
./build/app/dpdk-test-crypto-perf -c 0xf --vdev crypto_zuc_pmd -w 0000:01:00.0 -w 0000:03:3d.0 -- --ptest throughput --devtype crypto_zuc --optype cipher-then-auth --cipher-algo zuc-eea3 --cipher-op encrypt --cipher-key-sz 16 --cipher-iv-sz 16 --auth-algo zuc-eia3 --auth-op generate --auth-key-sz 16 --auth-aad-sz 16 --auth-digest-sz 4 --total-ops 10000000 --burst-sz 32 --buffer-sz 1024
Fortville DDP (Dynamic Device Personalization) Tests¶
FVL6 supports DDP (Dynamic Device Personalization) to program the analyzer/parser via the AdminQ. A profile can be used to update FVL configuration tables via the MMIO configuration space, not the microcode or firmware itself. For microcode/FW changes, a new HW/FW/NVM image must be uploaded to the NIC. Profiles are stored in binary files and need to be passed to the AQ to program FVL during the initialization stage.
With DDP, MPLS (Multi-protocol Label Switching) can be supported by the NVM with an updated profile. The HW features below have to be enabled for MPLS:
- MPLS packet type recognition
- Cloud filter for MPLS with MPLS label
Only the 25G NIC supports DDP and MPLS so far.
Prerequisites¶
Bind the host PF to the DPDK driver and create 1 VF from 1 PF:
./tools/dpdk-devbind.py -b igb_uio 81:00.0
echo 1 > /sys/bus/pci/devices/0000:81:00.0/max_vfs
Detach VF from the host:
rmmod i40evf
Pass through VF 81:10.0 to vm0, start vm0.
Login vm0, then bind VF0 device to igb_uio driver.
Start testpmd on host and vm0 in chained port topology, add txq/rxq to enable multi-queues. In general, PF’s max queue is 64, VF’s max queue is 4:
./testpmd -c f -n 4 -- -i --port-topology=chained --tx-offloads=0x8fff --txq=4 --rxq=4
Test Case 1: Load dynamic device personalization¶
Stop testpmd port before loading profile:
testpmd > stop port all
Load profile mplsogre-l2.pkgo which is a binary file:
testpmd > ddp add (port_id) (profile_path)
Check that the profile info is listed successfully:
testpmd > ddp get list (port_id)
Start testpmd port:
testpmd > start port all
Note
Loading the DDP profile is the prerequisite for the MPLS-related cases below; operate a global reset or use the lanconf tool to recover the original setting. The Global Reset Trigger register is 0xb8190; the first command is a core reset, the second command is a global reset:
testpmd > write reg 0 0xb8190 1
testpmd > write reg 0 0xb8190 2
Test Case 2: MPLS udp packet for PF¶
Add udp flow rule for PF, set label as random 20 bits, queue should be among configured queue number:
testpmd > flow create 0 ingress pattern eth / ipv4 / udp / mpls label is 0x12345 / end actions pf / queue index <id> / end
Set fwd rxonly, enable output and start PF and VF testpmd
Send udp MPLS packet with good checksum, udp dport is 6635, label is same as configured rule:
sendp([Ether()/IP()/UDP(dport=6635)/MPLS(label=0x12345)/Ether()/IP() /TCP()], iface=txItf)
Check PF could receive configured label udp packet, checksum is good, queue is configured queue
Send udp MPLS packet with bad checksum, udp dport is 6635, label is same as configured rule:
sendp([Ether()/IP()/UDP(chksum=0x1234,dport=6635)/MPLS(label=0x12345)/Ether() /IP()/TCP()], iface=txItf)
Check PF could receive configured label udp packet, checksum is good, queue is configured queue
Test Case 3: MPLS gre packet for PF¶
Add gre flow rule for PF, set label as random 20 bits, queue should be among configured queue number:
testpmd > flow create 0 ingress pattern eth / ipv4 / gre / mpls label is 0xee456 / end actions pf / queue index <id> / end
Set fwd rxonly, enable output and start PF and VF testpmd
Send gre MPLS packet with good checksum, gre proto is 8847, label is same as configured rule:
sendp([Ether()/IP(proto=47)/GRE(proto=0x8847)/MPLS(label=0xee456)/Ether() /IP()/UDP()], iface=txItf)
Check PF could receive configured label gre packet, checksum is good, queue is configured queue
Send gre MPLS packet with bad checksum, gre proto is 8847, label is same as configured rule:
sendp([Ether()/IP(proto=47)/GRE(chksum=0x1234,proto=0x8847)/MPLS(label=0xee456) /Ether()/IP()/UDP()], iface=txItf)
Check PF could receive configured label gre packet, checksum is good, queue is configured queue
Test Case 4: MPLS udp packet for VF¶
Add udp flow rule for VF, set label as random 20 bits, queue should be among configured queue number:
testpmd > flow create 0 ingress pattern eth / ipv4 / udp / mpls label is 0x234 / end actions vf id 0 / queue index <id> / end
Set fwd rxonly, enable output and start PF and VF testpmd
Send udp MPLS packet with good checksum, udp dport is 6635, label is same as configured rule:
sendp([Ether()/IP()/UDP(dport=6635)/MPLS(label=0x234)/Ether()/IP()/TCP()], iface=txItf)
Check VF could receive configured label udp packet, checksum is good, queue is configured queue
Send udp MPLS packet with bad checksum, udp dport is 6635, label is same as configured rule:
sendp([Ether()/IP()/UDP(chksum=0x1234,dport=6635)/MPLS(label=0x234)/Ether() /IP()/TCP()], iface=txItf)
Check VF could receive configured label udp packet, checksum is good, queue is configured queue
Test Case 5: MPLS gre packet for VF¶
Add gre flow rule for VF, set label as random 20 bits, queue should be among configured queue number:
testpmd > flow create 0 ingress pattern eth / ipv4 / gre / mpls label is 0xffff / end actions vf id 0 / queue index <id> / end
Set fwd rxonly, enable output and start PF and VF testpmd
Send gre MPLS packet with good checksum, gre proto is 8847, label is same as configured rule:
sendp([Ether()/IP(proto=47)/GRE(proto=0x8847)/MPLS(label=0xffff)/Ether() /IP()/UDP()], iface=txItf)
Check VF could receive configured label gre packet, checksum is good, queue is configured queue
Send gre MPLS packet with bad checksum, gre proto is 8847, label is same as configured rule:
sendp([Ether()/IP(proto=47)/GRE(chksum=0x1234,proto=0x8847)/MPLS(label=0xffff) /Ether()/IP()/UDP()], iface=txItf)
Check VF could receive configured label gre packet, checksum is good, queue is configured queue
Dual VLAN Offload Tests¶
The support of Dual VLAN offload features by Poll Mode Drivers consists of:
- Dynamically enable/disable inner VLAN filtering on an interface on 82576/82599,
- Dynamically enable/disable extended VLAN mode on 82576/82599,
- Dynamically configure outer VLAN TPID value, i.e. S-TPID value, on 82576/82599.
Prerequisites¶
Only 82576 and 82599 NICs support this feature.
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Assuming that ports 0 and 1 are connected to the traffic generator's port A and port B, launch testpmd with the following arguments:
./build/app/testpmd -c ffffff -n 3 -- -i --burst=1 --txpt=32 \
--txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x3
The -n command is used to select the number of memory channels. It should match the number of memory channels on that setup.
Test Case: Enable/Disable VLAN packets filtering¶
Setup the mac forwarding mode:
testpmd> set fwd mac
Set mac packet forwarding mode
Enable vlan filtering on port 0:
testpmd> vlan set filter on 0
Check whether the mode is set successfully:
testpmd> show port info 0
********************* Infos for port 0 *********************
MAC address: 90:E2:BA:1B:DF:60
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
VLAN offload:
strip off
filter on
qinq(extend) off
Start forwarding packets:
testpmd> start
mac packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=10
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
- Configure the traffic generator to send VLAN packets with Tag Identifier 1 and send 1 packet on port A. Verify that the VLAN packet cannot be received on port B.
Disable vlan filtering on port 0:
testpmd> vlan set filter off 0
Check whether the mode is set successfully:
testpmd> show port info 0
********************* Infos for port 0 *********************
MAC address: 90:E2:BA:1B:DF:60
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
VLAN offload:
strip off
filter off
qinq(extend) off
- Configure the traffic generator to send VLAN packets with Tag Identifier 1 and send 1 packet on port A. Verify that the VLAN packet can be received on port B with VLAN Tag Identifier 1.
Test Case: Add/Remove VLAN Tag Identifier pass VLAN filtering¶
Enable VLAN filtering on port 0:
testpmd> vlan set filter on 0
Add a VLAN Tag Identifier 1 on port 0:
testpmd> rx_vlan add 1 0
- Configure the traffic generator to send VLAN packets with Tag Identifier 1 and send 1 packet on port A. Verify that the VLAN packet can be received on port B.
Remove the VLAN Tag Identifier 1 on port 0:
testpmd> rx_vlan rm 1 0
- Configure the traffic generator to send VLAN packets with Tag Identifier 1 and send 1 packet on port A. Verify that the VLAN packet cannot be received on port B.
Test Case: Enable/Disable VLAN header stripping¶
Enable vlan packet forwarding on port 0 first:
testpmd> vlan set filter off 0
Enable vlan header stripping on port 0:
testpmd> vlan set strip on 0
Check whether the mode is set successfully:
testpmd> show port info 0
********************* Infos for port 0 *********************
MAC address: 90:E2:BA:1B:DF:60
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
VLAN offload:
strip on
filter off
qinq(extend) off
Configure the traffic generator to send VLAN packets with Tag Identifier 1 and send 1 packet on port A. Verify that the packet without a VLAN Tag Identifier can be received on port B.
Disable vlan header stripping on port 0:
testpmd> vlan set strip off 0
Check whether the mode is set successfully:
testpmd> show port info 0
********************* Infos for port 0 *********************
MAC address: 90:E2:BA:1B:DF:60
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
VLAN offload:
strip off
filter off
qinq(extend) off
Configure the traffic generator to send VLAN packets with Tag Identifier 1 and send 1 packet on port A. Verify that the packet with VLAN Tag Identifier 1 can be received on port B.
Test Case: Enable/Disable VLAN header stripping in queue¶
Enable vlan packet forwarding on port 0 first:
testpmd> vlan set filter off 0
Disable vlan header stripping on port 0:
testpmd> vlan set strip off 0
Disable vlan header stripping in queue 0 on port 0:
testpmd> vlan set stripq off 0,0
Configure the traffic generator to send VLAN packets with Tag Identifier 1 and send 1 packet on port A. Verify that the packet with VLAN Tag Identifier 1 can be received on port B.
Enable vlan header stripping in queue 0 on port 0:
testpmd> vlan set stripq on 0,0
Configure the traffic generator to send VLAN packets with Tag Identifier 1 and send 1 packet on port A. Verify that the packet without VLAN Tag Identifier 1 can be received on port B.
Enable vlan header stripping on port 0:
testpmd> vlan set strip on 0
Configure the traffic generator to send VLAN packets with Tag Identifier 1 and send 1 packet on port A. Verify that the packet without VLAN Tag Identifier 1 can be received on port B.
Test Case: Enable/Disable VLAN header inserting¶
Enable vlan packet forwarding on port 0 first:
testpmd> vlan set filter off 0
Insert VLAN Tag Identifier 1 on port 1:
testpmd> tx_vlan set 1 1
Configure the traffic generator to send a VLAN packet without a VLAN Tag Identifier and send 1 packet on port A. Verify that the packet can be received on port B with VLAN Tag Identifier 1.
Delete the VLAN Tag Identifier 1 on port 1:
testpmd> tx_vlan reset 1
Configure the traffic generator to send a VLAN packet without a VLAN Tag Identifier and send 1 packet on port A. Verify that the packet can be received on port B without a VLAN Tag Identifier.
Test Case: Configure receive port outer VLAN TPID¶
Enable vlan header QinQ on port 0 first to support setting the TPID:
testpmd> vlan set qinq on 0
Check whether the mode is set successfully:
testpmd> show port info 0
********************* Infos for port 0 *********************
MAC address: 90:E2:BA:1B:DF:60
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
VLAN offload:
strip off
filter off
qinq(extend) on
Set Tag Protocol ID 0x1234 on port 0. The NIC only supports the inner model, except Fortville:
testpmd> vlan set inner tpid 0x1234 0
Enable vlan packet filtering and stripping on port 0:
testpmd> vlan set filter on 0
testpmd> vlan set strip on 0
Configure the traffic generator to send a VLAN packet whose outer vlan tag is 0x1, inner vlan tag is 0x2 and outer Tag Protocol ID is 0x8100, and send 1 packet on port A. Verify that one packet whose vlan header has not been stripped is received on port B.
Set Tag Protocol ID 0x8100 on port 0:
testpmd> vlan set inner tpid 0x8100 0
Configure the traffic generator to send a VLAN packet whose outer vlan tag is 0x1, inner vlan tag is 0x2 and outer Tag Protocol ID is 0x8100, and send 1 packet on port A. Verify that no packets have been received on port B.
Test Case: Strip/Filter/Extend/Insert enable/disable synthetic test¶
Do the synthetic test following the table below and check that the result is the same as in the table (the inserted VLAN Tag Identifier is limited to 0x3, and all modes except insert are set on the rx port).
Outer vlan (sent) | Inner vlan (sent) | Vlan strip | Vlan filter | Vlan extend | Vlan insert | Pass/Drop | Outer vlan (received) | Inner vlan (received) |
0x1 | 0x2 | no | no | no | no | pass | 0x1 | 0x2 |
0x1 | 0x2 | yes | no | no | no | pass | no | 0x2 |
0x1 | 0x2 | no | yes,0x1 | no | no | pass | 0x1 | 0x2 |
0x1 | 0x2 | no | yes,0x2 | no | no | drop | no | no |
0x1 | 0x2 | yes | yes,0x1 | no | no | pass | no | 0x2 |
0x1 | 0x2 | yes | yes,0x2 | no | no | drop | no | no |
0x1 | 0x2 | no | no | yes | no | pass | 0x1 | 0x2 |
0x1 | 0x2 | yes | no | yes | no | pass | no | 0x1 |
0x1 | 0x2 | no | yes,0x1 | yes | no | drop | no | no |
0x1 | 0x2 | no | yes,0x2 | yes | no | pass | 0x1 | 0x2 |
0x1 | 0x2 | yes | yes,0x1 | yes | no | drop | no | no |
0x1 | 0x2 | yes | yes,0x2 | yes | no | pass | no | 0x1 |
0x1 | 0x2 | no | no | no | yes | pass | 0x3 | 0x1 0x2 |
0x1 | 0x2 | yes | no | no | yes | pass | 0x3 | 0x2 |
0x1 | 0x2 | no | yes,0x1 | no | yes | pass | 0x3 | 0x1 0x2 |
0x1 | 0x2 | no | yes,0x2 | no | yes | drop | no | no |
0x1 | 0x2 | yes | yes,0x1 | no | yes | pass | 0x3 | 0x2 |
0x1 | 0x2 | yes | yes,0x2 | no | yes | drop | no | no |
0x1 | 0x2 | no | no | yes | yes | pass | 0x3 | 0x1 0x2 |
0x1 | 0x2 | yes | no | yes | yes | pass | 0x3 | 0x1 |
0x1 | 0x2 | no | yes,0x1 | yes | yes | drop | no | no |
0x1 | 0x2 | no | yes,0x2 | yes | yes | pass | 0x3 | 0x1 0x2 |
0x1 | 0x2 | yes | yes,0x1 | yes | yes | drop | no | no |
0x1 | 0x2 | yes | yes,0x2 | yes | yes | pass | 0x3 | 0x1 |
Test Case: Strip/Filter/Extend/Insert enable/disable random test¶
Choose items from the above table at random 30 times and verify that the result is correct.
At last, stop packet forwarding and quit the application:
testpmd> stop
testpmd> quit
Dynamic Driver Configuration Tests¶
The purpose of this test is to check that it is possible to change the configuration of a port dynamically. The following command can be used to change the promiscuous mode of a specific port:
set promisc PORTID on|off
A traffic generator sends traffic with a different destination mac address than the one that is configured on the port. Once the testpmd application is started, it is possible to display the statistics of a port using:
show port stats PORTID
When promiscuous mode is disabled, packets must not be received. When it is enabled, packets must be received. The change occurs without stopping the device or restarting the application.
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Connect the traffic generator to one of the ports (8 in this example). The size of the packets is not important; in this example it was 64 bytes.
Start the testpmd application.
Use the ‘show port’ command to see the MAC address and promiscuous mode for port 8. The default value for promiscuous mode should be enabled:
testpmd> show port info 8
********************* Infos for port 8 *********************
MAC address: 00:1B:21:6D:A3:6E
Link status: up
Link speed: 1000 Mbps
Link duplex: full-duplex
Promiscuous mode: enabled
Allmulticast mode: disabled
Test Case: Default Mode¶
The promiscuous mode should be enabled by default. In promiscuous mode all packets should be received.
Read the stats for port 8 before sending packets:
testpmd> show port stats 8
######################## NIC statistics for port 8 ########################
RX-packets: 1 RX-errors: 0 RX-bytes: 64
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Send a packet with destination MAC address different than the port 8 address:
testpmd> show port stats 8
######################## NIC statistics for port 8 ########################
RX-packets: 2 RX-errors: 0 RX-bytes: 128
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that the packet was received (RX-packets incremented). Send a packet with destination MAC address equal to the port 8 address:
testpmd> show port stats 8
######################## NIC statistics for port 8 ########################
RX-packets: 3 RX-errors: 0 RX-bytes: 192
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that the packet was received (RX-packets incremented).
Test Case: Disable Promiscuous Mode¶
Disable promiscuous mode and verify that packets are received only when the destination MAC address matches the port 8 address:
testpmd> set promisc 8 off
Send a packet with destination MAC address different than the port 8 address:
testpmd> show port stats 8
######################## NIC statistics for port 8 ########################
RX-packets: 3 RX-errors: 0 RX-bytes: 192
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that no packet was received (RX-packets is the same).
Send a packet with destination MAC address equal to the port 8 address:
######################## NIC statistics for port 8 ########################
RX-packets: 4 RX-errors: 0 RX-bytes: 256
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that the packet was received (RX-packets incremented).
Test Case: Enable Promiscuous Mode¶
Verify that promiscuous mode is still disabled:
testpmd> show port info 8
********************* Infos for port 8 *********************
MAC address: 00:1B:21:6D:A3:6E
Link status: up
Link speed: 1000 Mbps
Link duplex: full-duplex
Promiscuous mode: disabled
Allmulticast mode: disabled
Enable promiscuous mode and verify that the packets are received for any destination MAC address:
testpmd> set promisc 8 on
testpmd> show port stats 8
######################## NIC statistics for port 8 ########################
RX-packets: 4 RX-errors: 0 RX-bytes: 256
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Send a packet with destination MAC address different than the port 8 address:
testpmd> show port stats 8
######################## NIC statistics for port 8 ########################
RX-packets: 5 RX-errors: 0 RX-bytes: 320
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that the packet was received (RX-packets incremented).
Send a packet with destination MAC address equal to the port 8 address:
testpmd> show port stats 8
######################## NIC statistics for port 8 ########################
RX-packets: 6 RX-errors: 0 RX-bytes: 384
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that the packet was received (RX-packets incremented).
Test Case: Disable Promiscuous Mode broadcast¶
Disable promiscuous mode and verify that broadcast packets are still received and forwarded:
testpmd> set promisc all off
testpmd> set fwd io
testpmd> clear port stats all
Send a packet with destination MAC address different than the port 0 address:
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that no packet was forwarded (port 1 TX-packets is 0):
testpmd> clear port stats all
Send a broadcast packet:
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 1 TX-errors: 0 TX-bytes: 80
############################################################################
Verify that the packet was received and forwarded (TX-packets is 1).
Test Case: Disable Promiscuous Mode Multicast¶
Disable promiscuous mode and allmulticast mode, and verify that multicast packets are only forwarded when allmulticast mode is enabled:
testpmd> set promisc all off
testpmd> set fwd io
testpmd> clear port stats all
testpmd> set allmulti all off
Send a packet whose destination MAC is a multicast MAC, e.g. 01:00:00:33:00:01:
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that no packet was forwarded (port 1 TX-packets is 0):
testpmd> clear port stats all
testpmd> set allmulti all on
Send a packet whose destination MAC is a multicast MAC, e.g. 01:00:00:33:00:01:
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 1 TX-errors: 0 TX-bytes: 80
############################################################################
Verify that the packet was received and forwarded (TX-packets is 1).
External Tag (E-tag) Tests¶
In some systems an additional external tag (E-tag) can be present before the VLAN. The X550 NIC supports VLANs in the presence of external tags. E-tag mode is used for systems where the device adds a tag to identify a subsystem (usually a VM) and the near-end switch adds a tag indicating the destination subsystem.
The support of E-tag features by X550 consists of:
- The filtering of received E-tag packets
- E-tag header stripping by the VF device in received packets
- E-tag header insertion by the VF device in transmitted packets
- E-tag forwarding to the assigned VF by E-tag id
Prerequisites¶
Create 2 VF devices from the PF device:
./dpdk_nic_bind.py --st
0000:84:00.0 'Device 1563' drv=igb_uio unused=
echo 2 > /sys/bus/pci/devices/0000\:84\:00.0/max_vfs
Detach VFs from the host, bind them to pci-stub driver:
/sbin/modprobe pci-stub

Using `lspci -nn|grep -i ethernet`, get the VF device id, for example "8086 1565":

echo "8086 1565" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:84:10.0 > /sys/bus/pci/devices/0000:84:10.0/driver/unbind
echo 0000:84:10.0 > /sys/bus/pci/drivers/pci-stub/bind
echo 0000:84:10.2 > /sys/bus/pci/devices/0000:84:10.2/driver/unbind
echo 0000:84:10.2 > /sys/bus/pci/drivers/pci-stub/bind
Passthrough VF 84:10.0 & 84:10.2 to vm0 and start vm0:
/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
    -cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
    -device pci-assign,host=84:10.0,id=pt_0 \
    -device pci-assign,host=84:10.2,id=pt_1
Log in to vm0 and then bind the VF devices to the igb_uio driver:
./tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
Start host testpmd, set it in rxonly mode and enable verbose output:
testpmd -c f -n 3 -- -i
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
Start guest testpmd, set it in mac forward mode:
testpmd -c 0x3 -n 1 -- -i --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> start
Test Case 1: L2 tunnel filter¶
Enabling E-tag L2 tunnel support enables the ability to parse E-tag packets. This ability should be enabled before enabling filtering, forwarding, or offloading for this specific type of tunnel:
testpmd> port config 0 l2-tunnel E-tag enable
Send an 802.1BR packet to the PF and the VFs and check that the packet is received normally:
- type=0x893f - length=150 - nb_segs=1 - (outer) L2 type: Unknown
- (outer) L3 type: IPV4 - (outer) L4 type: UDP
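Scapy has no standard 802.1BR layer, so the E-tag packets used in this and the following test cases can be built with a small custom layer. This is only a sketch: the field layout below follows IEEE 802.1BR (2-byte E-TPID 0x893f, 6-byte E-TCI, then the inner EtherType), the tester interface name "eth9" is an assumption, and the mapping of ecid_base to the e-tag-id used in the filter commands should be verified before relying on it:

from scapy.all import (Packet, Ether, IP, UDP, Raw, sendp, bind_layers,
                       BitField, ByteField, XShortField)

class Dot1BR(Packet):
    """Minimal 802.1BR E-tag header (E-TCI plus inner EtherType)."""
    name = "802.1BR"
    fields_desc = [
        BitField("e_pcp", 0, 3),               # E-PCP
        BitField("e_dei", 0, 1),               # E-DEI
        BitField("ingress_ecid_base", 0, 12),  # Ingress E-CID base
        BitField("reserved", 0, 2),
        BitField("grp", 0, 2),                 # GRP
        BitField("ecid_base", 0, 12),          # E-CID base (assumed to carry the e-tag-id)
        ByteField("ingress_ecid_ext", 0),
        ByteField("ecid_ext", 0),
        XShortField("type", 0x0800),           # EtherType of the inner packet
    ]

bind_layers(Ether, Dot1BR, type=0x893f)
bind_layers(Dot1BR, IP, type=0x0800)

# Example: 802.1BR packet with broadcast MAC and E-tag id 1000, as used below.
pkt = Ether(dst="ff:ff:ff:ff:ff:ff", type=0x893f) / Dot1BR(ecid_base=1000) \
      / IP() / UDP() / Raw(b"X" * 20)
sendp(pkt, iface="eth9")   # "eth9" is an assumed tester interface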
Test Case 2: E-tag filter¶
Enable E-tag packet forwarding and add an E-tag filter on VF0. Send an 802.1BR packet with a broadcast MAC and check that the packet is received only on VF0:
testpmd> E-tag set forwarding on port 0
testpmd> E-tag set filter add e-tag-id 1000 dst-pool 0 port 0
Add the same E-tag forwarding to VF1. Send an 802.1BR packet with a broadcast MAC and check that the packet is received only on VF1:
testpmd> E-tag set filter add e-tag-id 1000 dst-pool 1 port 0
Add the same E-tag forwarding to PF0. Send an 802.1BR packet with a broadcast MAC and check that the packet is received only on the PF:
testpmd> E-tag set filter add e-tag-id 1000 dst-pool 2 port 0
Remove the E-tag filter. Send an 802.1BR packet with a broadcast MAC and check that the packet is not received:
testpmd> E-tag set filter del e-tag-id 1000 port 0
Test Case 3: E-tag insertion¶
Enable E-tag insertion on VF0, send a normal packet to VF1 and check that the forwarded packet contains an E-tag:
testpmd> E-tag set insertion on port-tag-id 1000 port 0 vf 0
Test Case 4: E-tag strip¶
Enable E-tag stripping on the PF. Send an 802.1BR packet to the VF and check that the forwarded packet has no E-tag:
testpmd> E-tag set stripping on port 0
Disable E-tag stripping on the PF. Send an 802.1BR packet and check that the forwarded packet still carries the E-tag:
testpmd> E-tag set stripping off port 0
External Mempool Handler Tests¶
External Mempool Handler feature is an extension to the mempool API that allows users to add and use an alternative mempool handler, which allows external memory subsystems such as external hardware memory management systems and software based memory allocators to be used with DPDK.
Test Case 1: Multiple producers and multiple consumers¶
Change default mempool handler operations to “ring_mp_mc”
Start test app and verify mempool autotest passed:
test -n 4 -c f
RTE>> mempool_autotest
Start testpmd with two ports and start forwarding:
testpmd -c 0x6 -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> start
Send hundreds of packets from tester ports
verify forwarded packets sequence and integrity
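One way to perform the last two steps from the tester is to embed a sequence number in each payload and compare what comes back. A minimal Scapy sketch, assuming tester interfaces "eth9" (towards DUT port 0) and "eth10" (receiving from DUT port 1):

import time
from scapy.all import Ether, IP, UDP, Raw, sendp, AsyncSniffer

TX_IFACE, RX_IFACE = "eth9", "eth10"   # assumed tester interfaces
COUNT = 200

# Each packet carries its sequence number in the payload.
pkts = [Ether(dst="ff:ff:ff:ff:ff:ff") / IP(src="192.168.0.1", dst="192.168.0.2")
        / UDP(sport=1024, dport=1024) / Raw(("seq=%06d" % i).encode())
        for i in range(COUNT)]

sniffer = AsyncSniffer(iface=RX_IFACE, filter="udp and dst port 1024")
sniffer.start()
sendp(pkts, iface=TX_IFACE, inter=0.001)
time.sleep(2)                          # allow the forwarded packets to arrive
sniffer.stop()

# Verify sequence and integrity of the forwarded payloads.
seqs = [int(p[Raw].load[4:]) for p in sniffer.results if Raw in p]
assert len(seqs) == COUNT, "some packets were lost"
assert seqs == sorted(seqs), "packets arrived out of order"

The same check can be reused for the other mempool handler test cases below.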
Test Case 2: Single producer and Single consumer¶
Change default mempool operation to “ring_sp_sc”
Start test app and verify mempool autotest passed:
test -n 4 -c f
RTE>> mempool_autotest
Start testpmd with two ports and start forwarding:
testpmd -c 0x6 -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> start
Send hundreds of packets from tester ports
verify forwarded packets sequence and integrity
Test Case 3: Single producer and Multiple consumers¶
Change default mempool operation to “ring_sp_mc”
Start test app and verify mempool autotest passed:
test -n 4 -c f
RTE>> mempool_autotest
Start testpmd with two ports and start forwarding:
testpmd -c 0x6 -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> start
Send hundreds of packets from tester ports
verify forwarded packets sequence and integrity
Test Case 4: Multiple producers and single consumer¶
Change default mempool operation to “ring_mp_sc”
Start test app and verify mempool autotest passed:
test -n 4 -c f
RTE>> mempool_autotest
Start testpmd with two ports and start forwarding:
testpmd -c 0x6 -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> start
Send hundreds of packets from tester ports
verify forwarded packets sequence and integrity
Test Case 5: Stack mempool handler¶
Change default mempool operation to “stack”
Start test app and verify mempool autotest passed:
test -n 4 -c f
RTE>> mempool_autotest
Start testpmd with two ports and start forwarding:
testpmd -c 0x6 -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> start
Send hundreds of packets from tester ports
verify forwarded packets sequence and integrity
Niantic Flow Director Tests¶
Description¶
This document provides the plan for testing the Flow Director (FDir) feature of the Intel 82599 10GbE Ethernet Controller. FDir allows an application to add filters that identify specific flows (or sets of flows), by examining the VLAN header, IP addresses, port numbers, protocol type (IPv4/IPv6, UDP/TCP, SCTP), or a two-byte tuple within the first 64 bytes of the packet.
There are two types of filters:
- Perfect match filters, where there must be a match between the fields of received packets and the programmed filters.
- Signature filters, where there must be a match between a hash-based signature of the fields in the received packet and the programmed filters.
There is also support for global masks that affect all filters by masking out some fields, or parts of fields from the matching process.
Within DPDK, the FDir feature can be configured through the API in the
lib_ethdev library, and this API is used by the testpmd
application.
Note that RSS features can not be enabled at the same time as FDir.
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
The DUT has a dual port Intel 82599 10GbE Ethernet Controller, with one of these ports connected to a port on another device that is controlled by the Scapy packet generator.
The Ethernet interface identifier of the port that Scapy will use must be known. In all tests below, it is referred to as “eth9”.
The following packets should be created in Scapy. Any reasonable MAC address can be given but other fields must be as shown:
p_udp=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/IP(src="192.168.0.1",
dst="192.168.0.2")/UDP(sport=1024,dport=1024)
p_udp1=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/IP(src="192.168.1.1",
dst="192.168.1.2")/UDP(sport=0,dport=0)
p_tcp=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/IP(src="192.168.0.1",
dst="192.168.0.2")/TCP(sport=1024,dport=1024)
p_ip=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/IP(src="192.168.0.1",
dst="192.168.0.2")
p_ipv6_udp=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/
IPv6(src="2001:0db8:85a3:0000:0000:8a2e:0370:7000",
dst="2001:0db8:85a3:0000:0000:8a2e:0370:7338")/UDP(sport=1024,dport=1024)
p_udp_1=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/
IP(src="192.168.0.1", dst="192.168.0.1")/UDP(sport=1024,dport=1024)
p_udp_2=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")
/IP(src="192.168.0.15", dst="192.168.0.15")/UDP(sport=1024,dport=1024)
p_udp_3=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/
IP(src="192.168.0.1", dst="192.168.1.1")/UDP(sport=1024,dport=1024)
p_udp_4=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/
IP(src="10.11.12.1", dst="10.11.12.2")/UDP(sport=0x4400,dport=0x4500)
p_udp_5=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/
IP(src="10.11.12.1", dst="10.11.12.2")/UDP(sport=0x4411,dport=0x4517)
p_udp_6=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/
IP(src="10.11.12.1", dst="10.11.12.2")/UDP(sport=0x4500,dport=0x5500)
p_gre1=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/
IP(src="192.168.0.1", dst="192.168.0.2")/GRE(proto=0x1)/IP()/UDP()
p_gre2=Ether(src=get_if_hwaddr("eth9"), dst="00:1B:21:91:3D:2C")/
IP(src="192.168.0.1", dst="192.168.0.2")/GRE(proto=0xff)/IP()/UDP()
The test commands below assume that port 0 on the DUT is the port that is
connected to the traffic generator (for all fdir command lines, please see the documentation at http://www.dpdk.org/doc/guides/testpmd_app_ug/testpmd_funcs.html#filter-functions). If this is not the case, the following
testpmd
commands must be changed, and also the --portmask
parameter.
show port fdir <port>
add_perfect_filter <port>
add_signature_filter <port>
set_masks_filter <port>
rx_vlan add all <port>
Most of the tests below involve sending single packets from the generator and
checking whether the packets match the configured filter and go to the expected queue.
To see this, there must be multiple queues, set up by passing the following
command-line arguments: --nb-cores=2 --rxq=2 --txq=2. At run-time, the
forwarding mode must be set to rxonly, and the verbosity level > 0:
testpmd> set verbose 1
testpmd> set fwd rxonly
Test case: Setting memory reserved for FDir filters¶
Each FDir filter requires space in the Rx Packet Buffer (perfect filters require 32 B of space, and signature filters require 8 B of space). The total amount of memory - and therefore the number of concurrent filters - can be set when initializing FDir.
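The sub-cases below follow directly from this sizing; as a quick cross-check (perfect filters, 32 B each):

# Expected number of concurrent perfect filters per --pkt-filter-size setting
# (perfect filters occupy 32 B each; signature filters would occupy 8 B each).
PERFECT_FILTER_SIZE = 32
for kib in (64, 128, 256):
    print("%d KB -> %d perfect filters" % (kib, kib * 1024 // PERFECT_FILTER_SIZE))
# 64 KB -> 2048, 128 KB -> 4096, 256 KB -> 8192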
Sub-case: Reserving 64 KB¶
Start the testpmd
application as follows:
./testpmd -c 0xf -- -i --portmask=0x1 --disable-rss --pkt-filter-mode=perfect --pkt-filter-size=64K
Check with the show port fdir
command that the amount of FDIR filters that
are free to be used is equal to 2048 (2048 * 32B = 64KB).:
testpmd> show port fdir 0
######################## FDIR infos for port 0 ########################
collision: 0 free: 2048
maxhash: 0 maxlen: 0
add : 0 remove : 0
f_add: 0 f_remove: 0
########################################################################
Sub-case: Reserving 128 KB¶
Start the testpmd
application as follows:
./testpmd -c 0xf -- -i --portmask=0x1 --disable-rss --pkt-filter-mode=perfect --pkt-filter-size=128K
Check with the show port fdir
command that the amount of FDIR filters that
are free to be used is equal to 4096 (4096 * 32B = 128KB).:
testpmd> show port fdir 0
######################## FDIR infos for port 0 ########################
collision: 0 free: 4096
maxhash: 0 maxlen: 0
add : 0 remove : 0
f_add: 0 f_remove: 0
########################################################################
Sub-case: Reserving 256 KB¶
Start the testpmd
application as follows:
./testpmd -c 0xf -- -i --portmask=0x1 --disable-rss --pkt-filter-mode=perfect --pkt-filter-size=256K
Check with the show port fdir
command that the amount of FDIR filters that
are free to be used is equal to 8192 (8192 * 32B = 256KB).:
testpmd> show port fdir 0
######################## FDIR infos for port 0 ########################
collision: 0 free: 8192
maxhash: 0 maxlen: 0
add : 0 remove : 0
f_add: 0 f_remove: 0
########################################################################
Test case: Control levels of FDir match reporting¶
The status of FDir filter matching for each packet can be reported by the hardware through the RX descriptor of each received packet, and this information is copied into the packet mbuf, which can be examined by the application.
There are three different reporting modes, which can be set in testpmd using the
--pkt-filter-report-hash
command line argument:
Sub-case: --pkt-filter-report-hash=none
mode¶
In this FDir reporting mode, matches are never reported.
Start the testpmd
application as follows:
./testpmd -c 0xf -- -i --portmask=0x1 --nb-cores=2 --rxq=2 --txq=2
--disable-rss --pkt-filter-mode=perfect --pkt-filter-report-hash=none
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
Send the p_udp
packet with Scapy on the traffic generator and check that no
FDir information is printed:
testpmd> port 0/queue 0: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Add a perfect filter to match the p_udp
packet, and send the packet again.
No FDir information is printed, but it can be seen that the packet went to queue
1:
testpmd> add_perfect_filter 0 udp src 192.168.0.1 1024 dst 192.168.0.2 1024
flexbytes 0x800 vlan 0 queue 1 soft 0x14
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Sub-case: --pkt-filter-report-hash=match
mode¶
In this FDir reporting mode, FDir information is printed for packets that
match a filter.
Start the testpmd
application as follows:
./testpmd -c 0xf -- -i --portmask=0x1 --nb-cores=2 --rxq=2 --txq=2 --disable-rss
--pkt-filter-mode=perfect --pkt-filter-report-hash=match
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
Send the p_udp
packet with Scapy on the traffic generator and check that no
FDir information is printed:
testpmd> port 0/queue 0: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Add a perfect filter to match the p_udp
packet, and send the packet again.
This time, the match is indicated (PKT_RX_PKT_RX_FDIR
), and its details
(hash, id) are printed:
testpmd> add_perfect_filter 0 udp src 192.168.0.1 1024 dst 192.168.0.2 1024
flexbytes 0x800 vlan 0 queue 1 soft 0x14
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60
-nb_segs=1 - FDIR hash=0x43c - FDIR id=0x14
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Update the perfect filter to match the p_udp1
packet, and send the packet again.
This time, the match is indicated (PKT_RX_PKT_RX_FDIR
), and its details
(hash, id) are printed:
testpmd> add_perfect_filter 0 udp src 192.168.1.1 1024 dst 192.168.1.2 0
flexbytes 0x800 vlan 0 queue 1 soft 0x14
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60
-nb_segs=1 - FDIR hash=0x43c - FDIR id=0x14
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Remove the perfect filter that matches the p_udp1
and p_udp
packets, and send the packets again.
Check that no FDir information is printed:
testpmd> port 0/queue 0: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Sub-case: --pkt-filter-report-hash=always
mode¶
In this FDir reporting mode, FDir information is printed for every received
packet.
Start the testpmd
application as follows:
./testpmd -c 0xf -- -i --portmask=0x1 --nb-cores=2 --rxq=2 --txq=2 --disable-rss
--pkt-filter-mode=perfect --pkt-filter-report-hash=always
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
Send the p_udp
packet with Scapy on the traffic generator and check the
output (FDIR id=0x0):
testpmd> port 0/queue 0: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60
- nb_segs=1 - FDIR hash=0x43c - FDIR id=0x0
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Add a perfect filter to match the p_udp
packet, and send the packet again.
This time, the filter ID is different, and the packet goes to queue 1
testpmd> add_perfect_filter 0 udp src 192.168.0.1 1024 dst 192.168.0.2 1024
flexbytes 0x800 vlan 0 queue 1 soft 0x14
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60
- nb_segs=1 - FDIR hash=0x43c - FDIR id=0x14
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Test case: FDir signature matching mode¶
This test adds signature filters to the hardware, and then checks whether sent
packets match those filters. In order to do this, the packet should first be sent
from Scapy
before the filter is created, to verify that it is not matched by
a FDir filter. The filter is then added from the testpmd
command line and
the packet is sent again.
Launch the userland testpmd
application as follows:
./testpmd -c 0xf -- -i --portmask=1 --nb-cores=2 --rxq=2 --txq=2 --disable-rss
--pkt-filter-mode=signature
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
Send the p_udp
packet and verify that there is not a match. Then add the
filter and check that there is a match:
testpmd> add_signature_filter 0 udp src 192.168.0.1 1024 dst 192.168.0.2
1024 flexbytes 0x800 vlan 0 queue 1
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x143c - FDIR id=0xe230
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Send the p_tcp
packet and verify that there is not a match. Then add the
filter and check that there is a match:
testpmd> add_signature_filter 0 tcp src 192.168.0.1 1024 dst 192.168.0.2 1024
flexbytes 0x800 vlan 0 queue 1
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x1b47 - FDIR id=0xbd2b
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Send the p_ip
packet and verify that there is not a match. Then add the
filter and check that there is a match:
testpmd> add_signature_filter 0 ip src 192.168.0.1 0 dst 192.168.0.2 0 flexbytes 0x800 vlan 0 queue 1
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x1681 - FDIR id=0xf3ed
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Send the p_ipv6_udp
packet and verify that there is not a match. Then add the
filter and check that there is a match:
testpmd> add_signature_filter 0 udp src 2001:0db8:85a3:0000:0000:8a2e:0370:7000 1024
dst 2001:0db8:85a3:0000:0000:8a2e:0370:7338 1024 flexbytes 0x86dd vlan 0 queue 1
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x86dd - length=62 - nb_segs=1
- FDIR hash=0x4aa - FDIR id=0xea83
PKT_RX_PKT_RX_FDIR
PKT_RX_IPV6_HDR
Test case: FDir perfect matching mode¶
This test adds perfect-match filters to the hardware, and then checks whether
sent packets match those filters. In order to do this, the packet should first be
sent from Scapy
before the filter is created, to verify that it is not
matched by a FDir filter. The filter is then added from the testpmd
command
line and the packet is sent again.:
./testpmd -c 0xf -- -i --portmask=1 --nb-cores=2 --rxq=2 --txq=2 --disable-rss
--pkt-filter-mode=perfect
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
Send the p_udp
packet and verify that there is not a match. Then add the
filter and check that there is a match:
testpmd> add_perfect_filter 0 udp src 192.168.0.1 1024 dst 192.168.0.2 1024
flexbytes 0x800 vlan 0 queue 1 soft 0x14
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x43c - FDIR id=0x14
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Update the perfect filter to match the p_udp1
packet, send the packet, and check
that there is a match:
testpmd> add_perfect_filter 0 udp src 192.168.1.1 1024 dst 192.168.1.2 0
flexbytes 0x800 vlan 0 queue 1 soft 0x14
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60
-nb_segs=1 - FDIR hash=0x43c - FDIR id=0x14
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Remove the perfect filter that matches the p_udp1
and p_udp
packets, and send the packets again.
Check that no FDir information is printed:
testpmd> port 0/queue 0: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Send the p_tcp
packet and verify that there is not a match. Then add the
filter and check that there is a match:
testpmd> add_perfect_filter 0 tcp src 192.168.0.1 1024 dst 192.168.0.2 1024
flexbytes 0x800 vlan 0 queue 1 soft 0x15
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x347 - FDIR id=0x15
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Send the p_ip
packet and verify that there is not a match. Then add the
filter and check that there is a match:
testpmd> add_perfect_filter 0 ip src 192.168.0.1 0 dst 192.168.0.2 0
flexbytes 0x800 vlan 0 queue 1 soft 0x17
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x681 - FDIR id=0x17
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Test case: FDir filter masks¶
This section tests the functionality of setting FDir masks to affect
which fields, or parts of fields, are used in the matching process. Note that
setting up a mask resets all the FDir filters, so the testpmd
application
does not have to be relaunched for each sub-case.
Launch the userland testpmd
application:
./testpmd -c 0xf -- -i --portmask=1 --nb-cores=2 --rxq=2 --txq=2 --disable-rss
--pkt-filter-mode=perfect
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
Sub-case: IP address masking¶
Create the following IPv4 mask on port 0. This mask means the lower byte of the source and destination IP addresses will not be considered in the matching process:
testpmd> set_masks_filter 0 only_ip_flow 0 src_mask 0xffffff00 0xffff
dst_mask 0xffffff00 0xffff flexbytes 1 vlan_id 1 vlan_prio 1
Then, add the following perfect IPv4 filter:
testpmd> add_perfect_filter 0 udp src 192.168.0.0 1024 dst 192.168.0.0 1024
flexbytes 0x800 vlan 0 queue 1 soft 0x17
Then send the p_udp_1
, p_udp_2
, and p_udp_3
packets from Scapy. The
first two packets should match the masked filter, but the third packet will not,
as it differs in the second lowest IP address byte.:
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x6cf - FDIR id=0x17
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x6cf - FDIR id=0x17
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
port 0/queue 0: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
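The masking behaviour can be reproduced offline to confirm why p_udp_1 and p_udp_2 match while p_udp_3 does not; a minimal Python sketch using the addresses above:

from ipaddress import IPv4Address

SRC_MASK = DST_MASK = 0xffffff00            # lower address byte is ignored
FILTER_SRC = FILTER_DST = int(IPv4Address("192.168.0.0"))

def matches(src, dst):
    # A packet matches if its masked addresses equal the masked filter addresses.
    return (int(IPv4Address(src)) & SRC_MASK == FILTER_SRC & SRC_MASK and
            int(IPv4Address(dst)) & DST_MASK == FILTER_DST & DST_MASK)

print(matches("192.168.0.1", "192.168.0.1"))    # p_udp_1 -> True
print(matches("192.168.0.15", "192.168.0.15"))  # p_udp_2 -> True
print(matches("192.168.0.1", "192.168.1.1"))    # p_udp_3 -> False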
Sub-case: Port masking¶
Create the following mask on port 0. This mask means the lower byte of the source and destination ports will not be considered in the matching process:
testpmd> set_masks_filter 0 only_ip_flow 0 src_mask 0xffffffff 0xff00
dst_mask 0xffffffff 0xff00 flexbytes 1 vlan_id 1 vlan_prio 1
Then, add the following perfect IPv4 filter:
testpmd> add_perfect_filter 0 udp src 10.11.12.1 0x4400 dst 10.11.12.2 0x4500
flexbytes 0x800 vlan 0 queue 1 soft 0x4
Then send the p_udp_4
, p_udp_5
, and p_udp_6
packets from Scapy. The
first two packets should match the masked filter, but the third packet will not,
as it differs in higher byte of the port numbers.:
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x41d - FDIR id=0x4
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x41d - FDIR id=0x4
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
port 0/queue 0: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Sub-case: L4Type field masking¶
Create the following mask on port 0. This mask means that the L4type field of packets will not be considered. Note that in this case, the source and the destination port masks are irrelevant and must be set to zero:
testpmd> set_masks_filter 0 only_ip_flow 1 src_mask 0xffffffff 0x0
dst_mask 0xffffffff 0x0 flexbytes 1 vlan_id 1 vlan_prio 1
Then, add the following perfect IPv4 filter:
testpmd> add_perfect_filter 0 ip src 192.168.0.1 0 dst 192.168.0.2 0
flexbytes 0x800 vlan 0 queue 1 soft 0x42
Then send the p_udp
and p_tcp
packets from Scapy. Both packets will
match the filter:
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x681 - FDIR id=0x42
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=60 - nb_segs=1
- FDIR hash=0x681 - FDIR id=0x42
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Test case: FDir flexbytes
filtering¶
The FDir feature supports setting up filters that can match on any two-byte
field within the first 64 bytes of a packet. Which byte offset to use is
set by passing command line arguments to testpmd. In this test a value of
18 corresponds to the bytes at offset 36 and 37, as the offset is in 2-byte
units:
./testpmd -c 0xf -- -i --portmask=1 --nb-cores=2 --rxq=2 --txq=2 --disable-rss
--pkt-filter-mode=perfect --pkt-filter-flexbytes-offset=18
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
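The choice of offset can be cross-checked with Scapy: an offset value of 18 selects bytes 36 and 37 of the packet, which for p_gre1 and p_gre2 is the 2-byte GRE protocol field (0x0001 and 0x00ff), matching the flexbytes values used in the filters below. A small sketch, rebuilding the packets from the prerequisites (the source MAC is a placeholder):

from scapy.all import Ether, IP, UDP, GRE

p_gre1 = Ether(src="00:00:00:00:00:01", dst="00:1B:21:91:3D:2C") / \
         IP(src="192.168.0.1", dst="192.168.0.2") / GRE(proto=0x1) / IP() / UDP()
p_gre2 = Ether(src="00:00:00:00:00:01", dst="00:1B:21:91:3D:2C") / \
         IP(src="192.168.0.1", dst="192.168.0.2") / GRE(proto=0xff) / IP() / UDP()

offset = 18 * 2      # --pkt-filter-flexbytes-offset is given in 2-byte units
for name, pkt in (("p_gre1", p_gre1), ("p_gre2", p_gre2)):
    flex = bytes(pkt)[offset:offset + 2]
    print(name, "flexbytes = 0x%04x" % int.from_bytes(flex, "big"))
# p_gre1 flexbytes = 0x0001, p_gre2 flexbytes = 0x00ff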
Send the p_gre1
packet and verify that there is not a match. Then add the
filter and check that there is a match:
testpmd> add_perfect_filter 0 ip src 192.168.0.1 0 dst 192.168.0.2 0 flexbytes 0x1 vlan 0 queue 1 soft 0x1
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=66 - nb_segs=1
- FDIR hash=0x18b - FDIR id=0x1
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Send the p_gre2
packet and verify that there is not a match. Then add a
second filter and check that there is a match:
testpmd> add_perfect_filter 0 ip src 192.168.0.1 0 dst 192.168.0.2 0 flexbytes 0xff vlan 0 queue 1 soft 0xff
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=66 - nb_segs=1 - FDIR hash=0x3a1 - FDIR id=0xff
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Sub-case: flexbytes
FDir masking¶
A mask can also be applied to the flexbytes
filter:
testpmd> set_masks_filter 0 only_ip_flow 0 src_mask 0xffffffff 0xffff
dst_mask 0xffffffff 0xffff flexbytes 0 vlan_id 1 vlan_prio 1
Then, add the following perfect filter (same as first filter in prev. test), and
check that this time both packets match (p_gre1
and p_gre2
):
testpmd> add_perfect_filter 0 ip src 192.168.0.1 0 dst 192.168.0.2 0 flexbytes 0x0 vlan 0 queue 1 soft 0x42
testpmd> port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=66 - nb_segs=1 - FDIR hash=0x2f3 - FDIR id=0x42
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
port 0/queue 1: received 1 packets
src=00:1B:21:53:1F:14 - dst=00:1B:21:91:3D:2C - type=0x0800 - length=66 - nb_segs=1 - FDIR hash=0x2f3 - FDIR id=0x42
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Test case: FDir VLAN field filtering¶
Connect port 0 of the DUT to a traffic generator capable of sending packets with VLAN headers.
Then launch the testpmd
application, and enable VLAN packet reception:
./testpmd -c 0xf -- -i --portmask=1 --nb-cores=2 --rxq=2 --txq=2 --disable-rss --pkt-filter-mode=perfect
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> rx_vlan add all 0
testpmd> start
From the traffic generator, transmit a packet with the following details (a Scapy sketch of such a packet is shown after the list), and verify that it does not match any FDir filters:
- VLAN ID = 0x0FFF
- IP source address = 192.168.0.1
- IP destination address = 192.168.0.2
- UDP source port = 1024
- UDP destination port = 1024
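A Scapy version of this packet might look as follows; the MAC addresses are placeholders taken from the expected output below, and "eth9" is an assumed tester interface:

from scapy.all import Ether, Dot1Q, IP, UDP, sendp

pkt = Ether(src="00:00:03:00:03:00", dst="00:00:03:00:02:00") / \
      Dot1Q(vlan=0x0fff) / \
      IP(src="192.168.0.1", dst="192.168.0.2") / \
      UDP(sport=1024, dport=1024)
sendp(pkt, iface="eth9")   # assumed tester interface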
Then, add the following perfect VLAN filter, resend the packet and verify that it matches the filter:
testpmd> add_perfect_filter 0 udp src 192.168.0.1 1024 dst 192.168.0.2 1024
flexbytes 0x8100 vlan 0xfff queue 1 soft 0x47
testpmd> port 0/queue 1: received 1 packets
src=00:00:03:00:03:00 - dst=00:00:03:00:02:00 - type=0x0800 - length=64 - nb_segs=1
- FDIR hash=0x7e9 - FDIR id=0x47 - VLAN tci=0xfff
PKT_RX_VLAN_PKT
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Sub-case: VLAN field masking¶
First, set the following mask to disable the matching of the VLAN field, and add a perfect filter to match any VLAN identifier:
testpmd> set_masks_filter 0 only_ip_flow 0 src_mask 0xffffffff 0xffff
dst_mask 0xffffffff 0xffff flexbytes 1 vlan_id 0 vlan_prio 0
testpmd> add_perfect_filter 0 udp src 192.168.0.1 1024 dst 192.168.0.2 1024
flexbytes 0x8100 vlan 0 queue 1 soft 0x47
Then send the same packet as above, but with the VLAN field changed first to 0x001 and then to 0x017. The packets should still match the filter:
testpmd> port 0/queue 1: received 1 packets
src=00:00:03:00:03:00 - dst=00:00:03:00:02:00 - type=0x0800 - length=64 - nb_segs=1
- FDIR hash=0x7e8 - FDIR id=0x47 - VLAN tci=0x1
PKT_RX_VLAN_PKT
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
port 0/queue 1: received 1 packets
src=00:00:03:00:03:00 - dst=00:00:03:00:02:00 - type=0x0800 - length=64 - nb_segs=1
- FDIR hash=0x7e8 - FDIR id=0x47 - VLAN tci=0x17
PKT_RX_VLAN_PKT
PKT_RX_PKT_RX_FDIR
PKT_RX_IP_CKSUM
PKT_RX_IPV4_HDR
Test Case 1: test with ipv4 TOS, PROTO, TTL¶
start testpmd and initialize flow director flex payload configuration:
./testpmd -c fffff -n 4 -- -i --disable-rss --pkt-filter-mode=perfect --rxq=8 --txq=8 --nb-cores=8
testpmd> port stop 0
testpmd> flow_director_flex_payload 0 l2 (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
testpmd> flow_director_flex_payload 0 l3 (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
testpmd> flow_director_flex_payload 0 l4 (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
testpmd> flow_director_flex_mask 0 flow all (0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff)
testpmd> port start 0
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
Note:
assume FLEXBYTES = "0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88"
assume payload = "\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88"
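The FLEXBYTES value is simply the first 16 bytes of the payload written as a comma-separated hex list; a small sketch of the correspondence:

payload = b"\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88"

# Build the flexbytes argument used in the flow_director_filter commands.
flexbytes = "(" + ",".join("0x%02x" % b for b in payload[:16]) + ")"
print(flexbytes)
# (0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88)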
setup the fdir input set of IPv4:
testpmd> set_fdir_input_set 0 ipv4-other none select
testpmd> set_fdir_input_set 0 ipv4-other src-ipv4 add
testpmd> set_fdir_input_set 0 ipv4-other dst-ipv4 add
add ipv4-tos to fdir input set, set tos to 16 and 8:
testpmd> set_fdir_input_set 0 ipv4-other ipv4-tos add

setup flow director filter rules,
rule_1:
flow_director_filter 0 mode IP add flow ipv4-other src 192.168.1.1 dst 192.168.1.2 tos 16 proto 255 ttl 255 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 1 fd_id 1
rule_2:
flow_director_filter 0 mode IP add flow ipv4-other src 192.168.1.1 dst 192.168.1.2 tos 8 proto 255 ttl 255 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 2 fd_id 2
send packet to DUT,
packet_1:
'sendp([Ether(dst="%s")/IP(src="192.168.1.1", dst="192.168.1.2", tos=16, proto=255, ttl=255)/Raw(%s)], iface="%s")' % (dst_mac, payload, itf)
packet_1 should be received by queue 1.
packet_2:
'sendp([Ether(dst="%s")/IP(src="192.168.1.1", dst="192.168.1.2", tos=8, proto=255, ttl=255)/Raw(%s)], iface="%s")' % (dst_mac, payload, itf)
packet_2 should be received by queue 2.
- Delete rule_1, send packet_1 again, packet_1 should be received by queue 0.
- Delete rule_2, send packet_2 again, packet_2 should be received by queue 0.
add ipv4-proto to fdir input set, set proto to 253 and 254:
testpmd> set_fdir_input_set 0 ipv4-other ipv4-proto add
setup flow director filter rules rule_3:
flow_director_filter 0 mode IP add flow ipv4-other src 192.168.1.1 dst 192.168.1.2 tos 16 proto 253 ttl 255 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 3 fd_id 3
rule_4:
flow_director_filter 0 mode IP add flow ipv4-other src 192.168.1.1 dst 192.168.1.2 tos 8 proto 254 ttl 255 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 4 fd_id 4
send packet to DUT,
packet_3:
'sendp([Ether(dst="%s")/IP(src="192.168.1.1", dst="192.168.1.2", tos=16, proto=253, ttl=255)/Raw(%s)], iface="%s")' % (dst_mac, payload, itf)
packet_3 should be received by queue 3.
packet_4:
'sendp([Ether(dst="%s")/IP(src="192.168.1.1", dst="192.168.1.2", tos=8, proto=254, ttl=255)/Raw(%s)], iface="%s")' % (dst_mac, payload, itf)
packet_4 should be received by queue 4.
- Delete rule_3, send packet_3 again, packet_3 should be received by queue 0.
- Delete rule_4, send packet_4 again, packet_4 should be received by queue 0.
add ipv4-ttl to fdir input set, set ttl to 32 and 64:
testpmd> set_fdir_input_set 0 ipv4-other ipv4-ttl add
setup flow director filter rules, rule_5:
flow_director_filter 0 mode IP add flow ipv4-other src 192.168.1.1 dst 192.168.1.2 tos 16 proto 253 ttl 32 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 5 fd_id 5
rule_6:
flow_director_filter 0 mode IP add flow ipv4-other src 192.168.1.1 dst 192.168.1.2 tos 8 proto 254 ttl 64 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 6 fd_id 6
send packet to DUT,
packet_5:
'sendp([Ether(dst="%s")/IP(src="192.168.1.1", dst="192.168.1.2", tos=16, proto=253, ttl=32)/Raw(%s)], iface="%s")' % (dst_mac, payload, itf)
packet_5 should be received by queue 5.
packet_6:
'sendp([Ether(dst="%s")/IP(src="192.168.1.1", dst="192.168.1.2", tos=8, proto=254, ttl=64)/Raw(%s)], iface="%s")' % (dst_mac, payload, itf)
packet_6 should be received by queue 6.
- Delete rule_5, send packet_5 again, packet_5 should be received by queue 0.
- Delete rule_6, send packet_6 again, packet_6 should be received by queue 0.
remove all fdir entries:
testpmd> flush_flow_director 0
testpmd> show port fdir 0
Example:
flow_director_filter 0 mode IP add flow ipv4-other src 192.168.1.1 dst 192.168.1.2 tos 16 proto 255 ttl 255 vlan 0 flexbytes (0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88) fwd pf queue 1 fd_id 1
flow_director_filter 0 mode IP add flow ipv4-other src 192.168.1.1 dst 192.168.1.2 tos 8 proto 255 ttl 255 vlan 0 flexbytes (0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88) fwd pf queue 2 fd_id 2
sendp([Ether(src="00:00:00:00:00:01", dst="00:00:00:00:01:00")/IP(src="192.168.1.1", dst="192.168.1.2", tos=16, proto=255, ttl=255)/Raw(load="\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88")], iface="ens260f0")
sendp([Ether(src="00:00:00:00:00:01", dst="00:00:00:00:01:00")/IP(src="192.168.1.1", dst="192.168.1.2", tos=8, proto=255, ttl=255)/Raw(load="\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88")], iface="ens260f0")
Test Case 2: test with ipv6 tc, next-header, hop-limits¶
start testpmd and initialize flow director flex payload configuration:
./testpmd -c fffff -n 4 -- -i --disable-rss --pkt-filter-mode=perfect --rxq=8 --txq=8 --nb-cores=8
testpmd> port stop 0
testpmd> flow_director_flex_payload 0 l2 (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
testpmd> flow_director_flex_payload 0 l3 (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
testpmd> flow_director_flex_payload 0 l4 (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
testpmd> flow_director_flex_mask 0 flow all (0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff)
testpmd> port start 0
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
Note:
assume FLEXBYTES = "0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88"
assume payload = "\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88"
setup the fdir input set of IPv6:
testpmd> set_fdir_input_set 0 ipv6-other none select
testpmd> set_fdir_input_set 0 ipv6-other src-ipv6 add
testpmd> set_fdir_input_set 0 ipv6-other dst-ipv6 add
add ipv6-tc to fdir input set, set tc to 16 and 8:
testpmd> set_fdir_input_set 0 ipv6-other ipv6-tc add
setup flow director filter rules,
rule_1:
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 16 proto 255 ttl 64 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 1 fd_id 1
rule_2:
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 8 proto 255 ttl 64 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 2 fd_id 2
send packet to DUT,
packet_1:
'sendp([Ether(dst="%s")/IPv6(src="2000::1", dst="2000::2", tc=16, nh=255, hlim=64)/Raw(%s)], iface="%s")' \ %(dst_mac, payload, itf)
packet_1 should be received by queue 1.
packet_2:
'sendp([Ether(dst="%s")/IPv6(src="2000::1", dst="2000::2", tc=8, nh=255, hlim=64)/Raw(%s)], iface="%s")' \ %(dst_mac, payload, itf)
packet_2 should be received by queue 2.
- Delete rule_1, send packet_1 again, packet_1 should be received by queue 0.
- Delete rule_2, send packet_2 again, packet_2 should be received by queue 0.
add ipv6-next-header to fdir input set, set nh to 253 and 254:
testpmd> set_fdir_input_set 0 ipv6-other ipv6-next-header add
setup flow director filter rules, rule_3:
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 16 proto 253 ttl 255 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 3 fd_id 3
rule_4:
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 8 proto 254 ttl 255 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 4 fd_id 4
send packet to DUT,
packet_3:
'sendp([Ether(dst="%s")/IPv6(src="2000::1", dst="2000::2", tc=16, nh=253, hlim=64)/Raw(%s)], iface="%s")'\ %(dst_mac, payload, itf)
packet_3 should be received by queue 3.
packet_4:
'sendp([Ether(dst="%s")/IPv6(src="2000::1", dst="2000::2", tc=8, nh=254, hlim=64)/Raw(%s)], iface="%s")'\ %(dst_mac, payload, itf)
packet_4 should be received by queue 4.
- Delete rule_3, send packet_3 again, packet_3 should be received by queue 0.
- Delete rule_4, send packet_4 again, packet_4 should be received by queue 0.
add ipv6-hop-limits to fdir input set, set hlim to 32 and 64:
testpmd> set_fdir_input_set 0 ipv6-other ipv6-hop-limits add
setup flow director filter rules, rule_5:
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 16 proto 253 ttl 32 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 5 fd_id 5
rule_6:
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 8 proto 254 ttl 64 vlan 0 \ flexbytes (FLEXBYTES) fwd pf queue 6 fd_id 6
send packet to DUT,
packet_5:
'sendp([Ether(dst="%s")/IPv6(src="2000::1", dst="2000::2", tc=16, nh=253, hlim=32)/Raw(%s)], iface="%s")'\ %(dst_mac, payload, itf)
packet_5 should be received by queue 5.
packet_6:
'sendp([Ether(dst="%s")/IPv6(src="2000::1", dst="2000::2", tc=8, nh=254, hlim=64)/Raw(%s)], iface="%s")'\ %(dst_mac, payload, itf)
packet_6 should be received by queue 6.
- Delete rule_5, send packet_5 again, packet_5 should be received by queue 0.
- Delete rule_6, send packet_6 again, packet_6 should be received by queue 0.
remove all fdir entries:
testpmd> flush_flow_director 0
testpmd> show port fdir 0
Example:
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 16 proto 255 ttl 64 vlan 0 flexbytes (0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88) fwd pf queue 1 fd_id 1
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 8 proto 255 ttl 64 vlan 0 flexbytes (0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88) fwd pf queue 2 fd_id 2
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 16 proto 253 ttl 64 vlan 0 flexbytes (0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88) fwd pf queue 3 fd_id 3
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 8 proto 254 ttl 64 vlan 0 flexbytes (0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88) fwd pf queue 4 fd_id 4
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 16 proto 253 ttl 32 vlan 0 flexbytes (0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88) fwd pf queue 5 fd_id 5
flow_director_filter 0 mode IP add flow ipv6-other src 2000::1 dst 2000::2 tos 8 proto 254 ttl 48 vlan 0 flexbytes (0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88) fwd pf queue 6 fd_id 6
sendp([Ether(src="00:00:00:00:00:01", dst="00:00:00:00:01:00")/IPv6(src="2000::1", dst="2000::2", tc=16, nh=255, hlim=64)/Raw(load="\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88")], iface="ens260f0")
sendp([Ether(src="00:00:00:00:00:01", dst="00:00:00:00:01:00")/IPv6(src="2000::1", dst="2000::2", tc=8, nh=255, hlim=64)/Raw(load="\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88")], iface="ens260f0")
sendp([Ether(src="00:00:00:00:00:01", dst="00:00:00:00:01:00")/IPv6(src="2000::1", dst="2000::2", tc=16, nh=253, hlim=64)/Raw(load="\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88")], iface="ens260f0")
sendp([Ether(src="00:00:00:00:00:01", dst="00:00:00:00:01:00")/IPv6(src="2000::1", dst="2000::2", tc=8, nh=254, hlim=64)/Raw(load="\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88")], iface="ens260f0")
sendp([Ether(src="00:00:00:00:00:01", dst="00:00:00:00:01:00")/IPv6(src="2000::1", dst="2000::2", tc=16, nh=253, hlim=32)/Raw(load="\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88")], iface="ens260f0")
sendp([Ether(src="00:00:00:00:00:01", dst="00:00:00:00:01:00")/IPv6(src="2000::1", dst="2000::2", tc=8, nh=254, hlim=48)/Raw(load="\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88")], iface="ens260f0")
Test Case 3: test with ivlan (qinq does not work)¶
start testpmd and initialize flow director flex payload configuration:
./testpmd -c fffff -n 4 -- -i --disable-rss --pkt-filter-mode=perfect --rxq=8 --txq=8 --nb-cores=8
testpmd> port stop 0
testpmd> flow_director_flex_payload 0 l2 (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
testpmd> flow_director_flex_payload 0 l3 (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
testpmd> flow_director_flex_payload 0 l4 (0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15)
testpmd> flow_director_flex_mask 0 flow all (0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff)
testpmd> port start 0
testpmd> vlan set qinq on 0
testpmd> set verbose 1
testpmd> set fwd rxonly
testpmd> start
Note:
assume FLEXBYTES = "0x11,0x11,0x22,0x22,0x33,0x33,0x44,0x44,0x55,0x55,0x66,0x66,0x77,0x77,0x88,0x88"
assume payload = "\x11\x11\x22\x22\x33\x33\x44\x44\x55\x55\x66\x66\x77\x77\x88\x88"
setup the fdir input set:
testpmd> set_fdir_input_set 0 ipv4-udp none select
testpmd> set_fdir_input_set 0 ipv4-udp ivlan add
setup flow director filter rules,
rule_1:
flow_director_filter 0 mode IP add flow ipv4-udp src 192.168.1.1 1021 dst 192.168.1.2 1022 tos 16 ttl 255 \ vlan 1 flexbytes (FLEXBYTES) fwd pf queue 1 fd_id 1
rule_2:
flow_director_filter 0 mode IP add flow ipv4-udp src 192.168.1.1 1021 dst 192.168.1.2 1022 tos 16 ttl 255 \ vlan 15 flexbytes (FLEXBYTES) fwd pf queue 2 fd_id 2
rule_3:
flow_director_filter 0 mode IP add flow ipv4-udp src 192.168.1.1 1021 dst 192.168.1.2 1022 tos 16 ttl 255 \ vlan 255 flexbytes (FLEXBYTES) fwd pf queue 3 fd_id 3
rule_4:
flow_director_filter 0 mode IP add flow ipv4-udp src 192.168.1.1 1021 dst 192.168.1.2 1022 tos 16 ttl 255 \ vlan 4095 flexbytes (FLEXBYTES) fwd pf queue 4 fd_id 4
send packet to DUT,
packet_1:
'sendp([Ether(dst="%s")/Dot1Q(vlan=16)/Dot1Q(vlan=1)/IP(src="192.168.0.1",dst="192.168.0.2", tos=16, ttl=255)/UDP(sport=1021,dport=1022)/Raw(%s)], iface="%s")' % (dst_mac, payload, itf)
packet_1 should be received by queue 1.
packet_2:
'sendp([Ether(dst="%s")/Dot1Q(vlan=16)/Dot1Q(vlan=15)/IP(src="192.168.0.1",dst="192.168.0.2", tos=16, ttl=255)/UDP(sport=1021,dport=1022)/Raw(%s)], iface="%s")' % (dst_mac, payload, itf)
packet_2 should be received by queue 2.
packet_3:
'sendp([Ether(dst="%s")/Dot1Q(vlan=16)/Dot1Q(vlan=255)/IP(src="192.168.0.1",dst="192.168.0.2", tos=16, ttl=255)/UDP(sport=1021,dport=1022)/Raw(%s)], iface="%s")' % (dst_mac, payload, itf)
packet_3 should be received by queue 3.
packet_4:
'sendp([Ether(dst="%s")/Dot1Q(vlan=16)/Dot1Q(vlan=4095)/IP(src="192.168.0.1",dst="192.168.0.2", tos=16, ttl=255)/UDP(sport=1021,dport=1022)/Raw(%s)], iface="%s")' % (dst_mac, payload, itf)
packet_4 should be received by queue 4.
- Delete rule_1, send packet_1 again, packet_1 should be received by queue 0.
- Delete rule_2, send packet_2 again, packet_2 should be received by queue 0.
- Delete rule_3, send packet_3 again, packet_3 should be received by queue 0.
- Delete rule_4, send packet_4 again, packet_4 should be received by queue 0.
remove all fdir entries:
testpmd> flush_flow_director 0
testpmd> show port fdir 0
VEB Switch and floating VEB Tests¶
VEB Switching Introduction¶
IEEE EVB tutorial: http://www.ieee802.org/802_tutorials/2009-11 /evb-tutorial-draft-20091116_v09.pdf
Virtual Ethernet Bridge (VEB) - This is an IEEE EVB term. A VEB is a VLAN Bridge internal to Fortville that bridges the traffic of multiple VSIs over an internal virtual network.
Virtual Ethernet Port Aggregator (VEPA) - This is an IEEE EVB term. A VEPA multiplexes the traffic of one or more VSIs onto a single Fortville Ethernet port. The biggest difference between a VEB and a VEPA is that a VEB can switch packets internally between VSIs, whereas a VEPA cannot.
Virtual Station Interface (VSI) - This is an IEEE EVB term that defines the properties of a virtual machine’s (or a physical machine’s) connection to the network. Each downstream v-port on a Fortville VEB or VEPA defines a VSI. A standards-based definition of VSI properties enables network management tools to perform virtual machine migration and associated network re-configuration in a vendor-neutral manner.
In short, a VEB is an in-NIC switch (MAC/VLAN based), and it can support VF->VF, PF->VF and VF->PF packet forwarding through the NIC internal switch. It is similar to Niantic's SR-IOV switch.
Floating VEB Introduction¶
Floating VEB is based on VEB Switching. It will address 2 problems:
Dependency on the PF: when the physical port link is down, the VEB/VEPA will not work normally. Even when only data forwarding between the VFs is required, one PF port is wasted to create the related VEB.
Ensuring that all traffic from the VFs can only be forwarded within the VFs connected to the floating VEB, and cannot be forwarded to the outside world.
Prerequisites for VEB testing¶
Get the pci device id of DUT, for example:
./dpdk-devbind.py --st
0000:05:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens785f0 drv=i40e unused=
Host PF in kernel driver. Create 2 VFs from 1 PF with kernel driver, and set the VF MAC address at PF:
echo 2 > /sys/bus/pci/devices/0000\:05\:00.0/sriov_numvfs
./dpdk-devbind.py --st
0000:05:02.0 'XL710/X710 Virtual Function' unused=
0000:05:02.1 'XL710/X710 Virtual Function' unused=
ip link set ens785f0 vf 0 mac 00:11:22:33:44:11
ip link set ens785f0 vf 1 mac 00:11:22:33:44:12
Host PF in DPDK driver. Create 2 VFs from 1 PF with the DPDK driver:
./dpdk-devbind.py -b igb_uio 05:00.0
echo 2 > /sys/bus/pci/devices/0000:05:00.0/max_vfs
./dpdk-devbind.py --st
0000:05:02.0 'XL710/X710 Virtual Function' unused=i40evf,igb_uio
0000:05:02.1 'XL710/X710 Virtual Function' unused=i40evf,igb_uio
Bind the VFs to dpdk driver:
./tools/dpdk-devbind.py -b igb_uio 05:02.0 05:02.1
Reserve huge pages memory (before using DPDK):
echo 4096 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
Test Case1: Floating VEB inter VF-VF¶
Summary: 1 DPDK PF, then create 2 VFs. The PF in the host runs DPDK testpmd, and the VFs run DPDK testpmd. VF0 sends traffic with the packet's destination MAC set to VF1; check whether VF1 can receive the packets. Check the inter VF-VF MAC switch when the PF link is down as well as up.
Launch PF testpmd:
./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 05:00.0,enable_floating_veb=1 --file-prefix=test1 -- -i
In the host, run testpmd with floating parameters and make the link down:
./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 05:00.0,enable_floating_veb=1 --file-prefix=test1 -- -i
testpmd> port start all
testpmd> show port info all
In VM1, run testpmd:
./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 05:02.0 --file-prefix=test2 -- -i --crc-strip
testpmd> mac_addr add 0 vf1_mac_address
testpmd> set fwd rxonly
testpmd> start
testpmd> show port stats all
In VM2, run testpmd:
./testpmd -c 0xf00 -n 4 --socket-mem 1024,1024 -w 05:02.1 --file-prefix=test3 -- -i --crc-strip --eth-peer=0,vf1_mac_address
testpmd> set fwd txonly
testpmd> start
testpmd> show port stats all
Check that VF1 can get all the packets and that the packet content is not corrupted. RX-packets should equal TX-packets, though there may be a few RX-errors. The PF receives no packets.
Set “testpmd> port stop all” and “testpmd> start” in step 2, then run steps 3-4 again. The result should be the same.
Test Case2: Floating VEB PF can’t get traffic from VF¶
DPDK PF, then create 1 VF. The PF in the host runs DPDK testpmd. Send traffic from the PF to VF0: VF0 can't receive any packets. Send traffic from VF0 to the PF: the PF can't receive any packets either.
In host, launch testpmd:
./testpmd -c 0x3 -n 4 -w 82:00.0,enable_floating_veb=1 -- -i
testpmd> set fwd rxonly
testpmd> port start all
testpmd> start
testpmd> show port stats all
In VM1, run testpmd:
./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
testpmd> set fwd txonly
testpmd> start
testpmd> show port stats all
Check if PF can not get any packets, so VF1->PF is not working.
Set “testpmd> port stop all” in step2, then run the test case again. Same result.
Test Case3: Floating VEB VF can’t receive traffic from outside world¶
DPDK PF, then create 1VF, send traffic from tester to VF1, in floating mode, check VF1 can’t receive traffic from tester.
Start VM1 with VF1, see the prerequisite part.
In host, launch testpmd:
./testpmd -c 0x3 -n 4 -w 82:00.0,enable_floating_veb=1 -- -i
testpmd> set fwd mac
testpmd> port start all
testpmd> start
testpmd> show port stats all
In VM1, run testpmd:
./testpmd -c 0x3 -n 4 -- -i
testpmd> show port info all    //get VF_mac_address
testpmd> set fwd rxonly
testpmd> start
testpmd> show port stats all
In tester, run scapy:
packet=Ether(dst="VF_mac_address")/IP()/UDP()/Raw('x'*20) sendp(packet,iface="enp132s0f0")
Check if VF1 can not get any packets, so tester->VF1 is not working.
Set “testpmd> port stop all” in step 2 in the host, then run the test case again. Same result: the PF can’t receive any packets.
Test Case4: Floating VEB VF can not communicate with legacy VEB VF¶
Summary: DPDK PF, then create 4 VFs and 4 VMs. VF1, VF3 and VF4 are in the floating VEB; VF2 is in the legacy VEB. Make the PF link down (the cable can be pulled out); the VFs in the VMs are running DPDK testpmd.
- VF1 send traffic, and set the packet’s DEST MAC to VF2, check VF2 can not receive the packets.
- VF1 send traffic, and set the packet’s DEST MAC to VF3, check VF3 can receive the packets.
- VF4 send traffic, and set the packet’s DEST MAC to VF3, check VF3 can receive the packets.
- VF2 send traffic, and set the packet’s DEST MAC to VF1, check VF1 can not receive the packets.
Check Inter-VM VF-VF MAC switch when PF is link down as well as up.
Launch PF testpmd:
./testpmd -c 0x3 -n 4
-w "82:00.0,enable_floating_veb=1,floating_veb_list=0;2-3" -- -i
Start VM1 with VF1, VM2 with VF2, VM3 with VF3, VM4 with VF4; see the prerequisite part.
In the host, run testpmd with floating parameters and make the link down:
./testpmd -c 0x3 -n 4 -w "82:00.0,enable_floating_veb=1,floating_veb_list=0;2-3" -- -i //VF1 and VF3 in floating VEB, VF2 in legacy VEB testpmd> port stop all //this step should be executed after vf running testpmd. testpmd> show port info all
VF1 send traffic, and set the packet’s DEST MAC to VF2, check VF2 can not receive the packets.
In VM2, run testpmd:
./testpmd -c 0x3 -n 4 -- -i
testpmd> set fwd rxonly
testpmd> mac_addr add 0 vf2_mac_address    //set the vf2_mac_address
testpmd> start
testpmd> show port stats all
In VM1, run testpmd:
./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,vf2_mac_address
testpmd> set fwd txonly
testpmd> start
testpmd> show port stats all
Check VF2 can not get any packets, so VF1->VF2 is not working.
VF1 send traffic, and set the packet’s DEST MAC to VF3, check VF3 can receive the packets.
In VM3, run testpmd:
./testpmd -c 0x3 -n 4 -- -i
testpmd> set fwd rxonly
testpmd> show port info all    //get the vf3_mac_address
testpmd> start
testpmd> show port stats all
In VM1, run testpmd:
./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,vf3_mac_address
testpmd> set fwd txonly
testpmd> start
testpmd> show port stats all

Check that VF3 can get all the packets and that the packet content is not corrupted, so VF1->VF3 is working.
VF2 send traffic, and set the packet’s DEST MAC to VF1, check VF1 can not receive the packets.
In VM1, run testpmd:
./testpmd -c 0x3 -n 4 -- -i
testpmd> set fwd rxonly
testpmd> show port info all    //get the vf1_mac_address
testpmd> start
testpmd> show port stats all
In VM2, run testpmd:
./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,vf1_mac_address
testpmd> set fwd txonly
testpmd> start
testpmd> show port stats all
Check VF1 can not get any packets, so VF2->VF1 is not working.
Set “testpmd> port start all” and “testpmd> start” in step 2, then run steps 3-5 again. The result should be the same.
Test Case5: PF interaction with Floating VF and legacy VF¶
DPDK PF, then create 2VFs, VF0 is in floating VEB, VF1 is in legacy VEB.
Send traffic from VF0 to PF, then check PF will not see any traffic;
Send traffic from VF1 to PF, then check PF will receive all the packets.
send traffic from tester to VF0, check VF0 can’t receive traffic from tester.
send traffic from tester to VF1, check VF1 can receive all the traffic from tester.
In host, launch testpmd:
./testpmd -c 0x3 -n 4 -w 82:00.0,enable_floating_veb=1,floating_veb_list=0 -- -i
testpmd> set fwd rxonly
testpmd> port start all
testpmd> start
testpmd> show port stats all
In VF1, run testpmd:
./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
testpmd> set fwd txonly
testpmd> start
testpmd> show port stats all
Check PF can not get any packets, so VF1->PF is not working.
In VF2, run testpmd:
./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
testpmd> set fwd txonly
testpmd> start
testpmd> show port stats all

Check that the PF can get all the packets, so VF2->PF is working.
Set “testpmd> port stop all” in step2 in Host, then run the test case again. same result.
In host, launch testpmd:
./testpmd -c 0x3 -n 4 -w 82:00.0,enable_floating_veb=1,floating_veb_list=0 -- -i
testpmd> set fwd mac
testpmd> port start all
testpmd> start
testpmd> show port stats all
In VF1, run testpmd:
./testpmd -c 0x3 -n 4 -- -i
testpmd> show port info all    //get VF1_mac_address
testpmd> set fwd rxonly
testpmd> start
testpmd> show port stats all
In tester, run scapy:
packet=Ether(dst="VF1_mac_address")/IP()/UDP()/Raw('x'*20) sendp(packet,iface="enp132s0f0")Check VF1 can not get any packets, so tester->VF1 is not working.
In VF2, run testpmd:
./testpmd -c 0x3 -n 4 -- -i
testpmd> show port info all    //get VF2_mac_address
testpmd> set fwd rxonly
testpmd> start
testpmd> show port stats all
In tester, run scapy:
packet=Ether(dst="VF2_mac_address")/IP()/UDP()/Raw('x'*20) sendp(packet,iface="enp132s0f0")Check VF1 can get all the packets, so tester->VF2 is working.
- Set “testpmd> port stop all” in step 2 in the host, then run the test case again. VF1 and VF2 cannot receive any packets (because the PF link is down, the PF cannot receive any packets, so VF2 cannot receive any packets either).
Fortville Granularity Configuration of RSS and 32-bit GRE key Tests¶
Description¶
This document provides test plan for testing the function of Fortville:
Support granularity configuration of RSS
By default Fortville uses a hash input set preloaded from the NVM image, which includes all fields (IPv4/v6 + TCP/UDP port). The potential problem with this is that it is a global, per-device configuration and can affect all ports. It is required that the hash input set be configurable, for example IPv4 only, IPv6 only, or IPv4/v6 + TCP/UDP.
support 32-bit GRE keys
By default Fortville extracts only 24 bits of the GRE key to the FieldVector (NVGRE use case), but for Telco use cases the full 32-bit GRE key is needed. It is required that both 24-bit and 32-bit GRE keys be supported. The test plan is to test the API that switches between 24-bit and 32-bit keys.
Prerequisites¶
- Hardware:
- 1x Fortville_eagle NIC (4x 10G)
- 1x Fortville_spirit NIC (2x 40G)
- 2x Fortville_spirit_single NIC (1x 40G)
- software:
Test Case 1: test with flow type ipv4-tcp¶
config testpmd on DUT
set up testpmd with Fortville NICs:
./testpmd -c 0x1ffff -n 4 -- -i --coremask=0x1fffe --portmask=0x3 --rxq=16 --txq=16 --tx-offloads=0x8fff
Reta Configuration(optional, if not set, will use default):
testpmd> port config 0 rss reta (hash_index,queue_id)
PMD fwd only receive the packets:
testpmd> set fwd rxonly
rss received package type configuration:
testpmd> port config all rss tcp
set hash function:
testpmd>set_hash_global_config 0 toeplitz ipv4-tcp enable
verbose configuration:
testpmd> set verbose 8
start packet receive:
testpmd> start
using scapy to send packets with ipv4-tcp on tester:
sendp([Ether(dst="%s")/IP(src="192.168.0.%d", dst="192.168.0.%d")/TCP(sport=1024,dport=1025)], iface="%s")
Then record the hash value and queue value output by testpmd on the DUT.
set hash input set to “none” by testpmd on dut:
testpmd> set_hash_input_set 0 ipv4-tcp none select
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the values should be different from the values in step 2.
set hash input set by testpmd on dut, enable src-ipv4 & dst-ipv4:
testpmd> set_hash_input_set 0 ipv4-tcp src-ipv4 add
testpmd> set_hash_input_set 0 ipv4-tcp dst-ipv4 add
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the values should be different from the values in step 2.
set hash input set by testpmd on dut, enable src-ipv4, dst-ipv4, tcp-src-port, tcp-dst-port:
testpmd> set_hash_input_set 0 ipv4-tcp tcp-src-port add
testpmd> set_hash_input_set 0 ipv4-tcp tcp-dst-port add
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the values should be different from the values in step 3 and step 4, and the same as in step 2.
set hash input set by testpmd on dut, enable tcp-src-port, tcp-dst-port:
testpmd> set_hash_input_set 0 ipv4-tcp none select
testpmd> set_hash_input_set 0 ipv4-tcp tcp-src-port add
testpmd> set_hash_input_set 0 ipv4-tcp tcp-dst-port add
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the values should be different from the values in step 2, step 3, step 4 and step 5.
So it is proved that with flow type ipv4-tcp, the RSS hash can be calculated from only the IPv4 fields, only the TCP fields, or both the IPv4 and TCP fields.
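As an illustration only (not part of the official steps), the following hedged scapy sketch sends two ipv4-tcp packets that differ only in the TCP source port; the destination MAC address and tester interface name are placeholders. With an input set that excludes the TCP ports, both packets should show the same RSS hash in the testpmd verbose output, while an input set that includes the ports should yield different hashes.
from scapy.all import Ether, IP, TCP, sendp

dut_mac = "00:00:00:00:01:00"   # placeholder: MAC address of the DUT port under test
iface = "enp132s0f0"            # placeholder: tester interface connected to the DUT

# Same IPv4 addresses, different TCP source ports.
pkts = [Ether(dst=dut_mac)/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=sport, dport=1025)
        for sport in (1024, 2048)]
sendp(pkts, iface=iface)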
Test Case 2: test with flow type ipv4-udp¶
config testpmd on DUT
set up testpmd with Fortville NICs:
./testpmd -c 0x1ffff -n 4 -- -i --coremask=0x1fffe --portmask=0x3 --rxq=16 --txq=16 --tx-offloads=0x8fff
Reta Configuration(optional, if not set, will use default):
testpmd> port config 0 rss reta (hash_index,queue_id)
PMD fwd only receive the packets:
testpmd> set fwd rxonly
rss received package type configuration:
testpmd> port config all rss udp
set hash function:
testpmd>set_hash_global_config 0 toeplitz ipv4-udp enable
verbose configuration:
testpmd> set verbose 8
start packet receive:
testpmd> start
using scapy to send packets with ipv4-udp on tester:
sendp([Ether(dst="%s")/IP(src="192.168.0.%d", dst="192.168.0.%d")/UDP(sport=1024,dport=1025)], iface="%s"))
then got hash value and queue value that output from the testpmd on DUT.
set hash input set to “none” by testpmd on dut:
testpmd> set_hash_input_set 0 ipv4-udp none select
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the values should be different from the values in step 2.
set hash input set by testpmd on dut, enable src-ipv4 and dst-ipv4:
testpmd> set_hash_input_set 0 ipv4-udp src-ipv4 add
testpmd> set_hash_input_set 0 ipv4-udp dst-ipv4 add
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the values should be different from the values in step 2 and step 3.
set hash input set by testpmd on dut, enable src-ipv4, dst-ipv4, udp-src-port, udp-dst-port:
testpmd> set_hash_input_set 0 ipv4-udp udp-src-port add
testpmd> set_hash_input_set 0 ipv4-udp udp-dst-port add
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the values should be different from the values in step 3 and step 4, and the same as in step 2.
set hash input set by testpmd on dut, enable udp-src-port, udp-dst-port:
testpmd> set_hash_input_set 0 ipv4-udp none select
testpmd> set_hash_input_set 0 ipv4-udp udp-src-port add
testpmd> set_hash_input_set 0 ipv4-udp udp-dst-port add
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the values should be different from the values in step 2, step 3, step 4 and step 5.
So it is proved that with flow type ipv4-udp, the RSS hash can be calculated from only the IPv4 fields, only the UDP fields, or both the IPv4 and UDP fields.
Test Case 3: test with flow type ipv6-tcp¶
The test method is the same as in Test Case 1, but all ipv4 must be changed to ipv6, and scapy is used to send ipv6-tcp packets on the tester:
sendp([Ether(dst="%s")/IPv6(src="3ffe:2501:200:1fff::%d", dst="3ffe:2501:200:3::%d")/TCP(sport=1024,dport=1025)], iface="%s")
The test result should be the same as in Test Case 1.
Test Case 4: test with flow type ipv6-udp¶
The test method is the same as in Test Case 2, but all ipv4 must be changed to ipv6, and scapy is used to send ipv6-udp packets on the tester:
sendp([Ether(dst="%s")/IPv6(src="3ffe:2501:200:1fff::%d", dst="3ffe:2501:200:3::%d")/UDP(sport=1024,dport=1025)], iface="%s")
The test result should be the same as in Test Case 2.
Test Case 5: test dual vlan(QinQ)¶
config testpmd on DUT
set up testpmd with Fortville NICs:
./testpmd -c 0x1ffff -n 4 -- -i --coremask=0x1fffe --portmask=0x3 --rxq=16 --txq=16 --tx-offloads=0x8fff
set qinq on:
testpmd> vlan set qinq on <port_id>
Reta Configuration(optional, if not set, will use default):
testpmd> port config 0 rss reta (hash_index,queue_id)
PMD fwd only receive the packets:
testpmd> set fwd rxonly
verbose configuration:
testpmd> set verbose 8
start packet receive:
testpmd> start
rss received package type configuration:
testpmd> port config all rss ether
using scapy to send packets with dual vlan (QinQ) on tester:
sendp([Ether(dst="%s")/Dot1Q(id=0x8100,vlan=%s)/Dot1Q(id=0x8100,vlan=%s)], iface="%s")
Then record the hash value and queue value output by testpmd on the DUT.
set hash input set to “none” by testpmd on dut:
testpmd> set_hash_input_set 0 l2_payload none select
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the value should be the same as the values in step 2.
set hash input set by testpmd on dut, enable ovlan field:
testpmd> set_hash_input_set 0 l2_payload ovlan add
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the value should be different from the values in step 2.
set hash input set by testpmd on dut, enable ovlan, ivlan field:
testpmd> set_hash_input_set 0 l2_payload ivlan add
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the value should be different from the values in step 2.
Test Case 6: 32-bit GRE keys and 24-bit GRE keys test¶
config testpmd on DUT
set up testpmd with Fortville NICs:
./testpmd -c 0x1ffff -n 4 -- -i --coremask=0x1fffe --portmask=0x3 --rxq=16 --txq=16 --tx-offloads=0x8fff
Reta Configuration(optional, if not set, will use default):
testpmd> port config 0 rss reta (hash_index,queue_id)
PMD fwd only receive the packets:
testpmd> set fwd rxonly
rss received package type configuration:
testpmd> port config all rss all
set hash function:
testpmd>set_hash_global_config 0 toeplitz ipv4-other enable
verbose configuration:
testpmd> set verbose 8
start packet receive:
testpmd> start
using scapy to send packets with GRE header on tester:
sendp([Ether(dst="%s")/IP(src="192.168.0.1",dst="192.168.0.2",proto=47)/GRE(key_present=1,proto=2048,key=67108863)/IP()], iface="%s")
Then record the hash value and queue value output by testpmd on the DUT.
set hash input set to “none” by testpmd on dut:
testpmd> set_hash_input_set 0 ipv4-other none select
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the value should be different from the values in step 2.
set hash input set by testpmd on dut, enable src-ipv4, dst-ipv4:
testpmd> set_hash_input_set 0 ipv4-other src-ipv4 add
testpmd> set_hash_input_set 0 ipv4-other dst-ipv4 add
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the value should be the same as the values in step 2.
set hash input set and gre-key-len=3 by testpmd on dut, enable gre-key:
testpmd> global_config 0 gre-key-len 3
testpmd> set_hash_input_set 0 ipv4-other gre-key add
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the values should be different from the values in step 2.
set gre-key-len=4 by testpmd on dut, enable gre-key:
testpmd> global_config 0 gre-key-len 4
Send the packet as in step 2, and record the hash value and queue value output by testpmd on the DUT; the values should be different from the values in step 4.
So with gre-key-len=3 (24-bit GRE key) and gre-key-len=4 (32-bit GRE key), different RSS hash values and queue values are obtained, which proves that both 24-bit and 32-bit GRE keys are supported by Fortville.
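As an illustration only, the following hedged scapy sketch sends two GRE packets whose keys differ only in the low 8 bits; the destination MAC and tester interface are placeholders. Assuming the 24-bit mode extracts the NVGRE VSID (the upper 24 bits of the key), the two packets should report the same RSS hash with gre-key-len=3 and different hashes with gre-key-len=4.
from scapy.all import Ether, IP, GRE, sendp

dut_mac = "00:00:00:00:01:00"   # placeholder: MAC address of the DUT port under test
iface = "enp132s0f0"            # placeholder: tester interface connected to the DUT

# Two GRE keys that differ only in the least significant byte (the NVGRE FlowID).
keys = [0xFFFFFF00, 0xFFFFFF01]
pkts = [Ether(dst=dut_mac)/IP(src="192.168.0.1", dst="192.168.0.2", proto=47)
        /GRE(key_present=1, proto=2048, key=k)/IP()
        for k in keys]
sendp(pkts, iface=iface)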
FM10k FTAG Forwarding Tests¶
FM10000 supports the addition of a Fabric Tag (FTAG) to carry special information between Switches, between Switch and PCIe Host Interface or between Switch and Tunneling Engines. This tag is essential for a set of switches to behave like one switch (switch aggregation).
The FTAG is placed at the beginning of the frame. The case will validate packet forwarding function based on FTAG.
Prerequisites¶
Turn on CONFIG_RTE_LIBRTE_FM10K_FTAG_FWD in the common_linuxapp configuration file. Start up testpoint and export Port0’s and Port1’s GLORT IDs.
Get the port logic value from the MAC table information. Here is sample output from RubyRapid; from the output, port0’s logic value is 4122 and port1’s logic value is 4123:
<0>% show mac table all
MAC Mode FID1 FID2 Type Value Trig ...
------------------ --------- ---- ---- ------ ------ -----
00:00:00:00:01:01 Dynamic 1 NA Local 1 1
a0:36:9f:60:b6:6e Static 1 NA PF/VF 4506 1
a0:36:9f:60:b6:68 Static 1 NA PF/VF 4123 1
00:00:00:00:01:00 Dynamic 1 NA Local 1 1
a0:36:9f:60:b6:66 Static 1 NA PF/VF 4122 1
Get the port GLORT ID from the stacking information. Here is sample output from RubyRapid; logic port0’s GLORT ID is 0x4000 and logic port1’s GLORT ID is 0x4200:
show stacking logical-port all
<0>% show stacking logical-port all
SW GLORT LOGICAL PORT PORT TYPE
---- ----- --------------- ---------
...
0 0x4000 4122 ?
0 0x4200 4123 ?
Add port’s GLORT ID into environment variables:
export PORT1_GLORT=0x4200
export PORT0_GLORT=0x4000
Test Case: Ftag forwarding unit test¶
Port 0 is PCI 85:00.0 and port 1 is PCI 87:00.0; start the test application:
./x86_64-native-linuxapp-gcc/app/test -c f -n 4 -w 0000:85:00.0,enable_ftag=1 -w 0000:87:00.0,enable_ftag=1
Run FTAG test function:
RTE>>fm10k_ftag_autotest
Send one packet to Port0 and verify packet with ftag forwarded to Port1:
Receive 1 packets on port 0
test for FTAG RX passed
Send out 1 packets with FTAG on port 0
Receive 1 packets on port 1
test for FTAG TX passed
Test OK
Send one packet to Port1 and verify packet with ftag forwarded to Port0:
Receive 1 packets on port 0
test for FTAG RX passed
Send out 1 packets with FTAG on port 0
Receive 1 packets on port 1
test for FTAG TX passed
Test OK
Generic Filter Tests¶
Description¶
This document provides the plan for testing the generic filter feature of the 10GbE and 1GbE Ethernet Controllers. In testpmd, the app provides a Generic Filter API to manage filter rules for various kinds of packets, and calls the API to manage HW filters in HW, or SW filters in a SW table.
- A generic filter provides an ability to identify specific flows or sets of flows and routes them to dedicated queues.
- Based on the Generic Filter mechanism, all the SYN packets are placed in an assigned queue.
- Based on the Generic Filter mechanism, all packets belonging to L3/L4 flows are placed in a specific HW queue. Each filter consists of a 5-tuple (protocol, source and destination IP addresses, source and destination TCP/UDP/SCTP port) and routes packets into one of the Rx queues.
- L2 Ethertype Filters provide an ability to identify packets by their L2 Ethertype and assign them to receive queues. The testpmd app is used to test all types of HW filters. Cases 1~9 are functional tests for the above app, while cases 11-12 are performance tests for Niantic, I350, 82580 and 82576.
Prerequisites¶
Assuming that ports 0 and 1 are connected to a traffic generator’s port A and B.
Setup for testpmd¶
Launch the app testpmd with the following arguments:
./testpmd -c fffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=16 --nb-ports=2
The -n option selects the number of memory channels; it should match the number of memory channels on the setup. The rxq and txq values are 1 by default, so it is necessary to increase them and make sure both are greater than one. RSS is enabled by default, so disable it. Map port queues to statistic counter registers (Fortville does not support this function):
testpmd>set stat_qmap rx 0 0 0
testpmd>set stat_qmap rx 0 1 1
testpmd>set stat_qmap rx 0 2 2
testpmd>set stat_qmap rx 0 3 3
Setup for receive all packet and disable vlan strip function:
testpmd>vlan set strip off 0
testpmd>vlan set strip off 1
testpmd>vlan set filter off 0
testpmd>vlan set filter off 1
testpmd>set flush_rx on
Test Case 1: SYN filter¶
SYN filters route TCP packets with their SYN flag set into an assigned queue. By filtering such packets to an assigned queue, security software can monitor and act on SYN attacks.
Enable SYN filters with queue 2 on port 0:
testpmd> syn_filter 0 add priority high queue 2
Then setup for receive:
testpmd> start
Configure the traffic generator to send 5 SYN packets and 5 non-SYN packets. Read the stats for port 0 after sending the packets:
testpmd> stop
Verify that the packets are received (RX-packets incremented) on queue 2. Remove the SYN filter:
testpmd>syn_filter 0 del priority high queue 2
testpmd>start
Send 5 SYN packets, then read the stats for port 0 after sending the packets:
testpmd> stop
Verify that the packets are not received (RX-packets not increased) on queue 2, confirming that the SYN filter has been removed.
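As an illustration only, a hedged scapy sketch that can stand in for the traffic generator in this case; the destination MAC and interface names are placeholders. It sends 5 TCP packets with the SYN flag set and 5 without it.
from scapy.all import Ether, IP, TCP, sendp

dut_mac = "00:00:00:00:01:00"   # placeholder: MAC address of DUT port 0
iface = "enp132s0f0"            # placeholder: traffic generator / tester interface

syn_pkts     = [Ether(dst=dut_mac)/IP()/TCP(flags="S") for _ in range(5)]
non_syn_pkts = [Ether(dst=dut_mac)/IP()/TCP(flags="A") for _ in range(5)]
sendp(syn_pkts + non_syn_pkts, iface=iface)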
Test Case 2: 5-tuple Filter¶
This filter identifies specific L3/L4 flows or sets of L3/L4 flows and routes them to dedicated queues. Each filter consists of a 5-tuple (protocol, source and destination IP addresses, source and destination TCP/UDP/SCTP port) and routes packets into one of the Rx queues. The 5-tuple filters are configured via dst_ip, src_ip, dst_port, src_port, protocol and mask. This case supports two NIC types (niantic, 82576), and their command lines are different because the niantic and 82576 registers differ: for niantic the TCP flags do not need to be configured, so 0 is used; for 82576 the TCP flags must be configured, and the flags value means the packet is a SYN packet. Enable the 5-tuple filter with queue 3 on port 0 for niantic:
testpmd> 5tuple_filter 0 add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol 0x06 mask 0x1f tcp_flags 0x0 priority 3 queue 3
Enable the 5-tuple Filter with queue 3 on port 0 for 82576:
testpmd> 5tuple_filter 0 add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol 0x06 mask 0x1f flags 0x02 priority 3 queue 3
Then setup for receive:
testpmd> start
If the NIC type is niantic, send different types of packets, such as (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp) and ARP:
testpmd> stop
Verify that the packets (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp) or (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp, flags = 0x2) are received (RX-packets incremented) on queue 3. Remove the L3/L4 5-tuple filter:
testpmd> 5tuple_filter 0 del dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol 0x06 mask 0x1f flags 0x02 priority 3 queue 3
testpmd> start
Send packets (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp) or (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp flags = 0x2). Then read the stats for port 0 after sending the packets:
testpmd> stop
Verify that the packets are not received (RX-packets not increased) on queue 3. The mask is a 5-bit field that masks each of the fields in the 5-tuple (L4 protocol, IP addresses, TCP/UDP ports). If all 5-tuple fields are masked (mask = 0x0), the filter routes all IP packets to the assigned queue. For instance, enable the 5-tuple filter with queue 3 on port 0 for niantic, but with the mask set to 0x0:
testpmd> 5tuple_filter 0 add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol tcp mask 0x0 flags 0x0 priority 3 queue 3
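As an illustration only, a hedged scapy sketch of a packet matching the 5-tuple used above (dst_ip 2.2.2.5, src_ip 2.2.2.4, dst_port 1, src_port 1, protocol TCP) plus a non-matching ARP packet; the MAC address and interface name are placeholders.
from scapy.all import Ether, ARP, IP, TCP, sendp

dut_mac = "00:00:00:00:01:00"   # placeholder: MAC address of DUT port 0
iface = "enp132s0f0"            # placeholder: tester interface

# Matching packet; flags="S" also satisfies the 82576 variant (flags 0x02).
match = Ether(dst=dut_mac)/IP(src="2.2.2.4", dst="2.2.2.5")/TCP(sport=1, dport=1, flags="S")
other = Ether(dst=dut_mac)/ARP()
sendp([match, other], iface=iface)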
Test Case 3: ethertype filter¶
Enable the receipt of ARP packets with queue 2 on port 0:
testpmd> ethertype_filter 0 add ethertype 0x0806 priority disable 0 queue 2
Then setup for receive:
testpmd> start
Configure the traffic generator to send 15 ARP packets and 15 non ARP packets:
testpmd> stop
Verify that the ARP packets are received (RX-packets incremented) on queue 2. Remove the ethertype filter:
testpmd> ethertype_filter 0 del ethertype 0x0806 priority disable 0 queue 2
testpmd> start
Configure the traffic generator to send 15 ARP packets.
testpmd> stop
Also, you can change the value of priority to set a new filter, except for the case where the ethertype value is 0x0800 with priority enabled. The rest of the steps are the same.
For instance, enable priority filter(just support niantic):
testpmd> ethertype_filter 0 add ethertype 0x0806 priority enable 1 queue 2
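As an illustration only, a hedged scapy sketch for the ARP and non-ARP traffic used in this case; the MAC address and interface name are placeholders.
from scapy.all import Ether, ARP, IP, UDP, sendp

dut_mac = "00:00:00:00:01:00"   # placeholder: MAC address of DUT port 0
iface = "enp132s0f0"            # placeholder: tester interface

arp_pkts     = [Ether(dst=dut_mac)/ARP(pdst="192.168.0.1") for _ in range(15)]
non_arp_pkts = [Ether(dst=dut_mac)/IP()/UDP() for _ in range(15)]
sendp(arp_pkts + non_arp_pkts, iface=iface)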
Test Case 4: 10GB Multiple filters¶
Enable ethertype filter, SYN filter and 5-tuple Filter on the port 0 at same time. Assigning different filters to different queues on port 0:
testpmd> syn_filter 0 add priority high queue 1
testpmd> ethertype_filter 0 add ethertype 0x0806 priority disable 0 queue 3
testpmd> 5tuple_filter 0 add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol tcp mask 0x1f priority 3 queue 3
testpmd> start
Configure the traffic generator to send different packets, such as SYN packets, ARP packets, IP packets and packets with (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp):
testpmd> stop
Verify that the different packets are received (RX-packets incremented) on their assigned queues. Remove the ethertype filter:
testpmd> ethertype_filter 0 del ethertype 0x0806 priority disable 0 queue 3
testpmd>start
Send SYN packets, ARP packets and packets with (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp):
testpmd> stop
Verify that all packets except the ARP packets are received (RX-packets incremented) on their assigned queues. Remove the 5-tuple filter:
testpmd>5tuple_filter 0 del dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol tcp mask 0x1f priority 3 queue 3
testpmd> start
Send different packets, such as SYN packets, ARP packets, and packets with (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp):
testpmd>stop
Verify that only the SYN packets are received (RX-packets incremented) on their assigned queue. Remove the SYN filter:
testpmd>syn_filter 0 del priority high queue 1
testpmd>start
Configure the traffic generator to send 5 SYN packets:
testpmd>stop
Verify that the packets are not received (RX-packets not increased) on queue 1.
Test Case 5: 2-tuple filter¶
This case is designed for the NIC types I350 and 82580. Enable the receipt of UDP packets with queue 1 on port 0:
testpmd> 2tuple_filter 0 add protocol 0x11 1 dst_port 64 1 flags 0 priority 3 queue 1
Then setup for receive:
testpmd> start
Send 15 UDP packets (dst_port = 64, protocol = udp) and 15 non-UDP packets. Read the stats for port 0 after sending the packets:
testpmd> stop
Verify that the UDP packets are received (RX-packets incremented) on queue 1. Remove the 2-tuple filter:
testpmd> 2tuple_filter 0 del protocol 0x11 1 dst_port 64 1 flags 0 priority 3 queue 1
testpmd> start
Configure the traffic generator to send UDP packets (dst_port = 64, protocol = udp). Read the stats for port 0 after sending the packets:
testpmd> stop
Verify that the packets are not received (RX-packets not increased) on queue 1. Also, you can change the value of protocol, dst_port or flags to set a new filter; the rest of the steps are the same. For example:
Enable the receipt of UDP packets with queue 1 on port 1:
testpmd> 2tuple_filter 1 add protocol 0x011 1 dst_port 64 1 flags 0 priority 3 queue 2
Enable the receipt of TCP packets with flags on queue 1 of port 1:
testpmd> 2tuple_filter 1 add protocol 0x06 1 dst_port 64 1 flags 0x3F priority 3 queue 3
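As an illustration only, a hedged scapy sketch for the UDP destination-port case above (the 2-tuple filter matches protocol UDP and destination port 64); the MAC address and interface name are placeholders.
from scapy.all import Ether, IP, UDP, TCP, sendp

dut_mac = "00:00:00:00:01:00"   # placeholder: MAC address of DUT port 0
iface = "enp132s0f0"            # placeholder: tester interface

udp_match = [Ether(dst=dut_mac)/IP()/UDP(dport=64) for _ in range(15)]
non_udp   = [Ether(dst=dut_mac)/IP()/TCP(dport=64) for _ in range(15)]
sendp(udp_match + non_udp, iface=iface)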
Test Case 6: flex filter¶
This case is designed for the NIC types I350, 82576 and 82580. Enable the receipt of packets with the matching content on queue 1 of port 0:
testpmd> flex_filter 0 add len 16 bytes 0x0123456789abcdef0000000008060000 mask 000C priority 3 queue 1
If flex Filter is added successfully, it displays:
bytes[0]:01 bytes[1]:23 bytes[2]:45 bytes[3]:67 bytes[4]:89 bytes[5]:ab bytes[6]:cd bytes[7]:ef bytes[8]:00 bytes[9]:00 bytes[10]:00 bytes[11]:00 bytes[12]:08 bytes[13]:06 bytes[14]:00 bytes[15]:00
mask[0]:00 mask[1]:0c
Then setup for receive:
testpmd> start
Configure the traffic generator to send packets with the matching content and ARP packets. Read the stats for port 0 after sending the packets:
testpmd> stop
Verify that the ARP packets are received (RX-packets incremented) on queue 1. Remove the flex filter:
testpmd> flex_filter 0 del len 16 bytes 0x0123456789abcdef0000000008060000 mask 000C priority 3 queue 1
testpmd> start
Configure the traffic generator to send packets with the matching content. Read the stats for port 0 after sending the packets:
testpmd> stop
Verify that the packets are not received (RX-packets not increased) on queue 1. Also, you can change the value of length, content or mask to set a new filter; the rest of the steps are the same:
testpmd> flex_filter 0 add len 32 bytes 0x0123456789abcdef00000000080600000123456789abcdef0000000008060000 mask 000C000C priority 1 queue 2
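As an illustration only, a hedged scapy sketch of a frame whose first 16 bytes equal the byte pattern programmed above; whether it actually hits the filter depends on which byte positions the mask selects, so treat it as a starting point rather than a definitive matcher. The interface name is a placeholder.
from scapy.all import Ether, Raw, sendp

iface = "enp132s0f0"            # placeholder: tester interface

# First 16 bytes match 0x0123456789abcdef0000000008060000:
#   dst MAC 01:23:45:67:89:ab, src MAC cd:ef:00:00:00:00,
#   EtherType 0x0806 (ARP), first two payload bytes 0x00 0x00.
pkt = Ether(dst="01:23:45:67:89:ab", src="cd:ef:00:00:00:00", type=0x0806)/Raw(bytes(46))
sendp(pkt, iface=iface)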
Test Case 7: priority filter¶
This case is designed for NICs (niantic, I350, 82576 and 82580). If packets match different filters of the same type, the filter with the higher priority receives the packets. For example, if packets match two 5-tuple filters with different priorities, the filter with the higher priority receives the packets. If packets match filters of different types, the packets are routed based on the criteria above and in the following order: when the SYN filter priority is set high, the SYN filter has higher priority than the other filters, and the flex filter has higher priority than the 2-tuple filter. If the NIC is niantic, enable the 5-tuple filters:
testpmd> 5tuple_filter 0 add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol 0x06 mask 0x1f flags 0x0 priority 2 queue 2
testpmd> 5tuple_filter 0 add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 2 src_port 2 protocol 0x06 mask 0x18 flags 0x0 priority 3 queue 3
testpmd> start
Configure the traffic generator to send packets (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp).
testpmd> stop
Packets are received (RX-packets increased) on queue 2. Remove the 5-tuple filter with the higher priority:
testpmd>5tuple_filter 0 del dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol 0x06 mask 0x1f flags 0x0 priority 2 queue 2
testpmd> start
Configure the traffic generator to send packets (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp)
testpmd> stop
Packets are received (RX-packets increased) on queue 3. If the NIC is I350 or 82580, enable the 2-tuple and flex filters:
testpmd> flex_filter 0 add len 16 bytes 0x0123456789abcdef0000000008000000 mask 000C priority 2 queue 1
testpmd> 2tuple_filter 0 add protocol 0x11 1 dst_port 64 1 flags 0 priority 3 queue 2
testpmd> start
Configure the traffic generator to send packets (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 64 src_port = 1 protocol = udp).
testpmd> stop
Packets are received (RX-packets increased) on queue 2. Remove the 2-tuple filter with the higher priority:
testpmd> 2tuple_filter 0 del protocol 0x11 1 dst_port 64 1 flags 0 priority 3 queue 2
testpmd> start
Configure the traffic generator to send packets (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 64 src_port = 1 protocol = udp),
testpmd> stop
Packets are received (RX-packets increased) on queue 1. If the NIC is 82576, enable the SYN and 5-tuple filters:
testpmd>5tuple_filter 0 add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol 0x06 mask 0x1f flags 0x02 priority 3 queue 3
testpmd>syn_filter 0 add priority high queue 2
testpmd> start
Configure the traffic generator to send packets (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp flags = “S”).
testpmd>stop
Packets are received (RX-packets increased) on queue 2. Remove the SYN filter with the higher priority:
testpmd>syn_filter 0 del priority high queue 2
testpmd>start
Configure the traffic generator to send packets (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 64 src_port = 1 protocol = tcp flags = “S”).
testpmd> stop
Packets are received (RX-packets increased) on queue 3.
Test Case 8: 1GB Multiple filters¶
This case is designed for NICs (I350, 82576 and 82580). Enable the SYN filter and the ethertype filter on port 0 at the same time, assigning the different filters to different queues on port 0. Enable the filters:
testpmd> syn_filter 0 add priority high queue 1
testpmd> ethertype_filter 0 add ethertype 0x0806 priority disable 0 queue 3
testpmd> start
Configure the traffic generator to send ethertype packets and arp packets:
testpmd> stop
Then verify that the packets are received on queue 1 and queue 3. Remove all the filters:
testpmd> syn_filter 0 del priority high queue 1
testpmd> ethertype_filter 0 del ethertype 0x0806 priority disable 0 queue 3
Configure the traffic generator to send UDP packets and ARP packets. Then verify that the packets are not received on queue 1 and queue 3:
testpmd> quit
Test Case 9: jumbo framesize filter¶
This case is designed for NICs (niantic, I350, 82576 and 82580). Since testpmd can transmit packets with jumbo frame sizes, it can also transmit such packets on an assigned queue. Launch the app testpmd with the following arguments:
testpmd -c ffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2 --rxd=1024 --txd=1024 --burst=144 --txpt=32 --txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=200 --mbuf-size=2048 --max-pkt-len=9600
testpmd>set stat_qmap rx 0 0 0
testpmd>set stat_qmap rx 0 1 1
testpmd>set stat_qmap rx 0 2 2
testpmd>vlan set strip off 0
testpmd>vlan set strip off 1
testpmd>vlan set filter off 0
testpmd>vlan set filter off 1
Enable the SYN filter (jumbo frames will be sent):
testpmd> syn_filter 0 add priority high queue 1
testpmd> start
Configure the traffic generator to send SYN packets (framesize=2000):
testpmd> stop
Then verify that the packets are received on queue 1. Remove the filter:
testpmd> syn_filter 0 del priority high queue 1
Configure the traffic generator to send SYN packets. Then verify that the packets are not received on queue 1:
testpmd> quit
Test Case 10: 128 queues¶
This case is designed for the niantic NIC. Since the niantic NIC has 128 queues, it should support 128 filters if the hardware has enough cores. Launch the app testpmd with the following arguments:
./testpmd -c fffff -n 4 -- -i --disable-rss --rxq=128 --txq=128 --nb-cores=16 --nb-ports=2 --total-num-mbufs=60000
testpmd>set stat_qmap rx 0 0 0
testpmd>set stat_qmap rx 0 64 1
testpmd>set stat_qmap rx 0 127 2
testpmd>vlan set strip off 0
testpmd>vlan set strip off 1
testpmd>vlan set filter off 0
testpmd>vlan set filter off 1
Enable the 5-tuple Filters with different queues (64,127) on port 0 for niantic:
testpmd> 5tuple_filter 0 add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol 0x06 mask 0x1f flags 0x0 priority 3 queue 64 index 1
testpmd> 5tuple_filter 0 add dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 2 src_port 1 protocol 0x06 mask 0x1f flags 0x0 priority 3 queue 127 index 1
Send packets (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 1 src_port = 1 protocol = tcp) and (dst_ip = 2.2.2.5 src_ip = 2.2.2.4 dst_port = 2 src_port = 1 protocol = tcp). Then read the stats for port 0 after sending the packets; the packets are received on queue 64 and queue 127. When setting a 5-tuple filter with queue 128, it will report a failure because the queue number must be lower than 128.
Test Case 11: 10G NIC Performance¶
This case is designed for Niantic. It provides the performance data with and without generic filter:
Launch app without filter
./testpmd -c fffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=16 --nb-ports=2
testpmd> start
Send the packets stream from packet generator:
testpmd> quit
Enable the filters on app:
./testpmd -c fffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=16 --nb-ports=2
testpmd>set stat_qmap rx 0 0 0
testpmd>set stat_qmap rx 0 1 1
testpmd>set stat_qmap rx 0 2 2
testpmd>set stat_qmap rx 0 3 3
testpmd>set flush_rx on
testpmd> add_syn_filter 0 priority high queue 1
testpmd> add_ethertype_filter 0 ethertype 0x0806 priority disable 0 queue 2 index 1
testpmd> add_5tuple_filter 0 dst_ip 2.2.2.5 src_ip 2.2.2.4 dst_port 1 src_port 1 protocol 0x06 mask 0x1f flags 0x02 priority 3 queue 3 index 1
testpmd> start
Send the packets stream from packet generator:
testpmd> quit
Frame Size | disable filter | enable filter |
64 | ||
128 | ||
256 | ||
512 | ||
1024 | ||
1280 | ||
1518 |
Test Case 12: 1G NIC Performance¶
This case is designed for NIC (I350, 82580, and 82576). It provides the performance data with and without generic filter:
./testpmd -c fffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=16 --nb-ports=2
testpmd> start
Send the packets stream from packet generator:
testpmd> quit
Enable the filter:
./testpmd -c fffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=16 --nb-ports=2
testpmd>set stat_qmap rx 0 0 0
testpmd>set stat_qmap rx 0 1 1
testpmd>set stat_qmap rx 0 2 2
testpmd>set stat_qmap rx 0 3 3
testpmd>set flush_rx on
testpmd> add_syn_filter 0 priority high queue 1
testpmd> add_ethertype_filter 0 ethertype 0x0806 priority disable 0 queue 2 index 1
testpmd> start
Send the packets stream from packet generator:
testpmd> quit
Frame Size | disable filter | enable filter |
64 | ||
128 | ||
256 | ||
512 | ||
1024 | ||
1280 | ||
1518 |
DPDK Hotplug API Tests¶
This test for Hotplug API feature can be run on linux userspace. It will check if NIC port can be attached and detached without exiting the application process. Furthermore, it will check if it can reconfigure new configurations for a port after the port is stopped, and if it is able to restart with those new configurations. It is based on testpmd application.
The test is performed by running the testpmd application and using a traffic generator. Port configurations can be set interactively, and still be set at the command line when launching the application in order to be compatible with previous test framework.
Prerequisites¶
Assume DPDK manages at least one physical device, or none for the virtual-device cases. This feature only supports igb_uio now; uio_pci_generic support is on the way and will be tested after it is enabled.
To run the testpmd application in linuxapp environment with 4 lcores, 4 channels with other default parameters in interactive mode:
$ ./testpmd -c 0xf -n 4 -- -i
Test ENV:
- All test cases can be run on 32-bit and 64-bit platforms.
- OS support: Fedora, Ubuntu, RHEL, SUSE. FreeBSD is not included, as hotplug has no plan to support that platform.
- All kernel versions (from 2.6) are supported; vfio needs a kernel version greater than 3.6.
- Virtualization support: KVM/VMware/Xen; container support is on the roadmap.
Test Case 1: port detach & attach for physical devices with igb_uio¶
Start testpmd:
$ ./testpmd -c 0xf -n 4 -- -i
Bind new physical port to igb_uio(assume BDF 0000:02:00.0):
# ./tools/dpdk_nic_bind -b igb_uio 0000:02:00.0
Attach port 0:
run "port attach 0000:02:00.0" run "port start 0" run "show port info 0", check port 0 info display.
Check package forwarding when startup:
run "start", then "show port stats 0" check forwarding packages start. run "port detach 0", check the error message of port not stopped. run "stop", then "show port stats 0", check forwarding packages stopped.
Detach port 0 after port closed:
run "port stop 0" run "port close 0". run "port detach 0", check port detached successful.
Re-attach port 0(assume BDF 0000:02:00.0):
run "port attach 0000:02:00.0", run "port start 0". run "show port info 0", check port 0 info display.
Check package forwarding after re-attach:
run "start", then "show port stats 0" check forwarding packages start. run "port detach 0", check the error message of port not stopped. run "stop", then "show port stats 0", check forwarding packages stopped.
Test Case 2: port detach and attach for physical devices with vfio¶
Start testpmd:
$ ./testpmd -c 0xf -n 4 -- -i
Bind the new physical port to vfio-pci (assume BDF 0000:02:00.0):
# ./tools/dpdk_nic_bind -b vfio-pci 0000:02:00.0
Attach port 0 (assume BDF 0000:02:00.0):
run "port attach 0000:02:00.0"
run "port start 0"
run "show port info 0", check port 0 info display.
Detach port 0 after the port is closed:
run "port stop 0", then "show port stats 0", check the port stopped.
run "port close 0"
run "port detach 0", check the detach status (it should fail, as there is no detach support for vfio at the moment).
Test Case 3: port detach & attach for physical devices with uio_pci_generic¶
This case should be enabled after uio_pci_generic enabled for DPDK
Start testpmd:
$ ./testpmd -c 0xf -n 4 -- -i
Bind the new physical port to uio_pci_generic (assume BDF 0000:02:00.0):
# ./tools/dpdk_nic_bind -b uio_pci_generic 0000:02:00.0
Attach port 0 (assume BDF 0000:02:00.0):
run "port attach 0000:02:00.0"
run "port start 0"
run "show port info 0", check port 0 info display.
Check packet forwarding after startup:
run "start", then "show port stats 0", check packet forwarding starts.
run "port detach 0", check the error message that the port is not stopped.
run "stop", then "show port stats 0", check packet forwarding stopped.
Detach port 0 after the port is closed:
run "port stop 0"
run "port close 0"
run "port detach 0", check the port detached successfully.
Re-attach port 0 (assume BDF 0000:02:00.0):
run "port attach 0000:02:00.0"
run "port start 0"
run "show port info 0", check port 0 info display.
Check packet forwarding after re-attach:
run "start", then "show port stats 0", check packet forwarding starts.
run "port detach 0", check the error message that the port is not stopped.
run "stop", then "show port stats 0", check packet forwarding stopped.
Test Case 4: port detach & attach for physical devices with igb_uio¶
Bind the driver before testpmd is started; the port will start automatically.
Bind the new physical port to igb_uio (assume BDF 0000:02:00.0):
# ./tools/dpdk_nic_bind -b igb_uio 0000:02:00.0
Start testpmd:
$ ./testpmd -c 0xf -n 4 -- -i
Check packet forwarding after startup:
run "start", then "show port stats 0", check packet forwarding starts.
run "port detach 0", check the error message that the port is not stopped.
run "stop", then "show port stats 0", check packet forwarding stopped.
Detach port 0 after the port is closed:
run "port stop 0"
run "port close 0"
run "port detach 0", check the port detached successfully.
Re-attach port 0 (assume BDF 0000:02:00.0):
run "port attach 0000:02:00.0"
run "port start 0"
run "show port info 0", check port 0 info display.
Check packet forwarding after re-attach:
run "start", then "show port stats 0", check packet forwarding starts.
run "port detach 0", check the error message that the port is not stopped.
run "stop", then "show port stats 0", check packet forwarding stopped.
Test Case 5: port detach & attach for virtual devices¶
Start testpmd:
$ ./testpmd -c 0xf -n 4 -- -i
Attach virtual device as port 0:
run "port attach eth_pcap0,iface=xxxx", where "xxxx" is one workable ifname. run "port start 0". run "show port info 0", check port 0 info display correctly.
Check package forwarding after port start:
run "start", then "show port stats 0" check forwarding packages start. run "port detach 0", check the error message of port not stopped. run "stop", then "show port stats 0", check forwarding packages stopped.
Detach port 0 after port closed:
run "port stop 0". run "port close 0". run "port detach 0", check port detached successful.
Re-attach port 0:
run "port attach eth_pcap0,iface=xxxx", where "xxxx" is one workable ifname. run "port start 0". run "show port info 0", check port 0 info display correctly.
Check package forwarding after port start:
run "start", then "show port stats 0" check forwarding packages start. run "port detach 0", check the error message of port not stopped. run "stop", then "show port stats 0", check forwarding packages stopped.
Test Case 6: port detach & attach for virtual devices, with “–vdev”¶
Start testpmd, where "xxxx" is one workable ifname:
$ ./testpmd -c 0xf -n 4 --vdev "eth_pcap0,iface=xxxx" -- -i
Check packet forwarding after the port starts:
run "start", then "show port stats 0", check packet forwarding starts.
run "port detach 0", check the error message that the port is not stopped.
run "stop", then "show port stats 0", check packet forwarding stopped.
Detach port 0 after the port is closed:
run "port stop 0"
run "port close 0"
run "port detach 0", check the port detached successfully.
Re-attach port 0:
run "port attach eth_pcap0,iface=xxxx", where "xxxx" is one workable ifname.
run "port start 0"
run "show port info 0", check port 0 info display correctly.
Check packet forwarding after the port starts:
run "start", then "show port stats 0", check packet forwarding starts.
run "port detach 0", check the error message that the port is not stopped.
run "stop", then "show port stats 0", check packet forwarding stopped.
IEEE1588 Precise Time Protocol Tests¶
The functional test of the IEEE1588 Precise Time Protocol offload support in Poll Mode Drivers is done with the specific ieee1588 forwarding mode of the testpmd application.
In this mode, packets are received one by one and are expected to be PTP V2 L2 Ethernet frames with the specific Ethernet type 0x88F7, containing PTP SYNC messages (version 2 at offset 1, and message ID 0 at offset 0).
When started, the test enables the IEEE1588 PTP offload support of each controller. It makes them automatically filter and timestamp the receipt of incoming PTP SYNC messages contained in PTP V2 Ethernet frames. Conversely, when stopped, the test disables the IEEE1588 PTP offload support of each controller.
While running, the test checks that each received packet is a valid IEEE1588 PTP V2 Ethernet frame with a message of type PTP_SYNC_MESSAGE, and that the packet has been identified and timestamped by the hardware. For this purpose, it checks that the corresponding PKT_RX_IEEE1588_PTP and PKT_RX_IEEE1588_TMST flags have been set in the mbufs returned by the PMD receive function.
Then, the test checks that the two NIC registers holding the timestamp of a received PTP packet are effectively valid, and that they contain a value greater than their previous value.
If everything is OK, the test sends the received packet as-is on the same port, requesting its transmission to be timestamped by the hardware. For this purpose, it sets the PKT_TX_IEEE1588_TMST flag of the mbuf before sending it.
The test finally checks that the two NIC registers holding the timestamp of a transmitted PTP packet are effectively valid, and that they contain a value greater than their previous value.
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
The support of the IEEE1588 Precise Time Protocol in Poll Mode Drivers must be configured at compile-time with the CONFIG_RTE_LIBRTE_IEEE1588 option.
Configure the packet format for the traffic generator to be IEEE1588 PTP with Ethernet type 0x88F7, containing a PTP SYNC message (version 2 at offset 1, and message ID 0 at offset 0).
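If no hardware traffic generator profile is available, a hedged scapy sketch such as the following can build a minimal PTPv2 SYNC frame; the interface name is a placeholder and the frame is a bare-bones example, not a full PTP implementation.
from scapy.all import Ether, Raw, sendp

iface = "enp132s0f0"           # placeholder: tester port connected to the DUT

# Minimal PTPv2 SYNC body: byte 0 = messageType 0 (SYNC), byte 1 = versionPTP 2,
# messageLength = 44 (0x2C), remaining header and originTimestamp fields zeroed.
ptp_sync = bytes([0x00, 0x02, 0x00, 0x2C]) + bytes(40)

# 01:1b:19:00:00:00 is the standard PTP multicast MAC for Ethernet transport.
pkt = Ether(dst="01:1b:19:00:00:00", type=0x88F7) / Raw(ptp_sync)
sendp(pkt, iface=iface)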
Start the testpmd application with the following parameters:
-cffffff -n 3 -- -i --rxpt=0 --rxht=0 --rxwt=0 \
--txpt=39 --txht=0 --txwt=0
The -n command is used to select the number of memory channels. It should match the number of memory channels on that setup.
Test Case: Enable IEEE1588 PTP packet reception and generation¶
Select the ieee1588 test forwarding mode and start the test:
testpmd> set fwd ieee1588
Set ieee1588 packet forwarding mode
testpmd> start
ieee1588 packet forwarding - CRC stripping disabled - packets/burst=16
nb forwarding cores=1 - nb forwarding ports=2
RX queues=1 - RX desc=128 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=39 hthresh=0 wthresh=0
testpmd>
On the traffic generator side send one IEEE1588 PTP packet.
Verify that the testpmd application outputs something like this, with a timestamp different than 0:
testpmd> Port 8 IEEE1588 PTP V2 SYNC Message filtered by hardware
Port 8 RX timestamp value 0x78742550448000000
Port 8 TX timestamp value 0x78742561472000000 validated after 2 micro-seconds
Port 8 IEEE1588 PTP V2 SYNC Message filtered by hardware
Port 8 RX timestamp value 0x79165536192000000
Port 8 TX timestamp value 0x79165545648000000 validated after 2 micro-seconds
Verify that the TX timestamp is bigger than the RX timestamp. Verify that the second RX timestamp is bigger than the first RX timestamp. Verify that the TX IEEE1588 PTP packet is received by the traffic generator.
Test Case: Disable IEEE1588 PTP packet reception and generation¶
Stop the IEEE1588 fwd:
testpmd> stop
...
testpmd>
Send one IEEE1588 PTP packet
Verify that the packet is not filtered by the HW (IEEE1588 PTP V2 SYNC Message is not displayed). (??? Is it correct? Should we set fwd rxonly ?)
One-shot Rx Interrupt Tests¶
One-shot Rx interrupt feature will split rx interrupt handling from other interrupts like LSC interrupt. It implemented one handling mechanism to eliminate non-deterministic DPDK polling thread wakeup latency.
VFIO’s multiple interrupt vectors support enables multiple event fds to serve per-Rx-queue interrupt handling. UIO has limited interrupt support; specifically, it only supports a single interrupt vector, which is not suitable for enabling multi-queue Rx/Tx interrupts.
Prerequisites¶
Each of the 10Gb Ethernet* ports of the DUT is directly connected in full-duplex to a different port of the peer traffic generator.
Assume PF port PCI addresses are 0000:08:00.0 and 0000:08:00.1, their Interfaces name are p786p1 and p786p2. Assume generated VF PCI address will be 0000:08:10.0, 0000:08:10.1.
Iommu pass through feature has been enabled in kernel:
intel_iommu=on iommu=pt
The igb_uio and vfio drivers are supported. If vfio is used, the kernel needs to be 3.6+ and VT-d must be enabled in the BIOS. When vfio is used, the two drivers vfio and vfio-pci must be loaded with insmod.
Test Case1: PF interrupt pmd with different queue¶
Run l3fwd-power with one queue per port:
l3fwd-power -c 7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Send one packet to Port0 and one to Port1, and check that the threads on core1 and core2 are woken up:
L3FWD_POWER: lcore 1 is waked up from rx interrupt on port1,rxq0
L3FWD_POWER: lcore 2 is waked up from rx interrupt on port1,rxq0
Check that the packets have been forwarded normally.
After the packets are forwarded, the threads on core1 and core2 return to sleep:
L3FWD_POWER: lcore 1 sleeps until interrupt on port0,rxq0 triggers
L3FWD_POWER: lcore 2 sleeps until interrupt on port0,rxq0 triggers
Send packet flows to Port0 and Port1, and check that the threads on core1 and core2 stay awake.
Run l3fwd-power with a random number of queues per port, for example 4:
l3fwd-power -c 7 -n 4 -- -p 0x3 -P --config="(0,0,0),(0,1,1),\
(0,2,2),(0,3,3),(0,4,4)"
Send packets with an increasing dest IP to Port0, and check that all threads are woken up.
Send packet flows to Port0 and Port1, and check that the threads on core1 and core2 stay awake.
Run l3fwd-power with 32 queues per port:
l3fwd-power -c ffffffff -n 4 -- -p 0x3 -P --config="(0,0,0),(0,1,1),\
(0,2,2),(0,3,3),(0,4,4),(0,5,5),(0,6,6),(0,7,7),(0,8,8),
(0,9,9),(0,10,10),(0,11,11),(0,12,12),(0,13,13),(0,14,14),\
(0,15,15),\
(1,0,16),(1,1,17),(1,2,18),(1,3,19),(1,4,20),(1,5,21),(1,6,22),\
(1,7,23),(1,8,24),(1,9,25),(1,10,26),(1,11,27),(1,12,28),\
(1,13,29),(1,14,30),(1,15,31)"
Send packets with an increasing dest IP to Port0, and check that all threads are woken up.
Note: the igb_uio driver only uses queue 0.
Test Case2: PF lsc interrupt with vfio¶
Run l3fwd-power with one queue per port:
l3fwd-power -c 7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Plug out Port0 cable, check that link down interrupt captured and handled by pmd driver.
Plug out Port1 cable, check that link down interrupt captured and handled by pmd driver.
Plug in Port0 cable, check that link up interrupt captured and handled by pmd driver.
Plug in Port1 cable, check that link up interrupt captured and handled by pmd driver.
Test Case3: PF interrupt pmd latency test¶
Set up the validation scenario as in test case 1. Send a burst packet flow to Port0 and Port1, and use IXIA to capture the maximum latency.
Compare the latency (l3fwd-power PF interrupt PMD with uio) with the l3fwd latency.
Set up the validation scenario as in test case 2. Send a burst packet flow to Port0 and Port1, and use IXIA to capture the maximum latency.
IP fragmentation Tests¶
The IP fragmentation results are produced using the ip_fragmentation application. The test application should be run with both IPv4 and IPv6 fragmentation.
Prerequisites¶
Hardware requirements:
For each CPU socket, each memory channel should be populated with at least 1x DIMM
Board is populated with at least 2x 1GbE or 10GbE ports. Special PCIe restrictions may be required for performance. For example, the following requirements should be met for Intel 82599 (Niantic) NICs:
- NICs are plugged into PCIe Gen2 or Gen3 slots
- For PCIe Gen2 slots, the number of lanes should be 8x or higher
- A single port from each NIC should be used, so for 2x ports, 2x NICs should be used
NIC ports connected to traffic generator. It is assumed that the NIC ports P0, P1, P2, P3 (as identified by the DPDK application) are connected to the traffic generator ports TG0, TG1, TG2, TG3. The application-side port mask of NIC ports P0, P1, P2, P3 is noted as PORTMASK in this section. Traffic generator should support sending jumbo frames with size up to 9K.
BIOS requirements:
- Intel Hyper-Threading Technology is ENABLED
- Hardware Prefetcher is DISABLED
- Adjacent Cache Line Prefetch is DISABLED
- Direct Cache Access is DISABLED
Linux kernel requirements:
- Linux kernel has the following features enabled: huge page support, UIO, HPET
- Appropriate number of huge pages are reserved at kernel boot time
- The IDs of the hardware threads (logical cores) per each CPU socket can be determined by parsing the file /proc/cpuinfo. The naming convention for the logical cores is: C{x.y.z} = hyper-thread z of physical core y of CPU socket x, with typical values of x = 0 .. 3, y = 0 .. 7, z = 0 .. 1. Logical cores C{0.0.0} and C{0.0.1} should be avoided while executing the test, as they are used by the Linux kernel for running regular processes.
Software application requirements
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio modprobe vfio-pci usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
- The test can be run with IPv4 packets. The LPM table used for IPv4 packet routing is:
Entry # | LPM prefix (IP/length) | Output port |
0 | 100.10.0.0/16 | P2 |
1 | 100.20.0.0/16 | P2 |
2 | 100.30.0.0/16 | P0 |
3 | 100.40.0.0/16 | P0 |
- The test can be run with IPv6 packets, which follow the rules below.
- There is no support for Hop-by-Hop or Routing extension headers in the packet to be fragmented. All other optional headers, which are not part of the unfragmentable part of the IPv6 packet are supported.
- When a fragment is generated, its identification field in the IPv6 fragmentation extension header is set to 0. This is not RFC compliant, but proper identification number generation is out of the scope of the application and routers in an IPv6 path are not allowed to fragment in the first place… Generating that identification number is the job of a proper IP stack.
- The LPM table used for IPv6 packet routing is:
Entry # | LPM prefix (IP/length) | Output port |
0 | 101:101:101:101:101:101:101:101/48 | P2 |
1 | 201:101:101:101:101:101:101:101/48 | P2 |
2 | 301:101:101:101:101:101:101:101/48 | P0 |
3 | 401:101:101:101:101:101:101:101/48 | P0 |
The following items are configured through the command line interface of the application:
- The set of one or several RX queues to be enabled for each NIC port
- The set of logical cores to execute the packet forwarding task
- Mapping of the NIC RX queues to logical cores handling them.
Test Case 1: IP Fragmentation normal ip packet forward¶
With 1 input and 1 output port, make sure that the IP header and packet contents are forwarded correctly for the frame sizes: 64, 128, 256, 512, 1024, 1518 bytes.
Test Case 2: IP Fragmentation Don’t fragment¶
In TG set IP flag “Don’t fragment” and make sure that frames with size 1519 bytes are discarded by ip_frag.
Test Case 3: IP Fragmentation May fragment¶
In TG set IP flag “May fragment” and send frames with the following sizes: 1519 bytes, 2K, 3K, 4K, 5K, 6K, 7K, 8K, 9K (a scapy sketch for generating such frames follows this checklist). For each of them check that:
- Check number of output packets.
- Check header of each output packet: length, ID, fragment offset, flags.
- Check payload: size and contents as expected, not corrupted.
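As an illustration only, a hedged scapy sketch that builds one of the oversized IPv4 frames with the “May fragment” behaviour (DF flag cleared) and a payload sized so the Ethernet frame is about 2K bytes; the tester interface name is a placeholder, and the destination address matches the 100.10.0.0/16 LPM entry above.
from scapy.all import Ether, IP, TCP, Raw, sendp

iface = "enp132s0f0"                  # placeholder: tester port feeding the DUT

# "May fragment": DF bit left at 0. The payload is padded so the whole
# Ethernet frame is roughly 2048 bytes (14 B Ether + 20 B IP + 20 B TCP headers).
payload_len = 2048 - 14 - 20 - 20
pkt = Ether()/IP(dst="100.10.0.1", flags=0)/TCP()/Raw(b"\x00" * payload_len)
sendp(pkt, iface=iface)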
Test Case 4: Throughput test¶
The test report should provide the throughput rate measurements (in mpps and % of the line rate for 2x NIC ports) for the following input frame sizes: 64 bytes, 1518 bytes, 1519 bytes, 2K, 9k.
The following configurations should be tested:
# of ports | Socket/Core/HyperThread | Total # of sw threads |
2 | 1S/1C/1T | 1 |
2 | 1S/1C/2T | 2 |
2 | 1S/2C/1T | 2 |
2 | 2S/1C/1T | 2 |
Command line:
./ip_fragmentation -c <LCOREMASK> -n 4 -- [-P] -p PORTMASK
-q <NUM_OF_PORTS_PER_THREAD>
Generic Routing Encapsulation (GRE) Tests¶
Generic Routing Encapsulation (GRE) is a tunneling protocol developed by Cisco Systems that can encapsulate a wide variety of network layer protocols inside virtual point-to-point links over an Internet Protocol network. Fortville supports GRE packet detection, checksum computation and filtering.
Prerequisites¶
A Fortville NIC should be installed on the DUT.
Test Case 1: GRE ipv4 packet detect¶
Start testpmd and enable rxonly forwarding mode:
testpmd -c ffff -n 4 -- -i --tx-offloads=0x8fff
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
Send packets as listed in the table below and check that the reported packet type matches each layer (a scapy sketch follows the table).
Outer Vlan | Outer IP | Tunnel | Inner L3 | Inner L4 |
No | Ipv4 | GRE | Ipv4 | Udp |
No | Ipv4 | GRE | Ipv4 | Tcp |
No | Ipv4 | GRE | Ipv4 | Sctp |
Yes | Ipv4 | GRE | Ipv4 | Udp |
Yes | Ipv4 | GRE | Ipv4 | Tcp |
Yes | Ipv4 | GRE | Ipv4 | Sctp |
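As an illustration only, a hedged scapy sketch for the first and fourth rows of the table (inner UDP, without and with an outer VLAN tag); the destination MAC and tester interface are placeholders, and the remaining rows simply swap the inner L4 layer.
from scapy.all import Ether, Dot1Q, IP, GRE, UDP, Raw, sendp

dut_mac = "00:00:00:00:01:00"   # placeholder: MAC address of the DUT port
iface = "enp132s0f0"            # placeholder: tester interface

# Row 1: no outer VLAN, IPv4 outer, GRE tunnel, IPv4 inner, UDP inner.
no_vlan = Ether(dst=dut_mac)/IP()/GRE()/IP()/UDP()/Raw('x'*20)
# Row 4: the same packet with an outer VLAN tag.
with_vlan = Ether(dst=dut_mac)/Dot1Q(vlan=1)/IP()/GRE()/IP()/UDP()/Raw('x'*20)
sendp([no_vlan, with_vlan], iface=iface)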
Test Case 2: GRE ipv6 packet detect¶
Start testpmd and enable rxonly forwarding mode:
testpmd -c ffff -n 4 -- -i --tx-offloads=0x8fff
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
Send packets as listed in the tables below and check that the reported packet type matches each layer, for example:
Ether()/IPv6(nh=47)/GRE()/IP()/UDP()/Raw('x'*40)
Ether()/IPv6(nh=47)/GRE(proto=0x86dd)/IPv6()/UDP()/Raw('x'*40)
Outer Vlan | Outer IP | Tunnel | Inner L3 | Inner L4 |
No | Ipv6 | GRE | Ipv4 | Udp |
No | Ipv6 | GRE | Ipv4 | Tcp |
No | Ipv6 | GRE | Ipv4 | Sctp |
Yes | Ipv6 | GRE | Ipv4 | Udp |
Yes | Ipv6 | GRE | Ipv4 | Tcp |
Yes | Ipv6 | GRE | Ipv4 | Sctp |
Outer Vlan | Outer IP | Tunnel | Inner L3 | Inner L4 |
No | Ipv6 | GRE | Ipv6 | Udp |
No | Ipv6 | GRE | Ipv6 | Tcp |
No | Ipv6 | GRE | Ipv6 | Sctp |
Yes | Ipv6 | GRE | Ipv6 | Udp |
Yes | Ipv6 | GRE | Ipv6 | Tcp |
Yes | Ipv6 | GRE | Ipv6 | Sctp |
Test Case 3: GRE packet filter¶
Start testpmd with multi queues:
testpmd -c ff -n 3 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff
testpmd> set fwd rxonly
testpmd> set nbcore 4
testpmd> set verbose 1
testpmd> start
Add a GRE filter that forwards packets with inner IP address 0.0.0.0 to queue 3:
testpmd> tunnel_filter add 0 XX:XX:XX:XX:XX:XX YY:YY:YY:YY:YY:YY \
0.0.0.0 1 ipingre iip 0 3
Send a packet with a matching inner IP address and check that the packet is received by queue 3:
p = Ether()/IP()/GRE()/IP(dst="0.0.0.0")/UDP()
Remove the tunnel filter and check that the same packet is received by queue 0:
testpmd> tunnel_filter rm 0 XX:XX:XX:XX:XX:XX YY:YY:YY:YY:YY:YY \
0.0.0.0 1 ipingre iip 0 3
Add a GRE filter that forwards packets with outer IP address 0.0.0.0 to queue 3:
testpmd> tunnel_filter add 0 XX:XX:XX:XX:XX:XX YY:YY:YY:YY:YY:YY \
0.0.0.0 1 ipingre oip 0 3
Send a packet with a matching outer IP address and check that the packet is received by queue 3.
Remove the tunnel filter and check that the same packet is received by queue 0:
testpmd> tunnel_filter rm 0 XX:XX:XX:XX:XX:XX YY:YY:YY:YY:YY:YY \
0.0.0.0 1 ipingre oip 0 3
Test Case 4: GRE packet chksum offload¶
Start testpmd with hardware checksum offload enabled:
testpmd -c ff -n 3 -- -i --tx-offloads=0x8fff --enable-rx-cksum --port-topology=loop
testpmd> set verbose 1
testpmd> set fwd csum
testpmd> csum set ip hw 0
testpmd> csum set udp hw 0
testpmd> csum set sctp hw 0
testpmd> csum set outer-ip hw 0
testpmd> csum set tcp hw 0
testpmd> csum parse_tunnel on 0
testpmd> start
Send packet with wrong outer IP checksum and check forwarded packet IP checksum is correct:
Ether()/IP(chksum=0x0)/GRE()/IP()/TCP()
Send packet with wrong inner IP checksum and check forwarded packet IP checksum is correct:
Ether()/IP()/GRE()/IP(chksum=0x0)/TCP()
Send packet with wrong inner TCP checksum and check forwarded packet TCP checksum is correct:
Ether()/IP()/GRE()/IP()/TCP(chksum=0x0)
Send packet with wrong inner UDP checksum and check forwarded packet UDP checksum is correct:
Ether()/IP()/GRE()/IP()/UDP(chksum=0xffff)
Send packet with wrong inner SCTP checksum and check forwarded packet SCTP checksum is correct:
Ether()/IP()/GRE()/IP()/SCTP(chksum=0x0)
IP Pipeline Application Tests¶
The ip_pipeline application is the main DPDK Packet Framework (PFW) application.
The application allows setting up a pipeline through the PFW. Currently the application sets up a pipeline using 2 main features, routing and flow control; in addition, ARP is used.
The application has an interactive session when started to allow in-app configuration.
This application uses 5 CPU cores for reception, flow control, routing and transmission.
The traffic will pass through the pipeline if meets the following conditions:
- If flow add all is used in the setup then:
  - TCP/IPv4
  - IP destination = A.B.C.D with A = 0 and B,C,D random
  - IP source = 0.0.0.0
  - TCP destination port = 0
  - TCP source port = 0
- If flow add all is not used then there are no restrictions.
Prerequisites¶
Launch the ip_pipeline app with 5 lcores and two ports:
$ examples/ip_pipeline/build/ip_pipeline -c 0x3e -n <memory channels> -- -p <ports mask>
The expected prompt is:
pipeline>
The selected ports will be called 0 and 1 in the following instructions.
Tcpdump is used in test as a traffic sniffer unless otherwise stated. Tcpdump is set in both ports to check that traffic is sent and forwarded, or not forwarded.
Scapy is used in test as traffic generator unless otherwise stated.
The PCAP driver is used in some tests as a traffic generator and sniffer.
NOTE: ip_pipeline is currently hardcoded to start the reception from ports automatically. Prior to running the tests described in this document, this behavior has to be modified by commenting out the following lines in examples/ip_pipeline/pipeline_rx.c:
/* Enable input ports */
for (i = 0; i < app.n_ports; i ++) {
if (rte_pipeline_port_in_enable(p, port_in_id[i])) {
rte_panic("Unable to enable input port %u\n", port_in_id[i]);
}
}
Test Case: test_incremental_ip¶
Create a PCAP file containing permutations of the following parameters (a scapy sketch follows this list):
- TCP/IPv4.
- 64B size.
- Number of frames sent. 1, 3, 63, 64, 65, 127, 128.
- Interval between frames. 0s, 0.7s.
- Incremental destination IP address. 1 by 1 increment on every frame.
- Maximum IP address 255.128.0.0.
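As an illustration only, a hedged scapy sketch that writes one such PCAP file (here the 64-frame case); the output file name is hypothetical, frame intervals are left at the capture defaults, and the destination IP increments by 1 per frame while staying below 255.128.0.0.
from scapy.all import Ether, IP, TCP, Raw, wrpcap

frames = []
limit = (255 << 24) | (128 << 16)          # numeric form of 255.128.0.0
for i in range(64):
    addr = min(i, limit - 1)               # incremental destination, capped below the limit
    dst = ".".join(str((addr >> s) & 0xFF) for s in (24, 16, 8, 0))
    # 14 B Ether + 20 B IP + 20 B TCP + 10 B padding = 64-byte frame
    frames.append(Ether()/IP(src="0.0.0.0", dst=dst)/TCP(sport=0, dport=0)/Raw(b"\x00" * 10))

wrpcap("incremental_ip.pcap", frames)      # hypothetical output file name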
Start the ip_pipeline application as described in the prerequisites. Run the default config script:
pipeline> run examples/ip_pipeline/ip_pipeline.sh
Start port reception:
link 0 up
link 1 up
Send the generated PCAP file from port 1 to 0, check that all frames are forwarded to port 0. Send the generated PCAP file from port 0 to 1, check that all frames are forwarded to port 0.
Stop port reception:
link 0 down
link 1 down
Test Case: test_frame_sizes¶
Create a PCAP file containing permutations of the following parameters:
- TCP/IPv4.
- Frame size 64, 65, 128.
- 100 frames.
- 0.5s interval between frames.
- Incremental destination IP address. 1 by 1 increment on every frame.
- Maximum IP address 255.128.0.0.
Start the ip_pipeline application as described in the prerequisites. Run the default config script:
pipeline> run examples/ip_pipeline/ip_pipeline.sh
Start port reception:
link 0 up
link 1 up
Send the generated PCAP file from port 1 to 0, check that all frames are forwarded to port 0. Send the generated PCAP file from port 0 to 1, check that all frames are forwarded to port 0.
Stop port reception:
link 0 down
link 1 down
Test Case: test_pcap_incremental_ip¶
Compile the DPDK to use the PCAP driver. Modify the target config file to allow PCAP driver:
sed -i 's/CONFIG_RTE_LIBRTE_PMD_PCAP=n$/CONFIG_RTE_LIBRTE_PMD_PCAP=y/' config/defconfig_<target>
Create a PCAP file containing permutations of the following parameters:
- TCP/IPv4.
- 64B size.
- Number of frames sent. 1, 3, 63, 64, 65, 127, 128.
- Incremental destination IP address. 1 by 1 increment on every frame.
- Maximum IP address 255.128.0.0.
Start the ip_pipeline application using pcap devices:
$ ./examples/ip_pipeline/build/ip_pipeline -c <core mask> -n <mem channels> --use-device <pcap devices> -- -p 0x3
<pcap devices>: 'eth_pcap0;rx_pcap=/root/<input pcap file 0>;tx_pcap=/tmp/port0out.pcap,eth_pcap1;rx_pcap=/root/<input pcap file 1>;tx_pcap=/tmp/port1out.pcap'
Run the default config script:
pipeline> run examples/ip_pipeline/ip_pipeline.sh
As the traffic is sent and received by PCAP devices the traffic flow is triggered by enabling the ports:
link 0 up
link 1 up
Wait 1s to allow all frames to be sent and stop the ports:
link 0 down
link 1 down
Check the resulting PCAP files /tmp/port0out.pcap and /tmp/port1out.pcap; all the frames must be received on port 0 (/tmp/port0out.pcap).
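A quick way to check the output files is to count the captured frames with Scapy, as in this minimal sketch (the paths follow the --use-device string above):
from scapy.all import rdpcap

port0 = rdpcap("/tmp/port0out.pcap")
port1 = rdpcap("/tmp/port1out.pcap")
print("port 0 captured %d frames, port 1 captured %d frames" % (len(port0), len(port1)))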
Test Case: test_pcap_frame_sizes¶
Compile DPDK to use PCAP driver. Modify the target config file to allow PCAP driver:
sed -i 's/CONFIG_RTE_LIBRTE_PMD_PCAP=n$/CONFIG_RTE_LIBRTE_PMD_PCAP=y/' config/defconfig_<target>
Create a PCAP file containing permutations of the following parameters:
- TCP/IPv4.
- Frame sizes 64, 65, 128.
- Number of frames sent. 1, 3, 63, 64, 65, 127, 128.
- Incremental destination IP address. 1 by 1 increment on every frame.
- Maximum IP address 255.128.0.0.
Start the ip_pipeline application using pcap devices:
$ ./examples/ip_pipeline/build/ip_pipeline -c <core mask> -n <mem channels> --use-device <pcap devices> -- -p 0x3
<pcap devices>: 'eth_pcap0;rx_pcap=/root/<input pcap file 0>;tx_pcap=/tmp/port0out.pcap,eth_pcap1;rx_pcap=/root/<input pcap file 1>;tx_pcap=/tmp/port1out.pcap'
Run the default config script:
pipeline> run examples/ip_pipeline/ip_pipeline.sh
As the traffic is sent and received by PCAP devices the traffic flow is triggered by enabling the ports:
link 0 up
link 1 up
Wait 1s to allow all frames to be sent and stop the ports:
link 0 down
link 1 down
Check the resulting PCAP files /tmp/port0out.pcap and /tmp/port1out.pcap; all the frames must be received on port 0 (/tmp/port0out.pcap).
Test Case: test_flow_management¶
This test checks the flow addition and removal feature in the packet framework.
Create a PCAP file containing the following traffic:
- TCP/IPv4.
- Frame size 64.
- Source IP address 0.0.0.0
- Destination IP addresses: ‘0.0.0.0’, ‘0.0.0.1’, ‘0.0.0.127’, ‘0.0.0.128’, ‘0.0.0.255’, ‘0.0.1.0’, ‘0.0.127.0’, ‘0.0.128.0’, ‘0.0.129.0’, ‘0.0.255.0’, ‘0.127.0.0’, ‘0.127.1.0’, ‘0.127.127.0’, ‘0.127.255.0’, ‘0.127.255.255’
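The PCAP file for this test could be built with a Scapy sketch like the one below; the output file name is an assumption, while the source address and the zero TCP ports follow the flow conditions listed in the introduction.
# Sketch: 64B TCP/IPv4 frames with source IP 0.0.0.0 and the destination addresses above
from scapy.all import Ether, IP, TCP, Raw, wrpcap

dst_ips = ["0.0.0.0", "0.0.0.1", "0.0.0.127", "0.0.0.128", "0.0.0.255",
           "0.0.1.0", "0.0.127.0", "0.0.128.0", "0.0.129.0", "0.0.255.0",
           "0.127.0.0", "0.127.1.0", "0.127.127.0", "0.127.255.0", "0.127.255.255"]

pkts = []
for dst in dst_ips:
    p = Ether()/IP(src="0.0.0.0", dst=dst)/TCP(sport=0, dport=0)
    pkts.append(p/Raw(b"\x00" * max(0, 60 - len(p))))     # pad to 64B on the wire (CRC included)
wrpcap("flow_mgmt.pcap", pkts)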
Start the ip_pipeline application as described in the prerequisites and set up the following configuration:
pipeline> arp add 0 0.0.0.1 0a:0b:0c:0d:0e:0f
pipeline> arp add 1 0.128.0.1 1a:1b:1c:1d:1e:1f
pipeline> route add 0.0.0.0 9 0 0.0.0.1
pipeline> route add 0.128.0.0 9 1 0.128.0.1
Start port reception:
link 0 up
link 1 up
Send the pcap file and check that the number of frames forwarded matches the number of flows added (starting at 0).
Add a new flow matching one of the IP addresses:
pipeline> flow add 0.0.0.0 <dst IP> 0 0 0 <port>
Repeat Step 1 until all the frames pass.
Remove a flow previously added:
pipeline> flow del 0.0.0.0 <dst IP> 0 0 0
Check that one frame less is forwarded.
Repeat from Step 4 until no frames are forwarded.
Test Case: test_route_management¶
This test checks the route addition and removal feature in the packet framework.
Create a PCAP file containing the following traffic:
- TCP/IPv4.
- Frame size 64.
- Source IP address 0.0.0.0
- Destination IP addresses: ‘0.0.0.0’, ‘0.0.0.1’, ‘0.0.0.127’, ‘0.0.0.128’, ‘0.0.0.255’, ‘0.0.1.0’, ‘0.0.127.0’, ‘0.0.128.0’, ‘0.0.129.0’, ‘0.0.255.0’, ‘0.127.0.0’, ‘0.127.1.0’, ‘0.127.127.0’, ‘0.127.255.0’, ‘0.127.255.255’
Start the ip_pipeline application as described in the prerequisites and set up the following configuration:
pipeline> arp add 0 0.0.0.1 0a:0b:0c:0d:0e:0f
pipeline> arp add 1 0.128.0.1 1a:1b:1c:1d:1e:1f
pipeline> flow add all
Start port reception:
link 0 up
link 1 up
Send the pcap file and check that the number of frames forwarded matches the number of routes added (starting at 0).
Add a new route matching one of the IP addresses:
pipeline> route add <src IP> 32 <port> 0.0.0.1
Repeat Step 1 until all the frames pass.
Remove a route previously added:
pipeline> route del <dst IP> 32
Check that one frame less is forwarded.
Repeat from Step 4 until no frames are forwarded.
IP Reassembly Tests¶
This document provides a test plan for benchmarking the IP Reassembly sample application. This is a simple example app featuring packet processing with the Intel® Data Plane Development Kit (Intel® DPDK) that showcases the reassembly of fragmented IP packets.
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
1x Intel® 82599 (Niantic) NICs (1x 10GbE full duplex optical ports per NIC) plugged into the available PCIe Gen2 8-lane slots.
Test Case: Send 1K packets, 4 fragments each and 1K maxflows¶
Sample command:
./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends 1K packets split into 4 fragments each, with a maxflows of 1K.
It expects:
- 4K IP packets to be sent to the DUT.
- 1K TCP packets being forwarded back to the TESTER.
- 1K packets with a valid TCP checksum.
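As an illustration, a single fragmented packet of the kind sent by this test can be produced with Scapy as sketched below; the payload size, addresses and interface name are assumptions, and the real test generates 1K such packets on distinct flows.
# Sketch: one TCP/IPv4 packet emitted as 4 IP fragments
from scapy.all import Ether, IP, TCP, Raw, fragment, sendp

pkt = IP(src="10.0.0.1", dst="10.0.0.2")/TCP()/Raw(b"X" * 1400)
frags = fragment(pkt, fragsize=400)              # 1420B of IP payload -> 4 fragments
sendp([Ether()/f for f in frags], iface="eth0")  # placeholder tester interface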
Test Case: Send 2K packets, 4 fragments each and 1K maxflows¶
Sample command:
./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends 2K packets split into 4 fragments each, with a maxflows of 1K.
It expects:
- 8K IP packets to be sent to the DUT.
- 1K TCP packets being forwarded back to the TESTER.
- 1K packets with a valid TCP checksum.
Test Case: Send 4K packets, 7 fragments each and 4K maxflows¶
Sample command:
./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=4096 --flowttl=10s
Modifies the sample app source code to enable up to 7 fragments per packet.
Sends 4K packets split into 7 fragments each, with a maxflows of 4K.
It expects:
- 28K IP packets to be sent to the DUT.
- 4K TCP packets being forwarded back to the TESTER.
- 4K packets with a valid TCP checksum.
Test Case: Send +1K packets and ttl 3s; wait +ttl; send 1K packets¶
Sample command:
./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=3s
Sends 1100 packets split in 4 fragments each.
It expects:
- 4400 IP packets to be sent to the DUT.
- 1K TCP packets being forwarded back to the TESTER.
- 1K packets with a valid TCP checksum.
Then waits until the flowttl timeout expires and sends 1K packets.
It expects:
- 4K IP packets to be sent to the DUT.
- 1K TCP packets being forwarded back to the TESTER.
- 1K packets with a valid TCP checksum.
Test Case: Send more packets than maxflows; only maxflows packets are forwarded back¶
Sample command:
./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1023 --flowttl=5s
Sends 1K packets with maxflows equal to 1023.
It expects:
- 4092 IP packets to be sent to the DUT.
- 1023 TCP packets being forwarded back to the TESTER.
- 1023 packets with a valid TCP checksum.
Then sends 1023 packets.
It expects:
- 4092 IP packets to be sent to the DUT.
- 1023 TCP packets being forwarded back to the TESTER.
- 1023 packets with a valid TCP checksum.
Finally waits until the flowttl timeout expires and re-sends 1K packets.
It expects:
- 4092 IP packets to be sent to the DUT.
- 1023 TCP packets being forwarded back to the TESTER.
- 1023 packets with a valid TCP checksum.
Test Case: Send more fragments than supported¶
Sample command:
./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends 1 packet split in 5 fragments while the maximum number of supported fragments per packet is 4.
It expects:
- 5 IP packets to be sent to the DUT.
- 0 TCP packets being forwarded back to the TESTER.
- 0 packets with a valid TCP checksum.
Test Case: Send 3 frames and delay the 4th; no frames are forwarded back¶
Sample command:
./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=3s
Creates 1 packet split into 4 fragments. Sends the first 3 fragments and waits until the flowttl timeout expires. Then sends the 4th fragment.
It expects:
- 4 IP packets to be sent to the DUT.
- 0 TCP packets being forwarded back to the TESTER.
- 0 packets with a valid TCP checksum.
Test Case: Send jumbo frames¶
Sample command:
./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s --enable-jumbo --max-pkt-len=9500
Sets the NIC MTU to 9000 and sends 1K packets of 8900B split in 4 fragments of 2500B at the most. The reassembled packet size will not be bigger than the MTU previously defined.
It expects:
- 4K IP packets to be sent to the DUT.
- 1K TCP packets being forwarded back to the TESTER.
- 1K packets with a valid TCP checksum.
Test Case: Send jumbo frames without enabling them in the app¶
Sample command:
./examples/ip_reassembly/build/ip_reassembly -c 0x2 -n 4 -- \
-P -p 0x2 --config "(1,0,1)" --maxflows=1024 --flowttl=10s
Sends jumbo packets in the same way the previous test case does but without enabling support within the sample app.
It expects:
- 4K IP packets to be sent to the DUT.
- 0 TCP packets being forwarded back to the TESTER.
- 0 packets with a valid TCP checksum.
Jumbo Frame Tests¶
The support of jumbo frames by Poll Mode Drivers consists in enabling a port to receive Jumbo Frames with a configurable maximum packet length that is greater than the standard maximum Ethernet frame length (1518 bytes), up to a maximum value imposed by the hardware.
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Assuming that ports 0 and 1 of the test target are directly connected to the traffic generator, launch the testpmd application with the following arguments:
./build/app/testpmd -cffffff -n 3 -- -i --rxd=1024 --txd=1024 \
--burst=144 --txpt=32 --txht=0 --txfreet=0 --rxfreet=64 \
--mbcache=200 --portmask=0x3 --mbuf-size=2048 --max-pkt-len=9600
The -n option selects the number of memory channels. It should match the number of memory channels on the platform.
Setting the size of the mbuf data buffer to 2048 and the maximum packet length to 9600 (CRC included) causes input Jumbo Frames to be stored in multiple buffers by the hardware RX engine.
Start packet forwarding in the testpmd application with the start command. Then, make the Traffic Generator transmit to the target's port 0 packets of lengths (CRC included) 1517, 1518, 9599, and 9600 respectively. Check that the same amount of frames and bytes are received back by the Traffic Generator from its port connected to the target's port 1.
Then, make the Traffic Generator transmit to the target's port 0 packets of length (CRC included) 9601 and check that no packet is received by the Traffic Generator from its port connected to the target's port 1.
Configuring the Maximum Length of Jumbo Frames¶
The maximum length of Jumbo Frames is configured with the parameter
--max-pkt-len=N
that is supplied in the set of parameters when launching
the testpmd
application.
Functional Tests of Jumbo Frames¶
Testing the support of Jumbo Frames in Poll Mode Drivers consists in configuring the maximum packet length with a value greater than 1518, and in sending to the test machine packets with the following lengths (CRC included):
1. packet length = 1518 - 1
2. packet length = 1518
3. packet length = 1518 + 1
4. packet length = maximum packet length - 1
5. packet length = maximum packet length
6. packet length = maximum packet length + 1
The cases 1) and 2) check that packets of standard lengths are still received when enabling the receipt of Jumbo Frames. The cases 3), 4) and 5) check that Jumbo Frames of lengths greater than the standard maximum frame (1518) and lower or equal to the maximum frame length can be received. The case 6) checks that packets larger than the configured maximum packet length are effectively dropped by the hardware.
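The boundary frames above can be crafted with a small Scapy helper such as the sketch below; the interface name and destination MAC are placeholders, and the helper simply pads an IP frame to the requested on-wire length (CRC included).
from scapy.all import Ether, IP, Raw, sendp

def frame_of_length(wire_len, dst_mac="00:00:00:00:01:00"):   # placeholder DUT MAC
    hdr = Ether(dst=dst_mac)/IP()
    pad = wire_len - 4 - len(hdr)                             # 4B CRC is appended by the NIC
    return hdr/Raw(b"\x00" * pad)

for size in (1517, 1518, 1519, 9599, 9600, 9601):
    sendp(frame_of_length(size), iface="eth0")                # placeholder tester interface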
Test Case: Normal frames with no jumbo frame support¶
Send a packet with size 1517 bytes
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 1 TX-errors: 0 TX-bytes: 1517
############################################################################
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 1 RX-errors: 0 RX-bytes: 1517
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that TX-bytes on port 0 and RX-bytes on port 1 are 1517
Send a packet with size 1518 bytes
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 1 TX-errors: 0 TX-bytes: 1518
############################################################################
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 1 RX-errors: 0 RX-bytes: 1518
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that TX-bytes on port 0 and RX-bytes on port 1 are 1518
Test Case: Jumbo frames with no jumbo frame support¶
Send a packet with size 1519 bytes
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-errors: 1 RX-bytes: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that TX-bytes on port 0 and RX-bytes on port 1 are 0
Test Case: Normal frames with jumbo frame support¶
Start testpmd with jumbo frame support enabled
./testpmd -cffffff -n 3 -- -i --rxd=1024 --txd=1024 \
--burst=144 --txpt=32 --txht=8 --txwt=8 --txfreet=0 --rxfreet=64 \
--mbcache=200 --portmask=0x3 --mbuf-size=2048 --max-pkt-len=9600
Send a packet with size 1517 bytes
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 1 TX-errors: 0 TX-bytes: 1517
############################################################################
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 1 RX-errors: 0 RX-bytes: 1517
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that TX-bytes on port 0 and RX-bytes on port 1 are 1517
Send a packet with size 1518 bytes
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 1 TX-errors: 0 TX-bytes: 1518
############################################################################
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 1 RX-errors: 0 RX-bytes: 1518
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that TX-bytes on port 0 and RX-bytes on port 1 are 1518
Test Case: Jumbo frames with jumbo frame support¶
Send a packet with size 1519 bytes
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 1 TX-errors: 0 TX-bytes: 1519
############################################################################
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 1 RX-errors: 0 RX-bytes: 1519
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that TX-bytes on port 0 and RX-bytes on port 1 are 1519
Send a packet with size 9599 bytes
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 1 TX-errors: 0 TX-bytes: 9599
############################################################################
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 1 RX-errors: 0 RX-bytes: 9599
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that TX-bytes on port 0 and RX-bytes on port 1 are 9599.
Send a packet with size 9600 bytes
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 1 TX-errors: 0 TX-bytes: 9600
############################################################################
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 1 RX-errors: 0 RX-bytes: 9600
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that TX-bytes on port 0 and RX-bytes on port 1 are 9600.
Test Case: Frames bigger than jumbo frames, with jumbo frame support¶
Send a packet with size 9601 bytes
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
testpmd> show port stats 1
######################## NIC statistics for port 1 ########################
RX-packets: 0 RX-errors: 1 RX-bytes: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that TX-bytes on port 0 and RX-bytes on port 1 are 0.
Kernel NIC Interface (KNI) Tests¶
Description¶
This document provides the plan for testing the Kernel NIC Interface application with support of the rte_kni kernel module. Kernel NIC Interface is a DPDK alternative to the existing Linux tun/tap interface for the exception path. Kernel NIC Interface allows the standard Linux net tools (ethtool/ifconfig/tcpdump) to manage the DPDK port. At the same time, it adds an interface to the kernel net stack. The test supports Multi-Thread KNI.
Detailed information on all kni module parameters can be found in the user guide: http://dpdk.org/doc/guides/sample_app_ug/kernel_nic_interface.html
The rte_kni kernel module can be installed with a lo_mode parameter.
loopback disabled:
insmod rte_kni.ko
insmod rte_kni.ko "lo_mode=lo_mode_none"
insmod rte_kni.ko "lo_mode=unsupported string"
loopback mode=lo_mode_ring enabled:
insmod rte_kni.ko "lo_mode=lo_mode_ring"
loopback mode=lo_mode_ring_skb enabled:
insmod rte_kni.ko "lo_mode=lo_mode_ring_skb"
The rte_kni kernel module can also be installed with a kthread_mode parameter. This parameter is single by default.
kthread single:
insmod rte_kni.ko
insmod rte_kni.ko "kthread_mode=single"
kthread multiple:
insmod rte_kni.ko
insmod rte_kni.ko "kthread_mode=multiple"
The kni
application is run with EAL parameters and parameters for the
application itself. For details about the EAL parameters, see the relevant
DPDK Getting Started Guide. This application supports two parameters for
itself.
--config="(port id, rx lcore, tx lcore, kthread lcore, kthread lcore, ...)": Port and core selection. Kernel threads are ignored if kthread_mode is not multiple.
ports cores:
e.g.:
--config="(0,1,2),(1,3,4)" No kernel thread specified.
--config="(0,1,2,21),(1,3,4,23)" One kernel thread in use.
--config="(0,1,2,21,22),(1,3,4,23,25) Two kernel threads in use.
-P: Promiscuous mode. This is off by default.
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
The DUT has at least 2 DPDK supported IXGBE NIC ports.
The DUT has to be able to install the rte_kni kernel module and launch the kni application with a default configuration (this configuration may change from one system to another):
rmmod rte_kni
rmmod igb_uio
insmod ./x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko
./examples/kni/build/app/kni -c 0xa0001e -n 4 -- -P -p 0x3 --config="(0,1,2,21),(1,3,4,23)" &
Test Case: ifconfig testing¶
Launch the KNI application. Assume that ports 2 and 3 are used by this application. Cores 1 and 3 are used to read from the NICs, cores 2 and 4 are used to write to the NICs, and threads 21 and 23 are used by the kernel.
As the kernel module is installed using "kthread_mode=single", the core affinity is set using taskset:
./build/app/kni -c 0xa0001e -n 4 -- -P -p 0xc --config="(2,1,2,21),(3,3,4,23)"
Verify whether the interface has been added:
ifconfig -a
If the application is launched successfully, it will add two interfaces to the kernel net stack, named vEth2_0 and vEth3_0.
Interface names start with vEth followed by the port number and an additional incremental number depending on the number of kernel threads:
vEth2_0: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 00:00:00:00:00:00 txqueuelen 1000 (Ethernet)
RX packets 14 bytes 2098 (2.0 KiB)
RX errors 0 dropped 10 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vEth3_0: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 00:00:00:00:00:00 txqueuelen 1000 (Ethernet)
RX packets 13 bytes 1756 (1.7 KiB)
RX errors 0 dropped 10 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Verify whether ifconfig can set Kernel NIC Interface up:
ifconfig vEth2_0 up
Now vEth2_0 is up and has an IPv6 address:
vEth2_0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet6 fe80::92e2:baff:fe37:92f8 prefixlen 64 scopeid 0x20<link>
ether 90:e2:ba:37:92:f8 txqueuelen 1000 (Ethernet)
RX packets 30 bytes 4611 (4.5 KiB)
RX errors 0 dropped 21 overruns 0 frame 0
TX packets 6 bytes 468 (468.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Verify whether ifconfig can add an ipv6 address:
ifconfig vEth2_0 add fe80::1
vEth2_0 has the added IPv6 address:
29: vEth2_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
inet6 fe80::1/128 scope link
valid_lft forever preferred_lft forever
inet6 fe80::92e2:baff:fe37:92f8/64 scope link
valid_lft forever preferred_lft forever
Delete the IPv6 address:
ifconfig vEth2_0 del fe80::1
The port deletes it:
29: vEth2_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
inet6 fe80::92e2:baff:fe37:92f8/64 scope link
valid_lft forever preferred_lft forever
Set MTU parameter:
ifconfig vEth2_0 mtu 1300
vEth2_0 now has the new mtu value:
29: vEth2_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UNKNOWN mode DEFAULT qlen 1000
link/ether 90:e2:ba:37:92:f8 brd ff:ff:ff:ff:ff:ff
Verify whether ifconfig can set ip address:
ifconfig vEth2_0 192.168.2.1 netmask 255.255.255.192
ip -family inet address show dev vEth2_0
vEth2_0 now has an IP address and netmask:
29: vEth2_0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc pfifo_fast state UNKNOWN qlen 1000
inet 192.168.2.1/26 brd 192.168.2.63 scope global vEth2_0
Verify whether ifconfig can set vEth2_0 down:
ifconfig vEth2_0 down
ifconfig vEth2_0
vEth2_0 is down and has no IPv6 address:
vEth2_0: flags=4098<BROADCAST,MULTICAST> mtu 1300
inet 192.168.2.1 netmask 255.255.255.192 broadcast 192.168.2.63
ether 90:e2:ba:37:92:f8 txqueuelen 1000 (Ethernet)
RX packets 70 bytes 12373 (12.0 KiB)
RX errors 0 dropped 43 overruns 0 frame 0
TX packets 25 bytes 4132 (4.0 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Repeat all the steps for interface vEth3_0
Test Case: Ping and Ping6 testing¶
If the application is launched successfully, it will add two interfaces to the kernel net stack, named vEth2_0 and vEth3_0.
Assume vEth2_0 is up with IP address 192.168.2.1 and vEth3_0 is up with IP address 192.168.3.1. Verify the ping command:
ping -w 1 -I vEth2_0 192.168.2.1
It should receive all packets with no packet loss:
PING 192.168.2.1 (192.168.2.1) from 192.168.2.1 vEth2_0: 56(84) bytes of data.
64 bytes from 192.168.2.1: icmp_req=1 ttl=64 time=0.040 ms
--- 192.168.2.1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms
Assume port A on the tester is linked with port 2 on the DUT. Verify the ping command from the tester:
ping -w 1 -I "port A" 192.168.2.1
It should receive all packets with no packet loss.
Verify a wrong address:
ping -w 1 -I vEth2_0 192.168.0.123
no packets are received:
PING 192.168.0.123 (192.168.0.123) from 192.168.0.1 vEth2_0: 56(84) bytes of data.
--- 192.168.0.123 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
Verify the ping6 command:
ping6 -w 1 -I vEth2_0 "vEth2_0's ipv6 address"
It should receive all packets with no packet loss:
PING fe80::92e2:baff:fe08:d6f0(fe80::92e2:baff:fe08:d6f0) from fe80::92e2:baff:fe08:d6f0 vEth2_0: 56 data bytes
64 bytes from fe80::92e2:baff:fe08:d6f0: icmp_seq=1 ttl=64 time=0.070 ms
--- fe80::92e2:baff:fe08:d6f0 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms
Verify the ping6 command from the tester:
ping6 -w 1 -I "port A" "vEth2_0's ipv6 address"
It should receive all packets with no packet loss.
Verify a wrong ipv6 address:
ping6 -w 1 -I vEth2_0 "random ipv6 address"
no packets are received:
PING fe80::92e2:baff:fe08:d6f1(fe80::92e2:baff:fe08:d6f1) from fe80::92e2:baff:fe08:d6f0 vEth2_0: 56 data bytes
--- fe80::92e2:baff:fe08:d6f1 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms
Repeat all the steps for interface vEth3_0
Test Case: Tcpdump testing¶
Assume ports A and B on the packet generator connect to NIC ports 2 and 3.
Trigger the packet generator to burst packets from ports A and B, then check whether tcpdump can capture all packets. The packets should include tcp packets, udp packets, icmp packets, ip packets, ether+vlan tag+ip packets and ether packets.
Verify whether tcpdump can capture packets:
tcpdump -i vEth2_0
tcpdump -i vEth3_0
Test Case: Ethtool testing¶
At this time, KNI only supports ethtool commands that retrieve information, so all the commands below are information display commands.
Verify whether ethtool can show Kernel NIC Interface’s standard information:
ethtool vEth2_0
Verify whether ethtool can show Kernel NIC Interface’s driver information:
ethtool -i vEth2_0
Verify whether ethtool can show Kernel NIC Interface’s statistics:
ethtool -S vEth2_0
Verify whether ethtool can show Kernel NIC Interface’s pause parameters:
ethtool -a vEth2_0
Verify whether ethtool can show Kernel NIC Interface’s offload parameters:
ethtool -k vEth2_0
Verify whether ethtool can show Kernel NIC Interface’s RX/TX ring parameters:
ethtool -g vEth2_0
Verify whether ethtool can show Kernel NIC Interface’s Coalesce parameters. It is not currently supported:
ethtool -c vEth2_0
Verify whether ethtool can show Kernel NIC Interface’s MAC registers:
ethtool -d vEth2_0
Verify whether ethtool can show Kernel NIC Interface’s EEPROM dump:
ethtool -e vEth2_0
Repeat all the steps for interface vEth3_0
Test Case: Packets statistics testing¶
Install the kernel module with the loopback parameter lo_mode=lo_mode_ring_skb and launch the KNI application.
Assume that ports 2 and 3 are used by this application:
rmmod kni
insmod ./kmod/rte_kni.ko "lo_mode=lo_mode_ring_skb"
./build/app/kni -c 0xff -n 3 -- -p 0xf -i 0xf -o 0xf0
Assume ports A and B on the tester connect to NIC ports 2 and 3.
Get the RX packets count and TX packets count:
ifconfig vEth2_0
Send 5 packets from the tester and check whether both the RX and TX packet counts of vEth2_0 have increased by 5.
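A minimal Scapy sketch of the 5-packet burst from the tester is shown below; the tester interface name and the broadcast destination MAC are assumptions.
from scapy.all import Ether, IP, UDP, sendp

# send 5 packets towards the DUT port behind vEth2_0, then re-check the counters with ifconfig
sendp(Ether(dst="ff:ff:ff:ff:ff:ff")/IP()/UDP(), iface="eth0", count=5)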
Repeat for interface vEth3_0
Test Case: Stress testing¶
Insert the rte_kni kernel module 50 times while changing the parameters. Iterate through lo_mode and kthread_mode values sequentially, including wrong values. After each insertion check whether the kni application can be launched successfully.
Insert the kernel module 50 times while changing the parameters randomly. Iterate through lo_mode and kthread_mode values randomly, including wrong values. After each insertion check whether the kni application can be launched successfully:
rmmod rte_kni
insmod ./kmod/rte_kni.ko <Changing Parameters>
./build/app/kni -c 0xa0001e -n 4 -- -P -p 0xc --config="(2,1,2,21),(3,3,4,23)"
Use dmesg to check whether the kernel module is loaded with the specified parameters. Some permutations, those with wrong values, are expected to fail. For permutations with valid parameter values, verify that the application can be launched successfully and then close the application using CTRL+C.
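The 50 random-parameter insertions can be driven by a small script; the sketch below only loops over rmmod/insmod with the parameter values listed earlier (including an invalid kthread_mode value as an assumption), and leaves launching the kni application and checking dmesg as manual steps.
import random
import subprocess

LO_MODES = ["", "lo_mode=lo_mode_none", "lo_mode=lo_mode_ring",
            "lo_mode=lo_mode_ring_skb", "lo_mode=unsupported string"]
KTHREAD_MODES = ["", "kthread_mode=single", "kthread_mode=multiple", "kthread_mode=bad_value"]

for i in range(50):
    params = [p for p in (random.choice(LO_MODES), random.choice(KTHREAD_MODES)) if p]
    subprocess.call("rmmod rte_kni", shell=True)
    cmd = "insmod ./kmod/rte_kni.ko " + " ".join('"%s"' % p for p in params)
    rc = subprocess.call(cmd, shell=True)
    print("iteration %d: %s -> insmod rc=%d" % (i, cmd, rc))
    # launch ./build/app/kni ... here, verify it starts, then close it with CTRL+C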
Test Case: loopback mode performance testing¶
Compare performance results for loopback mode using:
lo_mode: lo_mode_fifo and lo_mode_fifo_skb.
kthread_mode: single and multiple.
Number of ports: 1 and 2.
Number of virtual interfaces per port: 1 and 2
Frame sizes: 64 and 256.
Cores combinations:
- Different cores for Rx, Tx and Kernel.
- Shared core between Rx and Kernel.
- Shared cores between Rx and Tx.
- Shared cores between Rx, Tx and Kernel.
- Multiple cores for Kernel, implies multiple virtual interfaces per port.
insmod ./x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <lo_mode and kthread_mode parameters>
./examples/kni/build/app/kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
At this point, the throughput is measured and recorded for the different frame sizes. After this, the application is closed using CTRL+C.
The measurements are presented in a table format.
lo_mode | kthread_mode | Ports | Config | 64 | 256 |
---|---|---|---|---|---|
Test Case: bridge mode performance testing¶
Compare performance results for bridge mode using:
kthread_mode: single and multiple.
Number of ports: 2
Number of ports: 1 and 2.
Number of flows per port: 1 and 2
Number of virtual interfaces per port: 1 and 2
Frame size: 64.
Cores combinations:
- Different cores for Rx, Tx and Kernel.
- Shared core between Rx and Kernel.
- Shared cores between Rx and Tx.
- Shared cores between Rx, Tx and Kernel.
- Multiple cores for Kernel, implies multiple virtual interfaces per port.
The application is launched and the bridge is setup using the commands below:
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <kthread_mode parameter>
./build/app/kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
ifconfig vEth2_0 up
ifconfig vEth3_0 up
brctl addbr "br_kni"
brctl addif br_kni vEth2_0
brctl addif br_kni vEth3_0
ifconfig br_kni up
At this point, the throughput is measured and recorded. After this, the application is closed using CTRL+C and the bridge deleted:
ifconfig br_kni down
brctl delbr br_kni
The measurements are presented in a table format.
kthread_mode | Flows | Config | 64 |
---|---|---|---|
Test Case: bridge mode without KNI performance testing¶
Compare performance results for bridge mode using only Kernel bridge, no DPDK support. Use:
- Number of ports: 2
- Number of flows per port: 1 and 2
- Frame size: 64.
Set up the interfaces and the bridge:
rmmod rte_kni
ifconfig vEth2_0 up
ifconfig vEth3_0 up
brctl addbr "br1"
brctl addif br1 vEth2_0
brctl addif br1 vEth3_0
ifconfig br1 up
At this point, the throughput is measured and recorded. After this, the application is closed using CTRL+C and the bridge deleted:
ifconfig br1 down
brctl delbr br1
The measurements are presented in a table format.
Flows | 64 |
---|---|
1 | |
2 |
Test Case: routing mode performance testing¶
Compare performance results for routing mode using:
kthread_mode: single and multiple.
Number of ports: 2
Number of ports: 1 and 2.
Number of virtual interfaces per port: 1 and 2
Frame size: 64 and 256.
Cores combinations:
- Different cores for Rx, Tx and Kernel.
- Shared core between Rx and Kernel.
- Shared cores between Rx and Tx.
- Shared cores between Rx, Tx and Kernel.
- Multiple cores for Kernel, implies multiple virtual interfaces per port.
The application is launched and the bridge is setup using the commands below:
echo 1 > /proc/sys/net/ipv4/ip_forward
insmod ./x86_64-default-linuxapp-gcc/kmod/rte_kni.ko <kthread_mode parameter>
./build/app/kni -c <Core mask> -n 4 -- -P -p <Port mask> --config="<Ports/Cores configuration>" &
ifconfig vEth2_0 192.170.2.1
ifconfig vEth3_0 192.170.3.1
route add -net 192.170.2.0 netmask 255.255.255.0 gw 192.170.2.1
route add -net 192.170.3.0 netmask 255.255.255.0 gw 192.170.3.1
arp -s 192.170.2.2 vEth2_0
arp -s 192.170.3.2 vEth3_0
At this point, the throughput is measured and recorded. After this, the application is closed using CTRL+C.
The measurements are presented in a table format.
kthread_mode | Ports | Config | 64 | 256 |
---|---|---|---|---|
Test Case: routing mode without KNI performance testing¶
Compare performance results for routing mode using only Kernel, no DPDK support. Use:
- Number of ports: 2
- Frame size: 64 and 256
Set up the interfaces and the bridge:
echo 1 > /proc/sys/net/ipv4/ip_forward
rmmod rte_kni
ifconfig vEth2_0 192.170.2.1
ifconfig vEth3_0 192.170.3.1
route add -net 192.170.2.0 netmask 255.255.255.0 gw 192.170.2.1
route add -net 192.170.3.0 netmask 255.255.255.0 gw 192.170.3.1
arp -s 192.170.2.2 vEth2_0
arp -s 192.170.3.2 vEth3_0
At this point, the throughput is measured and recorded. After this, the application is closed using CTRL+C.
The measurements are presented in a table format.
Ports | 64 | 256 |
---|---|---|
1 | ||
2 |
CryptoDev API Tests¶
Description¶
This document provides the plan for testing CryptoDev API. CryptoDev API provides the ability to do encryption/decryption by integrating QAT (Intel® QuickAssist Technology) into DPDK. The QAT provides poll mode crypto driver support for Intel® QuickAssist Adapter 8950 hardware accelerator.
The CryptoDev API should be tested with either the Intel QuickAssist Technology DH895xxC hardware accelerator or the AES-NI library.
AES-NI algorithm table: The table below contains the AES-NI algorithms used with the CryptoDev API. Some of the algorithms are not currently supported.
Algorithm | Mode | Detail |
AES | CBC | Encrypt/Decrypt;Key size: 128, 192, 256 bits |
SHA | SHA-1, SHA-224, SHA-384, SHA-256, SHA-512 | |
HMAC | Support SHA implementations SHA-1, SHA-224, SHA-256, SHA-384, SHA-512; Key Size versus Block size support: Key Size must be <= block size; Mac Len Supported SHA-1 10, 12, 16, 20 bytes; Mac Len Supported SHA-256 16, 24, 32 bytes; Mac Len Supported SHA-384 24,32, 40, 48 bytes; Mac Len Supported SHA-512 32, 40, 48, 56, 64 bytes; |
QAT algorithm table: The table below contains the cryptographic algorithm validation with the CryptoDev API. Some of the algorithms are not currently supported.
Algorithm | Mode | Detail |
AES | CBC | Encrypt/Decrypt;Key size: 128, 192, 256 bits |
SHA | SHA-1, SHA-224, SHA-256, SHA-512 | |
HMAC | Support SHA implementations SHA-1, SHA-224, SHA-256, SHA-512; Key Size versus Block size support: Key Size must be <= block size; Mac Len Supported SHA-1 10, 12, 16, 20 bytes; Mac Len Supported SHA-224 14,16,20,24,28 bytes; Mac Len Supported SHA-256 16, 24, 32 bytes; Mac Len Supported SHA-384 24,32, 40, 48 bytes; Mac Len Supported SHA-512 32, 40, 48, 56, 64 bytes; |
AES | GCM | Key Sizes:128, 192, 256 bits; Associated Data Length: 0 ~ 240 bytes; Payload Length: 0 ~ (2^32 -1) bytes; IV source: external; IV Lengths: 96 bits; Tag Lengths: 8, 12, 16 bytes; |
Snow3G | UEA2 | Encrypt/Decrypt; Key size: 128 |
UIA2 | Encrypt/Decrypt; Key size: 128 |
Limitations¶
- Chained mbufs are not supported.
- Hash only is not supported.
- Cipher only is not supported (except Snow3g).
- Only in-place is currently supported (destination address is the same as source address).
- QAT only supports the session-oriented API implementation; AES-NI supports both session-oriented and session-less APIs.
- Not performance tuned.
Prerequisites¶
To test CryptoDev API, an example l2fwd-crypto is added into DPDK.
The test command for l2fwd-crypto is shown below:
./examples/l2fwd-crypto/build/app/l2fwd-crypto -n 4 -c COREMASK -- \
-p PORTMASK -q NQ --cdev (AESNI_MB|QAT) \
--chain (HASH_CIPHER|CIPHER_HASH) --cipher_algo (ALGO) \
--cipher_op (ENCRYPT|DECRYPT) --cipher_key (key_value) \
--iv (key_value) --auth_algo (ALGO) \
--auth_op (GENERATE|VERIFY) --auth_key (key_value) --sessionless
l2fwd-crypto operates in one of two ways.
- For the CIPHER_HASH method, l2fwd-crypto encrypts the payload in the packet first, then authenticates the encrypted data.
- For the HASH_CIPHER method, l2fwd-crypto authenticates the payload in the packet first, then encrypts the authenticated data.
For the functional test, scapy can be used as the traffic generator. For the performance test, the traffic generator can be hardware equipment or a software traffic generator.
The CryptoDev API supports Fedora or FreeBSD.
QAT/AES-NI installation¶
If CryptoDev needs to use QAT for encryption/decryption, QAT should be installed correctly. The steps for installing QAT are described in the DPDK code directory dpdk/doc/guides/cryptodevs/qat.rst.
Once the driver is loaded, the software versions may be checked for each dh89xxCC_devX device as follows:
more /proc/icp_dh895xcc_dev0/version
+--------------------------------------------------+
| Hardware and Software versions for device 0 |
+--------------------------------------------------+
|Hardware Version: A0 SKU4 |
|Firmware Version: 2.3.0 |
|MMP Version: 1.0.0 |
|Driver Version: 2.3.0 |
|Lowest Compatible Driver: 2.3 |
|QuickAssist API CY Version: 1.8 |
|QuickAssist API DC Version: 1.4 |
+--------------------------------------------------+
If CryptoDev needs to use AES-NI for encryption/decryption, the AES-NI library should be installed correctly. The steps for using the AES-NI library are described in the DPDK code directory dpdk/doc/guides/cryptodevs/aesni_mb.rst.
Test case: Configuration test¶
The CryptoDev API supports different configurations. This test exercises different configurations with the CryptoDev API.
Test case: CryptoDev Unit test¶
The CryptoDev API has Unit test cases to support basic API level testing.
- Compile the unit tests:
  cd isg_cid-dpdk_org/app/test && make
Sub-case: AES-NI test case¶
run ./test -c 0xf -n 2 -- -i
>>cryptodev_aesni_autotest
Sub-case: QAT test case¶
run ./test -c 0xf -n 2 -- -i
>>cryptodev_qat_autotest
Test case: CryptoDev Function test¶
For the functional test, the DUT forwards UDP packets generated by scapy.
After a single packet is sent from Scapy, the CryptoDev function encrypts/decrypts the payload in the packet using the algorithm set on the command line. l2fwd-crypto forwards the packet back to the tester. Use TCPDump to capture the received packet on the tester, then parse the payload and compare it with the correct answer pre-stored in the scripts:
+----------+ +----------+
| | | |
| | --------------> | |
| Tester | | DUT |
| | | |
| | <-------------> | |
+----------+ +----------+
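The tester side of this procedure can be sketched with Scapy as below; the interface names and the UDP ports are assumptions, and the captured payload is only printed so it can be compared with the answer pre-stored in the test scripts.
from scapy.all import Ether, IP, UDP, Raw, sendp, sniff
import binascii

sendp(Ether()/IP()/UDP(sport=1024, dport=1024)/Raw(b"\x00" * 64), iface="eth0")
cap = sniff(iface="eth1", count=1, timeout=5)
if cap and Raw in cap[0]:
    print("forwarded payload: %s" % binascii.hexlify(bytes(cap[0][Raw].load)))
else:
    print("no packet captured")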
Sub-case: AES-NI test case¶
The Cryptodev AES-NI algorithm validation matrix is shown in the table below.
Method | Cipher_algo | Cipher_op | Cipher_key | Auth_algo | Auth_op |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA512_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | XCBC_MAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | MD5_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA224_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA512_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | AES_XCMC_MAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA224_HMAC | GENERATE |
Sub-case: QAT AES test case¶
The Cryptodev QAT AES algorithm validation matrix is shown in the table below.
Method | Cipher_algo | Cipher_op | Cipher_key | Auth_algo | Auth_op |
CIPHER_HASH | AES_CBC | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA512_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | XCBC_MAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | MD5_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA224_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA512_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | AES_XCMC_MAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA224_HMAC | GENERATE |
Sub-case: QAT GCM test case¶
The Cryptodev GCM algorithm validation matrix is shown in the table below.
Method | Cipher_algo | Cipher_op | Cipher_key | Auth_algo | Auth_op |
CIPHER_HASH | AES_GCM | ENCRYPT | 128 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 128 | SHA512_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 128 | XCBC_MAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | MD5_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA224_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA512_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | AES_XCMC_MAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA224_HMAC | GENERATE |
Sub-case: AES-NI GCM test case¶
The Cryptodev GCM algorithm validation matrix is shown in the table below.
Method | Cipher_algo | Cipher_op | Cipher_key | Auth_algo | Auth_op |
CIPHER_HASH | AES_GCM | ENCRYPT | 128 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 128 | SHA512_HMAC | GENERATE |
CIPHER_HASH | AES_GCM | ENCRYPT | 128 | XCBC_MAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | MD5_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA224_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA512_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | AES_XCMC_MAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_GCM | ENCRYPT | 128 | SHA224_HMAC | GENERATE |
Sub-case: QAT Snow3G test case¶
The Cryptodev Snow3G algorithm validation matrix is shown in the table below. Cipher-only, hash-only and chaining functionality is supported for Snow3G.
Method | Cipher_algo | Cipher_op | Cipher_key |
CIPHER | ECB | ENCRYPT | 128 |
Test case: CryptoDev performance test¶
For the performance test, the DUT forwards UDP packets generated by the traffic generator. Also, the queue and core numbers should be set to the maximum:
+----------+ +----------+
| | | |
| | --------------> | |
| IXIA | | DUT |
| | | |
| | <-------------> | |
+----------+ +----------+
CryptoDev performance should be measured from the different aspects listed below.
Frame Size | 1S/1C/1T | 1S/1C/1T | 1S/2C/1T | 1S/2C/2T | 1S/2C/2T |
64 | |||||
65 | |||||
128 | |||||
256 | |||||
512 | |||||
1024 | |||||
1280 | |||||
1518 |
Sub-case: AES-NI test case¶
Method | Cipher_algo | Cipher_op | Cipher_key | Auth_algo | Auth_op |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | MD5_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA224_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA512_HMAC | GENERATE |
Sub-case: QAT AES test case¶
Method | Cipher_algo | Cipher_op | Cipher_key | Auth_algo | Auth_op |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
CIPHER_HASH | AES_CBC | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | MD5_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 192 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 256 | SHA1_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA224_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA256_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA384_HMAC | GENERATE |
HASH_CIPHER | AES_CBC | ENCRYPT | 128 | SHA512_HMAC | GENERATE |
L2 Forwarding Tests¶
This test application is a basic packet processing application using Intel® DPDK. It is a layer-2 (L2) forwarding application which takes traffic from a single RX port and transmits it, with few modifications, on a single TX port.
For a packet received on a RX port (RX_PORT), it would be transmitted from a TX port (TX_PORT=RX_PORT+1) if RX_PORT is even; otherwise from a TX port (TX_PORT=RX_PORT-1) if RX_PORT is odd. Before being transmitted, the source mac address of the packet would be replaced by the mac address of the TX port, while the destination mac address would be replaced by 00:09:c0:00:00:TX_PORT_ID.
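As an illustration of the MAC rewrite described above, a tester-side Scapy check might look like the sketch below; the capture interface name and the TX port id are assumptions.
from scapy.all import sniff

TX_PORT_ID = 1                                       # assumed TX port for this example
cap = sniff(iface="eth1", count=1, timeout=5)        # tester port facing the TX port
if cap:
    print("forwarded dst MAC: %s" % cap[0].dst)      # expected 00:09:c0:00:00:01 for TX port 1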
The test application should be run with the wanted paired ports configured using the coremask parameter via the command line, i.e. ports 0 and 1 are a valid pair, while ports 1 and 2 are not. The test is performed by running the test application with a traffic generator. Tests are run by receiving packets of various sizes generated by the traffic generator and forwarding them back to the traffic generator. Packet loss and throughput are the measurements of interest.
The l2fwd
application is run with EAL parameters and parameters for
the application itself. For details about the EAL parameters, see the relevant
DPDK Getting Started Guide. This application supports two parameters for
itself.
-p PORTMASK: hexadecimal bitmask of the ports to configure
-q NQ: number of queues per lcore (default is 1)
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Assume port 0 and 1 are connected to the traffic generator, to run the test application in linuxapp environment with 4 lcores, 2 ports and 8 RX queues per lcore:
$ ./l2fwd -n 1 -c f -- -q 8 -p 0x3
Also, if the ports to be tested are different, the port mask should be changed. The lcore used to run the test application and the number of queue used for a lcore could be changed. For benchmarking, the EAL parameters and the parameters for the application itself for different test cases should be the same.
Test Case: Port testing¶
Assume port A
on packet generator connects to NIC port 0
, while port B
on packet generator connects to NIC port 1
. Set the destination mac address
of the packet stream to be sent out from port A
to the mac address of
port 0
, while the destination mac address of the packet stream to be sent out
from port B
to the mac address of port 1
. Other parameters of the packet
stream could be anything valid. Then run the test application as below:
$ ./l2fwd -n 1 -c f -- -q 8 -p 0x3
Trigger the packet generator of bursting packets from port A
, then check if
port 0
could receive them and port 1
could forward them back. Stop it
and then trigger the packet generator of bursting packets from port B
, then
check if port 1
could receive them and port 0
could forward them back.
Test Case: 64/128/256/512/1024/1500 bytes packet forwarding test¶
Set the packet stream to be sent out from packet generator before testing as below.
Frame Size | 1q | 2q | 4q | 8 q |
64 | ||||
65 | ||||
128 | ||||
256 | ||||
512 | ||||
1024 | ||||
1280 | ||||
1518 |
Then run the test application as below:
$ ./l2fwd -n 2 -c f -- -q 1 -p 0x3
The -n option selects the number of memory channels. It should match the number of memory channels on the platform.
Trigger the packet generator of bursting packets to the port 0 and 1 on the onboard NIC to be tested. Then measure the forwarding throughput for different packet sizes and different number of queues.
L3 Forwarding Exact Match Tests¶
The Layer-3 Forwarding results are produced using l3fwd
application.
Prerequisites¶
Hardware requirements:
For each CPU socket, each memory channel should be populated with at least 1x DIMM
Board is populated with 4x 1GbE or 10GbE ports. Special PCIe restrictions may be required for performance. For example, the following requirements should be met for Intel 82599 (Niantic) NICs:
- NICs are plugged into PCIe Gen2 or Gen3 slots
- For PCIe Gen2 slots, the number of lanes should be 8x or higher
- A single port from each NIC should be used, so for 4x ports, 4x NICs should be used
NIC ports connected to traffic generator. It is assumed that the NIC ports P0, P1, P2, P3 (as identified by the DPDK application) are connected to the traffic generator ports TG0, TG1, TG2, TG3. The application-side port mask of NIC ports P0, P1, P2, P3 is noted as PORTMASK in this section.
BIOS requirements:
- Intel Hyper-Threading Technology is ENABLED
- Hardware Prefetcher is DISABLED
- Adjacent Cache Line Prefetch is DISABLED
- Direct Cache Access is DISABLED
Linux kernel requirements:
- Linux kernel has the following features enabled: huge page support, UIO, HPET
- Appropriate number of huge pages are reserved at kernel boot time
- The IDs of the hardware threads (logical cores) per each CPU socket can be determined by parsing the file /proc/cpuinfo. The naming convention for the logical cores is: C{x.y.z} = hyper-thread z of physical core y of CPU socket x, with typical values of x = 0 .. 3, y = 0 .. 7, z = 0 .. 1. Logical cores C{0.0.0} and C{0.0.1} should be avoided while executing the test, as they are used by the Linux kernel for running regular processes.
Software application requirements
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
- In hash mode, the hash table used for packet routing is:
# | IPv4 destination address | IPv4 source address | Port destination | Port source | L4 protocol | Output port |
0 | 201.0.0.0 | 200.20.0.1 | 102 | 12 | TCP | P1 |
1 | 101.0.0.0 | 100.10.0.1 | 101 | 11 | TCP | P0 |
2 | 211.0.0.0 | 200.40.0.1 | 102 | 12 | TCP | P3 |
3 | 111.0.0.0 | 100.30.0.1 | 101 | 11 | TCP | P2 |
- Traffic generator requirements
The flows need to be configured and started by the traffic generator:
Flow | Traffic Gen. Port | IPv4 Dst. Address | IPv4 Src. Address | Port Dst. | Port Src. | L4 Proto. | IPv4 Dst Addr Mask (Continuous Increment Host) |
1 | TG0 | 201.0.0.0 | 200.20.0.1 | 102 | 12 | TCP | 255.240.0.0 |
2 | TG1 | 101.0.0.0 | 100.10.0.1 | 101 | 11 | TCP | 255.240.0.0 |
The queue column represents the expected NIC port RX queue where the packet should be written by the NIC hardware when RSS is enabled for that port.
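Flow 1 of the table above could be generated with a Scapy sketch like the following; the traffic generator interface name is an assumption, and the destination host part is varied inside the 255.240.0.0 mask.
from scapy.all import Ether, IP, TCP, sendp
import random

def flow1_packet():
    host = random.randint(0, (1 << 20) - 1)                      # 20 host bits under the /12 mask
    dst = "201.%d.%d.%d" % (host >> 16, (host >> 8) & 0xff, host & 0xff)
    return Ether()/IP(src="200.20.0.1", dst=dst)/TCP(sport=12, dport=102)

sendp([flow1_packet() for _ in range(64)], iface="eth0")         # TG0 port, placeholder name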
Test Case: Layer-3 Forwarding (in Hash Mode)¶
The following items are configured through the command line interface of the application:
- The set of one or several RX queues to be enabled for each NIC port
- The set of logical cores to execute the packet forwarding task
- Mapping of the NIC RX queues to logical cores handling them.
- The set of hash-entry-num for the exact match
The test report should provide the throughput rate measurements (in mpps and % of the line rate for 4x NIC ports) as listed in the table below:
# | Number of RX Queues per NIC Port | Total Number of NIC RX Queues | Number of Sockets/ Cores/Threads | Total Number of Threads | Number of NIC RX Queues per Thread | Throughput Rate Exact Match Mode (mpps) | Throughput Rate Exact Match Mode (%) |
1 | 1 | 2 | 1S/1C/1T | 1 | 1 | ||
2 | 1 | 2 | 1S/2C/1T | 2 | 1 | ||
3 | 2 | 4 | 1S/4C/1T | 4 | 2 |
The application command line associated with each of the above tests is presented in the table below. The test report should present this table with the actual command line used, replacing the PORTMASK and C{x.y.z} with their actual values used during test execution.
# | Command Line |
1 | ./l3fwd -c coremask -n 3 -- -E -p 0x3 --config '(P0,0,C{0.1.0}),(P1,0,C{0.1.0})' |
2 | ./l3fwd -c coremask -n 3 -- -E -p 0x3 --config '(P0,0,C{0.1.0}),(P1,0,C{0.2.0})' |
3 | ./l3fwd -c coremask -n 3 -- -E -p 0x3 --config '(P0,0,C{0.1.0}),(P0,1,C{0.2.0}),(P1,0,C{0.3.0}),(P1,1,C{0.4.0})' |
L3 Forwarding Tests¶
The Layer-3 Forwarding results are produced using l3fwd
application.
Prerequisites¶
Hardware requirements:
For each CPU socket, each memory channel should be populated with at least 1x DIMM
Board is populated with 4x 1GbE or 10GbE ports. Special PCIe restrictions may be required for performance. For example, the following requirements should be met for Intel 82599 (Niantic) NICs:
- NICs are plugged into PCIe Gen2 or Gen3 slots
- For PCIe Gen2 slots, the number of lanes should be 8x or higher
- A single port from each NIC should be used, so for 4x ports, 4x NICs should be used
NIC ports connected to traffic generator. It is assumed that the NIC ports P0, P1, P2, P3 (as identified by the DPDK application) are connected to the traffic generator ports TG0, TG1, TG2, TG3. The application-side port mask of NIC ports P0, P1, P2, P3 is noted as PORTMASK in this section.
BIOS requirements:
- Intel Hyper-Threading Technology is ENABLED
- Hardware Prefetcher is DISABLED
- Adjacent Cache Line Prefetch is DISABLED
- Direct Cache Access is DISABLED
Linux kernel requirements:
- Linux kernel has the following features enabled: huge page support, UIO, HPET
- Appropriate number of huge pages are reserved at kernel boot time
- The IDs of the hardware threads (logical cores) per each CPU socket can be determined by parsing the file /proc/cpuinfo. The naming convention for the logical cores is: C{x.y.z} = hyper-thread z of physical core y of CPU socket x, with typical values of x = 0 .. 3, y = 0 .. 7, z = 0 .. 1. Logical cores C{0.0.0} and C{0.0.1} should be avoided while executing the test, as they are used by the Linux kernel for running regular processes (example commands for deriving this mapping are shown below).
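As an illustration (not part of the original plan), the C{x.y.z} layout can be derived with standard Linux tools or, in DPDK trees that ship it, the cpu_layout.py helper; the exact output format depends on the platform:
grep -E 'processor|physical id|core id' /proc/cpuinfo
lscpu -e=CPU,SOCKET,CORE
usertools/cpu_layout.py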
Software application requirements
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
- In LPM mode, the LPM table used for packet routing is:
# | LPM prefix (IP/length) | Output port |
0 | 10.100.0.0/24 | P1 |
1 | 10.101.0.0/24 | P1 |
2 | 11.100.0.0/24 | P2 |
3 | 11.101.0.0/24 | P2 |
4 | 12.100.0.0/24 | P3 |
5 | 12.101.0.0/24 | P3 |
6 | 13.100.0.0/24 | P0 |
7 | 13.101.0.0/24 | P0 |
- In hash mode, the hash table used for packet routing is:
# | IPv4 destination address | IPv4 source address | Port destination | Port source | L4 protocol | Output port |
0 | 10.100.0.1 | 1.2.3.4 | 10 | 1 | UDP | P1 |
1 | 10.101.0.1 | 1.2.3.4 | 10 | 1 | UDP | P1 |
2 | 11.100.0.1 | 1.2.3.4 | 11 | 1 | UDP | P2 |
3 | 11.101.0.1 | 1.2.3.4 | 11 | 1 | UDP | P2 |
4 | 12.100.0.1 | 1.2.3.4 | 12 | 1 | UDP | P3 |
5 | 12.101.0.1 | 1.2.3.4 | 12 | 1 | UDP | P3 |
6 | 13.100.0.1 | 1.2.3.4 | 13 | 1 | UDP | P0 |
7 | 13.101.0.1 | 1.2.3.4 | 13 | 1 | UDP | P0 |
- Traffic generator requirements
The flows need to be configured and started by the traffic generator:
Flow | Traffic Gen. Port | IPv4 Dst. Address | IPv4 Src. Address | Port Dst. | Port Src. | L4 Proto. | NIC RX Queue (RSS) |
1 | TG0 | 10.100.0.1 | 1.2.3.4 | 10 | 1 | UDP | 0 |
2 | TG0 | 10.101.0.1 | 1.2.3.4 | 10 | 1 | UDP | 1 |
3 | TG1 | 11.100.0.1 | 1.2.3.4 | 11 | 1 | UDP | 0 |
4 | TG1 | 11.101.0.1 | 1.2.3.4 | 11 | 1 | UDP | 1 |
5 | TG2 | 12.100.0.1 | 1.2.3.4 | 12 | 1 | UDP | 0 |
6 | TG2 | 12.101.0.1 | 1.2.3.4 | 12 | 1 | UDP | 1 |
7 | TG3 | 13.100.0.1 | 1.2.3.4 | 13 | 1 | UDP | 0 |
8 | TG3 | 13.101.0.1 | 1.2.3.4 | 13 | 1 | UDP | 1 |
The queue column represents the expected NIC port RX queue where the packet should be written by the NIC hardware when RSS is enabled for that port.
Test Case: Layer-3 Forwarding (in Hash or LPM Mode)¶
The following items are configured through the command line interface of the application:
- The set of one or several RX queues to be enabled for each NIC port
- The set of logical cores to execute the packet forwarding task
- Mapping of the NIC RX queues to logical cores handling them.
The test report should provide the throughput rate measurements (in mpps and % of the line rate for 4x NIC ports) as listed in the table below:
# | Number of RX Queues per NIC Port | Total Number of NIC RX Queues | Number of Sockets/Cores/Threads | Total Number of Threads | Number of NIC RX Queues per Thread | Throughput Rate LPM Mode (mpps) | Throughput Rate LPM Mode (%) | Throughput Rate Hash Mode (mpps) | Throughput Rate Hash Mode (%) |
1 | 1 | 4 | 1S/1C/1T | 1 | 4 | ||||
2 | 1 | 4 | 1S/1C/2T | 2 | 2 | ||||
3 | 1 | 4 | 1S/2C/1T | 2 | 2 | ||||
4 | 1 | 4 | 1S/2C/2T | 4 | 1 | ||||
5 | 1 | 4 | 1S/4C/1T | 4 | 1 | ||||
6 | 1 | 4 | 2S/1C/1T | 2 | 2 | ||||
7 | 1 | 4 | 2S/1C/2T | 4 | 1 | ||||
8 | 1 | 4 | 2S/2C/1T | 4 | 1 | ||||
9 | 2 | 8 | 1S/1C/1T | 1 | 8 | ||||
10 | 2 | 8 | 1S/1C/2T | 2 | 4 | ||||
11 | 2 | 8 | 1S/2C/1T | 2 | 4 | ||||
12 | 2 | 8 | 1S/2C/2T | 4 | 2 | ||||
13 | 2 | 8 | 1S/4C/1T | 4 | 2 | ||||
14 | 2 | 8 | 1S/4C/2T | 8 | 1 | ||||
15 | 2 | 8 | 2S/1C/1T | 2 | 4 | ||||
16 | 2 | 8 | 2S/1C/2T | 4 | 2 | ||||
17 | 2 | 8 | 2S/2C/1T | 4 | 2 | ||||
18 | 2 | 8 | 2S/2C/2T | 8 | 1 | ||||
19 | 2 | 8 | 2S/4C/1T | 8 | 1 |
The application command line associated with each of the above tests is presented in the table below. The test report should present this table with the actual command lines used, replacing PORTMASK and C{x.y.z} with the actual values used during test execution (a fully substituted example follows the table).
# | Command Line |
1 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P1,0,C{0.1.0}),(P2,0,C{0.1.0}),(P3,0,C{0.1.0})' |
2 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P1,0,C{0.1.0}),(P2,0,C{0.1.1}),(P3,0,C{0.1.1})' |
3 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P1,0,C{0.1.0}),(P2,0,C{0.2.0}),(P3,0,C{0.2.0})' |
4 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P1,0,C{0.1.1}),(P2,0,C{0.2.0}),(P3,0,C{0.2.1})' |
5 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P1,0,C{0.2.0}),(P2,0,C{0.3.0}),(P3,0,C{0.4.0})' |
6 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P1,0,C{0.1.0}),(P2,0,C{1.1.0}),(P3,0,C{1.1.0})' |
7 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P1,0,C{0.1.1}),(P2,0,C{1.1.0}),(P3,0,C{1.1.1})' |
8 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P1,0,C{0.2.0}),(P2,0,C{1.1.0}),(P3,0,C{1.2.0})' |
9 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.1.0}),(P1,0,C{0.1.0}),(P1,1,C{0.1.0}), (P2,0,C{0.1.0}),(P2,1,C{0.1.0}),(P3,0,C{0.1.0}),(P3,1,C{0.1.0})' |
10 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.1.0}),(P1,0,C{0.1.0}),(P1,1,C{0.1.0}), (P2,0,C{0.1.1}),(P2,1,C{0.1.1}),(P3,0,C{0.1.1}),(P3,1,C{0.1.1})' |
11 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.1.0}),(P1,0,C{0.1.0}),(P1,1,C{0.1.0}), (P2,0,C{0.2.0}),(P2,1,C{0.2.0}),(P3,0,C{0.2.0}),(P3,1,C{0.2.0})' |
12 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.1.0}),(P1,0,C{0.1.1}),(P1,1,C{0.1.1}), (P2,0,C{0.2.0}),(P2,1,C{0.2.0}),(P3,0,C{0.2.1}),(P3,1,C{0.2.1})' |
13 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.1.0}),(P1,0,C{0.2.0}),(P1,1,C{0.2.0}), (P2,0,C{0.3.0}),(P2,1,C{0.3.0}),(P3,0,C{0.4.0}),(P3,1,C{0.4.0})' |
14 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.1.1}),(P1,0,C{0.2.0}),(P1,1,C{0.2.1}), (P2,0,C{0.3.0}),(P2,1,C{0.3.1}),(P3,0,C{0.4.0}),(P3,1,C{0.4.1})' |
15 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.1.0}),(P1,0,C{0.1.0}),(P1,1,C{0.1.0}), (P2,0,C{1.1.0}),(P2,1,C{1.1.0}),(P3,0,C{1.1.0}),(P3,1,C{1.1.0})' |
16 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.1.0}),(P1,0,C{0.1.1}),(P1,1,C{0.1.1}), (P2,0,C{1.1.0}),(P2,1,C{1.1.0}),(P3,0,C{1.1.1}),(P3,1,C{1.1.1})' |
17 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.1.0}),(P1,0,C{0.2.0}),(P1,1,C{0.2.0}), (P2,0,C{1.1.0}),(P2,1,C{1.1.0}),(P3,0,C{1.2.0}),(P3,1,C{1.2.0})' |
18 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.1.1}),(P1,0,C{0.2.0}),(P1,1,C{0.2.1}), (P2,0,C{1.1.0}),(P2,1,C{1.1.1}),(P3,0,C{1.2.0}),(P3,1,C{1.2.1})' |
19 | ./l3fwd -c 0xffffff -n 3 -- -P -p PORTMASK --config '(P0,0,C{0.1.0}),(P0,1,C{0.2.0}),(P1,0,C{0.3.0}),(P1,1,C{0.4.0}), (P2,0,C{1.1.0}),(P2,1,C{1.2.0}),(P3,0,C{1.3.0}),(P3,1,C{1.4.0})' |
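As a concrete illustration (values chosen for this example only, not mandated by the plan), if the four NIC ports are DPDK ports 0-3 (PORTMASK=0xf) and C{0.1.0}..C{0.4.0} correspond to lcores 1-4, test #5 could be run as:
./l3fwd -c 0x1e -n 3 -- -P -p 0xf --config '(0,0,1),(1,0,2),(2,0,3),(3,0,4)'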
Ethernet Link Flow Control Tests¶
The support of Ethernet link flow control features by Poll Mode Drivers consists of:
- At the receive side, if the packet buffer is not sufficient, the NIC sends a pause frame to the peer to ask it to slow down Ethernet frame transmission.
- At the transmit side, if a pause frame is received, the NIC slows down its Ethernet frame transmission according to the pause frame.
MAC Control Frame Forwarding consists of:
- Control frames (PAUSE frames) are consumed by the NIC and are not passed to the host.
- When Flow Control and MAC Control Frame Forwarding are enabled, the PAUSE frames are passed to the host and can be handled by testpmd.
Note: Priority flow control is not included in this test plan.
Note: the high_water, low_water, pause_time and send_xon values are written into NIC registers. It is not necessary to validate the accuracy of these parameters or the effects of changing them. The port_id indicates which NIC to configure; a system may contain multiple NICs, but each NIC only needs to be configured once.
Prerequisites¶
Assuming that ports 0
and 2
are connected to a traffic generator,
launch the testpmd
with the following arguments:
./build/app/testpmd -cffffff -n 3 -- -i --burst=1 --txpt=32 \
--txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x5
The -n option selects the number of memory channels; it should match the number of memory channels on that setup.
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Test Case: test_perf_flowctrl_on_pause_fwd_on¶
testpmd> set flowctrl rx on tx on high_water low_water pause_time
send_xon mac_ctrl_frame_fwd on autoneg on port_id
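For reference, a concrete invocation of the command above might look as follows; the high_water/low_water/pause_time/send_xon values (300/50/10/1) and port 0 are placeholders for illustration and should be chosen from the NIC data sheet:
testpmd> set flowctrl rx on tx on 300 50 10 1 mac_ctrl_frame_fwd on autoneg on 0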
Setup the csum
forwarding mode:
testpmd> set fwd csum
Set csum packet forwarding mode
Start the packet forwarding:
testpmd> start
csum packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=10
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
Validate that the NIC can generate pause frames. Configure the traffic generator to send 66-byte IPv4/UDP packets at line speed (10G). Because 66-byte packets cannot be forwarded at line rate by testpmd, pause frames are expected to be sent to the peer (the traffic generator). Ideally this mechanism avoids packet loss, provided high_water/low_water and the other parameters are configured properly; it is strongly recommended to consult the data sheet before doing any flow control configuration. By default, flow control is disabled on 10G ports and enabled on 1G ports.
Validate that the NIC can handle pause frames. Configure the traffic generator to send a large number of pause frames; this causes the NIC to stop or slow down packet transmission according to the pause time. Once the traffic generator stops sending pause frames, the NIC restores packet transmission to the expected rate.
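If a software tool is used instead of a hardware traffic generator, an IEEE 802.3x PAUSE frame can be crafted with Scapy as sketched below; the interface name is a placeholder, and the frame simply follows the standard layout (MAC control EtherType 0x8808, opcode 0x0001, 16-bit pause time):
from scapy.all import Ether, Raw, sendp

# PAUSE frame: reserved multicast DA, EtherType 0x8808, opcode 0x0001,
# pause_time 0xffff, zero-padded to the minimum frame size
pause = Ether(dst="01:80:c2:00:00:01", type=0x8808) / Raw(load=b"\x00\x01\xff\xff" + b"\x00" * 42)
sendp(pause, iface="tester_iface0", count=1000)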
Test Case: test_perf_flowctrl_on_pause_fwd_off¶
testpmd> set flowctrl rx on tx on high_water low_water pause_time
send_xon mac_ctrl_frame_fwd off autoneg on port_id
Validate same behavior as test_perf_flowctrl_on_pause_fwd_on
Test Case: test_perf_flowctrl_rx_on¶
testpmd> set flowctrl rx on tx on high_water low_water pause_time
send_xon mac_ctrl_frame_fwd off autoneg on port_id
Validate same behavior as test_perf_flowctrl_on_pause_fwd_on
Test Case: test_perf_flowctrl_off_pause_fwd_off¶
This is the default mode for the 10G PMD; testpmd runs in this mode by default, so no command is strictly required:
testpmd> set flowctrl rx off tx off high_water low_water pause_time
send_xon mac_ctrl_frame_fwd off autoneg on port_id
Validate that the NIC does not generate pause frames when the packet buffer is insufficient; packet loss can be observed. Validate that the NIC does not slow down packet transmission after receiving pause frames.
Test Case: test_perf_flowctrl_off_pause_fwd_on¶
testpmd> set flowctrl rx off tx off high_water low_water pause_time
send_xon mac_ctrl_frame_fwd on autoneg on port_id
Validate same behavior as test_perf_flowctrl_off_pause_fwd_off
Test Case: test_perf_flowctrl_tx_on¶
testpmd> set flowctrl rx off tx on high_water low_water pause_time
send_xon mac_ctrl_frame_fwd off autoneg on port_id
Validate same behavior as test_perf_flowctrl_on_pause_fwd_off
Link Status Detection Tests¶
This test for the Detect Link Status feature can be run in Linux userspace. It checks whether the userspace interrupt is received after plugging in/out the cable/fiber on the specified NIC port, and whether the link status is updated correctly. Furthermore, it is better to also check that packets can be received and sent on a specified port right after its link comes up, so layer 2 forwarding may be needed at the same time.
For layer 2 forwarding, a packet received on an RX port (RX_PORT) is transmitted from TX port TX_PORT=RX_PORT+1 if RX_PORT is even, or from TX_PORT=RX_PORT-1 if RX_PORT is odd. Before being transmitted, the source MAC address of the packet is replaced by the MAC address of the TX port, and the destination MAC address is replaced by 00:09:c0:00:00:TX_PORT_ID. The test application should be run with the desired paired ports configured via the command line, i.e. ports 0 and 1 are a valid pair, while ports 1 and 2 are not. The test is performed by running the test application and using a traffic generator.
The link_status_interrupt application is run with EAL parameters and parameters for the application itself. The application supports three parameters of its own:
-p PORTMASK: hexadecimal bitmask of ports to configure
-q NQ: number of queues per lcore (default is 1)
-T PERIOD: refresh period in seconds (0/10/86400: disable/default/maximum)
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
The test application needs an additional command-line option, --vfio-intr=int_x.
Assume port 0 and 1 are connected to the remote ports, e.g. packet generator. To run the test application in linuxapp environment with 4 lcores, 2 ports and 2 RX queues per lcore:
$ ./link_status_interrupt -c f -- -q 2 -p 0x3
If the ports to be tested are different, the port mask should be changed accordingly. The lcores used to run the test application and the number of queues per lcore can also be changed.
Test Case: Link Status Change¶
Run the test application with the command above. Then plug out the cable/fiber, or simulate a disconnection. After several seconds, check that the link is actually down. Then plug in the cable/fiber, or simulate a connection. After several seconds, check that the link is actually up and that its duplex and speed information is printed.
Test Case: Port available¶
Run the test application with the command above, with the cable/fiber plugged out from both port 0 and port 1, then plug them back in. After several seconds the links of both ports should be up. Together with the packet generator, do layer 2 forwarding and check whether packets can be received on port 0/1 and sent out on port 1/0.
Whitelisting Tests¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Prerequisites¶
Assuming that at least one port is connected to a traffic generator,
launch the testpmd
with the following arguments:
./x86_64-default-linuxapp-gcc/build/app/test-pmd/testpmd -c 0xc3 -n 3 -- -i \
--burst=1 --rxpt=0 --rxht=0 --rxwt=0 --txpt=36 --txht=0 --txwt=0 \
--txfreet=32 --rxfreet=64 --mbcache=250 --portmask=0x3
The -n option selects the number of memory channels; it should match the number of memory channels on that setup.
Set the verbose level to 1 to display information for each received packet:
testpmd> set verbose 1
Show port infos for port 0 and store the default MAC address and the maximum number of MAC addresses:
testpmd> show port info 0
********************* Infos for port 0 *********************
MAC address: 00:1B:21:4D:D2:24
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
Test Case: add/remove mac addresses¶
Initialize first port without promiscuous mode:
testpmd> set promisc 0 off
Read the stats for port 0 before sending the packet:
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 0 RX-errors: 0 RX-bytes: 64
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Send a packet with default destination MAC address for port 0:
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 1 RX-errors: 0 RX-bytes: 128
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that the packet was received (RX-packets incremented).
Send a packet whose destination MAC address differs from the port 0 address; call this address A (a Scapy sketch of such packets is given at the end of this test case):
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 1 RX-errors: 0 RX-bytes: 128
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that the packet was not received (RX-packets not incremented).
Add the MAC address A to the port 0:
testpmd> mac_addr add 0 <A>
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 2 RX-errors: 0 RX-bytes: 192
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that the packet was received (RX-packets incremented).
Remove the MAC address A from port 0:
testpmd> mac_addr remove 0 <A>
testpmd> show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 2 RX-errors: 0 RX-bytes: 192
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Verify that the packet was not received (RX-packets not incremented).
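For reference, the packets used in this test case can be generated with Scapy on the tester; this is a hedged sketch in which the interface name and the address A are placeholders (the first MAC is the default port 0 address from the example output above):
from scapy.all import Ether, IP, UDP, sendp

# packet matching the default MAC address of port 0
sendp(Ether(dst="00:1B:21:4D:D2:24")/IP()/UDP()/("X"*26), iface="tester_iface0")
# packet with a different destination MAC address (address A in the steps above)
sendp(Ether(dst="00:01:02:03:04:05")/IP()/UDP()/("X"*26), iface="tester_iface0")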
Test Case: invalid addresses test¶
Add a MAC address of all zeroes to the port 0:
testpmd> mac_addr add 0 00:00:00:00:00:00
Verify that the response is “Invalid argument” (-EINVAL)
Remove the default MAC address:
testpmd> mac_addr remove 0 <default MAC address>
Verify that the response is “Address already in use” (-EADDRINUSE)
Add two times the same address:
testpmd> mac_addr add 0 <A>
testpmd> mac_addr add 0 <A>
Verify that there is no error
Add as many different addresses as maximum MAC addresses (n):
testpmd> mac_addr add 0 <A>
... n-times
testpmd> mac_addr add 0 <A+n>
Add one more different address:
testpmd> mac_addr add 0 <A+n+1>
Verify that the response is “No space left on device” (-ENOSPC)
Niantic Media Access Control Security (MACsec) Tests¶
Description¶
This document provides test plan for testing the MACsec function of Niantic:
IEEE 802.1AE: https://en.wikipedia.org/wiki/IEEE_802.1AE Media Access Control Security (MACsec) is a Layer 2 security technology that provides point-to-point security on Ethernet links between nodes. MACsec, defined in the IEEE 802.1AE-2006 standard, is based on symmetric cryptographic keys. MACsec Key Agreement (MKA) protocol, defined as part of the IEEE 802.1x-2010 standard, operates at Layer 2 to generate and distribute the cryptographic keys used by the MACsec functionality installed in the hardware. As a hop-to-hop Layer 2 security feature, MACsec can be combined with Layer 3 security technologies such as IPsec for end-to-end data security.
MACsec was removed in Fortville since Data Center customers don’t require it. MACsec can be used for LAN / VLAN, Campus, Cloud and NFV environments (Guest and Overlay) to protect and encrypt data on the wire. One benefit of a SW approach to encryption in the cloud is that the payload is encrypted by the tenant, not by the tunnel provider, thus the tenant has full control over the keys.
Admins can configure SC/SA/keys manually or use 802.1x with MACsec extensions. The 802.1X is used for key distribution via the MACsec Key Agreement (MKA) extension.
The driver interface MUST support basic primitives like creation/deletion/enable/disable of SC/SA, Next_PN etc (please do see the macsec_ops in Linux source).
The 82599 only supports GCM-AES-128.
Prerequisites¶
Hardware:
- 1x Niantic NIC (2x 10G)
- 2x IXIA ports (10G)
Software:
Added command:
testpmd>set macsec offload (port_id) on encrypt (on|off) replay-protect (on|off)
    Enable MACsec offload.
testpmd>set macsec offload (port_id) off
    Disable MACsec offload.
testpmd>set macsec sc (tx|rx) (port_id) (mac) (pi)
    Configure MACsec secure connection (SC).
testpmd>set macsec sa (tx|rx) (port_id) (idx) (an) (pn) (key)
    Configure MACsec secure association (SA).
Test Case 1: MACsec packets send and receive¶
Connect the two ixgbe ports with a cable, and bind the two ports to dpdk driver:
./tools/dpdk-devbind.py -b igb_uio 07:00.0 07:00.1
Config the rx port
Start the testpmd of rx port:
./testpmd -c 0xc --socket-mem 1024,1024 --file-prefix=rx -w 0000:07:00.1 \
  -- --port-topology=chained -i --crc-strip
Set MACsec offload on:
testpmd>set macsec offload 0 on encrypt on replay-protect on
Set MACsec parameters as rx_port:
testpmd>set macsec sc rx 0 00:00:00:00:00:01 0
testpmd>set macsec sa rx 0 0 0 0 00112200000000000000000000000000
Set MACsec parameters as tx_port:
testpmd>set macsec sc tx 0 00:00:00:00:00:02 0
testpmd>set macsec sa tx 0 0 0 0 00112200000000000000000000000000
Set rxonly:
testpmd>set fwd rxonly
Start:
testpmd>set promisc all on
testpmd>start
Config the tx port
Start the testpmd of tx port:
./testpmd -c 0x30 --socket-mem 1024,1024 --file-prefix=tx -w 0000:07:00.0 \
  -- --port-topology=chained -i --crc-strip --tx-offloads=0x8fff
Set MACsec offload on:
testpmd>set macsec offload 0 on encrypt on replay-protect on
Set MACsec parameters as tx_port:
testpmd>set macsec sc tx 0 00:00:00:00:00:01 0
testpmd>set macsec sa tx 0 0 0 0 00112200000000000000000000000000
Set MACsec parameters as rx_port:
testpmd>set macsec sc rx 0 00:00:00:00:00:02 0
testpmd>set macsec sa rx 0 0 0 0 00112200000000000000000000000000
Set txonly:
testpmd>set fwd txonly
Start:
testpmd>start
Check the result:
testpmd>stop
testpmd>show port xstats 0
Stop the packet transmission on the tx_port first, then stop the packet reception on the rx_port.
Check the rx data and tx data:
tx_good_packets == rx_good_packets
out_pkts_encrypted == in_pkts_ok == tx_good_packets == rx_good_packets
out_octets_encrypted == in_octets_decrypted
out_octets_protected == in_octets_validated
if you want to check the content of the packet, use the command:
testpmd>set verbose 1
the received packets are Decrypted.
check the ol_flags:
PKT_RX_IP_CKSUM_GOOD
check the content of the packet:
type=0x0800, the ptype of L2,L3,L4: L2_ETHER L3_IPV4 L4_UDP
Test Case 2: MACsec send and receive with different parameters¶
Set "idx" to 1 on both the rx and tx sides; check that the MACsec packets can be received correctly.
Set "idx" to 2 on both the rx and tx sides; it cannot be set successfully.
Set "an" to 1/2/3 on both the rx and tx sides; check that the MACsec packets can be received correctly.
Set "an" to 4 on both the rx and tx sides; it cannot be set successfully.
Set "pn" to 0xffffffec on both the rx and tx sides; the rx port can receive four packets.
Set "pn" to 0xffffffed on both the rx and tx sides; the rx port can receive three packets.
Set "pn" to 0xffffffee/0xffffffef on both the rx and tx sides; the rx port can also receive three packets, although the expected numbers of packets are 2/1. The explanation given by the DPDK developers is that this is the hardware's behavior: once the PN reaches a value of 0xFFFFFFF0, hardware clears the Enable Tx LinkSec field in the LSECTXCTRL register to 00b, so when the PN gets to 0xfffffff0 the number of received packets cannot be predicted.
Set "pn" to 0x100000000 on both the rx and tx sides; it cannot be set successfully.
Set "key" to 00000000000000000000000000000000 and ffffffffffffffffffffffffffffffff on both the rx and tx sides; check that the MACsec packets can be received correctly.
Set "pi" to 1/0xffff on both the rx and tx sides; check that the MACsec packets cannot be received.
Set "pi" to 0x10000 on both the rx and tx sides; it cannot be set successfully.
Test Case 3: MACsec packets send and normal receive¶
Disable MACsec offload on rx port:
testpmd>set macsec offload 0 off
Start the packet transfer.
Check the result:
testpmd>stop
testpmd>show port xstats 0
Stop the testpmd on the tx_port first, then stop the testpmd on the rx_port. The received packets are encrypted.
check the content of the packet:
type=0x88e5 sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
You cannot find L3 and L4 information in the packet. in_octets_decrypted and in_octets_validated do not increase on the last data transfer.
Test Case 4: normal packet send and MACsec receive¶
Enable MACsec offload on rx port:
testpmd>set macsec offload 0 on encrypt on replay-protect on
Disable MACsec offload on tx port:
testpmd>set macsec offload 0 off
Start the packet transfer:
testpmd>start
Check the result:
testpmd>stop
testpmd>show port xstats 0
Stop the testpmd on the tx_port first, then stop the testpmd on the rx_port. The received packets are not encrypted.
check the content of the packet:
type=0x0800, the ptype of L2,L3,L4: L2_ETHER L3_IPV4 L4_UDP
in_octets_decrypted and out_pkts_encrypted do not increase on the last data transfer.
Test Case 5: MACsec send and receive with wrong parameters¶
Do not add "--tx-offloads=0x8fff" to the tx_port command line; MACsec offload then cannot work and the tx packets are normal packets.
Set different pn on rx and tx port, then start the data transfer.
Set the parameters as test case 1, start and stop the data transfer. check the result, rx port can receive and decrypt the packets normally.
Reset the pn of tx port to 0:
testpmd>set macsec sa tx 0 0 0 0 00112200000000000000000000000000
rx port can receive the packets until the pn equals the pn of tx port:
out_pkts_encrypted = in_pkts_late + in_pkts_ok
Set different keys on rx and tx port, then start the data transfer:
the RX-packets=0, in_octets_decrypted == out_octets_encrypted, in_pkts_notvalid == out_pkts_encrypted, in_pkts_ok=0, rx_good_packets=0
Set different pi on rx and tx port(reset on rx_port), then start the data transfer:
in_octets_decrypted == out_octets_encrypted, in_pkts_ok = 0, in_pkts_nosci == out_pkts_encrypted
Set different an on rx and tx port, then start the data transfer:
rx_good_packets=0, in_octets_decrypted == out_octets_encrypted, in_pkts_notusingsa == out_pkts_encrypted, in_pkts_ok=0,
Set different index on rx and tx port, then start the data transfer:
in_octets_decrypted == out_octets_encrypted, in_pkts_ok == out_pkts_encrypted
Test Case 6: performance test of MACsec offload packets¶
Tx linerate
port0 connected to IXIA port5, port1 connected to IXIA port6, set port0 MACsec offload on, set fwd mac:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc -- -i \
  --port-topology=chained --crc-strip --tx-offloads=0x8fff
On the IXIA side, start IXIA port6 transmit and start the IXIA capture. View the packets captured on IXIA port5: the protocol is MACsec, the EtherType is 0x88E5, and the packet length is 96 bytes, while the normal packet length is 32 bytes.
The valid frames received rate is 10.78Mpps, and the %linerate is 100%.
Rx linerate
There are three ports: 05:00.0, 07:00.0 and 07:00.1. Connect 07:00.0 to 07:00.1 with a cable, and connect 05:00.0 to IXIA. Bind the three ports to the dpdk driver and start two testpmd instances:
./testpmd -c 0x3 --socket-mem 1024,1024 --file-prefix=rx -w 0000:07:00.1 \
  -- --port-topology=chained -i --crc-strip --tx-offloads=0x8fff
testpmd>set macsec offload 0 on encrypt on replay-protect on
testpmd>set macsec sc rx 0 00:00:00:00:00:01 0
testpmd>set macsec sa rx 0 0 0 0 00112200000000000000000000000000
testpmd>set macsec sc tx 0 00:00:00:00:00:02 0
testpmd>set macsec sa tx 0 0 0 0 00112200000000000000000000000000
testpmd>set fwd rxonly

./testpmd -c 0xc --socket-mem 1024,1024 --file-prefix=tx -b 0000:07:00.1 \
  -- --port-topology=chained -i --crc-strip --tx-offloads=0x8fff
testpmd>set macsec offload 1 on encrypt on replay-protect on
testpmd>set macsec sc rx 1 00:00:00:00:00:02 0
testpmd>set macsec sa rx 1 0 0 0 00112200000000000000000000000000
testpmd>set macsec sc tx 1 00:00:00:00:00:01 0
testpmd>set macsec sa tx 1 0 0 0 00112200000000000000000000000000
testpmd>set fwd mac
Start forwarding on both testpmd instances, then start data transmission from the IXIA port; the frame size is 64 bytes and the EtherType is 0x0800. The rate is 14.88Mpps.
Check the line rate on the rxonly port:
testpmd>show port stats 0
It shows "Rx-pps: 10775697", so the rx %linerate is 100%. Check the number of MACsec packets on the tx side:
testpmd>show port xstats 1
on rx side:
testpmd>show port xstats 0
check the rx data and tx data:
in_pkts_ok == out_pkts_encrypted
External Mempool Handler Tests¶
External Mempool Handler feature is an extension to the mempool API that allows users to add and use an alternative mempool handler, which allows external memory subsystems such as external hardware memory management systems and software based memory allocators to be used with DPDK.
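The test cases below each switch the default mempool handler before running l2fwd. As a hedged sketch for DPDK releases that use the legacy make-based build, the handler name can typically be changed in config/common_base and the tree rebuilt (the option name may differ between releases); l2fwd is then run unchanged:
CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="ring_sp_sc"   # in config/common_base, example for test case 2
./examples/l2fwd/build/l2fwd -c 0x3 -n 4 -- -p 0x3 -q 1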
Test Case 1: Multiple producers and multiple consumers mempool handler¶
- Change default mempool operation to “ring_mp_mc”
- Run l2fwd and check packet forwarding normally with this mempool handler.
Test Case 2: Single producer and Single consumer mempool handler¶
- Change default mempool operation to “ring_sp_sc”
- Run l2fwd and check packet forwarding normally with this mempool handler.
Test Case 3: Single producer and Multiple consumers mempool handler¶
- Change default mempool operation to “ring_sp_mc”
- Run l2fwd and check packet forwarding normally with this mempool handler.
Test Case 4: Multiple producers and single consumer mempool handler¶
- Change default mempool operation to “ring_mp_sc”
- Run l2fwd and check packet forwarding normally with this mempool handler.
Test Case 5: External stack mempool handler¶
- Change default mempool operation to “stack”
- Run l2fwd and check packet forwarding normally with this mempool handler.
NIC Statistics Tests¶
This document provides benchmark tests for the userland Intel®
82599 10 Gigabit Ethernet Controller (Niantic) Poll Mode Driver (PMD).
The userland PMD application runs the IO forwarding mode
test
described in the PMD test plan document with different parameters for
the configuration of Niantic NIC ports.
The core configuration description is:
- 1C/1T: 1 Physical Core, 1 Logical Core per physical core (1 Hyperthread) using core #2 (socket 0, 2nd physical core)
- 1C/2T: 1 Physical Core, 2 Logical Cores per physical core (2 Hyperthreads) using core #2 and #14 (socket 0, 2nd physical core, 2 Hyperthreads)
- 2C/1T: 2 Physical Cores, 1 Logical Core per physical core using core #2 and #4 (socket 0, 2nd and 3rd physical cores)
Prerequisites¶
Each of the 10Gb Ethernet* ports of the DUT is directly connected in full-duplex to a different port of the peer traffic generator.
Using interactive commands, the traffic generator can be configured to send and receive in parallel, on a given set of ports.
The tool vtbwrun
(included in Intel® VTune™ Performance Analyzer)
will be used to monitor memory activities while running network
benchmarks to check the number of Memory Partial Writes
and the
distribution of memory accesses among available Memory Channels. This
will only be done on the userland application, as the tool requires a
Linux environment to be running in order to be used.
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Test Case: Performance Benchmarking¶
The linuxapp is started with the following parameters, for each of the configurations referenced above:
1C/1T:
-c 0xffffff -n 3 -- -i --coremask=0x4 \
--rxd=512 --txd=512 --burst=32 --txfreet=32 --rxfreet=64 --mbcache=128 --portmask=0xffff \
--rxpt=4 --rxht=4 --rxwt=16 --txpt=36 --txht=0 --txwt=0 --txrst=32
1C/2T:
-c 0xffffff -n 3 -- -i --coremask=0x4004 \
--rxd=512 --txd=512 --burst=32 --txfreet=32 --rxfreet=64 --mbcache=128 --portmask=0xffff \
--rxpt=4 --rxht=4 --rxwt=16 --txpt=36 --txht=0 --txwt=0 --txrst=32
2C/1T:
-c 0xffffff -n 3 -- -i --coremask=0x14 \
--rxd=512 --txd=512 --burst=32 --txfreet=32 --rxfreet=64 --mbcache=128 --portmask=0xffff \
--rxpt=4 --rxht=4 --rxwt=16 --txpt=36 --txht=0 --txwt=0 --txrst=32
The throughput is measured for each of these cases for the packet size of 64, 65, 128, 256, 512, 1024, 1280 and 1518 bytes. The results are printed in the following table:
Frame Size | 1C/1T | 1C/2T | 2C/1 | wirespeed |
64 | ||||
65 | ||||
128 | ||||
256 | ||||
512 | ||||
1024 | ||||
1280 | ||||
1518 |
The memory partial writes are measured with the vtbwrun
application and printed
in the following table:
Sampling Duration: 000000.00 micro-seconds
--- Logical Processor 0 ---||--- Logical Processor 1 ---
---------------------------------------||---------------------------------------
--- Intersocket QPI Utilization ---||--- Intersocket QPI Utilization ---
---------------------------------------||---------------------------------------
--- Reads (MB/s): 0.00 ---||--- Reads (MB/s): 0.00 ---
--- Writes(MB/s): 0.00 ---||--- Writes(MB/s): 0.00 ---
---------------------------------------||---------------------------------------
--- Memory Performance Monitoring ---||--- Memory Performance Monitoring ---
---------------------------------------||---------------------------------------
--- Mem Ch 0: #Ptl Wr: 0000.00 ---||--- Mem Ch 0: #Ptl Wr: 0.00 ---
--- Mem Ch 1: #Ptl Wr: 0000.00 ---||--- Mem Ch 1: Ptl Wr (MB/s): 0.00 ---
--- Mem Ch 2: #Ptl Wr: 0000.00 ---||--- Mem Ch 2: #Ptl Wr: 0.00 ---
--- ND0 Mem #Ptl Wr: 0000.00 ---||--- ND1 #Ptl Wr: 0.00 ---
Fortville NVGRE Tests¶
Cloud providers build virtual network overlays over existing network infrastructure that provide tenant isolation and scaling. Tunneling layers added to the packets carry the virtual networking frames over existing Layer 2 and IP networks. Conceptually, this is similar to creating virtual private networks over the Internet. Fortville will process these tunneling layers by the hardware.
This document provides test plan for Fortville NVGRE packet detecting, checksum computing and filtering.
Prerequisites¶
1x Intel X710 (Fortville) NICs (2x 40GbE full duplex optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot.
1x Intel XL710-DA4 (Eagle Fountain) (1x 10GbE full duplex optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot.
The DUT board must be a two-socket system and each CPU must have more than 8 lcores.
Test Case: NVGRE ipv4 packet detect¶
Start testpmd with tunneling packet type to NVGRE:
testpmd -c 0xffff -n 4 -- -i --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2
Set rxonly packet forwarding mode and enable verbose log:
set fwd rxonly
set verbose 1
Send packets as listed in the table and check that the dumped packet type matches the "Rx packet type" column (a Scapy sketch of such a packet follows the table).
Outer L2 | Outer Vlan | Outer L3 | NVGRE | Inner L2 | Inner Vlan | Inner L3 | Inner L4 | Rx packet type | Pkt Error |
---|---|---|---|---|---|---|---|---|---|
Yes | None | Ipv4 | None | None | None | None | None | PKT_RX_IPV4_HDR | None |
Yes | None | Ipv4 | Yes | Yes | None | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
Yes | None | Ipv4 | Yes | Yes | None | Ipv4 | Tcp | PKT_RX_IPV4_HDR_EXT | None |
Yes | None | Ipv4 | Yes | Yes | None | Ipv4 | Sctp | PKT_RX_IPV4_HDR_EXT | None |
Yes | Yes | Ipv4 | Yes | Yes | None | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
Yes | Yes | Ipv4 | Yes | Yes | Yes | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
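For reference, an NVGRE-encapsulated packet such as the one in the second row of the table above can be built with Scapy; this is a hedged sketch (interface name, addresses and VSID are placeholders), relying on NVGRE being GRE with protocol type 0x6558 and the GRE key carrying the 24-bit VSID plus an 8-bit FlowID:
from scapy.all import Ether, IP, UDP, GRE, sendp

nvgre_pkt = (Ether(dst="00:00:00:00:01:00") /
             IP(src="192.168.1.1", dst="192.168.1.2") /
             GRE(proto=0x6558, key_present=1, key=0x00000100) /   # VSID=1, FlowID=0
             Ether(dst="00:00:20:00:00:01") /
             IP(src="10.0.0.1", dst="10.0.0.2") /
             UDP(sport=1021, dport=1021))
sendp(nvgre_pkt, iface="tester_iface0")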
Test Case: NVGRE ipv6 packet detect¶
Start testpmd with tunneling packet type to NVGRE:
testpmd -c 0xffff -n 2 -- -i --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2
Set rxonly packet forwarding mode and enable verbose log:
set fwd rxonly
set verbose 1
Send IPv6 packets as listed in the table and check that the dumped packet type matches the "Rx packet type" column.
Outer L2 | Outer Vlan | Outer L3 | NVGRE | Inner L2 | Inner Vlan | Inner L3 | Inner L4 | Rx packet type | Pkt Error |
---|---|---|---|---|---|---|---|---|---|
Yes | None | Ipv6 | None | None | None | None | None | PKT_RX_IPV6_HDR | None |
Yes | None | Ipv6 | Yes | Yes | None | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
Yes | None | Ipv6 | Yes | Yes | None | Ipv6 | Tcp | PKT_RX_IPV6_HDR_EXT | None |
Yes | None | Ipv6 | Yes | Yes | None | Ipv6 | Sctp | PKT_RX_IPV6_HDR_EXT | None |
Yes | Yes | Ipv6 | Yes | Yes | None | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
Yes | Yes | Ipv6 | Yes | Yes | Yes | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
Test Case: NVGRE IPv4 Filter¶
This test adds NVGRE IPv4 filters to the hardware, and then checks whether
sent packets match those filters. In order to do this, the packet should first
be sent from Scapy
before the filter is created, to verify that it is not
matched by a NVGRE IPv4 filter. The filter is then added from the testpmd
command line and the packet is sent again.
Start testpmd:
testpmd -c 0xffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2
Set rxonly packet forwarding mode and enable verbose log:
set fwd rxonly
set verbose 1
Add one new NVGRE filter as table listed first:
tunnel_filter add port_id outer_mac inner_mac ip_addr inner_vlan
tunnel_type(vxlan|nvgre) filter_type(imac-ivlan|imac-ivlan-tenid|imac-tenid|imac
|omac-imac-tenid|iip) tenant_id queue_num
For example:
tunnel_filter add 0 11:22:33:44:55:66 00:00:20:00:00:01 192.168.2.2 1
NVGRE imac 1 1
Then send one packet and check that the packet was forwarded into the right queue.
Outer L2 | Outer Vlan | Outer L3 | NVGRE | Inner L2 | Inner Vlan | Inner L3 | Inner L4 | Rx packet type | Pkt Error |
---|---|---|---|---|---|---|---|---|---|
Yes | None | Ipv4 | None | None | None | None | None | PKT_RX_IPV4_HDR | None |
Yes | None | Ipv4 | Yes | Yes | None | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
Yes | None | Ipv4 | Yes | Yes | None | Ipv4 | Tcp | PKT_RX_IPV4_HDR_EXT | None |
Yes | None | Ipv4 | Yes | Yes | None | Ipv4 | Sctp | PKT_RX_IPV4_HDR_EXT | None |
Yes | Yes | Ipv4 | Yes | Yes | None | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
Yes | Yes | Ipv4 | Yes | Yes | Yes | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
Remove the NVGRE filter which has been added. Then send one packet and check that the packet was received in queue 0.
Test Case: NVGRE IPv4 Filter invalid¶
This test adds NVGRE IPv4 filters with invalid commands, and then checks the command results.
Start testpmd:
testpmd -c 0xffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2
Set rxonly packet forwarding mode and enable verbose log:
set fwd rxonly
set verbose 1
Add NVGRE filter as table listed first:
tunnel_filter add port_id outer_mac inner_mac ip_addr inner_vlan
tunnel_type(vxlan|nvgre) filter_type(imac-ivlan|imac-ivlan-tenid|imac-tenid|imac
|omac-imac-tenid|iip) tenant_id queue_num
Validate the filter command with wrong parameter:
- Adding a cloud filter with the invalid MAC address "00:00:00:00:01" will fail.
- Adding a cloud filter with the invalid IP address "192.168.1.256" will fail.
- Adding a cloud filter with the invalid vlan "4097" will fail.
- Adding a cloud filter with the invalid vni "16777216" will fail.
- Adding a cloud filter with the invalid queue id "64" will fail.
Test Case: NVGRE IPv6 Filter¶
This test adds NVGRE IPv6 filters to the hardware, and then checks whether
sent packets match those filters. In order to do this, the packet should first
be sent from Scapy
before the filter is created, to verify that it is not
matched by a NVGRE IPv6 filter. The filter is then added from the testpmd
command line and the packet is sent again.
Start testpmd:
testpmd -c 0xffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2
Set rxonly packet forwarding mode and enable verbose log:
set fwd rxonly
set verbose 1
Add NVGRE filter as table listed first:
tunnel_filter add port_id outer_mac inner_mac ip_addr inner_vlan
tunnel_type(vxlan|nvgre) filter_type(imac-ivlan|imac-ivlan-tenid|imac-tenid|imac
|omac-imac-tenid|iip) tenant_id queue_num
For example:
tunnel_filter add 0 11:22:33:44:55:66 00:00:20:00:00:01 192.168.2.2 1
NVGRE imac 1 1
Then send one packet and check that the packet was forwarded into the right queue.
Outer L2 | Outer Vlan | Outer L3 | NVGRE | Inner L2 | Inner Vlan | Inner L3 | Inner L4 | Rx packet type | Pkt Error |
---|---|---|---|---|---|---|---|---|---|
Yes | None | Ipv6 | None | None | None | None | None | PKT_RX_IPV6_HDR | None |
Yes | None | Ipv6 | Yes | Yes | None | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
Yes | None | Ipv6 | Yes | Yes | None | Ipv6 | Tcp | PKT_RX_IPV6_HDR_EXT | None |
Yes | None | Ipv6 | Yes | Yes | None | Ipv6 | Sctp | PKT_RX_IPV6_HDR_EXT | None |
Yes | Yes | Ipv6 | Yes | Yes | None | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
Yes | Yes | Ipv6 | Yes | Yes | Yes | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
Remove the NVGRE filter which has been added. Then send one packet and check that the packet was received in queue 0.
Test Case: NVGRE ipv4 checksum offload¶
This test validates NVGRE IPv4 checksum offload by the hardware. In order to do this, the packet should first
be sent from Scapy
with a wrong checksum (0x00) value. The PMD then forwards the packet while the checksum is corrected on the
DUT tx port by hardware. To verify it, tcpdump captures the forwarded packet and checks whether the forwarded
packet checksum is correct or not.
Start testpmd with tunneling packet type to NVGRE:
testpmd -c 0xffff -n 4 -- -i --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2 --enable-rx-cksum
Set csum packet forwarding mode and enable verbose log:
set fwd csum
csum set ip hw <dut tx_port>
csum set udp hw <dut tx_port>
csum set tcp hw <dut tx_port>
csum set sctp hw <dut tx_port>
csum set nvgre hw <dut tx_port>
csum parse_tunnel on <dut tx_port>
set verbose 1
Send packets with an invalid checksum first, then check whether the forwarded packet checksum is correct (a Scapy sketch of such a packet follows the table).
Outer L2 | Outer Vlan | Outer L3 | NVGRE | Inner L2 | Inner Vlan | Inner L3 | Inner L4 | Rx packet type | Pkt Error |
---|---|---|---|---|---|---|---|---|---|
Yes | None | Ipv4 | None | None | None | None | None | PKT_RX_IPV4_HDR | None |
Yes | None | Ipv4 (Bad) | Yes | Yes | None | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
Yes | None | Ipv4 | Yes | Yes | None | Ipv4 (Bad) | Tcp | PKT_RX_IPV4_HDR_EXT | None |
Yes | None | Ipv4 (Bad) | Yes | Yes | None | Ipv4 (Bad) | Sctp | PKT_RX_IPV4_HDR_EXT | None |
Yes | Yes | Ipv4 (Bad) | Yes | Yes | None | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
Yes | Yes | Ipv4 | Yes | Yes | Yes | Ipv4 (Bad) | Udp | PKT_RX_IPV4_HDR_EXT | None |
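As a hedged illustration of "invalid checksum", the inner IPv4 and UDP checksums can be forced to 0x00 with Scapy so that the DUT hardware has to insert the correct values (interface name, addresses and VSID are placeholders); the corrected checksums on the forwarded packet can then be inspected on the tester with tcpdump -vvv:
from scapy.all import Ether, IP, UDP, GRE, sendp

bad_csum_pkt = (Ether() /
                IP(src="192.168.1.1", dst="192.168.1.2") /
                GRE(proto=0x6558, key_present=1, key=0x00000100) /
                Ether() /
                IP(src="10.0.0.1", dst="10.0.0.2", chksum=0x00) /   # deliberately wrong inner IP checksum
                UDP(sport=1021, dport=1021, chksum=0x00))           # deliberately wrong inner UDP checksum
sendp(bad_csum_pkt, iface="tester_iface0")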
Test Case: NVGRE ipv6 checksum offload¶
This test validates NVGRE IPv6 checksum offload by the hardware. In order to do this, the packet should first
be sent from Scapy
with a wrong checksum (0x00) value. The PMD then forwards the packet while the checksum is corrected on the
DUT tx port by hardware. To verify it, tcpdump captures the forwarded packet and checks whether the forwarded
packet checksum is correct or not.
Start testpmd with tunneling packet type:
testpmd -c ffff -n 4 -- -i --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2 --enable-rx-cksum
Set csum packet forwarding mode and enable verbose log:
set fwd csum
csum set ip hw <dut tx_port>
csum set udp hw <dut tx_port>
csum set tcp hw <dut tx_port>
csum set sctp hw <dut tx_port>
csum set nvgre hw <dut tx_port>
csum parse_tunnel on <dut tx_port>
set verbose 1
Send packets with an invalid checksum first, then check whether the forwarded packet checksum is correct.
Outer L2 | Outer Vlan | Outer L3 | NVGRE | Inner L2 | Inner Vlan | Inner L3 | Inner L4 | Rx packet type | Pkt Error |
---|---|---|---|---|---|---|---|---|---|
Yes | None | Ipv6 | None | None | None | None | None | PKT_RX_IPV6_HDR | None |
Yes | None | Ipv6 (Bad) | Yes | Yes | None | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
Yes | None | Ipv6 | Yes | Yes | None | Ipv6 (Bad) | Tcp | PKT_RX_IPV6_HDR_EXT | None |
Yes | None | Ipv6 (Bad) | Yes | Yes | None | Ipv6 (Bad) | Sctp | PKT_RX_IPV6_HDR_EXT | None |
Yes | Yes | Ipv6 (Bad) | Yes | Yes | None | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
Yes | Yes | Ipv6 | Yes | Yes | Yes | Ipv6 (Bad) | Udp | PKT_RX_IPV6_HDR_EXT | None |
Test Case: NVGRE Checksum Offload Performance Benchmarking¶
The throughput is measured for each of these cases for NVGRE tx checksum offload of “all by software”, “inner l3 offload by hardware”, “inner l4 offload by hardware”, “inner l3&l4 offload by hardware”, “outer l3 offload by hardware”, “outer l4 offload by hardware”, “outer l3&l4 offload by hardware”, “all by hardware”.
The results are printed in the following table:
Calculate Type | 1S/1C/1T Mpps | % linerate | 1S/1C/2T Mpps | % linerate | 1S/2C/1T Mpps | % linerate |
---|---|---|---|---|---|---|
SOFTWARE ALL | ||||||
HW OUTER L3 | ||||||
HW OUTER L4 | ||||||
HW OUTER L3&L4 | ||||||
HW INNER L3 | ||||||
HW INNER L4 | ||||||
HW INNER L3&L4 | ||||||
HARDWARE ALL |
Test Case: NVGRE Tunnel filter Performance Benchmarking¶
The throughput is measured for different NVGRE tunnel filter types. "Queue single" means there is only one flow, forwarded to the first queue. "Queue multi" means there are two flows, configured to different queues.
Packet | Filter | Queue | Mpps | % linerate |
---|---|---|---|---|
Normal | None | Single | ||
NVGRE | None | Single | ||
NVGRE | imac-ivlan | Single | ||
NVGRE | imac-ivlan-tenid | Single | ||
NVGRE | imac-tenid | Single | ||
NVGRE | imac | Single | ||
NVGRE | omac-imac-tenid | Single | ||
NVGRE | imac-ivlan | Multi | ||
NVGRE | imac-ivlan-tenid | Multi | ||
NVGRE | imac-tenid | Multi | ||
NVGRE | imac | Multi |
Bonding Tests¶
Provide the ability to support Link Bonding for 1GbE and 10GbE ports, similar to the ability found in Linux, to allow the aggregation of multiple (slave) NICs into a single logical interface between a server and a switch. A new PMD will then process these interfaces based on the mode of operation specified and supported. This provides support for redundant links, fault tolerance and/or load balancing of networks. Bonding may also be used in connection with 802.1q VLAN support. The following is a good overview: http://www.cyberciti.biz/howto/question/static/linux-ethernet-bonding-driver-howto.php
Requirements¶
- The Bonding mode SHOULD be specified via an API for a logical bonded interface used for link aggregation.
- A new PMD layer SHALL operate on the bonded interfaces and may be used in connection with 802.1q VLAN support.
- Bonded ports SHALL maintain statistics similar to that of normal ports
- The slave links SHALL be monitored for link status changes. See also the concept of an up/down time delay to handle situations such as a switch reboot, where it is possible that its ports report "link up" status before they become usable.
- The following bonding modes SHALL be available;
- Mode = 0 (balance-rr) Round-robin policy: (default). Transmit packets in sequential order from the first available network interface (NIC) slave through the last. This mode provides load balancing and fault tolerance. Packets may be bulk dequeued from devices then serviced in round-robin manner. The order should be specified so that it corresponds to the other side.
- Mode = 1 (active-backup) Active-backup policy: Only one NIC slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The single logical bonded interface’s MAC address is externally visible on only one NIC (port) to avoid confusing the network switch. This mode provides fault tolerance. Active-backup policy is useful for implementing high availability solutions using two hubs
- Mode = 2 (balance-xor) XOR policy: Transmit network packets based on the default transmit policy. The default policy (layer2) is a simple [(source MAC address XOR’d with destination MAC address) modulo slave count]. Alternate transmit policies may be selected. The default transmit policy selects the same NIC slave for each destination MAC address. This mode provides load balancing and fault tolerance.
- Mode = 3 (broadcast) Broadcast policy: Transmit network packets on all slave network interfaces. This mode provides fault tolerance but is only suitable for special cases.
- Mode = 4 (802.3ad) IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. This mode requires a switch that supports IEEE 802.3ad Dynamic link aggregation. Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR layer2 policy.
- Mode = 5 (balance-tlb) Adaptive transmit load balancing. Linux bonding driver mode that does not require any special network switch support. The outgoing network packet traffic is distributed according to the current load (computed relative to the speed) on each network interface slave. Incoming traffic is received by one currently designated slave network interface. If this receiving slave fails, another slave takes over the MAC address of the failed receiving slave.
- Mode = 6 (balance-alb) Adaptive load balancing. Includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special network switch support. The receive load balancing is achieved by ARP negotiation. The bonding driver intercepts the ARP Replies sent by the local system on their way out and overwrites the source hardware address with the unique hardware address of one of the NIC slaves in the single logical bonded interface such that different network-peers use different MAC addresses for their network packet traffic.
- The available transmit policies SHALL be as follows;
- layer2: Uses XOR of hardware MAC addresses to generate the hash. The formula is (source MAC XOR destination MAC) modulo slave count. This algorithm will place all traffic to a particular network peer on the same slave. This algorithm is 802.3ad compliant.
- layer3+4: This policy uses upper layer protocol information, when available, to generate the hash. This allows for traffic to a particular network peer to span multiple slaves, although a single connection will not span multiple slaves. The formula for unfragmented TCP and UDP packets is ((source port XOR dest port) XOR ((source IP XOR dest IP) AND 0xffff)) modulo slave count (a worked example is given after this requirements list). For fragmented TCP or UDP packets and all other IP protocol traffic, the source and destination port information is omitted. For non-IP traffic, the formula is the same as for the layer2 transmit hash policy. This policy is intended to mimic the behavior of certain switches, notably Cisco switches with PFC2 as well as some Foundry and IBM products. This algorithm is not fully 802.3ad compliant. A single TCP or UDP conversation containing both fragmented and unfragmented packets will see packets striped across two interfaces. This may result in out of order delivery. Most traffic types will not meet these criteria, as TCP rarely fragments traffic, and most UDP traffic is not involved in extended conversations. Other implementations of 802.3ad may or may not tolerate this noncompliance.
- Upon unbonding the bonding PMD driver MUST restore the MAC addresses that the slaves had before they were enslaved.
- According to the bond type, when the bond interface is placed in promiscuous mode it will propagate the setting to the slave devices as follows: for mode=0, 2, 3 and 4 the promiscuous mode setting is propagated to all slaves.
- Mode=0, 2, 3 generally require that the switch have the appropriate ports grouped together (e.g. Cisco 5500 series with EtherChannel support or may be called a trunk group).
- Goals:
- Provide a forwarding example that demonstrates Link Bonding for 2/4x 1GbE ports and 2x 10GbE with the ability to specify the links to be bound, the port order if required, and the bonding type to be used. MAC address of the bond MUST be settable or taken from its first slave device. The example SHALL also allow the enable/disable of promiscuous mode and disabling of the bonding resulting in the return of the normal interfaces and the ability to bring up and down the logical bonded link.
- Provide the performance for each of these modes.
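To make the layer2 and layer3+4 transmit hash policies listed above concrete, here is a small worked example; it is illustrative only, a literal reading of the formulas in this plan rather than code taken from the bonding PMD:
import ipaddress

def layer2_slave(src_mac, dst_mac, slave_count):
    # (source MAC XOR destination MAC) modulo slave count
    return (int(src_mac.replace(":", ""), 16) ^ int(dst_mac.replace(":", ""), 16)) % slave_count

def layer34_slave(src_ip, dst_ip, src_port, dst_port, slave_count):
    # ((source port XOR dest port) XOR ((source IP XOR dest IP) AND 0xffff)) modulo slave count
    ip_xor = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return ((src_port ^ dst_port) ^ (ip_xor & 0xffff)) % slave_count

# Example: with 2 slaves, a TCP flow 10.0.0.1:1024 -> 10.0.0.2:80 is always pinned to one slave index
print(layer2_slave("00:1b:21:4d:d2:24", "00:09:c0:00:00:01", 2))
print(layer34_slave("10.0.0.1", "10.0.0.2", 1024, 80, 2))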
This bonding test plan is mainly to test basic bonding APIs via testpmd, the supported modes (0-3) and each mode's performance in R1.7.
Prerequisites for Bonding¶
- NIC and IXIA ports requirements.
- Tester: have 4 10Gb (Niantic) ports and 4 1Gb ports.
- DUT: have 4 10Gb (Niantic) ports and 4 1Gb ports. All functional tests should be done on both 10G and 1G port.
- IXIA: have 4 10G ports and 4 1G ports. IXIA is used for performance test.
- BIOS settings on DUT:
- Enhanced Intel Speedstep—-DISABLED
- Processor C3——————–DISABLED
- Processor C6——————–DISABLED
- Hyper-Threading—————-ENABLED
- Intel VT-d————————-DISABLED
- MLC Streamer——————-ENABLED
- MLC Spatial Prefetcher——–ENABLED
- DCU Data Prefetcher———–ENABLED
- DCU Instruction Prefetcher—-ENABLED
- Direct Cache Access(DCA)——————— ENABLED
- CPU Power and Performance Policy———–Performance
- Memory Power Optimization———————Performance Optimized
- Memory RAS and Performance Configuration–>NUMA Optimized—-ENABLED
- Connections ports between tester/ixia and DUT
- TESTER(Or IXIA)——-DUT
- portA——————port0
- portB——————port1
- portC——————port2
- portD——————port3
Test Setup#1 for Functional test¶
Tester has 4 ports(portA–portD), and DUT has 4 ports(port0-port3), then connect portA to port0, portB to port1, portC to port2, portD to port3.
Test Case1: Basic bonding–Create bonded devices and slaves¶
Use Setup#1.
Create a bonded device, add the first slave, and verify that the default bonded device has default mode 0 and a default primary slave. Below are the sample commands and output:
./app/testpmd -c f -n 4 -- -i
.....
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Port 2 Link Up - speed 10000 Mbps - full-duplex
Port 3 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> create bonded device 1 1(mode socket, if not set, default mode=0, default socket=0)
Created new bonded device (Port 4)
testpmd> add bonding slave 1 4
Adding port 1 as slave
testpmd> show bonding config 4
Bonding mode: 1
Slaves: [1]
Active Slaves: []
Failed to get primary slave for port=4
testpmd> port start 4
......
Done
testpmd> show bonding config 4
Bonding mode: 1
Slaves: [1]
Active Slaves: [1]
Primary: [1]
Create another bonded device, and check if the slave added to bonded device1 can’t be added to bonded device2:
testpmd> create bonded device 1 1
Created new bonded device (Port 5)
testpmd> add bonding slave 0 4
Adding port 0 as slave
testpmd> add bonding slave 0 5
Failed to add port 0 as slave
Change the bonding mode and verify if it works:
testpmd> set bonding mode 3 4
testpmd> show bonding config 4
Add 2nd slave, and change the primary slave to 2nd slave and verify if it works:
testpmd> add bonding slave 2 4
testpmd> set bonding primary 2 4
testpmd> show bonding config 4
Remove the slaves, and check the bonded device again. Below is the sample command:
testpmd> remove bonding slave 1 4
testpmd> show bonding config 4(Verify that slave1 is removed from slaves/active slaves).
testpmd> remove bonding slave 0 4
testpmd> remove bonding slave 2 4(This command can't be done, since bonded device need at least 1 slave)
testpmd> show bonding config 4
Test Case2: Basic bonding–MAC Address Test¶
Use Setup#1.
Create bonded device, add one slave, verify bonded device MAC address is the slave’s MAC:
./app/testpmd -c f -n 4 -- -i
.....
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Port 2 Link Up - speed 10000 Mbps - full-duplex
Port 3 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> create bonded device 1 1
testpmd> add bonding slave 1 4
testpmd> show port info 1
********************* Infos for port 1 *********************
MAC address: 90:E2:BA:4A:54:81
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
Maximum number of MAC addresses of hash filtering: 4096
VLAN offload:
strip on
filter on
qinq(extend) off
testpmd> show port info 4
********************* Infos for port 4 *********************
MAC address: 90:E2:BA:4A:54:81
Connect to socket: 1
memory allocation on the socket: 0
Link status: down
Link speed: 10000 Mbps
Link duplex: full-duplex
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 1
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
strip off
filter off
qinq(extend) off
Continuing from the above case, add a 2nd slave and check the configuration of the bonded device. Verify the bonded device's MAC address is that of the primary slave and that all slaves' MAC addresses are the same. Below are the sample commands:
testpmd> add bonding slave 2 4
testpmd> show bonding config 4
testpmd> show port info 1 ------(To check if ports 1, 2 and 4 have the same MAC address as port1)
testpmd> show port info 4
testpmd> show port info 2
Set the bonded device’s MAC address, and verify the bonded port and slaves’ MAC address have changed to the new MAC address:
testpmd> set bonding mac_addr 4 00:11:22:00:33:44
testpmd> show port info 1 ------(To check if ports 1, 2 and 4 have the new MAC address)
testpmd> show port info 4
testpmd> show port info 2
Change the primary slave to the 2nd slave, and verify that the bonded device's MAC and the slaves' MACs are still the original ones. Remove the 2nd slave from the bonded device, then verify the 2nd slave's MAC address is restored to its original MAC:
testpmd> port start 4    (Make sure port4 has a primary slave)
testpmd> show bonding config 4
testpmd> set bonding primary 2 4
testpmd> show bonding config 4-----(Verify that port2 is primary slave)
testpmd> show port info 4
testpmd> show port info 2
testpmd> show port info 1-----(Verify that the bonded port's and the slaves' MACs are still the original ones)
testpmd> remove bonding slave 2 4
testpmd> show bonding config 4-----(Verify that port1 is primary slave)
testpmd> show port info 2 ------(To check if port2 returned to correct MAC)
testpmd> show port info 4 ------(Verify that the bonded device's and slaves' MACs are still the original ones after removing the primary slave)
testpmd> show port info 1
Add another slave (3rd slave), then remove this slave from the bonded device and verify the slave's MAC address is restored to its original MAC:
testpmd> add bonding slave 3 4
testpmd> show bonding config 4
testpmd> remove bonding slave 3 4
testpmd> show bonding config 4
testpmd> show port info 3 ------(To check if port3 has returned to the correct MAC)
Test Case3: Basic bonding–Device Promiscuous Mode Test¶
Use Setup#1.
Create bonded device, add 3 slaves. Set promiscuous mode on bonded eth dev. Verify all slaves of bonded device are changed to promiscuous mode:
./app/testpmd -c f -n 4 -- -i
.....
Port 0 Link Up - speed 10000 Mbps - full-duplex
Port 1 Link Up - speed 10000 Mbps - full-duplex
Port 2 Link Up - speed 10000 Mbps - full-duplex
Port 3 Link Up - speed 10000 Mbps - full-duplex
Done
testpmd> create bonded device 3 1
testpmd> add bonding slave 0 4
testpmd> add bonding slave 1 4
testpmd> add bonding slave 2 4
testpmd> show port info all---(Check if port0,1,2,4 has Promiscuous mode enabled)
********************* Infos for port 0 *********************
MAC address: 90:E2:BA:4A:54:80
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10000 Mbps
Link duplex: full-duplex
**Promiscuous mode: enabled**
Allmulticast mode: disabled
Maximum number of MAC addresses: 127
Maximum number of MAC addresses of hash filtering: 4096
VLAN offload:
strip on
filter on
qinq(extend) off
Send 1 packet to any bonded slave port (e.g. port0) with a destination MAC different from that slave's MAC (e.g. 00:11:22:33:44:55) and verify that the data is received at both the slave and the bonded device (port0 and port4):
testpmd> set portlist 3,4
testpmd> port start all
testpmd> start
testpmd> show port stats all----(Verify port0 has received 1 packet, port4 has received 1 packet, also port3 has transmitted 1 packet)
Disable promiscuous mode on the bonded device. Verify all slaves of the bonded eth dev have changed to non-promiscuous mode. This applies to modes 0, 2, 3 and 4; for other modes, such as mode 1, it only applies to the active slave:
testpmd> set promisc 4 off
testpmd> show port info all---(Verify that ports 0, 1, 2 and 4 have promiscuous mode disabled, depending on the mode)
Send 1 packet to any bonded slave port (e.g. port0) with a MAC not belonging to that slave and verify that the data is not received on the bonded device or the slave:
testpmd> show port stats all----(Verify port0 has NOT received the packet, and port4 has NOT received it either)
Send 1 packet to any bonded slave port (e.g. port0) with that slave's MAC and verify that the data is received on the bonded device and the slave since the MAC address is correct:
testpmd> show port stats all----(Verify port0 has received 1 packet, port4 received 1 packet,also port3 has transmitted 1 packet)
Test Case4: Mode 0(Round Robin) TX/RX test¶
TX:
Add ports 1-3 as slave devices to the bonded port 5. Send a packet stream from port D on the traffic generator to be forwarded through the bonded port. Verify that traffic is distributed equally in a round robin manner through ports 1-3 on the DUT back to the traffic generator. The sum of the packets received on ports A-C should equal the total packets sent from port D. The sum of the packets transmitted on ports 1-3 should equal the total packets transmitted from port 5 and received on port 4:
./app/testpmd -c f -n 4 -- -i
....
testpmd> create bonded device 0 1
testpmd> add bonding slave 0 4
testpmd> add bonding slave 1 4
testpmd> add bonding slave 2 4
testpmd> set portlist 3,4
testpmd> port start all
testpmd> start
testpmd> show port stats all----(Check port0,1,2,3 and 4 tx/rx packet stats)
Send 100 packets to port3 and verify that port3 receives 100 packets and port4 transmits 100 packets; meanwhile the sum of the packets transmitted on ports 0-2 should equal the total packets transmitted from port4:
testpmd> show port stats all----(Verify port3 has 100 RX packets, ports 0-2 have 100 TX packets in total, and port4 has 100 TX packets)
RX: Add ports 1-3 as slave devices to the bonded port 5. Send a packet stream from port A, B or C on the traffic generator to be forwarded through the bonded port 5 to port 4. Verify that the sum of the packets transmitted from the traffic generator port equals the total packets received on port 5 and transmitted on port 4. Send a packet stream from the other 2 traffic generator ports connected to the bonded port's slave ports. Verify the data transmission/reception counts.
Send 10 packets to each of ports 0-2, to be forwarded to port3:
testpmd> clear port stats all
testpmd> show port stats all----(Verify ports 0-2 have 10 RX packets each, port4 has 30 RX packets, and port3 has 30 TX packets)
Test Case5: Mode 0(Round Robin) Bring one slave link down¶
Add ports 1-3 as slave devices to the bonded port 5. Bring the link on either port 1, 2 or 3 down. Send a packet stream from port D on the traffic generator to be forwarded through the bonded port. Verify that forwarded traffic is distributed equally, in a round-robin manner, through the active bonded ports on the DUT back to the traffic generator. The sum of the packets received on ports A-C should equal the total packets sent from port D. The sum of the packets transmitted on the active bonded ports should equal the total packets transmitted from port 5 and received on port 4. No traffic should be sent on the bonded port which was brought down. Bring the link back up on that bonded port. Verify that round robin returns to operating across all bonded ports.
Test Case6: Mode 0(Round Robin) Bring all slave links down¶
Add ports 1-3 as slave devices to the bonded port 5. Bring the links down on all bonded ports. Verify that the bonded callback for link down is called. Verify that no traffic is forwarded through the bonded device.
Test Case7: Mode 0(Round Robin) Performance test—-TBD¶
Configure layer 2 forwarding (testpmd) between the bonded dev and a non-bonded dev. Uni-directional flow: use IXIA to generate traffic to the non-bonded eth dev. Verify that TX packets are evenly distributed across the active ports. Measure performance through the bonded eth dev. Test with a bonded port with 0, 1 and 2 slave ports.
Test Case8: Mode 1(Active Backup) TX/RX Test¶
Add ports 0-2 as slave devices to the bonded port 4. Set port 0 as the active slave on the bonded device:
testpmd> create bonded device 1 1
testpmd> add bonding slave 0 4
testpmd> add bonding slave 1 4
testpmd> add bonding slave 2 4
testpmd> show port info 4-----(Check the MAC address of bonded device)
testpmd> set portlist 3,4
testpmd> port start all
testpmd> start
Send a packet stream (100 packets) from port A on the traffic generator to be forwarded through the bonded port4 to port3. Verify that the sum of the packets transmitted from the traffic generator portA equals the total packets received on port0 and port4 and transmitted on port3 (and received on portD):
testpmd> show port stats all---(Verify port0 receive 100 packets, and port4 receive 100 packets, and port3 transmit 100 packets)
Send a packet stream (100 packets) from portD on the traffic generator to be forwarded through port3 to the bonded port4. Verify that the sum of the packets (100 packets) transmitted from the traffic generator port equals the total packets received on port3, transmitted on port4 and port0, and received on portA:
testpmd> show port stats all---(Verify port0/port4 TX 100 packets, and port3 receive 100 packets)
Test Case9: Mode 1(Active Backup) Change active slave, RX/TX test¶
Continuing from Test Case8, change the active slave port from port0 to port1. Verify that the bonded device's MAC has changed to slave1's MAC:
testpmd> set bonding primary 1 4
Repeat the transmission and reception (TX/RX) test and verify that data is now transmitted and received through the new active slave and no longer through port0.
Test Case10: Mode 1(Active Backup) Link up/down active eth dev¶
Bring the link between port A and port0 down. If the tester is IXIA, IxExplorer can be used to set "Simulate Cable Disconnect" in the port properties. Verify that the active slave has been changed from port0. Repeat the transmission and reception test and verify that data is now transmitted and received through the new active slave and no longer through port0.
Test Case11: Mode 1(Active Backup) Bring all slave links down¶
Bring all slave ports of the bonded port down. Verify that the bonded callback for link down is called and that there are no active slaves. Verify that data cannot be sent or received through the bonded port. Send 100 packets to port3 and verify that the bonded port cannot TX the 100 packets.
Test Case12: Mode 1(Active Backup) Performance test—TBD¶
Configure layer 2 forwarding (testpmd) between the bonded dev and a non-bonded dev. Note: make sure the core and the slave port are on the same socket.
Bi-directional flow: use IXIA to generate traffic to the non-bonded eth dev (port3) and the active port0, with port1 non-active. Verify that TX packets are only sent to the active port (port0) and bonded port4. Measure performance through the RX of the IXIA ports mapped to slave port0 and port3. Check performance numbers for frame sizes 64, 128, 256, 512, 1024, 1280 and 1518.
Also check that if the port0 link goes down, port1 can take over as backup quickly, and re-check performance at the RX of the IXIA ports mapped to port1 and port3:
./app/testpmd -c f -n 4 -- -i --burst=32 --rxfreet=32 --mbcache=250 --txpt=32 --rxht=8 --rxwt=0 --txfreet=32 --txrst=32 --tx-offloads=0x0
testpmd> create bonded device 1 0
testpmd> add bonding slave 0 4
testpmd> add bonding slave 1 4
testpmd> add bonding slave 2 4
testpmd> set portlist 3,4
testpmd> port start all
testpmd> start
Test Case13: Mode 2(Balance XOR) TX Load Balance test¶
The bonded port will select the transmit slave for each packet based on the following hash function:
((dst_mac XOR src_mac) % (number of slave ports))
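As an aid for predicting per-slave packet counts, the formula above can be transcribed into a short Python sketch; this follows the documented formula only, not the bonding PMD's exact implementation, and the MAC addresses below are arbitrary examples:
# Sketch of the documented layer-2 balance XOR policy (not the PMD source code).
def l2_xor_slave(src_mac, dst_mac, num_slaves):
    src = int(src_mac.replace(":", ""), 16)
    dst = int(dst_mac.replace(":", ""), 16)
    return (dst ^ src) % num_slaves

# Example: which of 3 slaves a flow with these example MACs would be sent on.
print(l2_xor_slave("90:e2:ba:4a:54:80", "00:11:22:33:44:55", 3))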
Send 300 packets from the non-bonded port (port3), and verify these packets are forwarded to the bonded device. The bonded device will transmit these packets across its slaves. Verify that each slave receives the correct number of packets according to the policy. The total number of packets received across the slaves should equal 300.
Test Case14: Mode 2(Balance XOR) TX Load Balance Link down¶
Bring the link of one slave down. Send 300 packets from the non-bonded port (port3), and verify these packets are forwarded to the bonded device. Verify that each active slave receives the correct number of packets (according to the mode policy), and that the slave whose link is down does not receive packets.
Test Case15: Mode 2(Balance XOR) Bring all slave links down¶
Bring all slave links down. Verify that bonded callback for link down is called. Verify no packet can be sent.
Test Case16: Mode 2(Balance XOR) Layer 3+4 forwarding¶
Use "xmit_hash_policy()" to change to this forwarding mode. Create a stream of traffic which will exercise all slave ports using the transmit policy:
(((SRC_PORT XOR DST_PORT) XOR ((SRC_IP XOR DST_IP) AND 0xffff)) % number of slaves)
Transmit data through the bonded device and verify the TX packet count for each slave port is as expected.
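To estimate the expected TX count per slave when building the traffic stream, the layer 3+4 policy above can be approximated with a small Python helper; this is only a transcription of the formula for test planning, using arbitrary example addresses and ports, not the bonding PMD's internal hash:
import ipaddress

# Sketch of the documented layer 3+4 transmit policy (not the PMD source code).
def l34_xor_slave(src_ip, dst_ip, src_port, dst_port, num_slaves):
    ip_xor = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return ((src_port ^ dst_port) ^ (ip_xor & 0xffff)) % num_slaves

# Example flows exercising 3 slaves (addresses and ports are arbitrary examples).
for sport in (1024, 1025, 1026):
    print(sport, l34_xor_slave("192.168.0.1", "192.168.0.2", sport, 80, 3))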
Test Case17: Mode 2(Balance XOR) RX test¶
Send 100 packets to each bonded slave (port0, 1, 2). Verify that each slave receives 100 packets and the bonded device receives a total of 300 packets. Verify that the bonded device forwards the 300 packets to the non-bonded port (port3).
Test Case18: Mode 2(Balance XOR) Performance test–TBD¶
Configure layer 2 forwarding (testpmd) between the bonded dev and a non-bonded dev. Bi-directional flow: use IXIA to generate traffic to the non-bonded eth dev and port0. Verify that TX packets are distributed according to the XOR policy across the active ports. Measure performance through the bonded eth dev and the RX of the IXIA ports mapped to these active ports. Test with a bonded port with 0, 1 and 2 slave ports.
Test Case19: Mode 3(Broadcast) TX/RX Test¶
Add ports 0-2 as slave devices to the bonded port 4. Set port 0 as the active slave on the bonded device:
testpmd> create bonded device 3 1
testpmd> add bonding slave 0 4
testpmd> add bonding slave 1 4
testpmd> add bonding slave 2 4
testpmd> show port info 4-----(Check the MAC address of bonded device)
testpmd> set portlist 3,4
testpmd> port start all
testpmd> start
RX: Send a packet stream (100 packets) from port A on the traffic generator to be forwarded through the bonded port4 to port3. Verify that the sum of the packets transmitted from the traffic generator portA equals the total packets received on port0, port4 and portD (traffic generator):
testpmd> show port stats all---(Verify port0 receive 100 packets, and port4 receive 100 packets, and port3 transmit 100 packets)
TX: Send a packet stream (100 packets) from portD on the traffic generator to be forwarded through port3 to the bonded port4. Verify that the sum of the packets (100 packets) transmitted from the traffic generator port equals the total packets received on port4 and portA and transmitted to port0:
testpmd> show port stats all---(Verify port3 RX 100 packets, and port0,1,2,4 TX 100 packets)
Test Case20: Mode 3(Broadcast) Bring one slave link down¶
Bring one slave port's link down. Send 100 packets through portD to port3; port3 then forwards them to the bonded device (port4). Verify that the bonded device and the other slaves TX the correct number of packets (100 packets on each port).
Test Case21: Mode 3(Broadcast) Bring all slave links down¶
Bring all slave ports of the bonded port down. Verify that the bonded callback for link down is called. Verify that data cannot be sent or received through the bonded port.
Test Case22: Mode 3(Broadcast) Performance test–TBD¶
Configure layer 2 forwarding (testpmd) between the bonded dev and a non-bonded dev. Bi-directional flow: use IXIA to generate traffic to the non-bonded eth dev and port0. Verify that TX packets are sent to all slave ports. Measure performance through the bonded eth dev and the RX of the IXIA ports mapped to all slaves. Test with a bonded port with slave ports 0, 1 and 2. Optionally reduce the number of slaves from 3 to 2 to check whether performance differs.
TestPMD PCAP Tests¶
This document provides tests for the userland Intel(R) 82599 10 Gigabit Ethernet Controller (Niantic) Poll Mode Driver (PMD) when using pcap files as input and output.
The core configurations description is:
- 2C/1T: 2 Physical Cores, 1 Logical Core per physical core
- 4C/1T: 4 Physical Cores, 1 Logical Core per physical core
Prerequisites¶
This test does not require connections between the DUT and the tester as it is focused on PCAP devices created by TestPMD.
It is the TestPMD application itself which sends and receives traffic to and from PCAP files; no traffic generator is involved.
Test Case: test_send_packets_with_one_device¶
It is necessary to generate the input pcap file for one interface test. The pcap file can be created using scapy. Create a file with 1000 frames with the following structure:
Ether(src='00:00:00:00:00:<last Eth>', dst='00:00:00:00:00:00')/IP(src='192.168.1.1', dst='192.168.1.2')/("X"*26)
<last Eth> goes from 0 to 255 and repeats.
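A minimal scapy sketch for generating such an input file is shown below; the output file name in.pcap matches the --vdev argument used later, and the frame structure follows the description above:
from scapy.all import Ether, IP, Raw, wrpcap

# Build 1000 frames; the last byte of the source MAC cycles through 0..255 and repeats.
frames = []
for i in range(1000):
    src = "00:00:00:00:00:%02x" % (i % 256)
    frames.append(Ether(src=src, dst="00:00:00:00:00:00") /
                  IP(src="192.168.1.1", dst="192.168.1.2") /
                  Raw("X" * 26))
wrpcap("in.pcap", frames)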
The linuxapp is started with the following parameters:
-c 0xffffff -n 3 --vdev 'eth_pcap0;rx_pcap=in.pcap;tx_pcap=out.pcap' --
-i --port-topology=chained
Start the application and the forwarding, by typing start in the command line of the application. After a few seconds stop the forwarding and quit the application.
Check that the frames of in.pcap and out.pcap files are the same using scapy.
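The comparison can be done with a short scapy sketch like the one below (the file names assume the --vdev arguments shown above):
from scapy.all import rdpcap

# Compare the raw bytes of every frame in the input and output captures.
in_frames = rdpcap("in.pcap")
out_frames = rdpcap("out.pcap")
assert len(in_frames) == len(out_frames)
assert all(bytes(a) == bytes(b) for a, b in zip(in_frames, out_frames))
print("in.pcap and out.pcap carry identical frames")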
Test Case: test_send_packets_with_two_devices¶
Create 2 pcap files with 1000 and 500 frames as explained in test_send_packets_with_one_device test case.
The linuxapp is started with the following parameters:
-c 0xffffff -n 3 --vdev 'eth_pcap0;rx_pcap=in1.pcap;tx_pcap=out1.pcap' --vdev 'eth_pcap1;rx_pcap=in2.pcap;tx_pcap=out2.pcap'
-- -i
Start the application and the forwarding, by typing start in the command line of the application. After a few seconds stop the forwarding and quit the application.
Check that the frames of the in1.pcap and out2.pcap files, and of the in2.pcap and out1.pcap files, are the same using scapy.
Fortville RSS - Configuring Hash Function Tests¶
This document provides the test plan for testing the Fortville feature: support for configuring hash functions.
Prerequisites¶
- 2x Intel® 82599 (Niantic) NICs (2x 10GbE full duplex optical ports per NIC)
- 1x Fortville_eagle NIC (4x 10G)
- 1x Fortville_spirit NIC (2x 40G)
- 2x Fortville_spirit_single NIC (1x 40G)
The four ports of the 82599 connect to the Fortville_eagle; the two ports of the Fortville_spirit connect to the Fortville_spirit_single NICs. These three kinds of NICs are the target NICs. The connected NICs can send packets to these three NICs using scapy.
Network Traffic¶
The RSS feature is designed to improve networking performance by load balancing the packets received from a NIC port to multiple NIC RX queues, with each queue handled by a different logical core.
- The receive packet is parsed into the header fields used by the hash operation (such as IP addresses, TCP port, etc.)
- A hash calculation is performed. The Fortville supports four hash functions: Toeplitz, simple XOR and their symmetric versions.
- The seven LSBs of the hash result are used as an index into a 128/512 entry ‘redirection table’. Each entry provides a 4-bit RSS output index.
- There are four test cases, one for each of the four hash functions.
Test Case: test_toeplitz¶
Testpmd configuration - 16 RX/TX queues per port¶
set up testpmd with fortville NICs:
./testpmd -c fffff -n %d -- -i --coremask=0xffffe --rxq=16 --txq=16
Reta Configuration. 128 reta entries configuration:
testpmd command: port config 0 rss reta (hash_index,queue_id)
Set the PMD forwarding mode to only receive packets:
testpmd command: set fwd rxonly
RSS received packet type configuration; configure two received packet types:
testpmd command: port config 0 rss ip/udp
verbose configuration:
testpmd command: set verbose 8
Set the hash function; you can choose symmetric or not, and choose the port and packet type:
set_hash_function 0 toeplitz
start packet receive:
testpmd command: start
tester Configuration¶
set up scapy
Send packets of different types: IPv4, IPv4 with TCP, IPv4 with UDP, IPv6, IPv6 with TCP, and IPv6 with UDP:
sendp([Ether(dst="90:e2:ba:36:99:3c")/IP(src="192.168.0.4", dst="192.168.0.5")], iface="eth3")
test result¶
The testpmd will print the hash value and actual queue of every packet.
- Calculate the queue id: hash value % 128 (or 512), then refer to the redirection table to get the theoretical queue id (see the sketch after this list).
- Compare the theoretical queue id with the actual queue id.
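A minimal sketch of this check, assuming a 128-entry RETA (as configured above) represented as a Python list indexed by the low bits of the hash:
# "reta" is a hypothetical 128-entry table, matching what "port config 0 rss reta"
# programmed; the index is hash % 128, i.e. the seven LSBs of the hash result.
def theoretical_queue(hash_value, reta):
    return reta[hash_value % len(reta)]

reta = [i % 16 for i in range(128)]          # example: entries spread over 16 queues
print(theoretical_queue(0x2b5c1e07, reta))   # compare with the queue id testpmd reports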
Test Case: test_toeplitz_symmetric¶
The same as the above steps; pay attention to the hash function setting, which should use:
set_hash_function 0 toeplitz
set_sym_hash_ena_per_port 0 enable
set_sym_hash_ena_per_pctype 0 35 enable
And send packets with the same flow in different direction:
sendp([Ether(dst="90:e2:ba:36:99:3c")/IP(src="192.168.0.4", dst="192.168.0.5")], iface="eth3")
sendp([Ether(dst="90:e2:ba:36:99:3c")/IP(src="192.168.0.5", dst="192.168.0.4")], iface="eth3")
And the hash value and queue should be the same for these two flows.
Test Case: test_simple¶
The same as the above two test cases. Just pay attention to setting the hash function to “simple xor”.
Test Case: test_simple_symmetric¶
The same as the above test cases. Set the hash function to “simple xor” and enable the symmetric hash as in test_toeplitz_symmetric.
Test Case: test_dynamic_rss_bond_config¶
This case tests that bond slaves automatically sync the RSS hash configuration; it is only supported by Fortville.
set up testpmd with fortville NICs:
./testpmd -c f -n 4 -- -i --portmask 0x3 --tx-offloads=0x8fff
create bond device with mode 3:
create bonded device 3 0
add slave to bond device:
add bonding slave 0 2
add bonding slave 1 2
get default hash algorithm on slave:
get_hash_global_config 0
get_hash_global_config 1
set hash algorithm on slave 0:
set_hash_global_config 0 simple_xor ipv4-other enable
get hash algorithm on slave 0 and 1:
get_hash_global_config 0
get_hash_global_config 1
Check that slave 0 and slave 1 use the same hash algorithm.
Niantic Reta (Redirection table) Tests¶
This document provides the test plan for benchmarking RSS RETA (redirection table) updates for the Intel® 82599 10 Gigabit Ethernet Controller (Niantic) Poll Mode Driver (PMD) in userland runtime configurations. The contents of the RSS redirection table are not defined following a reset of the Memory Configuration registers. System software must initialize the table prior to enabling multiple receive queues. It can also update the redirection table during run time. Such updates of the table are not synchronized with the arrival time of received packets.
Prerequisites¶
2x Intel® 82599 (Niantic) NICs (2x 10GbE full duplex optical ports per NIC) plugged into the available PCIe Gen2 8-lane slots. To avoid PCIe bandwidth bottlenecks at high packet rates, a single optical port from each NIC is connected to the traffic generator.
Network Traffic¶
The RSS feature is designed to improve networking performance by load balancing the packets received from a NIC port to multiple NIC RX queues, with each queue handled by a different logical core.
- The receive packet is parsed into the header fields used by the hash operation (such as IP addresses, TCP port, etc.)
- A hash calculation is performed. The 82599 supports a single hash function, as defined by MSFT RSS. The 82599 therefore does not indicate to the device driver which hash function is used. The 32-bit result is fed into the packet receive descriptor.
- The seven LSBs of the hash result are used as an index into a 128-entry ‘redirection table’. Each entry provides a 4-bit RSS output index.
The RSS RETA update feature is designed to make RSS more flexible by allowing users to define the correspondence between the seven LSBs of the hash result and the queue id (RSS output index) by themselves.
Test Case: Results - IO Forwarding Mode¶
The following RX Ports/Queues configurations have to be benchmarked:
- 1 RX port / 2 RX queues (1P/2Q)
- 1 RX port / 9 RX queues (1P/9Q)
- 1 RX port / 16 RX queues (1P/16Q)
Testpmd configuration - 2 RX/TX queues per port¶
testpmd -cffffff -n 3 -b 0000:05:00.1 -- -i --rxd=512 --txd=512 --burst=32 \
--txpt=36 --txht=0 --txwt=0 --txfreet=32 --rxfreet=64 --txrst=32 --mbcache=128 \
--rxq=2 --txq=2
Testpmd configuration - 9 RX/TX queues per port¶
testpmd -cffffff -n 3 -b 0000:05:00.1 -- -i --rxd=512 --txd=512 --burst=32 \
--txpt=36 --txht=0 --txwt=0 --txfreet=32 --rxfreet=64 --txrst=32 --mbcache=128 \
--rxq=9 --txq=9
Testpmd configuration - 16 RX/TX queues per port¶
testpmd -cffffff -n 3 -b 0000:05:00.1 -- -i --rxd=512 --txd=512 --burst=32 \
--txpt=36 --txht=0 --txwt=0 --txfreet=32 --rxfreet=64 --txrst=32 --mbcache=128 \
--rxq=16 --txq=16
The -n option is used to select the number of memory channels. It should match the number of memory channels on that setup. The -b option is used to blacklist a PCI port so it is not used to receive packets. It should match the PCI address of that device.
Testpmd Configuration Options¶
By default, a single logical core runs the test.
The CPU IDs and the number of logical cores running the test in parallel can be manually set with the set corelist X,Y and the set nbcore N interactive commands of the testpmd application.
Reta Configuration. 128 reta entries configuration:
testpmd command: port config 0 rss reta (hash_index,queue_id)
Set the PMD forwarding mode to only receive packets:
testpmd command: set fwd rxonly
RSS received packet type configuration; configure two received packet types:
testpmd command: port config 0 rss ip/udp
Verbose configuration:
testpmd command: set verbose 8
Start packet receive:
testpmd command: start
tester Configuration¶
- In order to test most entries of the RETA, the traffic generator has to be configured to randomize the values of the 5-tuple fields of the transmitted IP/UDP packets, so that the RSS hash function output of the 5-tuple fields covers most of the RETA indexes (see the scapy sketch after this list).
- Set the number of packets in one burst to a certain value.
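If a software generator is used instead of IXIA, a scapy sketch along the following lines can approximate the randomized 5-tuple stream; the interface name and destination MAC are assumptions to be adapted to the setup:
import random
from scapy.all import Ether, IP, UDP, sendp

# Randomize the IP/UDP 5-tuple so the RSS hash output covers most RETA indexes.
pkts = []
for _ in range(128):
    pkts.append(Ether(dst="90:e2:ba:4a:54:80") /
                IP(src="10.0.%d.%d" % (random.randint(0, 255), random.randint(1, 254)),
                   dst="10.1.%d.%d" % (random.randint(0, 255), random.randint(1, 254))) /
                UDP(sport=random.randint(1024, 65535), dport=random.randint(1024, 65535)))
sendp(pkts, iface="eth3")   # tester port wired to the DUT port under test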
Example output (1P/2Q) received by the DUT:
packet index | hash output | rss output | actual queue id | pass |
0 | ||||
1 | ||||
2 | ||||
etc. | ||||
125 | ||||
126 | ||||
127 |
Niantic PMD Tests¶
This document provides benchmark tests for the userland Intel® 82599 10 Gigabit Ethernet Controller (Niantic) Poll Mode Driver (PMD). The userland PMD application runs the IO forwarding mode test described in the PMD test plan document with different parameters for the configuration of Niantic NIC ports.
The core configuration description is:
- 1C/1T: 1 Physical Core, 1 Logical Core per physical core (1 Hyperthread) using core #2 (socket 0, 2nd physical core)
- 1C/2T: 1 Physical Core, 2 Logical Cores per physical core (2 Hyperthreads) using core #2 and #14 (socket 0, 2nd physical core, 2 Hyperthreads)
- 2C/1T: 2 Physical Cores, 1 Logical Core per physical core using core #2 and #4 (socket 0, 2nd and 3rd physical cores)
Prerequisites¶
Each of the 10Gb Ethernet* ports of the DUT is directly connected in full-duplex to a different port of the peer traffic generator.
Using interactive commands, the traffic generator can be configured to send and receive in parallel, on a given set of ports.
The tool vtbwrun (included in Intel® VTune™ Performance Analyzer) will be used to monitor memory activities while running network benchmarks to check the number of Memory Partial Writes and the distribution of memory accesses among available Memory Channels. This will only be done on the userland application, as the tool requires a Linux environment to be running in order to be used.
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Test Case: Packet Checking¶
The linuxapp is started with the following parameters:
-c 0xffffff -n 3 -- -i --coremask=0x4 \
--rxd=512 --txd=512 --burst=32 --txfreet=32 --rxfreet=64 --mbcache=128 --portmask=0xffff \
--rxpt=4 --rxht=4 --rxwt=16 --txpt=36 --txht=0 --txwt=0 --txrst=32
The tester sends packets with different sizes (64, 65, 128, 256, 512, 1024, 1280 and 1518 bytes), using scapy, which will be forwarded by the DUT. The test checks if the packets are correctly forwarded and if both RX and TX packet sizes match.
Test Case: Descriptors Checking¶
The linuxapp is started with the following parameters:
-c 0xffffff -n 3 -- -i --coremask=0x4 \
--rxd={rxd} --txd={txd} --burst=32 --rxfreet=64 --mbcache=128 \
--portmask=0xffff --txpt=36 --txht=0 --txwt=0 --txfreet=32 --txrst=32
IXIA sends packets with different sizes (64, 65, 128, 256, 512, 1024, 1280 and 1518 bytes) for different values of rxd and txd (between 128 and 4096). The packets will be forwarded by the DUT. The test checks if the packets are correctly forwarded.
Test Case: Performance Benchmarking¶
The linuxapp is started with the following parameters, for each of the configurations referenced above:
1C/1T:
-c 0xffffff -n 3 -- -i --coremask=0x4 \
--rxd=512 --txd=512 --burst=32 --txfreet=32 --rxfreet=64 --mbcache=128 --portmask=0xffff \
--rxpt=4 --rxht=4 --rxwt=16 --txpt=36 --txht=0 --txwt=0 --txrst=32
1C/2T:
-c 0xffffff -n 3 -- -i --coremask=0x4004 \
--rxd=512 --txd=512 --burst=32 --txfreet=32 --rxfreet=64 --mbcache=128 --portmask=0xffff \
--rxpt=4 --rxht=4 --rxwt=16 --txpt=36 --txht=0 --txwt=0 --txrst=32
2C/1T:
-c 0xffffff -n 3 -- -i --coremask=0x14 \
--rxd=512 --txd=512 --burst=32 --txfreet=32 --rxfreet=64 --mbcache=128 --portmask=0xffff \
--rxpt=4 --rxht=4 --rxwt=16 --txpt=36 --txht=0 --txwt=0 --txrst=32
The throughput is measured for each of these cases for the packet size of 64, 65, 128, 256, 512, 1024, 1280 and 1518 bytes. The results are printed in the following table:
Frame Size | 1C/1T | 1C/2T | 2C/1T | wirespeed |
64 | ||||
65 | ||||
128 | ||||
256 | ||||
512 | ||||
1024 | ||||
1280 | ||||
1518 |
The memory partial writes are measured with the vtbwrun application and printed in the following table:
Sampling Duration: 000000.00 micro-seconds
--- Logical Processor 0 ---||--- Logical Processor 1 ---
---------------------------------------||---------------------------------------
--- Intersocket QPI Utilization ---||--- Intersocket QPI Utilization ---
---------------------------------------||---------------------------------------
--- Reads (MB/s): 0.00 ---||--- Reads (MB/s): 0.00 ---
--- Writes(MB/s): 0.00 ---||--- Writes(MB/s): 0.00 ---
---------------------------------------||---------------------------------------
--- Memory Performance Monitoring ---||--- Memory Performance Monitoring ---
---------------------------------------||---------------------------------------
--- Mem Ch 0: #Ptl Wr: 0000.00 ---||--- Mem Ch 0: #Ptl Wr: 0.00 ---
--- Mem Ch 1: #Ptl Wr: 0000.00 ---||--- Mem Ch 1: Ptl Wr (MB/s): 0.00 ---
--- Mem Ch 2: #Ptl Wr: 0000.00 ---||--- Mem Ch 2: #Ptl Wr: 0.00 ---
--- ND0 Mem #Ptl Wr: 0000.00 ---||--- ND1 #Ptl Wr: 0.00 ---
PTYPE Mapping Tests¶
Previously, all PTYPEs (packet types) in DPDK PMDs were statically defined using constant map tables. This makes it impossible to add a new packet type without first defining it statically and then recompiling DPDK. New NICs are flexible enough to be reconfigured depending on the network environment. In the case of FVL, new PTYPEs can be added dynamically at device initialization time using the corresponding AQ commands. Note that the packet types of the same packet recognized by different hardware may be different, as different hardware may have different packet type recognition capabilities.
These 32 bits of packet_type can be divided into several sub-fields to indicate different packet type information of a packet. The initial design is to divide those bits into fields for L2 types, L3 types, L4 types, tunnel types, inner L2 types, inner L3 types and inner L4 types. All PMDs should translate the offloaded packet types into these 7 fields of information for user applications.
Prerequisites¶
Start testpmd, enable rxonly and verbose mode:
./testpmd -c f -n 4 -- -i --port-topology=chained
Test Case 1: Get ptype mapping¶
Get hardware defined ptype to software defined ptype mapping items:
testpmd> ptype mapping get <port_id> <valid_only>
Note the valid_only parameter:
(0) return all 256 mapping items.
(!0) return only the defined (valid) mapping items.
Check the table: the first column is the hardware ptype, the second column is the software ptype. Take hw_ptype 24 as an example:
...
22 0x00000391
23 0x00000691
24 0x00000291
26 0x00000191
27 0x00000491
...
[24] = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
RTE_PTYPE_L4_UDP,
RTE_PTYPE_L2_ETHER defined as 0x00000001,
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN defined as 0x00000090,
RTE_PTYPE_L4_UDP defined as 0x00000200,
Calculating with the L2/L3/L4 masks, we get the ptype 0x00000291.
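This composition can be sanity-checked by OR-ing the mask values quoted above:
RTE_PTYPE_L2_ETHER = 0x00000001
RTE_PTYPE_L3_IPV4_EXT_UNKNOWN = 0x00000090
RTE_PTYPE_L4_UDP = 0x00000200

# Matches entry [24] in the mapping table above.
print(hex(RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP))  # 0x291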
Set <valid_only> to 0 and check that all 256 mapping items (hw_ptype 0~255) are returned.
Set <valid_only> to 1 and check that only the defined ptype mapping items are returned.
Send packets and check in the RX dump that the software and hardware ptypes are correct, as in the table below:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/Raw('\0'*40)],iface=txItf)
Outer L2 | Outer L3 | Outer L4 | Tunnel | Inner L2 | Inner L3 | Inner L4 | sw_ptype | hw_ptype
ETHER | IPV4_EXT_UNKNOWN | Unknown | IP | Unknown | IPV6_EXT_UNKNOWN | UDP | 0x02601091 | 38
ETHER | IPV4_EXT_UNKNOWN | Unknown | GRENAT | ETHER_VLAN | IPV4_EXT_UNKNOWN | NONFRAG | 0x06426091 | 75
Dumped packet:
testpmd> port 0/queue 0: received 1 packets
src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - type=0x0800 - length=122 - nb_segs=1
- hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN TUNNEL_IP INNER_L3_IPV6_EXT_UNKNOWN INNER_L4_UDP
- sw ptype: L2_ETHER L3_IPV4 TUNNEL_IP INNER_L3_IPV6 INNER_L4_UDP
- l2_len=14 - l3_len=20 - tunnel_len=0 - inner_l3_len=40 - inner_l4_len=8 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD
port 0/queue 0: received 1 packets
src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - type=0x0800 - length=120 - nb_segs=1
- hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN TUNNEL_GRENAT INNER_L2_ETHER_VLAN INNER_L3_IPV4_EXT_UNKNOWN INNER_L4_NONFRAG
- sw ptype: L2_ETHER L3_IPV4 TUNNEL_NVGRE INNER_L2_ETHER_VLAN INNER_L3_IPV4
- l2_len=14 - l3_len=20 - tunnel_len=8 - inner_l2_len=18 - inner_l3_len=20 - Receive queue=0x0
ol_flags: PKT_RX_L4_CKSUM_GOOD PKT_RX_IP_CKSUM_GOOD
Test Case 2: Reset ptype mapping¶
Send packet and dump RX, check outer_L2, outer_L3, outer_L4, tunnel, inner L2, inner L3, inner L4 as below table:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
Outer L2 | Outer L3 | Outer L4 | Tunnel | Inner L2 | Inner L3 | Inner L4 | sw_ptype | hw_ptype
ETHER | IPV4_EXT_UNKNOWN | Unknown | IP | Unknown | IPV6_EXT_UNKNOWN | UDP | 0x02601091 | 38
Check ptype mapping items: hw_ptype=38, sw_ptype=0x02601091
Update the hardware defined ptype to software defined packet type mapping table. Note that hw_ptype should be among 0~255 and sw_ptype should conform to the defined masks, e.g. change the outer L3 value to 0x000000e0, which is IPV6_EXT_UNKNOWN:
testpmd> ptype mapping update 0 38 0x026010e1
Check ptype mapping hw_ptype=38 and sw_ptype is updated to 0x026010e1
Send packet and dump RX, check outer_L3 is changed to IPV6_EXT_UNKNOWN:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
Reset ptype mapping table to default:
testpmd> ptype mapping reset <port_id>
Check ptype mapping hw_ptype=38 and sw_ptype is updated to 0x02601091
Send packet and dump RX, check outer_L3 is changed to IPV4_EXT_UNKNOWN
Test Case 3: Update ptype mapping¶
Send packets and dump RX, check outer_L2, outer_L3, outer_L4, tunnel, inner L2, inner L3, inner L4 as below table:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/Raw('\0'*40)],iface=txItf)
Outer L2 | Outer L3 | Outer L4 | Tunnel | Inner L2 | Inner L3 | Inner L4 | sw_ptype | hw_ptype
ETHER | IPV4_EXT_UNKNOWN | Unknown | IP | Unknown | IPV6_EXT_UNKNOWN | UDP | 0x02601091 | 38
ETHER | IPV4_EXT_UNKNOWN | Unknown | GRENAT | ETHER_VLAN | IPV4_EXT_UNKNOWN | NONFRAG | 0x06426091 | 75
Get the defined ptype mapping items and check that when hw_ptype=38, sw_ptype is 0x02601091, and when hw_ptype=75, sw_ptype is 0x06426091.
Update the hardware defined ptype to software defined packet type mapping table. Note that hw_ptype should be among 0~255 and sw_ptype should conform to the defined masks, e.g. change the outer L3 value to 0x000000e0, which is IPV6_EXT_UNKNOWN:
testpmd> ptype mapping update 0 38 0x026010e1
Update [75]’s sw_ptype to the same value as [38]’s sw_ptype:
testpmd> ptype mapping update 0 75 0x026010e1
Check ptype mapping items: when hw_ptype=38, sw_ptype is updated to value 0x026010e1, when hw_ptype=75,sw_ptype is updated to value 0x026010e1
Send packets and dump RX, check outer_L2, outer_L3, outer_L4, tunnel, inner L2, inner L3, inner L4 as below table, outer_L3 is changed to IPV6_EXT_UNKNOWN:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/Raw('\0'*40)],iface=txItf)
Outer L2 | Outer L3 | Outer L4 | Tunnel | Inner L2 | Inner L3 | Inner L4 | sw_ptype | hw_ptype
ETHER | IPV6_EXT_UNKNOWN | Unknown | IP | Unknown | IPV6_EXT_UNKNOWN | UDP | 0x026010e1 | 38
ETHER | IPV6_EXT_UNKNOWN | Unknown | IP | Unknown | IPV6_EXT_UNKNOWN | UDP | 0x026010e1 | 75
Reset hardware defined ptype to software defined ptype mapping table to default:
testpmd> ptype mapping reset <port_id>
Check ptype mapping items: when hw_ptype=38, sw_ptype is changed back to 0x02601091; when hw_ptype=75, sw_ptype is changed back to 0x06426091.
Send packet and dump RX, check outer_L2, outer_L3, outer_L4, tunnel, inner L2, inner L3, inner L4 as below table:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/Raw('\0'*40)],iface=txItf)
Outer L2 | Outer L3 | Outer L4 | Tunnel | Inner L2 | Inner L3 | Inner L4 | sw_ptype | hw_ptype
ETHER | IPV4_EXT_UNKNOWN | Unknown | IP | Unknown | IPV6_EXT_UNKNOWN | UDP | 0x02601091 | 38
ETHER | IPV4_EXT_UNKNOWN | Unknown | GRENAT | ETHER_VLAN | IPV4_EXT_UNKNOWN | NONFRAG | 0x06426091 | 75
Test Case 4: Replace ptype mapping¶
Replace a specific or a group of software defined ptypes with a new one:
testpmd> ptype mapping replace <port_id> <target> <mask> <pkt_type>
Note that target is the packet type to be replaced, pkt_type is the new packet type to overwrite, mask is defined as below:
(0) target represents a specific software defined ptype.
(!0) target is a mask to represent a group of software defined ptypes.
Send packets and dump RX, check outer_L2, outer_L3, outer_L4, tunnel, inner L2,inner L3, inner L4 as below table:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/Raw('\0'*40)],iface=txItf)
Outer L2 | Outer L3 | Outer L4 | Tunnel | Inner L2 | Inner L3 | Inner L4 | sw_ptype | hw_ptype
ETHER | IPV4_EXT_UNKNOWN | Unknown | IP | Unknown | IPV6_EXT_UNKNOWN | UDP | 0x02601091 | 38
ETHER | IPV4_EXT_UNKNOWN | Unknown | GRENAT | ETHER_VLAN | IPV4_EXT_UNKNOWN | NONFRAG | 0x06426091 | 75
Replace a specific software defined ptype with a new one, e.g. change the tunnel type from GRENAT to IP, so change the sw_ptype from xxxx6xxx to xxxx1xxx:
testpmd> ptype mapping replace 0 0x06426091 0 0x06421091
Update [38]’s sw_ptype to the same value as [75]’s, 0x06421091:
testpmd> ptype mapping update 0 38 0x06421091
Send packet and dump RX, check outer_L2, outer_L3, outer_L4, tunnel, inner L2, inner L3, inner L4 as below table:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/Raw('\0'*40)],iface=txItf)
Outer L2 | Outer L3 | Outer L4 | Tunnel | Inner L2 | Inner L3 | Inner L4 | sw_ptype | hw_ptype
ETHER | IPV4_EXT_UNKNOWN | Unknown | IP | ETHER_VLAN | IPV4_EXT_UNKNOWN | NONFRAG | 0x06421091 | 38
ETHER | IPV4_EXT_UNKNOWN | Unknown | IP | ETHER_VLAN | IPV4_EXT_UNKNOWN | NONFRAG | 0x06421091 | 75
Mapping table has at least two same sw_ptype 0x06421091; update the group of 0x06421091 entries to 0x02601091:
testpmd> ptype mapping replace 0 0x06421091 1 0x02601091
Check ptype mapping items: when hw_ptype=38, sw_ptype is updated to 0x02601091, when hw_ptype=75, sw_ptype is updated to 0x02601091
Send packet and dump RX, check outer_L2, outer_L3, outer_L4, tunnel, inner L2, inner L3, inner L4 as below table:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/Raw('\0'*40)],iface=txItf)
Outer L2 | Outer L3 | Outer L4 | Tunnel | Inner L2 | Inner L3 | Inner L4 | sw_ptype | hw_ptype
ETHER | IPV4_EXT_UNKNOWN | Unknown | IP | Unknown | IPV6_EXT_UNKNOWN | UDP | 0x02601091 | 38
ETHER | IPV4_EXT_UNKNOWN | Unknown | IP | Unknown | IPV6_EXT_UNKNOWN | UDP | 0x02601091 | 75
Reset hardware defined ptype to software defined ptype mapping table to default:
testpmd> ptype mapping reset <port_id>
Check ptype mapping items: when hw_ptype=38, sw_ptype is changed back to 0x02601091; when hw_ptype=75, sw_ptype is changed back to 0x06426091.
Send packet and dump RX, check outer_L2, outer_L3, outer_L4, tunnel, inner L2, inner L3, inner L4 as below table:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/Raw('\0'*40)],iface=txItf)
Outer L2 | Outer L3 | Outer L4 | Tunnel | Inner L2 | Inner L3 | Inner L4 | sw_ptype | hw_ptype
ETHER | IPV4_EXT_UNKNOWN | Unknown | IP | Unknown | IPV6_EXT_UNKNOWN | UDP | 0x02601091 | 38
ETHER | IPV4_EXT_UNKNOWN | Unknown | GRENAT | ETHER_VLAN | IPV4_EXT_UNKNOWN | NONFRAG | 0x06426091 | 75
Shutdown API Queue Tests¶
This test for the Shutdown API feature can be run on Linux userspace. It checks whether a NIC port can be stopped and restarted without exiting the application process. Furthermore, it checks whether new configurations can be applied to a port after the port is stopped, and whether the port is able to restart with those new configurations. It is based on the testpmd application.
The test is performed by running the testpmd application and using a traffic generator. Port/queue configurations can be set interactively, and still be set at the command line when launching the application in order to be compatible with previous test framework.
Prerequisites¶
Assume port A and B are connected to the remote ports, e.g. packet generator. To run the testpmd application in linuxapp environment with 4 lcores, 4 channels with other default parameters in interactive mode:
$ ./testpmd -c 0xf -n 4 -- -i
Test Case: queue start/stop¶
This case supports PF (Fortville) and VF (Fortville, Niantic).
Update testpmd source code. Add the following C code in ./app/test-pmd/fwdmac.c:
printf("ports %u queue %u received %u packages\n", fs->rx_port, fs->rx_queue, nb_rx);
Compile testpmd again, then run testpmd.
Run “set fwd mac” to set fwd type
Run “start” to start forwarding packets
Start packet generator to transmit and receive packets
Run “port 0 rxq 0 stop” to stop rxq 0 in port 0
Start packet generator to transmit and not receive packets
Run “port 0 rxq 0 start” to start rxq 0 in port 0
Run “port 1 txq 1 stop” to stop txq 1 in port 1
Start packet generator to transmit; packets are not received back, but testpmd prints “ports 0 queue 0 received 1 packages”
Run “port 1 txq 1 start” to start txq 1 in port 1
Start packet generator to transmit and receive packets
Test it again with VF
Scattered Packets Tests¶
The support of scattered packets by Poll Mode Drivers consists in making it possible to receive and to transmit scattered multi-segments packets composed of multiple non-contiguous memory buffers. To enforce the receipt of scattered packets, the DMA rings of port RX queues must be configured with mbuf data buffers whose size is lower than the maximum frame length. The forwarding of scattered input packets naturally enforces the transmission of scattered packets by PMD transmit functions.
Configuring the size of mbuf data buffers¶
The size of mbuf data buffers is configured with the parameter --mbuf-size that is supplied in the set of parameters when launching the testpmd application. The default size of the mbuf data buffer is 2048 so that a full 1518-byte (CRC included) Ethernet frame can be stored in a mono-segment packet.
Functional Tests of Scattered Packets¶
Testing the support of scattered packets in Poll Mode Drivers consists in sending to the test machine packets whose length is greater than the size of mbuf data buffers used to populate the DMA rings of port RX queues.
First, the receipt and the transmission of scattered packets must be tested with the CRC stripping option enabled, which guarantees that scattered packets only contain packet data. In addition, the support of scattered packets must also be performed with the CRC stripping option disabled, to check the special cases of scattered input packets whose last buffer only contains the whole CRC or part of it. In such cases, PMD receive functions must free the last buffer when removing the CRC from the packet before returning it.
As a whole, the following packet lengths (CRC included) must be tested to check all packet memory configurations:
- packet length < mbuf data buffer size
- packet length = mbuf data buffer size
- packet length = mbuf data buffer size + 1
- packet length = mbuf data buffer size + 4
- packet length = mbuf data buffer size + 5
In cases 1) and 2), the hardware RX engine stores the packet data and the CRC in a single buffer.
In case 3), the hardware RX engine stores the packet data and the 3 first bytes of the CRC in the first buffer, and the last byte of the CRC in a second buffer.
In case 4), the hardware RX engine stores all the packet data in the first buffer, and the CRC in a second buffer.
In case 5), the hardware RX engine stores part of the packet data in the first buffer, and the last data byte plus the CRC in a second buffer.
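A quick way to reason about these cases, assuming (as in the descriptions above) that the 4-byte CRC is written into the RX buffers and only stripped later by the PMD:
import math

# Number of RX buffers the hardware fills for one frame, CRC written to the buffers.
def rx_buffers_needed(frame_len_crc_included, mbuf_data_size):
    return math.ceil(frame_len_crc_included / mbuf_data_size)

# With a 1024-byte mbuf data buffer (as in the prerequisites below), the five cases give:
for length in (1023, 1024, 1025, 1028, 1029):
    print(length, rx_buffers_needed(length, 1024))   # 1, 1, 2, 2, 2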
Prerequisites¶
Assuming that ports 0 and 1 of the test target are directly connected to a Traffic Generator, launch the testpmd application with the following arguments:
./build/app/testpmd -cffffff -n 3 -- -i --rxd=1024 --txd=1024 \
--burst=144 --txpt=32 --txht=8 --txwt=8 --txfreet=0 --rxfreet=64 \
--mbcache=200 --portmask=0x3 --mbuf-size=1024
The -n command is used to select the number of memory channels. It should match the number of memory channels on that setup.
Setting the size of the mbuf data buffer to 1024 makes 1025-bytes input packets (CRC included) and larger packets to be stored in two buffers by the hardware RX engine.
Test Case: Mbuf 1024 traffic¶
Start packet forwarding in the testpmd application with the start command.
Send 5 packets of lengths (CRC included) 1023, 1024, 1025, 1028, and 1029.
Check that the same amount of frames and bytes are received back by the Traffic
Generator from its port connected to the target’s port 1.
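If the Traffic Generator is a software one, a scapy sketch like the following can produce these frames; the interface name and destination MAC are assumptions for the tester port wired to the target's port 0, and the lengths include the 4-byte CRC that the sending NIC appends:
from scapy.all import Ether, IP, Raw, sendp

iface = "eth2"                    # tester port connected to target port 0 (assumption)
dst_mac = "02:00:00:00:00:01"     # target port 0 MAC address (assumption)
for frame_len in (1023, 1024, 1025, 1028, 1029):
    payload = frame_len - 4 - 14 - 20            # CRC(4) + Ether(14) + IP(20) headers
    sendp(Ether(dst=dst_mac)/IP()/Raw("X" * payload), iface=iface)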
Short-lived Application Tests¶
This feature is intended to reduce application start-up time and, on exit, to do more clean-up so that the application can be re-run many times.
Prerequisites¶
To test this feature, use the Linux time command when starting testpmd. First create and mount hugepages; create a large number of hugepages so that the start-up time difference is more obvious:
# echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# mount -t hugetlbfs hugetlbfs /mnt/huge
Bind nic to DPDK:
./tools/dpdk_nic_bind.py -b igb_uio xxxx:xx:xx.x
Start testpmd using time:
# echo quit | time ./testpmd -c 0x3 -n 4 -- -i
Test Case 1: basic fwd testing¶
Start testpmd:
./testpmd -c 0x3 -n 4 -- -i
Set fwd mac
Send packet from pkg
Check all packets could be fwd back
Test Case 2: Get start up time¶
Start testpmd:
echo quit | time ./testpmd -c 0x3 -n 4 --huge-dir /mnt/huge -- -i
Get the time stats of the startup
Repeat steps 1~2 at least 5 times to get the average
Test Case 3: Clean up with Signal – testpmd¶
Create 4G of hugepages, so that time is saved when repeating:
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs hugetlbfs /mnt/huge1
Start testpmd:
./testpmd -c 0x3 -n 4 --huge-dir /mnt/huge1 -- -i
Set fwd mac
Send packets from pkg
Check all packets could be fwd back
Kill the testpmd in shell using below commands alternately:
SIGINT: pkill -2 testpmd
SIGTERM: pkill -15 testpmd
Repeat steps 1-6 for 20 times; packets must be forwarded back with no error each time.
Test Case 4: Clean up with Signal – l2fwd¶
Create 4G of hugepages, so that time is saved when repeating:
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs hugetlbfs /mnt/huge1
Start l2fwd:
./l2fwd -c 0x3 -n 4 --huge-dir /mnt/huge1 -- -p 0x01
Set fwd mac
Send packets from pkg
Check all packets could be fwd back
Kill the l2fwd process in the shell using the below commands alternately:
SIGINT: pkill -2 l2fwd
SIGTERM: pkill -15 l2fwd
Repeat steps 1-6 for 20 times; packets must be forwarded back with no error each time.
Test Case 5: Clean up with Signal – l3fwd¶
Create 4G of hugepages, so that time is saved when repeating:
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs hugetlbfs /mnt/huge1
Start l3fwd:
./l3fwd -c 0x3 -n 4 --huge-dir /mnt/huge1 -- -p 0x01 --config="(0,0,1)"
Set fwd mac
Send packets from pkg
Check all packets could be fwd back
Kill the l3fwd process in the shell using the below commands alternately:
SIGINT: pkill -2 l3fwd
SIGTERM: pkill -15 l3fwd
Repeat steps 1-6 for 20 times; packets must be forwarded back with no error each time.
Shutdown API Feature Tests¶
This test for the Shutdown API feature can be run on Linux userspace. It checks whether a NIC port can be stopped and restarted without exiting the application process. Furthermore, it checks whether new configurations can be applied to a port after the port is stopped, and whether the port is able to restart with those new configurations. It is based on the testpmd application.
The test is performed by running the testpmd application and using a traffic generator. Port/queue configurations can be set interactively, and still be set at the command line when launching the application in order to be compatible with previous test framework.
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Assume port A and B are connected to the remote ports, e.g. packet generator. To run the testpmd application in linuxapp environment with 4 lcores, 4 channels with other default parameters in interactive mode:
$ ./testpmd -c 0xf -n 4 -- -i
Test Case: Stop and Restart¶
- If the testpmd application is not launched, run it as above command. Follow below steps to check if it works well after re-configuring all ports without changing any configurations.
- Run "start" to start forwarding packets.
- Check that testpmd is able to forward traffic.
- Run "stop" to stop forwarding packets.
- Run "port stop all" to stop all ports.
- Check on the tester side that the ports are down using ethtool.
- Run "port start all" to restart all ports.
- Check on the tester side that the ports are up using ethtool.
- Run "start" again to restart the forwarding, then start packet generator to transmit and receive packets, and check if testpmd is able to receive and forward packets successfully.
Test Case: Reset RX/TX Queues¶
- If the testpmd application is not launched, run it as above command. Follow below steps to check if it works well after reconfiguring all ports without changing any configurations.
- Run "port stop all" to stop all ports.
- Run "port config all rxq 2" to change the number of receiving queues to two.
- Run "port config all txq 2" to change the number of transmitting queues to two.
- Run "port start all" to restart all ports.
- Check with "show config rxtx" that the configuration for these parameters changed.
- Run "start" again to restart the forwarding, then start packet generator to transmit and receive packets, and check if testpmd is able to receive and forward packets successfully.
Test Case: Set promiscuous mode¶
- If the testpmd application is not launched, run it as above command. Follow below steps to check if promiscuous mode setting works well after reconfiguring it while all ports are stopped.
- Run "port stop all" to stop all ports.
- Run "set promisc all off" to disable promiscuous mode on all ports.
- Run "port start all" to restart all ports.
- Run "start" again to restart the forwarding, then start packet generator to transmit and receive packets, and check that testpmd is NOT able to receive and forward packets successfully.
- Run "port stop all" to stop all ports.
- Run "set promisc all on" to enable promiscuous mode on all ports.
- Run "port start all" to restart all ports.
- Run "start" again to restart the forwarding, then start packet generator to transmit and receive packets, and check that testpmd is able to receive and forward packets successfully.
Test Case: Reconfigure All Ports With The Same Configurations (CRC)¶
- If the testpmd application is not launched, run it as above command. Follow below steps to check if it works well after reconfiguring all ports without changing any configurations.
- Run "port stop all" to stop all ports.
- Run "port config all crc-strip on" to enable the CRC stripping mode.
- Run "port start all" to restart all ports.
- Check with "show config rxtx" that the configuration for these parameters changed.
- Run "start" again to restart the forwarding, then start packet generator to transmit and receive packets, and check if testpmd is able to receive and forward packets successfully. Check that the packet received is 4 bytes smaller than the packet sent.
Test Case: Change Link Speed¶
- If the testpmd application is not launched, run it as above command. Follow below steps to check if it works well after reconfiguring all ports without changing any configurations.
- Run "port stop all" to stop all ports.
- Run "port config all speed SPEED duplex HALF/FULL" to select the new config for the link.
- Run "port start all" to restart all ports.
- Check on the tester side that the configuration actually changed using ethtool.
- Run "start" again to restart the forwarding, then start packet generator to transmit and receive packets, and check if testpmd is able to receive and forward packets successfully.
- Repeat this process for every compatible speed depending on the NIC driver.
Test Case: Enable/Disable Jumbo Frame¶
- If the testpmd application is not launched, run it as above command. Follow below steps to check if it works well after reconfiguring all ports without changing any configurations.
- Run "port stop all" to stop all ports.
- Run "port config all max-pkt-len 2048" to set the maximum packet length.
- Run "port start all" to restart all ports.
- Run "start" again to restart the forwarding, then start packet generator to transmit and receive packets, and check if testpmd is able to receive and forward packets successfully. Check this with the following packet sizes: 2047, 2048 & 2049. Only the third one should fail.
Test Case: Enable/Disable RSS¶
- If the testpmd application is not launched, run it as above command. Follow below steps to check if it works well after reconfiguring all ports without changing any configurations.
- Run "port stop all" to stop all ports.
- Run "port config rss ip" to enable RSS.
- Run "port start all" to restart all ports.
- Run "start" again to restart the forwarding, then start packet generator to transmit and receive packets, and check if testpmd is able to receive and forward packets successfully.
Test Case: Change the Number of rxd/txd¶
- If the testpmd application is not launched, run it as above command. Follow below steps to check if it works well after reconfiguring all ports without changing any configurations.
- Run "port stop all" to stop all ports.
- Run "port config all rxd 1024" to change the rx descriptors.
- Run "port config all txd 1024" to change the tx descriptors.
- Run "port start all" to restart all ports.
- Check with "show config rxtx" that the descriptors were actually changed.
- Run "start" again to restart the forwarding, then start packet generator to transmit and receive packets, and check if testpmd is able to receive and forward packets successfully.
Test Case: link stats¶
- If the testpmd application is not launched, run it as above command. Follow below steps to check if it works well after reconfiguring all ports without changing any configurations.
- Run "set fwd mac" to set fwd type.
- Run "start" to start the forwarding, then start packet generator to transmit and receive packets.
- Run "set link-down port X" to set all port links down.
- Check on the tester side that the configuration actually changed using ethtool.
- Start packet generator to transmit and not receive packets.
- Run "set link-up port X" to set all port links up.
- Start packet generator to transmit and receive packets successfully.
SRIOV and InterVM Communication Tests¶
Some applications, such as pipelining of virtual appliances and traffic mirroring to virtual appliances, require high-performance inter-VM communication.
The testpmd application is used to configure traffic mirroring, PF VM receive mode, PFUTA hash table and control traffic to a VF for inter-VM communication.
The 82599 supports four separate mirroring rules, each associated with a destination pool. Each rule is programmed with one of the four mirroring types:
- Pool mirroring: reflect all the packets received to a pool from the network.
- Uplink port mirroring: reflect all the traffic received from the network.
- Downlink port mirroring: reflect all the traffic transmitted to the network.
- VLAN mirroring: reflect all the traffic received from the network in a set of given VLANs (either from the network or from local VMs).
Prerequisites for all 2VMs cases/Mirror 2VMs cases¶
Create two VF interfaces, VF0 and VF1, from one PF interface and then attach them to VM0 and VM1. Suppose the PF is 0000:08:00.0. Below are commands which can be used to generate 2 VFs and bind them to pci-stub:
./tools/pci_unbind.py --bind=igb_uio 0000:08:00.0
echo 2 > /sys/bus/pci/devices/0000\:08\:00.0/max_vfs
echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:08:10.0 >/sys/bus/pci/devices/0000\:08\:10.0/driver/unbind
echo 0000:08:10.2 >/sys/bus/pci/devices/0000\:08\:10.2/driver/unbind
echo 0000:08:10.0 >/sys/bus/pci/drivers/pci-stub/bind
echo 0000:08:10.2 >/sys/bus/pci/drivers/pci-stub/bind
Start the PF driver on the host and blacklist (skip) the VFs:
./x86_64-default-linuxapp-gcc/app/testpmd -c f \
-n 4 -b 0000:08:10.0 -b 0000:08:10.2 -- -i
For the VM0 start-up command, refer to the command below:
qemu-system-x86_64 -name vm0 -enable-kvm -m 2048 -smp 4 -cpu host \
-drive file=/root/Downloads/vm0.img -net nic,macaddr=00:00:00:00:00:01 \
-net tap,script=/etc/qemu-ifup \
-device pci-assign,host=08:10.0 -vnc :1 --daemonize
The /etc/qemu-ifup script can be as below; create it first if it does not already exist:
#!/bin/sh
set -x
switch=br0
if [ -n "$1" ];then
/usr/sbin/tunctl -u `whoami` -t $1
/sbin/ip link set $1 up
sleep 0.5s
/usr/sbin/brctl addif $switch $1
exit 0
else
echo "Error: no interface specified"
exit 1
fi
Similarly to VM0, refer to the command below for VM1:
qemu-system-x86_64 -name vm1 -enable-kvm -m 2048 -smp 4 -cpu host \
-drive file=/root/Downloads/vm1.img \
-net nic,macaddr=00:00:00:00:00:02 \
-net tap,script=/etc/qemu-ifup \
-device pci-assign,host=08:10.2 -vnc :4 -daemonize
To run all common 2VM cases, run testpmd on VM0 and VM1 and start traffic forwarding in the VMs. Some case-specific prerequisites are set up in each case:
VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF0 testpmd-> set fwd rxonly
VF0 testpmd-> start
VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF1 testpmd-> set fwd mac
VF1 testpmd-> start
Test Case1: InterVM communication test on 2VMs¶
Set the VF0 destination MAC address to the VF1 MAC address, so packets sent from VF0 will be forwarded to VF1 and then sent out:
VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF1 testpmd-> show port info 0
VF1 testpmd-> set fwd mac
VF1 testpmd-> start
VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- --eth-peer=0,"VF1 mac" -i
VF0 testpmd-> set fwd mac
VF0 testpmd-> start
Send 10 packets with the VF0 MAC address and make sure the packets are forwarded by VF1.
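A minimal Scapy sketch of this traffic step is shown below; the VF0 MAC (read the real one from "show port info 0" in VM0) and the tester interface are placeholders:
# Hypothetical helper: 10 packets addressed to the VF0 MAC so that VM0
# receives them and VF1 forwards them back out. Names are placeholders.
from scapy.all import Ether, IP, UDP, Raw, sendp

VF0_MAC = "00:12:34:56:78:90"
TX_IFACE = "eth1"

pkt = (Ether(dst=VF0_MAC)
       / IP(src="192.168.1.1", dst="192.168.1.2")
       / UDP(sport=1021, dport=1021)
       / Raw(b"\x00" * 60))
sendp(pkt, iface=TX_IFACE, count=10, verbose=False)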
Test Case2: Mirror Traffic between 2VMs with Pool mirroring¶
Set up common 2VM prerequisites.
Add one mirror rule that will mirror VM0 incoming traffic to VM1:
PF testpmd-> set port 0 mirror-rule 0 pool-mirror 0x1 dst-pool 1 on
Send 10 packets to VM0 and verify that the packets have been mirrored to VM1 and forwarded.
After the test, reset the mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
Test Case3: Mirror Traffic between 2VMs with Uplink mirroring¶
Set up common 2VM prerequisites.
Add one mirror rule that will mirror VM0 incoming traffic to VM1:
PF testpmd-> set port 0 mirror-rule 0 uplink-mirror dst-pool 1 on
Send 10 packets to VM0 and verify that the packets have been mirrored to VM1 and forwarded.
After the test, reset the mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
Test Case4: Mirror Traffic between 2VMs with Downlink mirroring¶
Run testpmd on VM0 and VM1 and start traffic forward on the VM hosts:
VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
Add one mirror rule that will mirror VM0 outgoing traffic to VM1:
PF testpmd-> set port 0 mirror-rule 0 downlink-mirror dst-pool 1 on
Make sure VM1 is in receive-only mode, have VM0 send 16 packets, and verify that the VM0 packets have been mirrored to VM1:
VF1 testpmd-> set fwd rxonly
VF1 testpmd-> start
VF0 testpmd-> start tx_first
Note: do not let VF1 forward packets, since the downlink mirror would mirror the transmitted packets back into the received packets, creating an infinite loop.
After the test, reset the mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
Test Case5: Mirror Traffic between VMs with Vlan mirroring¶
Set up common 2VM prerequisites.
Add rx vlan-id 0 on VF0 and add one mirror rule that will mirror VM0 incoming traffic with the specified VLAN to VM1:
PF testpmd-> rx_vlan add 0 port 0 vf 0x1
PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 0 dst-pool 1 on
Send 10 packets with VLAN ID 0 and the VM0 MAC to VM0 and verify that the packets have been mirrored to VM1 and forwarded (a Scapy sketch of this flow follows below).
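A minimal Scapy sketch of this flow is shown below; the VF0 MAC and the tester interface are placeholders:
# Hypothetical helper: 10 VLAN 0 tagged packets addressed to the VF0 MAC.
# Expect them in VM0, mirrored copies in VM1, and forwarding of both.
from scapy.all import Ether, Dot1Q, IP, Raw, sendp

VF0_MAC = "00:12:34:56:78:90"   # placeholder, use the real VF0 MAC
TX_IFACE = "eth1"               # placeholder tester port

pkt = Ether(dst=VF0_MAC) / Dot1Q(vlan=0) / IP() / Raw(b"\x00" * 60)
sendp(pkt, iface=TX_IFACE, count=10, verbose=False)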
After the test, reset the mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
Test Case6: Mirror Traffic between 2VMs with Vlan & Pool mirroring¶
Set up common 2VM prerequisites.
Add rx vlan-id 3 on VF1 and 2 mirror rules: one mirrors VM0 incoming traffic to VM1, the other mirrors VM1 VLAN incoming traffic to VM0:
PF testpmd-> rx_vlan add 3 port 0 vf 0x2
PF testpmd-> set port 0 mirror-rule 0 pool-mirror 0x1 dst-pool 1 on
PF testpmd-> set port 0 mirror-rule 1 vlan-mirror 3 dst-pool 0 on
Send 2 flows one by one: first 10 packets with the VM0 MAC, then 100 packets with the VM1 VLAN and MAC. Verify that the first 10 packets have been mirrored to VM1, that the next 100 packets go to VM0, and that all the packets have been forwarded.
After the test, reset the mirror rules:
PF testpmd-> reset port 0 mirror-rule 0
PF testpmd-> reset port 0 mirror-rule 1
Test Case7: Mirror Traffic between 2VMs with Uplink & Downlink mirroring¶
Run testpmd on VM0 and VM1 and start traffic forward on the VM hosts:
VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
Add 2 mirror rules that will mirror VM0 outgoing and incoming traffic to VM1:
PF testpmd-> set port 0 mirror-rule 0 downlink-mirror dst-pool 1 on
PF testpmd-> set port 0 mirror-rule 0 uplink-mirror dst-pool 1 on
Make sure VM1 is in receive-only mode, have VM0 send 16 packets first, and verify that the VM0 packets have been mirrored to VM1:
VF1 testpmd-> set fwd rxonly
VF1 testpmd-> start
VF0 testpmd-> start tx_first
Note: do not let VF1 forward packets, since the downlink mirror would mirror the transmitted packets back into the received packets, creating an infinite loop.
Send 10 packets with the VF0 MAC to VF0 from IXIA, and verify that all packets received and transmitted by VF0 are mirrored to VF1:
VF0 testpmd-> stop
VF0 testpmd-> start
After the test, reset the mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
Test Case8: Mirror Traffic between 2VMs with Vlan & Pool & Uplink & Downlink mirroring¶
Run testpmd on VM0 and VM1 and start traffic forward on the VM hosts:
VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
Add rx vlan-id 0 on VF0 and add 4 mirror rules:
PF testpmd-> reset port 0 mirror-rule 1
PF testpmd-> set port 0 mirror-rule 0 downlink-mirror dst-pool 1 on
PF testpmd-> set port 0 mirror-rule 1 uplink-mirror dst-pool 1 on
PF testpmd-> rx_vlan add 0 port 0 vf 0x2
PF testpmd-> set port 0 mirror-rule 2 vlan-mirror 0 dst-pool 0 on
PF testpmd-> set port 0 mirror-rule 3 pool-mirror 0x1 dst-pool 1 on
Make sure VM1 is in receive-only mode, have VM0 send 16 packets first, and verify that the VM0 packets have been mirrored to VM1: VF1 RX should show 16 packets (downlink mirror):
VF1 testpmd-> set fwd rxonly
VF1 testpmd-> start
VF0 testpmd-> start tx_first
Note: do not let VF1 forward packets, since the downlink mirror would mirror the transmitted packets back into the received packets, creating an infinite loop.
Send 1 packet with the VF0 MAC to VF0 from IXIA, check that VF0 receives 1 packet and transmits 1 packet, and that VF1 has 2 packets mirrored from VF0 (uplink mirror/downlink mirror/pool mirror):
VF0 testpmd-> stop
VF0 testpmd-> set fwd mac
VF0 testpmd-> start
Send 1 packet with the VM1 VLAN ID and MAC, and verify that VF0 has 1 RX packet and 1 TX packet, and that VF1 has 2 packets (downlink mirror):
VF0 testpmd-> stop
VF0 testpmd-> set fwd rxonly
VF0 testpmd-> start
After the test, reset the mirror rules:
PF testpmd-> reset port 0 mirror-rule 0
PF testpmd-> reset port 0 mirror-rule 1
PF testpmd-> reset port 0 mirror-rule 2
PF testpmd-> reset port 0 mirror-rule 3
Test Case9: Add Multi exact MAC address on VF¶
Add an exact destination mac address on VF0:
PF testpmd-> mac_addr add port 0 vf 0 00:11:22:33:44:55
Send 10 packets with dst mac 00:11:22:33:44:55 to VF0 and make sure VF0 will receive the packets.
Add another exact destination mac address on VF0:
PF testpmd-> mac_addr add port 0 vf 0 00:55:44:33:22:11
Send 10 packets with dst mac 00:55:44:33:22:11 to VF0 and make sure VF0 will receive the packets.
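A minimal Scapy sketch covering both bursts is shown below; the tester interface is a placeholder:
# Hypothetical helper: send 10 packets to each exact MAC address added on VF0
# and check in the VF0 testpmd statistics that both bursts were received.
from scapy.all import Ether, IP, Raw, sendp

TX_IFACE = "eth1"   # placeholder tester port

for dst_mac in ("00:11:22:33:44:55", "00:55:44:33:22:11"):
    sendp(Ether(dst=dst_mac) / IP() / Raw(b"\x00" * 60),
          iface=TX_IFACE, count=10, verbose=False)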
After the test, restart the PF and VF to clear the exact MAC addresses: first quit the VF, then quit the PF.
Test Case10: Enable/Disable one uta MAC address on VF¶
Enable PF promisc mode and enable VF0 to accept uta packets:
PF testpmd-> set promisc 0 on
PF testpmd-> set port 0 vf 0 rxmode ROPE on
Add an uta destination mac address on VF0:
PF testpmd-> set port 0 uta 00:11:22:33:44:55 on
Send 10 packets with dst mac 00:11:22:33:44:55 to VF0 and make sure VF0 will receive the packets.
Disable PF promisc mode, repeat step 3, and check that VF0 does not accept uta packets:
PF testpmd-> set promisc 0 off
PF testpmd-> set port 0 vf 0 rxmode ROPE off
Test Case11: Add Multi uta MAC addresses on VF¶
Add 2 uta destination mac address on VF0:
PF testpmd-> set port 0 uta 00:55:44:33:22:11 on
PF testpmd-> set port 0 uta 00:55:44:33:22:66 on
Send 2 flows to VF0: first 10 packets with dst mac 00:55:44:33:22:11, then 100 packets with dst mac 00:55:44:33:22:66, and make sure VF0 receives all the packets.
Test Case12: Add/Remove uta MAC address on VF¶
Add one uta destination mac address on VF0:
PF testpmd-> set port 0 uta 00:55:44:33:22:11 on
Send 10 packets with dst mac 00:55:44:33:22:11 to VF0 and make sure VF0 will receive the packets.
Remove the uta destination mac address on VF0:
PF testpmd-> set port 0 uta 00:55:44:33:22:11 off
Send 10 packets with dst mac 00:11:22:33:44:55 to VF0 and make sure VF0 will not receive the packets.
Add an uta destination mac address on VF0 again:
PF testpmd-> set port 0 uta 00:11:22:33:44:55 on
Send a packet with dst mac 00:11:22:33:44:55 to VF0 and make sure VF0 receives and forwards the packet again. This step verifies that the on/off switch is working.
Test Case13: Pause RX Queues¶
Pause RX queue of VF0 then send 10 packets to VF0 and make sure VF0 will not receive the packets:
PF testpmd-> set port 0 vf 0 rx off
Enable RX queue of VF0 then send 10 packets to VF0 and make sure VF0 will receive the packet:
PF testpmd-> set port 0 vf 0 rx on
Repeat the off/on sequence twice to check the switch capability and ensure that it works reliably.
Test Case14: Pause TX Queues¶
Pause TX queue of VF0 then send 10 packets to VF0 and make sure VF0 will not forward the packet:
PF testpmd-> set port 0 vf 0 tx off
Enable the TX queue of VF0 then send 10 packets to VF0 and make sure VF0 will forward the packets:
PF testpmd-> set port 0 vf 0 tx on
Repeat the off/on sequence twice to check the switch capability and ensure that it works reliably.
Test Case15: Prevent Rx of Broadcast on VF¶
Disable VF0 rx broadcast packets then send broadcast packet to VF0 and make sure VF0 will not receive the packet:
PF testpmd-> set port 0 vf 0 rxmode BAM off
Enable VF0 rx broadcast packets then send broadcast packet to VF0 and make sure VF0 will receive and forward the packet:
PF testpmd-> set port 0 vf 0 rxmode BAM on
Repeat the off/on sequence twice to check the switch capability and ensure that it works reliably.
Test Case16: Negative input to commands¶
Input invalid commands on PF/VF to make sure the commands can’t work:
1. PF testpmd-> set port 0 vf 65 tx on
2. PF testpmd-> set port 2 vf -1 tx off
3. PF testpmd-> set port 0 vf 0 rx oneee
4. PF testpmd-> set port 0 vf 0 rx offdd
5. PF testpmd-> set port 0 vf 0 rx oneee
6. PF testpmd-> set port 0 vf 64 rxmode BAM on
7. PF testpmd-> set port 0 vf 64 rxmode BAM off
8. PF testpmd-> set port 0 uta 00:11:22:33:44 on
9. PF testpmd-> set port 7 uta 00:55:44:33:22:11 off
10. PF testpmd-> set port 0 vf 34 rxmode ROPE on
11. PF testpmd-> mac_addr add port 0 vf 65 00:55:44:33:22:11
12. PF testpmd-> mac_addr add port 5 vf 0 00:55:44:88:22:11
13. PF testpmd-> set port 0 mirror-rule 0 pool-mirror 65 dst-pool 1 on
14. PF testpmd-> set port 0 mirror-rule 0xf uplink-mirror dst-pool 1 on
15. PF testpmd-> set port 0 mirror-rule 2 vlan-mirror 9 dst-pool 1 on
16. PF testpmd-> set port 0 mirror-rule 0 downlink-mirror 0xf dst-pool 2 off
17. PF testpmd-> reset port 0 mirror-rule 4
18. PF testpmd-> reset port 0xff mirror-rule 0
Prerequisites for Scaling 4VFs per 1PF¶
Create 4 VF interfaces (VF0, VF1, VF2 and VF3) from one PF interface and then attach them to VM0, VM1, VM2 and VM3. Start the PF driver on the host and blacklist the VFs, whose driver has already been attached to the VMs:
On PF ./tools/pci_unbind.py --bind=igb_uio 0000:08:00.0
echo 4 > /sys/bus/pci/devices/0000\:08\:00.0/max_vfs
./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -b 0000:08:10.0 -b 0000:08:10.2 -b 0000:08:10.4 -b 0000:08:10.6 -- -i
To run all common 4VM cases, run testpmd on VM0, VM1, VM2 and VM3 and start traffic forwarding in the VMs. Some case-specific prerequisites are set up in each case:
VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF2 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF3 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
Test Case17: Scaling Pool Mirror on 4VFs¶
Make sure the prerequisites for Scaling 4VFs per 1PF are set up.
Add one mirror rule that will mirror VM0/VM1/VM2 incoming traffic to VM3:
PF testpmd-> set port 0 mirror-rule 0 pool-mirror 0x7 dst-pool 3 on
VF0 testpmd-> set fwd rxonly
VF0 testpmd-> start
VF1 testpmd-> set fwd rxonly
VF1 testpmd-> start
VF2 testpmd-> set fwd rxonly
VF2 testpmd-> start
VF3 testpmd-> set fwd rxonly
VF3 testpmd-> start
Send 3 flows to VM0/VM1/VM2, one with the VM0 MAC, one with the VM1 MAC, one with the VM2 MAC, and verify that the packets have been mirrored to VM3 (a Scapy sketch of the three flows follows below).
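A minimal Scapy sketch of the three flows is shown below; the VM MAC addresses and the tester interface are placeholders:
# Hypothetical helper: one 10-packet burst per VM MAC, so that VM0/VM1/VM2
# each receive traffic and VM3 should show the mirrored copies.
from scapy.all import Ether, IP, Raw, sendp

TX_IFACE = "eth1"                     # placeholder tester port
VM_MACS = ("00:12:34:56:78:01",       # VM0/VF0 MAC (placeholder)
           "00:12:34:56:78:02",       # VM1/VF1 MAC (placeholder)
           "00:12:34:56:78:03")       # VM2/VF2 MAC (placeholder)

for mac in VM_MACS:
    sendp(Ether(dst=mac) / IP() / Raw(b"\x00" * 60),
          iface=TX_IFACE, count=10, verbose=False)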
Reset mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
Set another 2 mirror rules that mirror VM0/VM1 incoming traffic to VM2 and VM3:
PF testpmd-> set port 0 mirror-rule 0 pool-mirror 0x3 dst-pool 2 on
PF testpmd-> set port 0 mirror-rule 1 pool-mirror 0x3 dst-pool 3 on
Send 2 flows to VM0/VM1, one with the VM0 MAC, one with the VM1 MAC, and verify that the packets have been mirrored to VM2/VM3 and that VM2/VM3 have forwarded them.
Reset mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
PF testpmd-> reset port 0 mirror-rule 1
Test Case18: Scaling Uplink Mirror on 4VFs¶
Make sure the prerequisites for Scaling 4VFs per 1PF are set up.
Add mirror rules that will mirror all incoming traffic to VM2 and VM3:
PF testpmd-> set port 0 mirror-rule 0 uplink-mirror dst-pool 2 on
PF testpmd-> set port 0 mirror-rule 1 uplink-mirror dst-pool 3 on
VF0 testpmd-> set fwd rxonly
VF0 testpmd-> start
VF1 testpmd-> set fwd rxonly
VF1 testpmd-> start
VF2 testpmd-> set fwd rxonly
VF2 testpmd-> start
VF3 testpmd-> set fwd rxonly
VF3 testpmd-> start
Send 4 flows to VM0/VM1/VM2/VM3, one packet with the VM0 MAC, one with the VM1 MAC, one with the VM2 MAC, and one with the VM3 MAC, and verify that the incoming packets have been mirrored to VM2 and VM3. Make sure VM2/VM3 each have 4 packets.
Reset mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
PF testpmd-> reset port 0 mirror-rule 1
Test Case19: Scaling Downlink Mirror on 4VFs¶
Make sure the prerequisites for Scaling 4VFs per 1PF are set up.
Add mirror rules that will mirror all outgoing traffic to VM2 and VM3:
PF testpmd-> set port 0 mirror-rule 0 downlink-mirror dst-pool 2 on
PF testpmd-> set port 0 mirror-rule 1 downlink-mirror dst-pool 3 on
VF0 testpmd-> set fwd mac
VF0 testpmd-> start
VF1 testpmd-> set fwd mac
VF1 testpmd-> start
VF2 testpmd-> set fwd rxonly
VF2 testpmd-> start
VF3 testpmd-> set fwd rxonly
VF3 testpmd-> start
Send 2 flows to VM0/VM1, one with the VM0 MAC, one with the VM1 MAC, and verify that VM0/VM1 forward these packets and that the VM0/VM1 outgoing packets have been mirrored to VM2 and VM3.
Reset mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
PF testpmd-> reset port 0 mirror-rule 1
Test Case20: Scaling Vlan Mirror on 4VFs¶
Make sure the prerequisites for Scaling 4VFs per 1PF are set up.
Add rx VLAN IDs for VM0/VM1/VM2 and one mirror rule that will mirror their VLAN incoming traffic to VM3:
PF testpmd-> rx_vlan add 1 port 0 vf 0x1
PF testpmd-> rx_vlan add 2 port 0 vf 0x2
PF testpmd-> rx_vlan add 3 port 0 vf 0x4
PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 1,2,3 dst-pool 3 on
VF0 testpmd-> set fwd mac
VF0 testpmd-> start
VF1 testpmd-> set fwd mac
VF1 testpmd-> start
VF2 testpmd-> set fwd mac
VF2 testpmd-> start
VF3 testpmd-> set fwd mac
VF3 testpmd-> start
Send 3 flows to VM0/VM1/VM2, one with the VM0 MAC/VLAN ID, one with the VM1 MAC/VLAN ID, one with the VM2 MAC/VLAN ID, and verify that the packets have been mirrored to VM3 and that VM3 has forwarded them.
Reset mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
Set another 2 mirror rules that mirror VM0/VM1 VLAN incoming traffic to VM2 and VM3:
PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 1 dst-pool 2 on
PF testpmd-> set port 0 mirror-rule 1 vlan-mirror 2 dst-pool 3 on
Send 2 flows to VM0/VM1, one with the VM0 MAC/VLAN ID, one with the VM1 MAC/VLAN ID, and verify that the packets have been mirrored to VM2 and VM3 and that VM2 and VM3 have forwarded them.
Reset mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
PF testpmd-> reset port 0 mirror-rule 1
Test Case21: Scaling Vlan Mirror & Pool Mirror on 4VFs¶
Make sure the prerequisites for Scaling 4VFs per 1PF are set up.
Add mirror rules that will mirror VM0/VM1 VLAN incoming traffic to VM2 and VM0/VM1 pool traffic to VM3:
PF testpmd-> rx_vlan add 1 port 0 vf 0x1
PF testpmd-> rx_vlan add 2 port 0 vf 0x2
PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 1 dst-pool 2 on
PF testpmd-> set port 0 mirror-rule 1 vlan-mirror 2 dst-pool 2 on
PF testpmd-> set port 0 mirror-rule 2 pool-mirror 0x3 dst-pool 3 on
VF0 testpmd-> set fwd mac
VF0 testpmd-> start
VF1 testpmd-> set fwd mac
VF1 testpmd-> start
VF2 testpmd-> set fwd mac
VF2 testpmd-> start
VF3 testpmd-> set fwd mac
VF3 testpmd-> start
Send 2 flows to VM0/VM1, one with the VM0 MAC/VLAN ID, one with the VM1 MAC/VLAN ID, and verify that the packets have been mirrored to VM2 and VM3 and that VM2/VM3 have forwarded them.
Reset mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
PF testpmd-> reset port 0 mirror-rule 1
PF testpmd-> reset port 0 mirror-rule 2
Set mirror rules so that VM0/VM1 VLAN incoming traffic mirrors to VM2 and VM2 traffic mirrors to VM3:
PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 1,2 dst-pool 2 on
PF testpmd-> set port 0 mirror-rule 2 pool-mirror 0x2 dst-pool 3 on
Send 2 flows to VM0/VM1, one with the VM0 MAC/VLAN ID, one with the VM1 MAC/VLAN ID, and verify that the packets have been mirrored to VM2, that the VM2 traffic is mirrored to VM3, and that VM2 and VM3 have forwarded these packets.
Reset mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
PF testpmd-> reset port 0 mirror-rule 1
PF testpmd-> reset port 0 mirror-rule 2
Test Case22: Scaling Uplink Mirror & Downlink Mirror on 4VFs¶
Make sure the prerequisites for Scaling 4VFs per 1PF are set up.
Add 2 mirror rules that will mirror all incoming traffic to VM2 and all outgoing traffic to VM3. Make sure VM2 and VM3 are in rxonly mode:
PF testpmd-> set port 0 mirror-rule 0 uplink-mirror dst-pool 2 on
PF testpmd-> set port 0 mirror-rule 1 downlink-mirror dst-pool 3 on
VF0 testpmd-> set fwd mac
VF0 testpmd-> start
VF1 testpmd-> set fwd mac
VF1 testpmd-> start
VF2 testpmd-> set fwd rxonly
VF2 testpmd-> start
VF3 testpmd-> set fwd rxonly
VF3 testpmd-> start
Send 2 flows to VM0/VM1, one with the VM0 MAC, one with the VM1 MAC, and make sure VM0/VM1 forward the packets. Verify that the incoming packets have been mirrored to VM2 and the outgoing packets have been mirrored to VM3.
Reset mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
PF testpmd-> reset port 0 mirror-rule 1
Test Case23: Scaling Pool & Vlan & Uplink & Downlink Mirror on 4VFs¶
Make sure the prerequisites for Scaling 4VFs per 1PF are set up.
Add mirror rules so that VM0 VLAN traffic mirrors to VM1, all incoming traffic mirrors to VM2, all outgoing traffic mirrors to VM3, and all VM1 traffic mirrors to VM0. Make sure VM2 and VM3 are in rxonly mode:
PF testpmd-> rx_vlan add 1 port 0 vf 0x1
PF testpmd-> set port 0 mirror-rule 0 vlan-mirror 1 dst-pool 1 on
PF testpmd-> set port 0 mirror-rule 1 pool-mirror 0x2 dst-pool 0 on
PF testpmd-> set port 0 mirror-rule 2 uplink-mirror dst-pool 2 on
PF testpmd-> set port 0 mirror-rule 3 downlink-mirror dst-pool 3 on
VF0 testpmd-> set fwd mac
VF0 testpmd-> start
VF1 testpmd-> set fwd mac
VF1 testpmd-> start
VF2 testpmd-> set fwd rxonly
VF2 testpmd-> start
VF3 testpmd-> set fwd rxonly
VF3 testpmd-> start
Send 10 packets to VM0 with the VM0 MAC/VLAN ID, and verify that the packets are mirrored to VM1 and forwarded, that VM2 has all incoming traffic mirrored, and that VM3 has all outgoing traffic mirrored.
Send 10 packets to VM1 with the VM1 MAC, and verify that the packets are mirrored to VM0 and forwarded, that VM2 has all incoming traffic mirrored, and that VM3 has all outgoing traffic mirrored.
Reset mirror rule:
PF testpmd-> reset port 0 mirror-rule 0
PF testpmd-> reset port 0 mirror-rule 1
PF testpmd-> reset port 0 mirror-rule 2
PF testpmd-> reset port 0 mirror-rule 3
Test Case24: Scaling InterVM communication on 4VFs¶
Set the VF0 destination MAC address to the VF1 MAC address, so packets sent from VF0 will be forwarded to VF1 and then sent out. Do the same for VF2 and VF3:
VF1 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF1 testpmd-> show port info 0
VF1 testpmd-> set fwd mac
VF1 testpmd-> start
VF0 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- --eth-peer=0,"VF1 mac" -i
VF0 testpmd-> set fwd mac
VF0 testpmd-> start
VF3 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- -i
VF3 testpmd-> show port info 0
VF3 testpmd-> set fwd mac
VF3 testpmd-> start
VF2 ./x86_64-default-linuxapp-gcc/app/testpmd -c f -n 4 -- --eth-peer=0,"VF3 mac" -i
VF2 testpmd-> set fwd mac
VF2 testpmd-> start
Send 2 flows, one with the VF0 MAC address (make sure the packets are forwarded by VF1) and another with the VF2 MAC address (make sure the packets are forwarded by VF3).
Stability Tests¶
This is the test report for the Intel® DPDK Linux user space stability tests described in the test plan document.
Test Case: Stress test¶
Run under heavy traffic for a long time. At the end of the test period, check that the traffic is still flowing and there is no drop in the throughput rate.
Recommended test configuration: testpmd application using a single logical core to handle line rate traffic from two 10GbE ports. Recommended test duration: 24 hours.
Test Case: Repetitive system restart¶
Check that the system is still working after the application is shut down and restarted repeatedly under heavy traffic load. After the last test iteration, the traffic should still be flowing through the system with no drop in the throughput rate.
Recommended test configuration: testpmd application using a single logical core to handle line rate traffic from two 10GbE ports.
Test Case: Packet integrity test¶
Capture output packets selectively and check that the packet headers are as expected, with the payload not corrupted or truncated.
Recommended test configuration: testpmd application using a single logical core to handle line rate traffic from two 10GbE ports.
Test Case: Cable removal test¶
Check that the traffic stops when the cable is removed and resumes with no drop in the throughput rate after the cable is reinserted.
Test Case: Mix of different NIC types¶
Check that a mix of different NIC types is supported. The system should recognize all the NICs that are part of the system and are supported by the Intel DPDK PMD. Check that ports from NICs of different type can send and receive traffic at the same time.
Recommended test configuration: testpmd application using a single logical core to handle line rate traffic from two 1GbE ports (e.g. Intel 82576 NIC) and two 10GbE ports (e.g. Intel 82599 NIC).
Test Case: Coexistence of kernel space drivers with Poll Mode Drivers¶
Verify that the Intel DPDK PMD running in user space can work with the kernel space NIC drivers.
Recommended test configuration: testpmd application using a single logical core to handle line rate traffic from two 1GbE ports (e.g. Intel 82576 NIC) and two 10GbE ports (e.g. Intel 82599 NIC). Kernel space driver for Intel 82576 NIC used for management.
Transmit Segmentation Offload (TSO) Tests¶
Description¶
This document provides the plan for testing the TSO (Transmit Segmentation Offload, also called Large Send Offload, LSO) feature of Intel Ethernet Controllers, including the Intel 82599 10GbE Ethernet Controller and the Fortville 40GbE Ethernet Controller. TSO enables the TCP/IP stack to pass to the network device a ULP datagram larger than the Maximum Transmission Unit (MTU). The NIC divides the large ULP datagram into multiple segments according to the MTU size.
Prerequisites¶
The DUT must have one of the Ethernet controller ports connected to a port on another device that is controlled by the Scapy packet generator.
The Ethernet interface identifier of the port that Scapy will use must be known. On the tester, all offload features should be disabled on the tx port, and a capture should be started on the rx port:
ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
ip l set <tx port> up
tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
On the DUT, run testpmd with the parameter "--enable-rx-cksum". Then enable TSO on the tx port and checksum on the rx port. The test commands are below:
#enable hw checksum on rx port
tx_checksum set ip hw 0
tx_checksum set udp hw 0
tx_checksum set tcp hw 0
tx_checksum set sctp hw 0
set fwd csum
# enable TSO on tx port
tso set 800 1
Test case: csum fwd engine, use TSO¶
This test uses Scapy to send out one large TCP packet. The DUT forwards the packet with TSO enabled on the tx port while checksum is enabled on the rx port. After the packet is sent out via TSO on the tx port, the tester receives multiple small TCP packets.
Turn off offloads on the tx port with ethtool on the tester:
ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
ip l set <tx port> up
Capture packets on the rx port of the tester:
tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
Launch the userland testpmd
application on DUT as follows:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xffffffff -n 2 -- -i --rxd=512 --txd=512
--burst=32 --rxfreet=64 --mbcache=128 --portmask=0x3 --txpt=36 --txht=0 --txwt=0
--txfreet=32 --txrst=32 --enable-rx-cksum
testpmd> set verbose 1
# enable hw checksum on rx port
testpmd> tx_checksum set ip hw 0
testpmd> tx_checksum set udp hw 0
testpmd> tx_checksum set tcp hw 0
testpmd> tx_checksum set sctp hw 0
# enable TSO on tx port
testpmd> tso set 800 1
# set fwd engine and start
testpmd> set fwd csum
testpmd> start
Test IPv4() in scapy:
sendp([Ether(dst="%s", src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2")/UDP(sport=1021,dport=1021)/Raw(load="\x50"*%s)], iface="%s")
Test IPv6() in scapy:
sendp([Ether(dst="%s", src="52:00:00:00:00:00")/IPv6(src="FE80:0:0:0:200:1FF:FE00:200", dst="3555:5555:6666:6666:7777:7777:8888:8888")/UDP(sport=1021,dport=1021)/Raw(load="\x50"*%s)], iface="%s")
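A filled-in version of the IPv4 template above is sketched below; the DUT MAC, payload length and tester interface are placeholders substituted for the "%s" fields:
# Hypothetical example: one large packet that the DUT must segment via TSO.
from scapy.all import Ether, IP, UDP, Raw, sendp

DUT_MAC = "00:12:34:56:78:90"   # MAC of the DUT rx port (placeholder)
TX_IFACE = "eth1"               # tester tx port (placeholder)
PAYLOAD = 5214                  # large payload forcing segmentation

sendp([Ether(dst=DUT_MAC, src="52:00:00:00:00:00")
       / IP(src="192.168.1.1", dst="192.168.1.2")
       / UDP(sport=1021, dport=1021)
       / Raw(load="\x50" * PAYLOAD)], iface=TX_IFACE)
# The tcpdump capture on the rx port can then be inspected for the segmented
# output described above (segments bounded by the configured TSO size).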
Test case: csum fwd engine, use TSO tunneling¶
This test uses Scapy to send out one large TCP packet. The DUT forwards the packet with TSO enabled on the tx port while checksum is enabled on the rx port. After the packet is sent out via TSO on the tx port, the tester receives multiple small TCP packets.
Turn off offloads on the tx port with ethtool on the tester:
ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
ip l set <tx port> up
Capture packets on the rx port of the tester:
tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
Launch the userland testpmd
application on DUT as follows:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xffffffff -n 2 -- -i --rxd=512 --txd=512
--burst=32 --rxfreet=64 --mbcache=128 --portmask=0x3 --txpt=36 --txht=0 --txwt=0
--txfreet=32 --txrst=32 --enable-rx-cksum
testpmd> set verbose 1
# enable hw checksum on rx port
testpmd> tx_checksum set ip hw 0
testpmd> tx_checksum set udp hw 0
testpmd> tx_checksum set tcp hw 0
testpmd> tx_checksum set sctp hw 0
testpmd> tx_checksum set vxlan hw 0
testpmd> tx_checksum set nvgre hw 0
# enable TSO on tx port
testpmd> tso set 800 1
# set fwd engine and start
testpmd> set fwd csum
testpmd> start
Test vxlan() in scapy:
sendp([Ether(dst="%s",src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2")/UDP(sport=1021,dport=4789)/VXLAN(vni=1234)/Ether(dst="%s",src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2")/UDP(sport=1021,dport=1021)/Raw(load="\x50"*%s)], iface="%s")
Test nvgre() in scapy:
sendp([Ether(dst="%s",src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2",proto=47)/NVGRE()/Ether(dst="%s",src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2")/TCP(sport=1021,dport=1021)/("X"*%s)], iface="%s")
Test case: TSO performance¶
Set the packet stream to be sent out from packet generator before testing as below.
Frame Size | 1S/1C/1T | 1S/1C/1T | 1S/2C/1T | 1S/2C/2T | 1S/2C/2T
64         |          |          |          |          |
65         |          |          |          |          |
128        |          |          |          |          |
256        |          |          |          |          |
512        |          |          |          |          |
1024       |          |          |          |          |
1280       |          |          |          |          |
1518       |          |          |          |          |
Then run the test application as below:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xffffffff -n 2 -- -i --rxd=512 --txd=512
--burst=32 --rxfreet=64 --mbcache=128 --portmask=0x3 --txpt=36 --txht=0 --txwt=0
--txfreet=32 --txrst=32 --enable-rx-cksum
The -n option is used to select the number of memory channels. It should match the number of memory channels on that setup.
Tx Preparation Forwarding Tests¶
Support for the TX preparation forwarding feature consists of:
- Doing the necessary preparation of a packet burst so that it can be safely transmitted on the device with the desired HW offloads: set/reset checksum fields according to the hardware requirements and check HW constraints (number of segments per packet, etc.).
- Providing information about the maximum number of segments for TSO and non-TSO packets accepted by the device.
APPLICATION (USE CASE):
- The application should initialize the burst of packets to send and set the required tx offload flags and fields, such as l2_len, l3_len, l4_len and tso_segsz.
- The application passes the burst to check the conditions required to send the packets through the NIC.
- The result can be used to send the valid packets and to restore the invalid packets if the function fails.
Prerequisites¶
With igb_uio support, test the txprep forwarding feature on the e1000, i40e, ixgbe and fm10k drivers. Send packets from the tester platform through the interface eth1 to the tested port 0; testpmd then sends the packets back through the same port, and tcpdump is used to capture the packet information:
Tester DUT
eth1 <---> port 0
Turn off all hardware offloads on tester machine:
ethtool -K eth1 rx off tx off tso off gso off gro off lro off
Change mtu for large packet:
ifconfig eth1 mtu 9000
Launch testpmd with the following arguments. Set --tx-offloads=0x8fff to enable TX checksum offloads and TSO mode in the "Full Featured" TX path, and add --max-pkt-len for large packets:
./testpmd -c 0x6 -n 4 -- -i --tx-offloads=0x8fff --port-topology=chained
--max-pkt-len=9000
Set the csum
forwarding mode:
testpmd> set fwd csum
Set the verbose level to 1 to display information for each received packet:
testpmd> set verbose 1
Enable hardware checksum for IP/TCP/UDP packets:
testpmd> csum set ip hw 0
testpmd> csum set tcp hw 0
testpmd> csum set udp hw 0
Test Case: TX preparation forwarding of non-TSO packets¶
Set TSO turned off:
testpmd> tso set 0 0
Start the packet forwarding:
testpmd> start
Send a few IP/TCP/UDP packets from the tester machine to the DUT. Check the IP/TCP/UDP checksum correctness in the captured packets; they should be correct as below:
Transmitted packet:
03:06:36.569730 3c:fd:fe:9d:64:30 > 90:e2:ba:63:22:e8, ethertype IPv4
(0x0800), length 104: (tos 0x0, ttl 64, id 1, offset 0, flags [none],
proto TCP (6), length 90)
127.0.0.1.ftp-data > 127.0.0.1.http: Flags [.], cksum 0x1998 (correct),
seq 0:50, ack 0, win 8192, length 50: HTTP
Captured packet:
03:06:36.569816 90:e2:ba:63:22:e8 > 02:00:00:00:00:00, ethertype IPv4
(0x0800), length 104: (tos 0x0, ttl 64, id 1, offset 0, flags [none],
proto TCP (6), length 90)
127.0.0.1.ftp-data > 127.0.0.1.http: Flags [.], cksum 0x1998 (correct),
seq 0:50, ack 1, win 8192, length 50: HTTP
Test Case: TX preparation forwarding of TSO packets¶
Set TSO turned on:
testpmd> tso set 1460 0
TSO segment size for non-tunneled packets is 1460
Start the packet forwarding:
testpmd> start
Send a few IP/TCP packets from the tester machine to the DUT. Check the IP/TCP checksum correctness in the captured packets and verify the correctness of the HW TSO offload for large packets. One large TCP packet (5214 bytes + headers) is segmented into four fragments (1460 bytes + header, 1460 bytes + header, 1460 bytes + header and 834 bytes + header), and the checksums are also correct:
Transmitted packet:
21:48:24.214136 00:00:00:00:00:00 > 3c:fd:fe:9d:69:68, ethertype IPv6
(0x86dd), length 5288: (hlim 64, next-header TCP (6) payload length: 5234)
::1.ftp-data > ::1.http: Flags [.], cksum 0xac95 (correct), seq 0:5214,
ack 1, win 8192, length 5214: HTTP
Captured packet:
21:48:24.214207 3c:fd:fe:9d:69:68 > 02:00:00:00:00:00, ethertype IPv6
(0x86dd), length 1534: (hlim 64, next-header TCP (6) payload length: 1480)
::1.ftp-data > ::1.http: Flags [.], cksum 0xa641 (correct), seq 0:1460,
ack 1, win 8192, length 1460: HTTP
21:48:24.214212 3c:fd:fe:9d:69:68 > 02:00:00:00:00:00, ethertype IPv6
(0x86dd), length 1534: (hlim 64, next-header TCP (6) payload length: 1480)
::1.ftp-data > ::1.http: Flags [.], cksum 0xae89 (correct), seq 1460:2920,
ack 1, win 8192, length 1460: HTTP
21:48:24.214213 3c:fd:fe:9d:69:68 > 02:00:00:00:00:00, ethertype IPv6
(0x86dd), length 1534: (hlim 64, next-header TCP (6) payload length: 1480)
::1.ftp-data > ::1.http: Flags [.], cksum 0xfdb6 (correct), seq 2920:4380,
ack 1, win 8192, length 1460: HTTP
21:48:24.214215 3c:fd:fe:9d:69:68 > 02:00:00:00:00:00, ethertype IPv6
(0x86dd), length 908: (hlim 64, next-header TCP (6) payload length: 854)
::1.ftp-data > ::1.http: Flags [.], cksum 0xe629 (correct), seq 4380:5214,
ack 1, win 8192, length 834: HTTP
Note: TSO generally supports only TCP packets and not UDP packets, due to hardware segmentation limitations; for example, such packets sent on a Niantic NIC are transmitted but not segmented.
Packet:
########
# IPv4 #
########
# checksum TCP
p=Ether()/IP()/TCP(flags=0x10)/Raw(RandString(50))
# bad IP checksum
p=Ether()/IP(chksum=0x1234)/TCP(flags=0x10)/Raw(RandString(50))
# bad TCP checksum
p=Ether()/IP()/TCP(flags=0x10, chksum=0x1234)/Raw(RandString(50))
# large packet
p=Ether()/IP()/TCP(flags=0x10)/Raw(RandString(length))
# bad checksum and large packet
p=Ether()/IP(chksum=0x1234)/TCP(flags=0x10,chksum=0x1234)/
Raw(RandString(length))
########
# IPv6 #
########
# checksum TCP
p=Ether()/IPv6()/TCP(flags=0x10)/Raw(RandString(50))
# checksum UDP
p=Ether()/IPv6()/UDP()/Raw(RandString(50))
# bad TCP checksum
p=Ether()/IPv6()/TCP(flags=0x10, chksum=0x1234)/Raw(RandString(50))
# large packet
p=Ether()/IPv6()/TCP(flags=0x10)/Raw(RandString(length))
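A self-contained driver for part of the packet list above is sketched below; the tester interface, DUT MAC and the large-payload length are assumptions:
# Hypothetical helper: build a few of the IPv4/IPv6 cases above and send them
# out of the tester port toward the DUT. Names are placeholders.
from scapy.all import Ether, IP, IPv6, TCP, UDP, Raw, RandString, sendp

TX_IFACE = "eth1"
DUT_MAC = "00:12:34:56:78:90"
LARGE = 5214   # stands in for "length" in the templates above

pkts = [
    Ether(dst=DUT_MAC) / IP() / TCP(flags=0x10) / Raw(RandString(50)),
    Ether(dst=DUT_MAC) / IP(chksum=0x1234) / TCP(flags=0x10) / Raw(RandString(50)),
    Ether(dst=DUT_MAC) / IP() / TCP(flags=0x10, chksum=0x1234) / Raw(RandString(50)),
    Ether(dst=DUT_MAC) / IP() / TCP(flags=0x10) / Raw(RandString(LARGE)),
    Ether(dst=DUT_MAC) / IPv6() / TCP(flags=0x10) / Raw(RandString(50)),
    Ether(dst=DUT_MAC) / IPv6() / UDP() / Raw(RandString(50)),
]
sendp(pkts, iface=TX_IFACE, verbose=False)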
Unified Packet Type Tests¶
The unified packet type flag is intended to recognize packet types consistently across all possible PMDs.
These 32 bits of packet_type can be divided into several sub-fields to indicate different packet type information for a packet. The initial design divides those bits into fields for L2 types, L3 types, L4 types, tunnel types, inner L2 types, inner L3 types and inner L4 types. All PMDs should translate the offloaded packet types into these 7 fields of information for user applications.
Prerequisites¶
Enable ABI and disable the vector ixgbe driver in the DPDK configuration file. Plug three different types of NIC into the board:
1x Intel® XL710-DA2 (Eagle Fountain)
1x Intel® 82599 10 Gigabit Ethernet Controller
1x Intel® I350 Gigabit Network Connection
Start testpmd and then enable rxonly and verbose mode:
./x86_64-native-linuxapp-gcc/app/testpmd -c f -n 4 -- -i --tx-offloads=0x8fff
set fwd rxonly
set verbose 1
start
Test Case: L2 Packet detect¶
This case checks whether Timesync, ARP and LLDP detection is supported by Fortville.
Send time sync packet from tester:
sendp([Ether(dst='FF:FF:FF:FF:FF:FF',type=0x88f7)/"\\x00\\x02"], iface=txItf)
Check below message dumped by testpmd:
(outer) L2 type: ETHER_Timesync
Send ARP packet from tester:
sendp([Ether(dst='FF:FF:FF:FF:FF:FF')/ARP()], iface=txItf)
Check below message dumped by testpmd:
(outer) L2 type: ETHER_ARP
Send LLDP packet from tester:
sendp([Ether()/LLDP()/LLDPManagementAddress()], iface=txItf)
Check below message dumped by testpmd:
(outer) L2 type: ETHER_LLDP
Test Case: IPv4&L4 packet type detect¶
This case checks whether L3 and L4 packets can be detected correctly. Only Fortville can detect ICMP packets. Only Niantic and i350 can detect IPv4 extension packets. Fortville does not detect whether a packet contains IPv4 header options, so the L3 type is shown as IPV4_EXT_UNKNOWN. Fortville identifies all unrecognized L4 packets as L4_NONFRAG. Only Fortville can identify L4 fragment packets.
Send IP only packet and verify L2/L3/L4 corrected:
sendp([Ether()/IP()/Raw('\0'*60)], iface=txItf)
(outer) L2 type: ETHER
(outer) L3 type: IPV4
(outer) L4 type: Unknown
Send IP+UDP packet and verify L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Raw('\0'*60)], iface=txItf)
(outer) L4 type: UDP
Send IP+TCP packet and verify L2/L3/L4 corrected:
sendp([Ether()/IP()/TCP()/Raw('\0'*60)], iface=txItf)
(outer) L4 type: TCP
Send IP+SCTP packet and verify L2/L3/L4 corrected:
sendp([Ether()/IP()/SCTP()/Raw('\0'*60)], iface=txItf)
(outer) L4 type: SCTP
Send IP+ICMP packet and verify L2/L3/L4 corrected(Fortville):
sendp([Ether()/IP()/ICMP()/Raw('\0'*60)], iface=txItf)
(outer) L4 type: ICMP
Send IP fragment+TCP packet and verify L2/L3/L4 corrected(Fortville):
sendp([Ether()/IP(frag=5)/TCP()/Raw('\0'*60)], iface=txItf)
(outer) L2 type: ETHER
(outer) L3 type: IPV4_EXT_UNKNOWN
(outer) L4 type: L4_FRAG
Send IP extension packet and verify L2/L3 corrected(Niantic,i350):
sendp([Ether()/IP(ihl=10)/Raw('\0'*40)],iface=txItf)
(outer) L3 type: IPV4_EXT
(outer) L4 type: Unknown
Send IP extension+SCTP packet and verify L2/L3/L4 corrected(Niantic,i350):
sendp([Ether()/IP(ihl=10)/SCTP()/Raw('\0'*40)],iface=txItf)
(outer) L3 type: IPV4_EXT
(outer) L4 type: SCTP
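For convenience, the IPv4/L4 cases above can be sent in one pass with a short Scapy loop; txItf plays the same role as in the steps above and its value here is a placeholder:
# Hypothetical helper: replay the IPv4/L4 packet list so that each type can be
# checked against the expected testpmd verbose output.
from scapy.all import Ether, IP, UDP, TCP, SCTP, ICMP, Raw, sendp

txItf = "eth1"   # placeholder tester interface

pkts = [
    Ether() / IP() / Raw('\0' * 60),                 # IP only
    Ether() / IP() / UDP() / Raw('\0' * 60),         # IP + UDP
    Ether() / IP() / TCP() / Raw('\0' * 60),         # IP + TCP
    Ether() / IP() / SCTP() / Raw('\0' * 60),        # IP + SCTP
    Ether() / IP() / ICMP() / Raw('\0' * 60),        # IP + ICMP (Fortville)
    Ether() / IP(frag=5) / TCP() / Raw('\0' * 60),   # IP fragment (Fortville)
    Ether() / IP(ihl=10) / Raw('\0' * 40),           # IPv4 extension (Niantic, i350)
    Ether() / IP(ihl=10) / SCTP() / Raw('\0' * 40),  # IPv4 extension + SCTP
]
for p in pkts:
    sendp(p, iface=txItf, verbose=False)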
Test Case: IPv6&L4 packet type detect¶
This case checks whether IPv6 and L4 packets can be detected correctly. Fortville does not detect whether a packet contains IPv6 extension options, so the L3 type is shown as IPV6_EXT_UNKNOWN. Fortville identifies all unrecognized L4 packets as L4_NONFRAG. Only Fortville can identify L4 fragment packets.
Send IPv6 only packet and verify L2/L3/L4 corrected:
sendp([Ether()/IPv6()/Raw('\0'*60)], iface=txItf)
(outer) L2 type: ETHER
(outer) L3 type: IPV6
(outer) L4 type: Unknown
Send IPv6+UDP packet and verify L2/L3/L4 corrected:
sendp([Ether()/IPv6()/UDP()/Raw('\0'*60)], iface=txItf)
(outer) L4 type: UDP
Send IPv6+TCP packet and verify L2/L3/L4 corrected:
sendp([Ether()/IPv6()/TCP()/Raw('\0'*60)], iface=txItf)
(outer) L4 type: TCP
Send IPv6 fragment packet and verify L2/L3/L4 corrected(Fortville):
sendp([Ether()/IPv6()/IPv6ExtHdrFragment()/Raw('\0'*60)],iface=txItf)
(outer) L3 type: IPV6_EXT_UNKNOWN
(outer) L4 type: L4_FRAG
Send IPv6 fragment packet and verify L2/L3/L4 corrected(Niantic,i350):
sendp([Ether()/IPv6()/IPv6ExtHdrFragment()/Raw('\0'*60)],iface=txItf)
(outer) L3 type: IPV6_EXT
(outer) L4 type: Unknown
Test Case: IP in IPv4 tunnel packet type detect¶
This case checks whether IP in IPv4 tunnel packets can be detected correctly by Fortville.
Send IPv4+IPv4 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP(frag=5)/UDP()/Raw('\0'*40)], iface=txItf)
(outer) L2 type: ETHER
(outer) L3 type: IPV4_EXT_UNKNOWN
(outer) L4 type: Unknown
Tunnel type: IP
Inner L2 type: Unknown
Inner L3 type: IPV4_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPv4+IPv4 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPv4+IPv4+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP()/UDP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: UDP
Send IPv4+IPv4+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP()/TCP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: TCP
Send IPv4+IPv4+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP()/SCTP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: SCTP
Send IPv4+IPv4+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP()/ICMP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: ICMP
Send IPv4+IPv6 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/IPv6ExtHdrFragment()/Raw('\0'*40)],iface=txItf)
Inner L3 type: IPV6_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPv4+IPv6 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/Raw('\0'*40)],iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPv4+IPv6+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: UDP
Send IPv4+IPv6+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/TCP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: TCP
Send IPv4+IPv6+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6(nh=132)/SCTP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: SCTP
Send IPv4+IPv6+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6(nh=58)/ICMP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: ICMP
Test Case: IPv6 in IPv4 tunnel packet type detect by niantic and i350¶
This case checks whether IPv6 in IPv4 tunnel packets can be detected correctly by Niantic and i350.
Send IPv4+IPv6 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/Raw('\0'*40)], iface=txItf)
(outer) L2 type: ETHER
(outer) L3 type: IPV4
(outer) L4 type: Unknown
Tunnel type: IP
Inner L2 type: Unknown
Inner L3 type: IPV6
Inner L4 type: Unknown
Send IPv4+IPv6_EXT packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/IPv6ExtHdrRouting()/Raw('\0'*40)], iface=txItf)
Inner L3 type: IPV6_EXT
Send IPv4+IPv6+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)], iface=txItf)
Inner L4 type: UDP
Send IPv4+IPv6+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/TCP()/Raw('\0'*40)], iface=txItf)
Inner L4 type: TCP
Send IPv4+IPv6_EXT+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/IPv6ExtHdrRouting()/UDP()/Raw('\0'*40)],
iface=txItf)
Inner L3 type: IPV6_EXT
Inner L4 type: UDP
Send IPv4+IPv6_EXT+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/IPv6ExtHdrRouting()/TCP()/Raw('\0'*40)],
iface=txItf)
Inner L3 type: IPV6_EXT
Inner L4 type: TCP
Test Case: IP in IPv6 tunnel packet type detect¶
This case checks whether IP in IPv6 tunnel packets can be detected correctly by Fortville.
Send IPv4+IPv4 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP(frag=5)/UDP()/Raw('\0'*40)],iface=txItf)
(outer) L2 type: ETHER
(outer) L3 type: IPV4_EXT_UNKNOWN
(outer) L4 type: Unknown
Tunnel type: IP
Inner L2 type: Unknown
Inner L3 type: IPV4_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPv4+IPv4 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPv4+IPv4+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP()/UDP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: UDP
Send IPv4+IPv4+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP()/TCP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: TCP
Send IPv4+IPv4+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP()/SCTP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: SCTP
Send IPv4+IPv4+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IP()/ICMP()/Raw('\0'*40)],iface=txItf)
Inner L4 type: ICMP
Send IPv4+IPv6 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/IPv6ExtHdrFragment()/Raw('\0'*40)],
iface=txItf)
Inner L3 type: IPV6_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPv4+IPv6 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/Raw('\0'*40)], iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPv4+IPv6+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/UDP()/Raw('\0'*40)], iface=txItf)
Inner L4 type: UDP
Send IPv4+IPv6+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6()/TCP()/Raw('\0'*40)], iface=txItf)
Inner L4 type: TCP
Send IPv4+IPv6+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6(nh=132)/SCTP()/Raw('\0'*40)], iface=txItf)
Inner L4 type: SCTP
Send IPv4+IPv6+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/IPv6(nh=58)/ICMP()/Raw('\0'*40)], iface=txItf)
Inner L4 type: ICMP
Test Case: NVGRE tunnel packet type detect¶
This case checks whether NVGRE tunnel packets can be detected correctly by Fortville. Fortville does not distinguish GRE/Teredo/VXLAN packets; all those types are displayed as GRENAT.
Send IPv4+NVGRE fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/IP(frag=5)/Raw('\0'*40)],
iface=txItf)
(outer) L2 type: ETHER
(outer) L3 type: IPV4_EXT_UNKNOWN
(outer) L4 type: Unknown
Tunnel type: GRENAT
Inner L2 type: ETHER
Inner L3 type: IPV4_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPV4+NVGRE+MAC packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/IP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPv4+NVGRE+MAC_VLAN packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/Raw('\0'*40)], iface=txItf)
Inner L2 type: ETHER_VLAN
Inner L4 type: Unknown
Send IPv4+NVGRE+MAC_VLAN+IPv4 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP(frag=5)/Raw('\0'*40)],
iface=txItf)
Inner L3 type: IPV4_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPv4+NVGRE+MAC_VLAN+IPv4 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPv4+NVGRE+MAC_VLAN+IPv4+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/UDP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: UDP
Send IPv4+NVGRE+MAC_VLAN+IPv4+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/TCP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: TCP
Send IPv4+NVGRE+MAC_VLAN+IPv4+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/SCTP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: SCTP
Send IPv4+NVGRE+MAC_VLAN+IPv4+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IP()/ICMP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: ICMP
Send IPv4+NVGRE+MAC_VLAN+IPv6 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IPv6()/IPv6ExtHdrFragment()/
Raw('\0'*40)], iface=txItf)
Inner L3 type: IPV6_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPv4+NVGRE+MAC_VLAN+IPv6 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IPv6()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPv4+NVGRE+MAC_VLAN+IPv6+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IPv6()/UDP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: UDP
Send IPv4+NVGRE+MAC_VLAN+IPv6+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IPv6()/TCP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: TCP
Send IPv4+NVGRE+MAC_VLAN+IPv6+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IPv6(nh=132)/SCTP()/
Raw('\0'*40)],iface=txItf)
Inner L4 type: SCTP
Send IPv4+NVGRE+MAC_VLAN+IPv6+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/NVGRE()/Ether()/Dot1Q()/IPv6(nh=58)/ICMP()/
Raw('\0'*40)],iface=txItf)
Inner L4 type: ICMP
Test Case: NVGRE in IPv6 tunnel packet type detect¶
This case checks whether NVGRE in IPv6 tunnel packets can be detected correctly by Fortville. Fortville does not distinguish GRE/Teredo/VXLAN packets; all those types are displayed as GRENAT.
Send IPV6+NVGRE+MAC packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Raw('\0'*18)], iface=txItf)
(outer) L2 type: ETHER
(outer) L3 type: IPV6_EXT_UNKNOWN
(outer) L4 type: Unknown
Tunnel type: GRENAT
Inner L2 type: ETHER
Inner L3 type: Unknown
Inner L4 type: Unknown
Send IPV6+NVGRE+MAC+IPv4 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IP(frag=5)/Raw('\0'*40)],
iface=txItf)
Inner L3 type: IPV4_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPV6+NVGRE+MAC+IPv4 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPV6+NVGRE+MAC+IPv4+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IP()/UDP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: UDP
Send IPV6+NVGRE+MAC+IPv4+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IP()/TCP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: TCP
Send IPV6+NVGRE+MAC+IPv4+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IP()/SCTP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: SCTP
Send IPV6+NVGRE+MAC+IPv4+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IP()/ICMP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: ICMP
Send IPV6+NVGRE+MAC+IPv6 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IPv6()/IPv6ExtHdrFragment()
/Raw('\0'*40)],iface=txItf)
Inner L3 type: IPV6_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPV6+NVGRE+MAC+IPv6 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IPv6()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPV6+NVGRE+MAC+IPv6+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IPv6()/UDP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: UDP
Send IPV6+NVGRE+MAC+IPv6+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IPv6()/TCP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: TCP
Send IPV6+NVGRE+MAC+IPv6+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IPv6(nh=132)/SCTP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: SCTP
Send IPV6+NVGRE+MAC+IPv6+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/IPv6(nh=58)/ICMP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: ICMP
Send IPV6+NVGRE+MAC_VLAN+IPv4 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IP(frag=5)/
Raw('\0'*40)], iface=txItf)
Inner L2 type: ETHER_VLAN
Inner L3 type: IPV4_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPV6+NVGRE+MAC_VLAN+IPv4 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPV6+NVGRE+MAC_VLAN+IPv4+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IP()/UDP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: UDP
Send IPV6+NVGRE+MAC_VLAN+IPv4+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IP()/TCP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: TCP
Send IPV6+NVGRE+MAC_VLAN+IPv4+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IP()/SCTP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: SCTP
Send IPV6+NVGRE+MAC_VLAN+IPv4+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IP()/ICMP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: ICMP
Send IPV6+NVGRE+MAC_VLAN+IPv6 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IPv6()/
IPv6ExtHdrFragment()/Raw('\0'*40)], iface=txItf)
Inner L3 type: IPV6_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPV6+NVGRE+MAC_VLAN+IPv6 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IPv6()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPV6+NVGRE+MAC_VLAN+IPv6+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IPv6()/UDP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: UDP
Send IPV6+NVGRE+MAC_VLAN+IPv6+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IPv6()/TCP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: TCP
Send IPV6+NVGRE+MAC_VLAN+IPv6+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IPv6(nh=132)/SCTP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: SCTP
Send IPV6+NVGRE+MAC_VLAN+IPv6+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IPv6(nh=47)/NVGRE()/Ether()/Dot1Q()/IPv6(nh=58)/ICMP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: ICMP
Test Case: GRE tunnel packet type detect¶
This case checks whether GRE tunnel packets can be detected correctly by Fortville. Fortville does not distinguish GRE/Teredo/VXLAN packets; all those types are displayed as GRENAT.
Send IPv4+GRE+IPv4 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/GRE()/IP(frag=5)/Raw('x'*40)], iface=txItf)
(outer) L2 type: ETHER
(outer) L3 type: IPV4_EXT_UNKNOWN
(outer) L4 type: Unknown
Tunnel type: GRENAT
Inner L2 type: Unknown
Inner L3 type: IPV4_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPv4+GRE+IPv4 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/GRE()/IP()/Raw('x'*40)], iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPv4+GRE+IPv4+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/GRE()/IP()/UDP()/Raw('x'*40)], iface=txItf)
Inner L4 type: UDP
Send IPv4+GRE+IPv4+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/GRE()/IP()/TCP()/Raw('x'*40)], iface=txItf)
Inner L4 type: TCP
Send IPv4+GRE+IPv4+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/GRE()/IP()/SCTP()/Raw('x'*40)], iface=txItf)
Inner L4 type: SCTP
Send IPv4+GRE+IPv4+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/GRE()/IP()/ICMP()/Raw('x'*40)], iface=txItf)
Inner L4 type: ICMP
Send IPv4+GRE packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/GRE()/Raw('x'*40)], iface=txItf)
Inner L3 type: Unknown
Inner L4 type: Unknown
Test Case: Vxlan tunnel packet type detect¶
This case checks whether VXLAN tunnel packets can be detected correctly by Fortville. Fortville does not distinguish GRE/Teredo/VXLAN packets; all those types are displayed as GRENAT.
Add a VXLAN tunnel port filter on the receive port:
rx_vxlan_port add 4789 0
Send IPv4+Vxlan+MAC+IPv4 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IP(frag=5)/Raw('\0'*40)],
iface=txItf)
(outer) L2 type: ETHER
(outer) L3 type: IPV4_EXT_UNKNOWN
(outer) L4 type: Unknown
Tunnel type: GRENAT
Inner L2 type: ETHER
Inner L3 type: IPV4_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPv4+Vxlan+MAC+IPv4 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPv4+Vxlan+MAC+IPv4+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IP()/UDP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: UDP
Send IPv4+Vxlan+MAC+IPv4+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IP()/TCP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: TCP
Send IPv4+Vxlan+MAC+IPv4+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IP()/SCTP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: SCTP
Send IPv4+Vxlan+MAC+IPv4+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IP()/ICMP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: ICMP
Send IPv4+Vxlan+MAC+IPv6 fragment packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IPv6()/IPv6ExtHdrFragment()/
Raw('\0'*40)], iface=txItf)
Inner L3 type: IPV6_EXT_UNKNOWN
Inner L4 type: L4_FRAG
Send IPv4+Vxlan+MAC+IPv6 packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IPv6()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: L4_NONFRAG
Send IPv4+Vxlan+MAC+IPv6+UDP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IPv6()/UDP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: UDP
Send IPv4+Vxlan+MAC+IPv6+TCP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IPv6()/TCP()/Raw('\0'*40)],
iface=txItf)
Inner L4 type: TCP
Send IPv4+Vxlan+MAC+IPv6+SCTP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IPv6(nh=132)/SCTP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: SCTP
Send IPv4+Vxlan+MAC+IPv6+ICMP packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/IPv6(nh=58)/ICMP()/
Raw('\0'*40)], iface=txItf)
Inner L4 type: ICMP
Send IPv4+Vxlan+MAC packet and verify inner and outer L2/L3/L4 corrected:
sendp([Ether()/IP()/UDP()/Vxlan()/Ether()/Raw('\0'*40)], iface=txItf)
Inner L3 type: Unknown
Inner L4 type: Unknown
Test Case: NSH¶
This case checks whether NSH packets can be detected by the i40e driver NIC.
- Send a ether+nsh packet and verify the detection message::
sendp([Ether(type=0x894f)/NSH(Len=0x6,NextProto=0x0,NSP=0x000002,NSI=0xff)], iface=txItf)
L2 type: L2_ETHER_NSH
- Send a ether+nsh+ip packet and verify the detection message::
sendp([Ether(dst="00:00:00:00:01:00",type=0x894f)/NSH(Len=0x6,NextProto=0x1,NSP=0x000002,NSI=0xff)/IP()], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV4_EXT_UNKNOWN L4 type: L4_NONFRAG
- Send a ether+nsh+ip+icmp packet and verify the detection message::
sendp([Ether(type=0x894f)/NSH(Len=0x6,NextProto=0x1,NSP=0x000002,NSI=0xff)/IP()/ICMP()], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV4_EXT_UNKNOWN L4 type: L4_ICMP
- Send a ether+nsh+ip_frag packet and verify the detection message::
sendp([Ether(dst="00:00:00:00:01:00",type=0x894f)/NSH(Len=0x6,NextProto=0x1,NSP=0x000002,NSI=0xff)/IP(frag=1,flags="MF")], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV4_EXT_UNKNOWN L4 type: L4_FRAG
- Send a ether+nsh+ip+tcp packet and verify the detection message::
sendp([Ether(type=0x894f)/NSH(Len=0x6,NextProto=0x1,NSP=0x000002,NSI=0xff)/IP()/TCP()], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV4_EXT_UNKNOWN L4 type: L4_TCP
- Send a ether+nsh+ip+udp packet verify the detection message::
sendp([Ether(dst="00:00:00:00:01:00",type=0x894f)/NSH(Len=0x6,NextProto=0x1,NSP=0x000002,NSI=0xff)/IP()/UDP()], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV4_EXT_UNKNOWN L4 type: L4_UDP
- Send a ether+nsh+ip+sctp packet and verify the detection message::
sendp([Ether(type=0x894f)/NSH(Len=0x6,NextProto=0x1,NSP=0x000002,NSI=0xff)/IP()/SCTP(tag=1)/SCTPChunkData(data='X' * 16)], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV4_EXT_UNKNOWN L4 type: L4_SCTP
- Send a ether+nsh+ipv6 packet and verify the detection message::
sendp([Ether(type=0x894f)/NSH(Len=0x6,NextProto=0x2,NSP=0x000002,NSI=0xff)/IPv6()], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV6_EXT_UNKNOWN L4 type: L4_NONFRAG
- Send a ether+nsh+ipv6+icmp packet and verify the detection message::
sendp([Ether(type=0x894f)/NSH(Len=0x6,NextProto=0x2,NSP=0x000002,NSI=0xff)/IPv6(src="2001::1",dst="2003::2",nh=0x3A)/ICMP()], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV6_EXT_UNKNOWN L4 type: L4_ICMP
- Send a ether+nsh+ipv6_frag packet and verify the detection message::
sendp([Ether(dst="00:00:00:00:01:00",type=0x894f)/NSH(Len=0x6,NextProto=0x2,NSP=0x000002,NSI=0xff)/IPv6()/IPv6ExtHdrFragment()], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV6_EXT_UNKNOWN L4 type: L4_FRAG
- Send a ether+nsh+ipv6+tcp packet and verify the detection message::
sendp([Ether(type=0x894f)/NSH(Len=0x6,NextProto=0x2,NSP=0x000002,NSI=0xff)/IPv6()/TCP()],iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV6_EXT_UNKNOWN L4 type: L4_TCP
- Send a ether+nsh+ipv6+udp packet and verify the detection message::
sendp([Ether(dst="00:00:00:00:01:00",type=0x894f)/NSH(Len=0x6,NextProto=0x2,NSP=0x000002,NSI=0xff)/IPv6()/UDP()], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV6_EXT_UNKNOWN L4 type: L4_UDP
- Send a ether+nsh+ipv6+sctp and verify the detection message::
sendp([Ether(type=0x894f)/NSH(Len=0x6,NextProto=0x2,NSP=0x000002,NSI=0xff)/IPv6(nh=0x84)/SCTP(tag=1)/SCTPChunkData("x" * 16)], iface=txItf)
L2 type: L2_ETHER_NSH L3 type: L3_IPV6_EXT_UNKNOWN L4 type: L4_SCTP
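The NSH() constructor used in the commands above is a packet helper that may not be present in every scapy installation. As an illustration only, the sketch below builds the same kind of frame by packing the NSH header bytes by hand, assuming the pre-RFC draft NSH layout implied by the Len/MDType/NextProto/NSP/NSI fields above and a hypothetical tester interface name:
import struct
from scapy.all import Ether, IP, Raw, sendp
txItf = "tester_tx_iface"  # hypothetical tester interface name
def nsh_hdr(length=0x6, md_type=0x1, next_proto=0x1, nsp=0x000002, nsi=0xff):
    # Word 0: Ver/O/C and reserved bits zero, 6-bit length in 4-byte words,
    # 8-bit MD type, 8-bit next protocol (0x1 = IPv4, 0x2 = IPv6).
    word0 = ((length & 0x3F) << 16) | ((md_type & 0xFF) << 8) | (next_proto & 0xFF)
    # Word 1: 24-bit service path id (NSP) + 8-bit service index (NSI).
    word1 = ((nsp & 0xFFFFFF) << 8) | (nsi & 0xFF)
    # MD type 1 carries four fixed context words, zeroed here.
    return struct.pack("!II", word0, word1) + b"\x00" * 16
# Equivalent of the ether+nsh+ip case above, with the NSH header carried as raw bytes.
sendp(Ether(type=0x894f)/Raw(nsh_hdr(next_proto=0x1))/IP(), iface=txItf)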
Userspace Ethtool Tests¶
This feature is designed to provide one rte_ethtool shim layer based on the rte_ethdev API. The Ethtool sample application shows an implementation of an ethtool-like API and provides a console environment that allows it to be used to query and change Ethernet card parameters. The Ethtool sample is based upon a simple L2 frame reflector.
Prerequisites¶
Notice: on FVL, the test case “test_dump_driver_info” needs a physical link disconnect, so in that condition this case must be done manually.
Assume port 0 and 1 are connected to the traffic generator, to run the test application in linux app environment with 4 lcores, 2 ports:
ethtool -c f -n 4
The sample should be validated on Fortville, Niantic and i350 Nics.
Other requirements:
- ixgbe driver (version >= 4.3.13).
- ethtool of linux is the default reference tool.
- md5sum is used to compare the dumped bin format files.
- insert two NIC cards on socket 0
Test Case: Dump driver information test¶
Use the “drvinfo” command to dump driver information, then check that the information dumped separately by dpdk’s ethtool and by linux’s ethtool is exactly the same:
EthApp> drvinfo
Port 0 driver: net_ixgbe (ver: DPDK 17.02.0-rc0)
bus-info: 0000:84:00.0
firmware-version: 0x61bf0001
Port 1 driver: net_ixgbe (ver: DPDK 17.02.0-rc0)
bus-info: 0000:84:00.1
firmware-version: 0x61bf0001
Use the “link” command to dump all ports’ link status. Notice: on FVL, link detection needs a physical link disconnect:
EthApp> link
Port 0: Up
Port 1: Up
Change tester port link status to down and re-check link status:
EthApp> link
Port 0: Down
Port 1: Down
Send a few packets to l2fwd and check that command “portstats” dumps correct port statistics:
EthApp> portstats 0
Port 0 stats
In: 1 (64 bytes)
Out: 1 (64 bytes)
Test Case: Retrieve eeprom test¶
Unbind the ports from igb_uio and bind them to the default driver. Dump the eeprom binaries with linux’s ethtool and dpdk’s ethtool separately:
ethtool --eeprom-dump INTF_0 raw on > ethtool_eeprom_0.bin
ethtool --eeprom-dump INTF_1 raw on > ethtool_eeprom_1.bin
Retrieve the eeprom on the specified port using dpdk’s ethtool and compare its checksum with the file dumped by linux’s ethtool:
EthApp> eeprom 0 eeprom_0.bin
EthApp> eeprom 1 eeprom_1.bin
md5sum ethtool_eeprom_0.bin
md5sum eeprom_0.bin
Compare the md5sum values of the two bin files.
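A small Python helper (not part of the test tooling; it only automates the comparison step above, assuming the file names used in this case) can check that the two dumps match:
import hashlib
def md5(path):
    # Hash the whole dumped binary.
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()
assert md5("ethtool_eeprom_0.bin") == md5("eeprom_0.bin"), "port 0 eeprom dumps differ"
assert md5("ethtool_eeprom_1.bin") == md5("eeprom_1.bin"), "port 1 eeprom dumps differ"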
Test Case: Retrieve register test¶
Retrieve register on specified port:
EthApp> regs 0 reg_0.bin
EthApp> regs 1 reg_1.bin
Unbind ports from igb_uio and bind them to default driver:
dpdk/tools/dpdk_nic_bind.py --bind=ixgbe x:xx.x
Check that dumped register information is correct:
ethtool -d INTF_0 raw off file reg_0.bin
ethtool -d INTF_1 raw off file reg_1.bin
Test Case: Ring param test¶
Dump port 0 ring size by ringparam command and check numbers are correct:
EthApp> ringparam 0
Port 0 ring parameters
Rx Pending: 128 (256 max)
Tx Pending: 4096 (4096 max)
Change port 0 ring size by ringparam command and then verify Rx/Tx function:
EthApp> ringparam 0 256 2048
Recheck ring size by ringparam command:
EthApp> ringparam 0
Port 0 ring parameters
Rx Pending: 256 (256 max)
Tx Pending: 2048 (4096 max)
Send packets by scapy on the tester and check the tx/rx packet counters:
EthApp> portstats 0
Test Case: Vlan test¶
enable vlan filter flag in main.c of dpdk’s ethtool:
sed -i -e '/cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;$/a\\cfg_port.rxmode.hw_vlan_filter=1;' examples/ethtool/ethtool-app/main.c
re-compile examples/ethtool:
make -C examples/ethtool
Add vlan 0 to port 0 and vlan 1 to port 1, send packets without vlan to port 0 and port 1, and verify that port 0 and port 1 received the packets:
EthApp> vlan 0 add 0
VLAN vid 0 added
EthApp> vlan 1 add 1
VLAN vid 1 added
Send packets with vlan 0 and vlan 1 to port 0 and port 1. Verify port 0 and port 1 receive the vlan packets.
Send packets with vlan 1 and vlan 0 (swapped) to port 0 and port 1. Verify port 0 and port 1 can not receive the vlan packets.
Remove vlan 0 and vlan 1 from port 0 and port 1, then send packets with vlan 0 and vlan 1 to port 0 and port 1. Verify port 0 and port 1 can not receive the vlan packets:
EthApp> vlan 0 del 0
VLAN vid 0 removed
EthApp> vlan 1 del 1
VLAN vid 1 removed
Test Case: Mac address test¶
Use the “macaddr” command to dump the port mac address and then check that the dumped information is exactly the same as what ifconfig reports.
Set a new mac address with dpdk’s ethtool, send and sniff packets, and check the packet forwarded status:
EthApp> macaddr 0
Port 0 MAC Address: XX:XX:XX:XX:XX:XX
EthApp> macaddr 1
Port 1 MAC Address: YY:YY:YY:YY:YY:YY
Check that a multicast mac address will not be validated:
EthApp> validate 01:00:00:00:00:00
Address is not unicast
Check that an all-zero mac address will not be validated:
EthApp> validate 00:00:00:00:00:00
Address is not unicast
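For reference, a minimal sketch of the unicast check the validate command is expected to apply (an address is unicast only if it is not all-zero and the I/G bit, the least significant bit of the first octet, is clear):
def is_unicast(mac):
    octets = [int(b, 16) for b in mac.split(":")]
    if all(o == 0 for o in octets):
        return False                      # all-zero address is invalid
    return (octets[0] & 0x01) == 0        # I/G bit set means multicast/broadcast
assert not is_unicast("01:00:00:00:00:00")
assert not is_unicast("00:00:00:00:00:00")
assert is_unicast("00:10:00:00:00:00")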
Use “macaddr” command to change port mac address and then check mac changed:
EthApp> validate 00:10:00:00:00:00
Address is unicast
EthApp> macaddr 0 00:10:00:00:00:00
MAC address changed
EthApp> macaddr 0
Port 0 MAC Address: 00:10:00:00:00:00
Verify that the mac address in the forwarded packets has been changed.
Test Case: Port config test¶
Use “stop” command to stop port0. Send packets to port0 and verify no packet received:
EthApp> stop 0
Use “open” command to re-enable port0. Send packets to port0 and verify packets received and forwarded:
EthApp> open 0
Test case: Mtu config test¶
Use “mtu” command to change port 0 mtu from default 1519 to 9000 on Tester’s port.
Send packet size over 1519 and check that packet will be detected as error:
EthApp> mtu 0 1519
Port 0 stats
In: 0 (0 bytes)
Out: 0 (0 bytes)
Err: 1
Change mtu to default value and send packet size over 1519 and check that packet will normally be received.
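As an illustration, the two packet sizes used in this case could be generated with scapy as sketched below; the interface name and destination MAC are placeholders, not values from the test plan:
from scapy.all import Ether, IP, Raw, sendp
TX_IFACE = "tester_tx_iface"    # hypothetical tester interface
DUT_MAC = "00:10:00:00:00:00"   # placeholder: use the MAC dumped by "macaddr 0"
def frame(total_len):
    # Pad the frame to the requested length (CRC not included in scapy lengths).
    base = Ether(dst=DUT_MAC)/IP()
    return base/Raw(b"\x00" * (total_len - len(base)))
sendp(frame(1518), iface=TX_IFACE)   # within the 1519 limit, should be received normally
sendp(frame(2000), iface=TX_IFACE)   # over the limit, should only bump the Err counter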
Test Case: Pause tx/rx test(performance test)¶
Enable port 0 Rx pause frames and then create two packet flows on the IXIA port. One flow is 100000 normal packets and the second flow is pause frames. Check that DUT port 0’s Rx speed drops. For example, Niantic will drop from 14.8Mpps to 7.49Mpps:
EthApp> pause 0 rx
Use “pause” command to print dut’s port pause status, check that dut’s port 0 rx has been paused:
EthApp> pause 0
Port 0: Rx Paused
Release pause status of port 0 rx and then restart port 0, check that packets Rx speed is normal:
EthApp> pause 0 none
EthApp>
Pause port 0 TX pause frame:
EthApp> pause 0 tx
Use “pause” command to print port pause status, check that port 0 tx has been paused:
EthApp> pause 0
Port 0: Tx Paused
Enable flow control on the IXIA port and send packets from IXIA at line rate. Record the line rate before sending packets. Check that IXIA receives flow control packets and that the IXIA transmit speed drops. IXIA Rx packets exceeding Tx packets confirms that pause frames were received. Compare the line rates before and after the pause packets are injected.
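For a setup without IXIA, a hedged scapy sketch of an IEEE 802.3x pause frame (MAC control EtherType 0x8808, opcode 0x0001) is shown below; the interface name is a placeholder, and real rate measurements are still better driven from the traffic generator:
import struct
from scapy.all import Ether, Raw, sendp
TX_IFACE = "tester_tx_iface"   # hypothetical tester interface
pause_quanta = 0xFFFF          # maximum pause time
# Opcode + pause quanta, padded to the 60-byte minimum frame size (without CRC).
payload = struct.pack("!HH", 0x0001, pause_quanta) + b"\x00" * 42
pause = Ether(dst="01:80:c2:00:00:01", type=0x8808)/Raw(payload)
sendp(pause, iface=TX_IFACE, count=1000)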
Unpause port 0 tx and restart port 0. Then send packets to port0, check that packets forwarded normally from port 0:
EthApp> pause 0 none
EthApp> stop 0
EthApp> open 0
VEB Switch and floating VEB Tests¶
VEB Switching Introduction¶
IEEE EVB tutorial: http://www.ieee802.org/802_tutorials/2009-11/evb-tutorial-draft-20091116_v09.pdf
Virtual Ethernet Bridge (VEB) - This is an IEEE EVB term. A VEB is a VLAN Bridge internal to Fortville that bridges the traffic of multiple VSIs over an internal virtual network.
Virtual Ethernet Port Aggregator (VEPA) - This is an IEEE EVB term. A VEPA multiplexes the traffic of one or more VSIs onto a single Fortville Ethernet port. The biggest difference between a VEB and a VEPA is that a VEB can switch packets internally between VSIs, whereas a VEPA cannot.
Virtual Station Interface (VSI) - This is an IEEE EVB term that defines the properties of a virtual machine’s (or a physical machine’s) connection to the network. Each downstream v-port on a Fortville VEB or VEPA defines a VSI. A standards-based definition of VSI properties enables network management tools to perform virtual machine migration and associated network re-configuration in a vendor-neutral manner.
In short, a VEB is an in-NIC switch (MAC/VLAN based); it can support VF->VF, PF->VF and VF->PF packet forwarding through the NIC’s internal switch. It is similar to Niantic’s SRIOV switch.
Prerequisites for VEB testing¶
Get the pci device id of DUT, for example:
./dpdk-devbind.py --st
0000:05:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens785f0 drv=i40e unused=
Host PF in kernel driver. Create 2 VFs from 1 PF with kernel driver, and set the VF MAC address at PF:
echo 2 > /sys/bus/pci/devices/0000\:05\:00.0/sriov_numvfs
./dpdk-devbind.py --st
0000:05:02.0 'XL710/X710 Virtual Function' unused=
0000:05:02.1 'XL710/X710 Virtual Function' unused=
ip link set ens785f0 vf 0 mac 00:11:22:33:44:11
ip link set ens785f0 vf 1 mac 00:11:22:33:44:12
Host PF in DPDK driver. Create 2VFs from 1 PF with dpdk driver:
./dpdk-devbind.py -b igb_uio 05:00.0
echo 2 >/sys/bus/pci/devices/0000:05:00.0/max_vfs
./dpdk-devbind.py --st
0000:05:02.0 'XL710/X710 Virtual Function' unused=i40evf,igb_uio
0000:05:02.1 'XL710/X710 Virtual Function' unused=i40evf,igb_uio
Bind the VFs to dpdk driver:
./tools/dpdk-devbind.py -b igb_uio 05:02.0 05:02.1
Reserve huge pages memory(before using DPDK):
echo 4096 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
mkdir /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
Test Case1: VEB Switching Inter VF-VF MAC switch¶
Summary: Kernel PF, then create 2VFs. VFs running dpdk testpmd, send traffic to VF1, and set the packet’s DEST MAC to VF2, check if VF2 can receive the packets. Check Inter VF-VF MAC switch.
Details:
In VF1, run testpmd:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 --socket-mem 1024,1024 -w 05:02.0 --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12
testpmd>set fwd mac
testpmd>set promisc all off
testpmd>start
In VF2, run testpmd:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xa -n 4 --socket-mem 1024,1024 -w 05:02.1 --file-prefix=test2 -- -i --crc-strip
testpmd>set fwd mac
testpmd>set promisc all off
testpmd>start
Send 100 packets to VF1’s MAC address and check if VF2 can get the 100 packets. Check that the packet content is not corrupted.
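A minimal scapy sketch of this send step, using a hypothetical tester interface name and the VF1 MAC configured above:
from scapy.all import Ether, IP, Raw, sendp
TESTER_IFACE = "tester_tx_iface"   # hypothetical tester port wired to the PF
VF1_MAC = "00:11:22:33:44:11"      # set earlier with "ip link set ... vf 0 mac"
# 100 packets addressed to VF1; the VEB should switch the forwarded copies
# (eth-peer 00:11:22:33:44:12) to VF2.
pkts = [Ether(dst=VF1_MAC)/IP()/Raw(b"x" * 40) for _ in range(100)]
sendp(pkts, iface=TESTER_IFACE)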
Test Case2: VEB Switching Inter VF-VF MAC/VLAN switch¶
Summary: Kernel PF, then create 2 VFs; assign VF1 with VLAN=1 and VF2 with VLAN=2. VFs are running dpdk testpmd. Send traffic to VF1 with VLAN=1 and let it forward to VF2; it should not work since they are not in the same VLAN. Then set VF2 with VLAN=1 and send traffic to VF1 with VLAN=1; VF2 can receive the packets. Check inter VF MAC/VLAN switching.
Details:
Set the VLAN id of VF1 and VF2:
ip link set ens785f0 vf 0 vlan 1
ip link set ens785f0 vf 1 vlan 2
In VF1, run testpmd:
./testpmd -c 0xf -n 4 --socket-mem 1024,1024 -w 0000:05:02.0 --file-prefix=test1 -- -i --crc-strip --eth-peer=0,00:11:22:33:44:12
testpmd>set fwd mac
testpmd>set promisc all off
testpmd>start
In VF2, run testpmd:
./testpmd -c 0xf0 -n 4 --socket-mem 1024,1024 -w 0000:05:02.1 --file-prefix=test2 -- -i --crc-strip
testpmd>set fwd rxonly
testpmd>set promisc all off
testpmd>start
Send 100 packets with VF1’s MAC address and VLAN=1, check if VF2 can’t get 100 packets since they are not in the same VLAN.
Change the VLAN id of VF2:
ip link set ens785f0 vf 1 vlan 1
Send 100 packets with VF1’s MAC address and VLAN=1, check if VF2 can get 100 packets since they are in the same VLAN now. Check the packet content is not corrupted:
sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP() /Raw('x'*40)],iface="ens785f1")
Test Case3: VEB Switching Inter PF-VF MAC switch¶
Summary: DPDK PF, then create 1 VF. The PF in the host runs dpdk testpmd. Send traffic from PF to VF1 and ensure PF->VF1 works (put VF1 in promisc mode); send traffic from VF1 to PF and ensure VF1->PF works.
Details:
vf->pf In host, launch testpmd:
./testpmd -c 0x3 -n 4 -- -i
testpmd>set fwd rxonly
testpmd>set promisc all off
testpmd>start
In VM1, run testpmd:
./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,pf_mac_addr
testpmd>set fwd txonly
testpmd>set promisc all off
testpmd>start
pf->vf In host, launch testpmd:
./testpmd -c 0x3 -n 4 -- -i --eth-peer=0,vf1_mac_addr
testpmd>set fwd txonly
testpmd>set promisc all off
testpmd>start
In VM1, run testpmd:
./testpmd -c 0x3 -n 4 -- -i
testpmd>mac_addr add 0 vf1_mac_addr
testpmd>set fwd rxonly
testpmd>set promisc all off
testpmd>start
tester->vf
Send 100 packets with PF’s MAC address from VF, check if PF can get 100 packets, so VF1->PF is working. Check the packet content is not corrupted.
Send 100 packets with VF’s MAC address from PF, check if VF1 can get 100 packets, so PF->VF1 is working. Check the packet content is not corrupted.
Send 100 packets with VF’s MAC address from tester, check if VF1 can get 100 packets, so tester->VF1 is working. Check the packet content is not corrupted.
Test Case4: VEB Switching Inter-VM PF-VF/VF-VF MAC switch Performance¶
Performance testing: repeat Test Case 1 (VF-VF) and Test Case 3 (PF-VF) to check the performance at different packet sizes (64B to 1518B, and 3000B jumbo frames), sending traffic at 100% rate.
VFD as SRIOV Policy Manager Tests¶
VFD is an SRIOV Policy Manager (daemon) running on the host that allows configuration not supported by the kernel NIC driver; it supports NICs using the ixgbe and i40e drivers. It runs on the host and makes policy decisions w.r.t. what a VF can and cannot do to the PF. Only the DPDK PF provides a callback to implement these features; the normal kernel drivers do not have the callback and so do not support the features. It allows passing information to the application controlling the PF when a VF mailbox message event is received, such as those listed below, so action can be taken based on host policy, e.g. to stop VM1 from asking for something that compromises VM2.
Multiple purposes:
- set VF MAC anti-spoofing
- set VF VLAN anti-spoofing
- set TX loopback
- set VF unicast promiscuous mode
- set VF multicast promiscuous mode
- set VF MTU
- get/reset VF stats
- set VF MAC address
- set VF VLAN stripping
- VF VLAN insertion
- set VF broadcast mode
- set VF VLAN tag
- set VF VLAN filter
- Set/reset the queue drop enable bit for all pools (only ixgbe support)
- Set/reset the enable drop bit in the split receive control register (only ixgbe support)
VFD also includes VF to PF mailbox message management by APP. When PF receives mailbox messages from VF, PF should call the callback provided by APP to know if they’re permitted to be processed.
Prerequisites¶
Host PF in DPDK driver. Create 2 VFs from 1 PF with the dpdk driver, taking Niantic for example:
./tools/dpdk-devbind.py -b igb_uio 81:00.0
echo 2 >/sys/bus/pci/devices/0000:81:00.0/max_vfs
Detach VFs from the host:
rmmod ixgbevf
Passthrough VF 81:10.0 to vm0 and passthrough VF 81:10.2 to vm1, start vm0 and vm1
Login vm0 and vm1, then bind VF0 device to igb_uio driver.
Start testpmd on host and vm0 in chained port topology:
./testpmd -c f -n 4 -- -i --port-topology=chained --tx-offloads=0x8fff
Test Case 1: Set VLAN insert for VF from PF¶
Disable vlan insert for VF0 from PF:
testpmd> set vf vlan insert 0 0 0
Start VF0 testpmd, set it in mac forwarding mode and enable verbose output
Send packet from tester to VF0 without vlan id
Stop VF0 testpmd and check VF0 can receive packet without any vlan id
Enable vlan insert and insert random vlan id (1~4095) for VF0 from PF:
testpmd> set vf vlan insert 0 0 id
Start VF0 testpmd
Send packet from tester to VF0 without vlan id
Stop VF0 testpmd and check VF0 can receive packet with configured vlan id
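One possible scapy sketch of the send/check steps in this case; the interface names and VF0 MAC are placeholders, and it assumes a recent scapy that provides AsyncSniffer:
import time
from scapy.all import Ether, IP, Raw, Dot1Q, AsyncSniffer, sendp
TX_IFACE = "tester_tx_iface"   # hypothetical tester interface towards VF0
RX_IFACE = "tester_rx_iface"   # hypothetical tester interface that sees the forwarded frame
VF0_MAC = "00:11:22:33:44:55"  # placeholder for VF0's MAC address
# Start capturing before sending, then inspect the forwarded frame for the
# VLAN id configured with "set vf vlan insert".
sniffer = AsyncSniffer(iface=RX_IFACE, lfilter=lambda p: p.haslayer(Dot1Q))
sniffer.start()
sendp(Ether(dst=VF0_MAC)/IP()/Raw(b"x" * 40), iface=TX_IFACE)
time.sleep(2)
for p in sniffer.stop():
    print("forwarded frame carries vlan id", p[Dot1Q].vlan)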
Test Case 2: Set VLAN strip for VF from PF¶
Disable VLAN strip for all queues for VF0 from PF:
testpmd> set vf vlan stripq 0 0 off
Start VF0 testpmd, add rx vlan id as random 1~4095, set it in mac forwarding mode and enable verbose output:
testpmd> rx_vlan add id 0
Send packet from tester to VF0 with configured vlan id
Stop VF0 testpmd and check VF0 can receive packet with configured vlan id
Enable VLAN strip for all queues for VF0 from PF:
testpmd> set vf vlan stripq 0 0 on
Start VF0 testpmd
Send packet from tester to VF0 with configured vlan id
Stop VF0 testpmd and check VF0 can receive packet without any vlan id
Remove vlan id on VF0
Test Case 3: Set VLAN antispoof for VF from PF¶
Disable vlan filter and strip from PF:
testpmd> vlan set filter off 0
testpmd> vlan set strip off 0
Add a random 1~4095 vlan id to set filter from PF for VF:
testpmd> rx_vlan add id port 0 vf 1
Disable vlan antispoof for VF from PF:
testpmd> set vf vlan antispoof 0 0 off
Disable vlan filter and strip on VF0
Start testpmd on VF0, set it in mac forwarding mode and enable:
verbose output
Send packets with matching/non-matching/no vlan id on tester port
Stop VF0 testpmd and check VF0 can receive and transmit packets with matching/non-matching/no vlan id
Enable mac antispoof and vlan antispoof for vf from PF:
testpmd> set vf mac antispoof 0 0 on
testpmd> set vf vlan antispoof 0 0 on
Start VF0 testpmd
Send packets with matching/non-matching/no vlan id on tester port
Stop VF0 testpmd and check VF0 can receive all but only transmit packet with matching vlan id
Test Case 4: Set mac antispoof for VF from PF¶
Add a fake mac in the code and use it instead of the transmitted mac in macswap mode, so by default the source address (SA) is non-matching:
.addr_bytes = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55}
Disable VF0 mac antispoof from PF:
testpmd> set vf mac antispoof 0 0 off
Start testpmd on VF0, set it in macswap forwarding mode and enable verbose output:
testpmd> set fwd macswap
Send packet from tester to VF0 with correct SA, but code has changed to use fake SA
Stop VF0 testpmd and check VF0 can receive then transmit packet
Enable VF0 mac antispoof from PF:
testpmd> set vf mac antispoof 0 0 on
Start VF0 testpmd
Send packet from tester to VF0 with correct SA, but code has changed to use fake SA
Stop VF0 testpmd and check VF0 can receive packet but can’t transmit packet
Recover original code
Test Case 5: Set the MAC address for VF from PF¶
Set a different MAC address for VF0 from the PF, such as A2:22:33:44:55:66:
testpmd> set vf mac addr 0 0 A2:22:33:44:55:66
Stop VF0 testpmd and restart VF0 testpmd, check VF0 address is configured address A2:22:33:44:55:66
Set testpmd in mac forwarding mode and enable verbose output
Send packet from tester to VF0 configured address
Stop VF0 testpmd and check VF0 can receive packet
Test Case 6: Enable/disable tx loopback¶
Disable tx loopback for VF0 from PF:
testpmd> set tx loopback 0 off
Set VF0 in rxonly forwarding mode and start testpmd
tcpdump on the tester port
Send 10 packets from VF1 to VF0
Stop VF0 testpmd, check VF0 can’t receive any packet but tester port could capture packet
Enable tx loopback for VF0 from PF:
testpmd> set tx loopback 0 on
Start VF0 testpmd
Send packet from VF1 to VF0
Stop VF0 testpmd, check VF0 can receive packet,but tester port can’t capture packet
Test Case 7: Set drop enable bit for all queues¶
Bind VF1 device to igb_uio driver and start testpmd in chained port topology
Disable drop enable bit for all queues from PF:
testpmd> set all queues drop 0 off
Only start VF1 to capture packet, set it in rxonly forwarding mode and enable verbose output
Send 200 packets to VF0, make VF0 queue full of packets
Send 20 packets to VF1
Stop VF1 testpmd and check VF1 can’t receive packet
Enable drop enable bit for all queues from PF:
testpmd> set all queues drop 0 on
Start VF1 testpmd
Stop VF1 testpmd and check VF1 can receive original queue buffer 20 packets
Start VF1 testpmd
Send 20 packets to VF1
Stop VF1 testpmd and check VF1 can receive 20 packets
Test Case 8: Set split drop enable bit for VF from PF¶
Disable split drop enable bit for VF0 from PF:
testpmd> set vf split drop 0 0 off
Set VF0 and host in rxonly forwarding mode and start testpmd
Send a burst of 20000 packets to VF0 and check PF and VF0 can receive all packets
Enable split drop enable bit for VF0 from PF:
testpmd> set vf split drop 0 0 on
Send a burst of 20000 packets to VF0 and check some packets dropped on PF and VF0
Test Case 9: Get/Reset stats for VF from PF¶
Add some print code in the rte_pmd_i40e_set_vf_vlan_filter() function (drivers/net/i40e/i40e_ethdev.c) to aid the test, then rebuild the code
Get stats output for VF0 from PF, and check RX/TX packets is 0:
testpmd> get vf stats 0 0
Set VF0 in mac forwarding mode and start testpmd
Send 10 packets to VF0 and check VF0 can receive 10 packets
Get stats for VF0 from PF, and check RX/TX packets is 10
Reset stats for VF0 from PF, and check PF and VF0 RX/TX packets is 0:
testpmd> reset vf stats 0 0
testpmd> get vf stats 0 0
Test Case 10: enhancement to identify VF MTU change¶
Set VF0 in mac forwarding mode and start testpmd
The default mtu size is 1500. Send one packet with length bigger than the default mtu size, such as 2000, from the tester; check VF0 can receive but can’t transmit the packet
Set VF0 mtu size as 3000, but need to stop then restart port to active mtu:
testpmd> port stop all
testpmd> port config mtu 0 3000
testpmd> port start all
testpmd> start
Send one packet with length 2000 from the tester; check VF0 can receive and transmit the packet
Send one packet with length bigger than the configured mtu size, such as 5000, from the tester; check VF0 can receive but can’t transmit the packet
Test Case 11: Enable/disable vlan tag forwarding to VSIs¶
Disable VLAN tag for VF0 from PF:
testpmd> set vf vlan tag 0 0 off
Start VF0 testpmd, add rx vlan id as random 1~4095, set it in mac forwarding mode and enable verbose output
Send packet from tester to VF0 with vlan tag(vlan id should same as rx_vlan)
Stop VF0 testpmd and check VF0 can’t receive vlan tag packet
Enable VLAN tag for VF0 from PF:
testpmd> set vf vlan tag 0 0 on
Start VF0 testpmd
Send packet from tester to VF0 with vlan tag(vlan id should same as rx_vlan)
Stop VF0 testpmd and check VF0 can receive vlan tag packet
Remove vlan id on VF0
Test Case 12: Broadcast mode¶
Start testpmd on VF0, set it in rxonly mode and enable verbose output
Disable broadcast mode for VF0 from PF:
testpmd>set vf broadcast 0 0 off
Send packets from the tester with the broadcast address ff:ff:ff:ff:ff:ff, and check VF0 can not receive the packet
Enable broadcast mode for VF0 from PF:
testpmd>set vf broadcast 0 0 on
Send packets from the tester with the broadcast address ff:ff:ff:ff:ff:ff, and check VF0 can receive the packet
Test Case 13: Multicast mode¶
Start testpmd on VF0, set it in rxonly mode and enable verbose output
Disable promisc and multicast mode for VF0 from PF:
testpmd>set vf promisc 0 0 off
testpmd>set vf allmulti 0 0 off
Send packet from tester to VF0 with multicast MAC, and check VF0 can not receive the packet
Enable multicast mode for VF0 from PF:
testpmd>set vf allmulti 0 0 on
Send packet from tester to VF0 with multicast MAC, and check VF0 can receive the packet
Test Case 14: Promisc mode¶
Start testpmd on VF0, set it in rxonly mode and enable verbose output
Disable promisc mode for VF from PF:
testpmd>set vf promisc 0 0 off
Send packet from tester to VF0 with random MAC, and check VF0 can not receive the packet
Send packet from tester to VF0 with correct MAC, and check VF0 can receive the packet
Enable promisc mode for VF from PF:
testpmd>set vf promisc 0 0 on
Send packet from tester to VF0 with random MAC, and the packet can be received by VF0
Send packet from tester to VF0 with correct MAC, and the packet can be received by VF0
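As an illustration of the sends used in Test Cases 12-14, a scapy sketch with placeholder interface and MAC values:
from scapy.all import Ether, IP, Raw, sendp
TESTER_IFACE = "tester_tx_iface"   # hypothetical tester interface
VF0_MAC = "00:11:22:33:44:55"      # placeholder for VF0's real MAC
payload = IP()/Raw(b"x" * 40)
sendp(Ether(dst="ff:ff:ff:ff:ff:ff")/payload, iface=TESTER_IFACE)   # broadcast (case 12)
sendp(Ether(dst="01:00:5e:00:00:01")/payload, iface=TESTER_IFACE)   # multicast (case 13)
sendp(Ether(dst="02:12:34:56:78:9a")/payload, iface=TESTER_IFACE)   # random unicast (case 14)
sendp(Ether(dst=VF0_MAC)/payload, iface=TESTER_IFACE)               # correct VF0 MAC (case 14)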
Test Case 15: Set Vlan filter for VF from PF¶
Start VF0 testpmd, set it in rxonly forwarding mode, enable verbose output
Send packet without vlan id to random MAC, check VF0 can receive packet
Add vlan filter id as random 1~4095 for VF0 from PF:
testpmd> rx_vlan add id port 0 vf 1
Send packet from tester to VF0 with wrong vlan id to random MAC, check VF0 can’t receive packet
Send packet from tester to VF0 with configured vlan id to random MAC, check VF0 can receive packet
Remove vlan filter id for VF0 from PF:
testpmd> rx_vlan rm id port 0 vf 1
Send packet from tester to VF0 with wrong vlan id to random MAC, check VF0 can receive packet
Send packet from tester to VF0 with configured vlan id to random MAC, check VF0 can receive packet
Send packet without vlan id to random MAC, check VF0 can receive packet
VF Jumboframe Tests¶
The support of jumbo frames by Poll Mode Drivers consists in enabling a port to receive Jumbo Frames with a configurable maximum packet length that is greater than the standard maximum Ethernet frame length (1518 bytes), up to a maximum value imposed by the hardware.
Prerequisites¶
Create VF devices from the PF devices:
./dpdk_nic_bind.py --st
0000:87:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:87:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
echo 1 > /sys/bus/pci/devices/0000\:87\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:87\:00.1/sriov_numvfs
./dpdk_nic_bind.py --st
0000:87:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:87:02.0 'XL710/X710 Virtual Function' unused=
0000:87:0a.0 'XL710/X710 Virtual Function' unused=
Detach VFs from the host, bind them to pci-stub driver:
/sbin/modprobe pci-stub
using `lspci -nn|grep -i ethernet` got VF device id, for example "8086 154c",
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:87:02.0 > /sys/bus/pci/devices/0000:87:02.0/driver/unbind
echo 0000:87:02.0 > /sys/bus/pci/drivers/pci-stub/bind
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:87:0a.0 > /sys/bus/pci/devices/0000:87:0a.0/driver/unbind
echo 0000:87:0a.0 > /sys/bus/pci/drivers/pci-stub/bind
Passthrough VFs 87:02.0 & 87:02.1 to vm0 and start vm0:
/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
-device pci-assign,host=87:02.0,id=pt_0 \
-device pci-assign,host=87:0a.0,id=pt_1
Login vm0 and them bind VF devices to igb_uio driver:
./tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
Start testpmd, set it in mac forward mode:
testpmd -c 0x0f -- -i --portmask=0x1 \
--tx-offloads=0x8fff --max-pkt-len=9000 --port-topology=loop
testpmd> set fwd mac
testpmd> start
Start packet forwarding in the testpmd application with the start command. Then, make the Traffic Generator transmit to the target’s port packets of lengths (CRC included) 1517, 1518, 8999, and 9000 respectively. Check that the same amount of frames and bytes are received back by the Traffic Generator from its port connected to the target’s port.
Note: for the 8259x family, the VF device jumbo frame setting only takes effect when the VF rx mode jumbo frame is enabled. The VF device jumbo frame size setting is shared with the PF device, and the testpmd parameter max-pkt-len has no effect.
Functional Tests of Jumbo Frames¶
Testing the support of Jumbo Frames in Poll Mode Drivers consists in configuring the maximum packet length with a value greater than 1518, and in sending to the test machine packets with the following lengths (CRC included):
- packet length = 1518 - 1
- packet length = 1518
- packet length = 1518 + 1
- packet length = maximum packet length - 1
- packet length = maximum packet length
- packet length = maximum packet length + 1
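A scapy sketch that generates the six boundary lengths listed above; the interface name, destination MAC and the 9000-byte maximum are placeholders/assumptions, not fixed by the test plan:
from scapy.all import Ether, IP, Raw, sendp
TESTER_IFACE = "tester_tx_iface"   # hypothetical tester interface
DUT_MAC = "00:11:22:33:44:55"      # placeholder for the port MAC under test
MAX_PKT_LEN = 9000                 # assumed value passed via --max-pkt-len
def frame(wire_len_with_crc):
    # The listed lengths include the 4-byte CRC, which scapy does not build.
    base = Ether(dst=DUT_MAC)/IP()
    return base/Raw(b"\x00" * (wire_len_with_crc - 4 - len(base)))
for size in (1517, 1518, 1519, MAX_PKT_LEN - 1, MAX_PKT_LEN, MAX_PKT_LEN + 1):
    sendp(frame(size), iface=TESTER_IFACE)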
Test Case: Normal frames with no jumbo frame support¶
Check that packets of standard lengths are still received without setting max-pkt-len.
Test Case: Normal frames with jumbo frame support¶
Check that packets of standard lengths are still received when enabling the receipt of Jumbo Frames.
Test Case: Jumbo frames with no jumbo frame support¶
Check that, without jumbo frame support, packets with lengths greater than the standard maximum frame (1518) can not be received.
Test Case: Jumbo frames with jumbo frame support¶
Check that Jumbo Frames of lengths greater than the standard maximum frame (1518) and lower or equal to the maximum frame length can be received.
Test Case: Jumbo frames over jumbo frame support¶
Check that packets larger than the configured maximum packet length are effectively dropped by the hardware.
VF One-shot Rx Interrupt Tests¶
One-shot Rx interrupt feature will split rx interrupt handling from other interrupts like LSC interrupt. It implemented one handling mechanism to eliminate non-deterministic DPDK polling thread wakeup latency.
VFIO’s multiple interrupt vectors support mechanism enables multiple event fds to serve Rx queue interrupt handling. UIO has limited interrupt support; specifically, it only supports a single interrupt vector, which is not suitable for enabling multi-queue Rx/Tx interrupts.
Prerequisites¶
Each of the 10Gb Ethernet* ports of the DUT is directly connected in full-duplex to a different port of the peer traffic generator.
Assume PF port PCI addresses are 0000:04:00.0 and 0000:04:00.1, their Interfaces name are p786p1 and p786p2. Assume generated VF PCI address will be 0000:04:10.0, 0000:04:10.1.
Iommu pass through feature has been enabled in kernel:
intel_iommu=on iommu=pt
Both the igb_uio and vfio drivers are supported. If vfio is used, the kernel needs to be 3.6+ and VT-d must be enabled in the BIOS. When vfio is used, the two drivers vfio and vfio-pci need to be insmod’ed.
Test Case1: VF interrupt pmd in VM with uio¶
Create one VF per Port in host and add these two VFs into VM:
usertools/dpdk-devbind.py --force --bind=vfio-pci 0000:04:00.0 0000:04:00.1
echo 1 >/sys/bus/pci/devices/0000\:04\:00.0/max_vfs
echo 1 >/sys/bus/pci/devices/0000\:04\:00.1/max_vfs
usertools/dpdk-devbind.py --force --bind=pci-stub 0000:04:10.0 0000:04:10.1
Start VM and start l3fwd-power with one queue per port in VM:
l3fwd-power -c 7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Send one packet to VF0 and VF1, check that thread on core1 and core2 waked up:
L3FWD_POWER: lcore 1 is waked up from rx interrupt on port1,rxq0
L3FWD_POWER: lcore 2 is waked up from rx interrupt on port1,rxq0
Check the packet has been normally forwarded.
After the packet forwarded, thread on core1 and core 2 will return to sleep:
L3FWD_POWER: lcore 1 sleeps until interrupt on port0,rxq0 triggers
L3FWD_POWER: lcore 2 sleeps until interrupt on port0,rxq0 triggers
Send packet flows to VF0 and VF1, check that thread on core1 and core2 will keep up awake.
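A minimal scapy sketch of the single-packet wakeup step; the tester interface names and VF MACs are placeholders, and the destination IP may need to match l3fwd-power’s routing table for the packet to also be forwarded:
from scapy.all import Ether, IP, UDP, Raw, sendp
TESTER_IFACES = ("tester_p0", "tester_p1")            # hypothetical tester ports
VF_MACS = ("00:11:22:33:44:11", "00:11:22:33:44:12")  # placeholders for the VF MACs
# One packet per VF is enough to raise the Rx interrupt and wake the
# corresponding l3fwd-power lcore out of its sleep state.
for iface, mac in zip(TESTER_IFACES, VF_MACS):
    sendp(Ether(dst=mac)/IP()/UDP()/Raw(b"x" * 20), iface=iface)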
Test Case2: VF interrupt pmd in Host with uio¶
Create one VF per port in the host with the kernel driver and make sure the PF interfaces are up:
echo 1 >/sys/bus/pci/devices/0000\:04\:00.0/sriov_numvfs
echo 1 >/sys/bus/pci/devices/0000\:04\:00.1/sriov_numvfs
Bind VF device to igb_uio:
./usertools/dpdk-devbind.py --bind=igb_uio 0000:04:10.0 0000:04:10.1
Start host and start l3fwd-power with one queue per port in host:
l3fwd-power -c 7 -n 4 -- -p 0x3 -P --config="(0,0,1),(1,0,2)"
Send one packet to VF0 and VF1, check that thread on core1 and core2 waked up:
L3FWD_POWER: lcore 1 is waked up from rx interrupt on port1,rxq0
L3FWD_POWER: lcore 2 is waked up from rx interrupt on port1,rxq0
Check the packet has been normally forwarded.
After the packet forwarded, thread on core1 and core 2 will return to sleep:
L3FWD_POWER: lcore 1 sleeps until interrupt on port0,rxq0 triggers
L3FWD_POWER: lcore 2 sleeps until interrupt on port0,rxq0 triggers
Send packet flows to VF0 and VF1, check that thread on core1 and core2 will keep up awake.
Test Case3: VF interrupt pmd in Host with vfio¶
Create one VF per port in the host with the kernel driver and make sure the PF interfaces are up:
echo 1 >/sys/bus/pci/devices/0000\:04\:00.0/sriov_numvfs
echo 1 >/sys/bus/pci/devices/0000\:04\:00.1/sriov_numvfs
Bind the VF devices to vfio-pci in the host:
./usertools/dpdk-devbind.py --bind=vfio-pci 0000:04:10.0 0000:04:10.1
Start l3fwd-power with two queues per port in the host:
l3fwd-power -c 1f -n 4 -- -p 0x3 -P \
--config="(0,0,1),(0,1,2)(1,0,3),(1,1,4)"
Send packets with increased dest IP to Port0 and Port1, check that thread on core1,core2,core3,core4 waked up:
L3FWD_POWER: lcore 1 is waked up from rx interrupt on port1,rxq0
L3FWD_POWER: lcore 2 is waked up from rx interrupt on port1,rxq1
L3FWD_POWER: lcore 3 is waked up from rx interrupt on port1,rxq0
L3FWD_POWER: lcore 4 is waked up from rx interrupt on port1,rxq1
Check the packet has been normally forwarded.
After the packet forwarded, thread on core1,core2,core3,core4 will return to sleep:
L3FWD_POWER: lcore 1 sleeps until interrupt on port0,rxq0 triggers
L3FWD_POWER: lcore 2 sleeps until interrupt on port0,rxq1 triggers
L3FWD_POWER: lcore 3 sleeps until interrupt on port1,rxq0 triggers
L3FWD_POWER: lcore 4 sleeps until interrupt on port1,rxq1 triggers
Send packet flows to Port0 and Port1, check that thread on core1,core2,core3, core4 will keep up awake.
VF MAC Filter Tests¶
Test Case 1: test_kernel_2pf_2vf_1vm_iplink_macfilter¶
Get the pci device id of DUT, for example:
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
Create 2 VFs from 2 PFs, and set the VF MAC address at PF0:
echo 1 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:81\:00.1/sriov_numvfs
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
0000:81:02.0 'XL710/X710 Virtual Function' unused=
0000:81:0a.0 'XL710/X710 Virtual Function' unused=
ip link set ens259f0 vf 0 mac 00:11:22:33:44:55
Detach VFs from the host, bind them to pci-stub driver:
/sbin/modprobe pci-stub
using `lspci -nn|grep -i ethernet` got VF device id, for example "8086 154c",
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:81:02.0 > /sys/bus/pci/devices/0000:81:02.0/driver/unbind
echo 0000:81:02.0 > /sys/bus/pci/drivers/pci-stub/bind
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:81:0a.0 > /sys/bus/pci/devices/0000:81:0a.0/driver/unbind
echo 0000:81:0a.0 > /sys/bus/pci/drivers/pci-stub/bind
or use the following easier way:
virsh nodedev-detach pci_0000_81_02_0;
virsh nodedev-detach pci_0000_81_0a_0;
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=
0000:81:0a.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=
It can be seen that the driver of VFs 81:02.0 & 81:0a.0 is pci-stub.
Passthrough VFs 81:02.0 & 81:0a.0 to vm0, and start vm0:
/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
-device pci-assign,host=81:02.0,id=pt_0 \
-device pci-assign,host=81:0a.0,id=pt_1
Login to vm0 and get the VFs’ pci device ids in vm0, assume they are 00:06.0 & 00:07.0. Bind them to the igb_uio driver, then start testpmd, enable CRC strip, disable promisc mode, and set it in mac forward mode:
./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> port stop all
testpmd> port config all crc-strip on
testpmd> port start all
testpmd> set promisc all off
testpmd> set fwd mac
testpmd> start
Use scapy to send 100 random packets with the MAC set by ip link to the VF; verify the packets can be received by one VF and forwarded to the other VF correctly.
Also use scapy to send 100 random packets with a wrong MAC to the VF; verify the packets can not be received or forwarded by the VF.
Test Case 2: test_kernel_2pf_2vf_1vm_mac_add_filter¶
Get the pci device id of DUT, for example:
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
Create 2 VFs from 2 PFs, and don’t set the VF MAC address at PF0:
echo 1 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:81\:00.1/sriov_numvfs
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
0000:81:02.0 'XL710/X710 Virtual Function' unused=
0000:81:0a.0 'XL710/X710 Virtual Function' unused=
Detach VFs from the host, bind them to pci-stub driver:
/sbin/modprobe pci-stub
using `lspci -nn|grep -i ethernet` to get VF device id, for example "8086 154c",
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:81:02.0 > /sys/bus/pci/devices/0000:81:02.0/driver/unbind
echo 0000:81:02.0 > /sys/bus/pci/drivers/pci-stub/bind
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:81:0a.0 > /sys/bus/pci/devices/0000:81:0a.0/driver/unbind
echo 0000:81:0a.0 > /sys/bus/pci/drivers/pci-stub/bind
or use the following easier way:
virsh nodedev-detach pci_0000_81_02_0;
virsh nodedev-detach pci_0000_81_0a_0;
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=
0000:81:0a.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=
It can be seen that the driver of VFs 81:02.0 & 81:0a.0 is pci-stub.
Passthrough VFs 81:02.0 & 81:0a.0 to vm0, and start vm0:
/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
-device pci-assign,host=81:02.0,id=pt_0 \
-device pci-assign,host=81:0a.0,id=pt_1
Login to vm0 and get the VFs’ pci device ids in vm0, assume they are 00:06.0 & 00:07.0. Bind them to the igb_uio driver, then start testpmd, enable CRC strip on the VFs, disable promisc mode, add a new MAC to VF0 and then start:
./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> port stop all
testpmd> port config all crc-strip on
testpmd> port start all
testpmd> set promisc all off
testpmd> mac_addr add 0 00:11:22:33:44:55
testpmd> set fwd mac
testpmd> start
Note: as of January 2016, i40e doesn’t support the mac_addr add operation, so this case will fail for FVL/Fort park NICs.
Use scapy to send 100 random packets with VF0’s current MAC; verify the packets can be received by one VF and forwarded to the other VF correctly.
Use scapy to send 100 random packets with the newly added VF0 MAC; verify the packets can be received by one VF and forwarded to the other VF correctly.
Use scapy to send 100 random packets with a wrong MAC to VF0; verify the packets can not be received or forwarded by the VF.
VF Offload¶
Prerequisites for checksum offload¶
If using vfio the kernel must be >= 3.6+ and VT-d must be enabled in bios.When using vfio, use the following commands to to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Assuming that ports 0 and 2 are connected to a traffic generator, launch the testpmd with the following arguments:
./build/app/testpmd -cffffff -n 1 -- -i --burst=1 --txpt=32 \
--txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x5 \
--enable-rx-cksum
Set the verbose level to 1 to display information for each received packet:
testpmd> set verbose 1
Setup the csum forwarding mode:
testpmd> set fwd csum
Set csum packet forwarding mode
Start the packet forwarding:
testpmd> start
csum packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=10
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
Verify how many packets are found with Bad-ipcsum or Bad-l4csum:
testpmd> stop
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
Bad-ipcsum: 0 Bad-l4csum: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
Test Case: HW checksum offload check¶
Start testpmd and enable checksum offload on tx port.
Setup the csum forwarding mode:
testpmd> set fwd csum
Set csum packet forwarding mode
Enable the IPv4/UDP/TCP/SCTP checksum offload on port 0:
testpmd>
testpmd> tx_checksum set ip hw 0
testpmd> tx_checksum set udp hw 0
testpmd> tx_checksum set tcp hw 0
testpmd> tx_checksum set sctp hw 0
testpmd> start
csum packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=10
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
Configure the traffic generator to send the multiple packets for the following combination: IPv4/UDP, IPv4/TCP, IPv4/SCTP, IPv6/UDP, IPv6/TCP.
Send packets with incorrect checksums; verify that DPDK can receive them and reports the checksum errors. Verify that the same number of packets are correctly received on the traffic generator side, and that the IPv4 checksum, TCP checksum, UDP checksum and SCTP CRC32c are validated as pass by the tester.
The IPv4 source address will not be changed by testpmd.
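A scapy sketch for generating the bad-checksum packets described above; the interface name and destination MAC are placeholders, and the checksum fields are pinned so that scapy does not recompute them:
from scapy.all import Ether, IP, IPv6, UDP, TCP, Raw, sendp
TESTER_IFACE = "tester_tx_iface"   # hypothetical tester interface
DUT_MAC = "00:11:22:33:44:55"      # placeholder for the DUT/VF port MAC
# The csum forward engine should flag these as Bad-ipcsum/Bad-l4csum on rx
# and emit corrected checksums on the tx side.
bad = [
    Ether(dst=DUT_MAC)/IP(chksum=0xdead)/UDP(chksum=0xdead)/Raw(b"x" * 32),
    Ether(dst=DUT_MAC)/IP(chksum=0xdead)/TCP(chksum=0xdead)/Raw(b"x" * 32),
    Ether(dst=DUT_MAC)/IPv6()/UDP(chksum=0xdead)/Raw(b"x" * 32),
    Ether(dst=DUT_MAC)/IPv6()/TCP(chksum=0xdead)/Raw(b"x" * 32),
]
sendp(bad, iface=TESTER_IFACE)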
Test Case: SW checksum offload check¶
Disable HW checksum offload on the tx port so that checksums are computed in software. Send the same packets with incorrect checksums and verify the recalculated checksums are valid.
Setup the csum forwarding mode:
testpmd> set fwd csum
Set csum packet forwarding mode
Disable the IPv4/UDP/TCP/SCTP checksum offload on port 0:
testpmd> tx_checksum set 0x0 0
testpmd> start
csum packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=10
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
Configure the traffic generator to send the multiple packets for the following combination: IPv4/UDP, IPv4/TCP, IPv6/UDP, IPv6/TCP.
Send packets with incorrect checksums; verify that DPDK can receive them and reports the checksum errors. Verify that the same number of packets are correctly received on the traffic generator side, and that the IPv4 checksum, TCP checksum and UDP checksum are validated as pass by the IXIA.
The first byte of the source IPv4 address will be incremented by testpmd. The checksum is indeed recalculated by software algorithms.
Prerequisites for TSO¶
The DUT must take one of the Ethernet controller ports connected to a port on another device that is controlled by the Scapy packet generator.
The Ethernet interface identifier of the port that Scapy will use must be known. On the tester, all offload features should be disabled on the tx port, and capture should be started on the rx port:
ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
ip l set <tx port> up
tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
On the DUT, run pmd with the parameter “--enable-rx-cksum”. Then enable TSO on the tx port and checksum on the rx port. The test commands are below:
#enable hw checksum on rx port
tx_checksum set ip hw 0
tx_checksum set udp hw 0
tx_checksum set tcp hw 0
tx_checksum set sctp hw 0
set fwd csum
# enable TSO on tx port
tso set 800 1
Test case: csum fwd engine, use TSO¶
This test uses Scapy to send out one large TCP packet. The DUT forwards the packet with TSO enabled on the tx port while checksum is enabled on the rx port. After the packet is sent out by TSO on the tx port, the tester receives multiple small TCP packets.
Turn off offloads on the tx port with ethtool on the tester:
ethtool -K <tx port> rx off tx off tso off gso off gro off lro off
ip l set <tx port> up
Capture packets on the rx port of the tester:
tcpdump -n -e -i <rx port> -s 0 -w /tmp/cap
Launch the userland testpmd application on the DUT as follows:
testpmd> set verbose 1
# enable hw checksum on rx port
testpmd> tx_checksum set ip hw 0
testpmd> tx_checksum set udp hw 0
testpmd> tx_checksum set tcp hw 0
testpmd> tx_checksum set sctp hw 0
# enable TSO on tx port
testpmd> tso set 800 1
# set fwd engine and start
testpmd> set fwd csum
testpmd> start
Test IPv4() in scapy:
sendp([Ether(dst="%s", src="52:00:00:00:00:00")/IP(src="192.168.1.1",dst="192.168.1.2")/UDP(sport=1021,dport=1021)/Raw(load="\x50"*%s)], iface="%s")
Test IPv6() in scapy:
sendp([Ether(dst="%s", src="52:00:00:00:00:00")/IPv6(src="FE80:0:0:0:200:1FF:FE00:200", dst="3555:5555:6666:6666:7777:7777:8888:8888")/UDP(sport=1021,dport=1021)/Raw(load="\x50"*%s)], iface="%s"
VF Packet RxTX Tests¶
Test Case 1: VF_packet_IO_kernel_PF_dpdk_VF¶
Get the pci device id of the DUT, for example:
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
Create 2 VFs from 2 PFs:
echo 1 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:81\:00.1/sriov_numvfs
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
0000:81:02.0 'XL710/X710 Virtual Function' unused=
0000:81:0a.0 'XL710/X710 Virtual Function' unused=
Detach VFs from the host, bind them to pci-stub driver:
/sbin/modprobe pci-stub
using `lspci -nn|grep -i ethernet` got VF device id, for example "8086 154c",
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:81:02.0 > /sys/bus/pci/devices/0000:81:02.0/driver/unbind
echo 0000:81:02.0 > /sys/bus/pci/drivers/pci-stub/bind
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:81:0a.0 > /sys/bus/pci/devices/0000:81:0a.0/driver/unbind
echo 0000:81:0a.0 > /sys/bus/pci/drivers/pci-stub/bind
or use the following easier way:
virsh nodedev-detach pci_0000_81_02_0;
virsh nodedev-detach pci_0000_81_0a_0;
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=
0000:81:0a.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=
It can be seen that the driver of VFs 81:02.0 & 81:0a.0 is pci-stub.
Passthrough VFs 81:02.0 & 81:0a.0 to vm0, and start vm0:
/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \
-cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \
-device pci-assign,host=81:02.0,id=pt_0 \
-device pci-assign,host=81:0a.0,id=pt_1
Login to vm0 and get the VFs’ pci device ids in vm0, assume they are 00:06.0 & 00:07.0. Bind them to the igb_uio driver, then start testpmd and set it in mac forward mode:
./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -w 00:06.0 -w 00:07.0 \
-- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> start
Get the mac address of one VF and use it as the dest mac. Using scapy, send 2000 random packets from the tester; verify the packets can be received by one VF and forwarded to the other VF correctly.
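A scapy sketch of the 2000-random-packet send; the tester interface and VF MAC below are placeholders for the real values obtained in the previous step:
import random
from scapy.all import Ether, IP, UDP, Raw, sendp
TESTER_IFACE = "tester_tx_iface"   # hypothetical tester interface
VF_MAC = "00:11:22:33:44:55"       # placeholder: the MAC read from one VF in vm0
def rand_ip():
    return ".".join(str(random.randint(1, 254)) for _ in range(4))
# Randomized IP addresses and UDP ports, fixed destination MAC, so the VF under
# test receives every packet and mac-forwards it to its peer VF.
pkts = [Ether(dst=VF_MAC)/IP(src=rand_ip(), dst=rand_ip())
        /UDP(sport=random.randint(1024, 65535), dport=random.randint(1024, 65535))
        /Raw(b"x" * 40)
        for _ in range(2000)]
sendp(pkts, iface=TESTER_IFACE)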
VF PF Reset Tests¶
Prerequisites¶
Hardware:
- Fortville 4*10G NIC (driver: i40e)
- tester: ens3f0
- dut: ens5f0(pf0), ens5f1(pf1)
- ens3f0 connect with ens5f0 by cable
- the status of ens5f1 is linked
Added command:
testpmd> port reset (port_id|all) "Reset all ports or port_id"
Test Case 1: vf reset – create two vfs on one pf¶
Get the pci device id of the DUT, for example:
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens5f0 drv=i40e
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens5f1 drv=i40e
Create 2 VFs from 1 PF, and set the VF MAC addresses at PF0:
echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens5f0 drv=i40e
0000:81:02.0 'XL710/X710 Virtual Function' unused=
0000:81:02.1 'XL710/X710 Virtual Function' unused=
ip link set ens5f0 vf 0 mac 00:11:22:33:44:11
ip link set ens5f0 vf 1 mac 00:11:22:33:44:12
Bind the VFs to dpdk driver:
./tools/dpdk-devbind.py -b vfio-pci 81:02.0 81:02.1
Set the VLAN id of VF1 and VF2:
ip link set ens5f0 vf 0 vlan 1
ip link set ens5f0 vf 1 vlan 1
Run testpmd:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i \
--portmask=0x3 --tx-offloads=0x8fff --crc-strip
testpmd> set fwd mac
testpmd> start
testpmd> set allmulti all on
testpmd> set promisc all off
testpmd> show port info all
Promiscuous mode: disabled
Allmulticast mode: enabled
The promiscuous and allmulticast status now differ from the default values.
Get mac address of one VF and use it as dest mac, using scapy to send 1000 random packets from tester, verify the packets can be received by one VF and can be forward to another VF correctly:
scapy
>>> sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*40)], \
iface="ens3f0",count=1000)
Reset pf:
ifconfig ens5f0 promisc
or:
ifconfig ens5f0 -promisc
The VFs receive a PF reset message:
Event type: RESET interrupt on port 0
Event type: RESET interrupt on port 1
If the VFs are not reset, send the same 1000 packets with scapy from the tester; the VFs cannot receive any packets, including vlan=0 and vlan=1
Reset the vfs, run the command:
testpmd> stop
testpmd> port reset 0
testpmd> port reset 1
testpmd> start
Or just run the command “port reset all”. Send the same 1000 packets with scapy from the tester; verify the packets can be received by one VF and forwarded to the other VF correctly, then check the port info:
testpmd> show port info all
********************* Infos for port 0 *********************
MAC address: 00:11:22:33:44:11
Promiscuous mode: disabled
Allmulticast mode: enabled
********************* Infos for port 1 *********************
MAC address: 00:11:22:33:44:12
Promiscuous mode: disabled
Allmulticast mode: enabled
The info status is consistent with the status before the reset.
Test Case 2: vf reset – create two vfs on one pf, run testpmd separately¶
Execute step1-step3 of test case 1
Start testpmd on two vf ports:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 \
--socket-mem 1024,1024 -w 81:02.0 --file-prefix=test1 \
-- -i --crc-strip --eth-peer=0,00:11:22:33:44:12
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf0 -n 4 \
--socket-mem 1024,1024 -w 81:02.1 --file-prefix=test2 \
-- -i --crc-strip
Set fwd mode on vf0:
testpmd> set fwd mac
testpmd> start
Set rxonly mode on vf1:
testpmd> set fwd rxonly
testpmd> start
Send packets with scapy from tester:
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
vf0 can forward the packets to vf1.
Reset pf, don’t reset vf0 and vf1, send the packets, vf0 and vf1 cannot receive any packets.
Reset vf0 and vf1, send the packets, vf0 can forward the packet to vf1.
Test Case 3: vf reset – create one vf on each pf¶
Create vf0 from pf0, create vf1 from pf1:
echo 1 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:81\:00.1/sriov_numvfs
ip link set ens5f0 vf 0 mac 00:11:22:33:44:11
ip link set ens5f1 vf 0 mac 00:11:22:33:44:12
Bind the two vfs to vfio-pci:
./usertools/dpdk-devbind.py -b vfio-pci 81:02.0 81:06.0
Start one testpmd on two vf ports:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i \ --portmask=0x3 --tx-offloads=0x8fff --crc-strip
Start forwarding:
testpmd> set fwd mac
testpmd> start
Send packets with scapy from tester:
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
vfs can fwd the packets normally.
Reset pf0 and pf1, don’t reset vf0 and vf1, send the packets, vfs cannot receive any packets.
Reset vf0 and vf1, send the packets, vfs can fwd the packets normally.
Test Case 4: vlan rx restore – vf reset all ports¶
Execute the step1-step3 of test case 1, then start the testpmd:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i \
--portmask=0x3 --tx-offloads=0x8fff --crc-strip
testpmd> set fwd mac
Add vlan on both ports:
testpmd> rx_vlan add 1 0
testpmd> rx_vlan add 1 1
testpmd> start
Send packets with scapy from the tester:
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], iface="ens3f0",count=1000)
sendp([Ether(dst="00:11:22:33:44:12")/IP()/Raw('x'*1000)], iface="ens3f0",count=1000)
sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], iface="ens3f0",count=1000)
sendp([Ether(dst="00:11:22:33:44:12")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], iface="ens3f0",count=1000)
The VFs can receive the packets and forward them. Send packets with scapy from the tester:
sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=2)/IP()/Raw('x'*1000)], iface="ens3f0",count=1000)
vf0 cannot receive any packets.
Reset pf, don’t reset vf, send the packets in step2 from tester, the vfs cannot receive any packets.
Reset both vfs:
testpmd> stop
testpmd> port reset all
testpmd> start

Send the packets in step 2 from the tester; the vfs can receive the packets and forward them. Send packets with scapy from the tester:
sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=2)/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
vf0 cannot receive any packets.
Test Case 5: vlan rx restore – vf reset one port¶
Execute the step1-step3 of test case 1, then start the testpmd:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i \
  --portmask=0x3 --tx-offloads=0x8fff --crc-strip
testpmd> set fwd mac
Add vlan on both ports:
testpmd> rx_vlan add 1 0
testpmd> rx_vlan add 1 1
testpmd> start

Send packets with scapy from the tester:

sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)
sendp([Ether(dst="00:11:22:33:44:12")/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)
sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)
sendp([Ether(dst="00:11:22:33:44:12")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)

The vfs can receive the packets and forward them.
Reset the pf, then reset vf0, and send packets from the tester:

testpmd> stop
testpmd> port reset 0
testpmd> start
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)
sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)

vf0 can receive the packets, but vf1 can't transmit the packets. Send packets from the tester:

sendp([Ether(dst="00:11:22:33:44:12")/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)
sendp([Ether(dst="00:11:22:33:44:12")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)
vf1 cannot receive the packets.
Reset vf1:
testpmd> stop
testpmd> port reset 1
testpmd> start
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)
sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)
sendp([Ether(dst="00:11:22:33:44:12")/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)
sendp([Ether(dst="00:11:22:33:44:12")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)
vfs can receive and forward the packets.
Test Case 6: vlan rx restore – create one vf on each pf¶
Execute the step1-step3 of test case 3
Add vlan on both ports:
testpmd> rx_vlan add 1 0 testpmd> rx_vlan add 1 1
Set forward and start:
testpmd> set fwd mac testpmd> start
Send packets with scapy from tester:
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000) sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
vfs can forward the packets normally. send packets with scapy from tester:
sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=2)/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
vf0 cannot receive any packets. remove vlan 0 on vf1:
testpmd> rx_vlan rm 0 1 sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
vf0 can receive the packets, but vf1 can’t transmit the packets.
Reset pf, don’t reset vf, send packets from tester:
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000) sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
the vfs cannot receive any packets.
Reset both vfs, send packets from tester:
testpmd> stop
testpmd> port reset all
testpmd> start
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], iface="ens3f0", count=1000)

vf0 can receive the packets, but vf1 can't transmit the packets. Send packets from the tester:
sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
vfs can forward the packets normally.
Test Case 7: vlan tx restore¶
Execute the step1-step3 of test case 1
Run testpmd:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i \ --portmask=0x3 --tx-offloads=0x8fff --crc-strip
Add tx vlan offload on the VF1 port (note that the first parameter is the port id), then start forwarding:

testpmd> set fwd mac
testpmd> vlan set filter on 0
testpmd> set promisc all off
testpmd> vlan set strip off 0
testpmd> set nbport 2
testpmd> tx_vlan set 1 51
testpmd> start
Send packets with scapy from tester:
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*18)], \ iface="ens3f0",count=1)
Listening the port ens3f0:
tcpdump -i ens3f0 -n -e -x -v
Check the received packet; it should be tagged with vlan 51 (a scapy-based check is sketched below).

Reset the pf, then reset the two vfs, send the same packet with no vlan tag, and check the packets received by the tester; the packet should still be tagged with vlan 51.
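As a convenience, the vlan 51 check can also be scripted on the tester with scapy instead of reading raw tcpdump output. This is a minimal sketch, assuming the tester captures on ens3f0 as in the steps above:

from scapy.all import sniff, Dot1Q

# Capture a few frames on the tester interface and verify the inserted tag.
pkts = sniff(iface="ens3f0", timeout=10, count=5)
for p in pkts:
    if Dot1Q in p:
        assert p[Dot1Q].vlan == 51, "unexpected vlan id: %d" % p[Dot1Q].vlan
        print("got frame tagged with vlan", p[Dot1Q].vlan)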
Test Case 8: MAC address restore¶
Create vf0 from pf0, create vf1 from pf1:
echo 1 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:81\:00.1/sriov_numvfs
Bind the two vfs to vfio-pci:
./usertools/dpdk-devbind.py -b vfio-pci 81:02.0 81:06.0
Start testpmd on two vf ports:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 \ -- -i --portmask=0x3 --tx-offloads=0x8fff --crc-strip
Add MAC address to the vf0 ports:
testpmd> mac_addr add 0 00:11:22:33:44:11 testpmd> mac_addr add 0 00:11:22:33:44:12
Start forwarding:
testpmd> set fwd mac testpmd> start
Send packets with scapy from tester:
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000) sendp([Ether(dst="00:11:22:33:44:12")/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
vfs can forward both of the two type packets.
- Reset pf0 and pf1, don’t reset vf0 and vf1, send the two packets, vf0 and vf1 cannot receive any packets.
- Reset vf0 and vf1, send the two packets, vfs can forward both of the two type packets.
Test Case 9: vf reset (two vfs passed through to one VM)¶
Create 2 VFs from 1 PF, and set the VF MAC address at PF0:

echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens5f0 drv=i40e
0000:81:02.0 'XL710/X710 Virtual Function' unused=
0000:81:02.1 'XL710/X710 Virtual Function' unused=
Detach VFs from the host, bind them to pci-stub driver:
modprobe pci-stub
./tools/dpdk_nic_bind.py --bind=pci-stub 81:02.0 81:02.1
or using the following way:
virsh nodedev-detach pci_0000_81_02_0;
virsh nodedev-detach pci_0000_81_02_1;
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens5f0 drv=i40e
0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=
0000:81:02.1 'XL710/X710 Virtual Function' if= drv=pci-stub unused=

It can be seen that the drv of VFs 81:02.0 & 81:02.1 is pci-stub.
Passthrough VFs 81:02.0 & 81:02.1 to vm0, and start vm0:
/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \ -cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \ -device pci-assign,host=81:02.0,id=pt_0 \ -device pci-assign,host=81:02.1,id=pt_1
Login vm0 and get the VFs' pci device ids in vm0; assume they are 00:05.0 & 00:05.1. Bind them to the igb_uio driver, and then start testpmd:

./tools/dpdk_nic_bind.py --bind=igb_uio 00:05.0 00:05.1
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 \
  -w 00:05.0 -w 00:05.1 -- -i --portmask=0x3 --tx-offloads=0x8fff
Add MAC address to the vf0 ports, set it in mac forward mode:
testpmd> mac_addr add 0 00:11:22:33:44:11
testpmd> mac_addr add 0 00:11:22:33:44:12
testpmd> set fwd mac
testpmd> start
Send packets with scapy from tester:
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000) sendp([Ether(dst="00:11:22:33:44:12")/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
The vfs can forward both types of packets.

Reset pf0 and pf1, don’t reset vf0 and vf1, send the two packets; vf0 and vf1 cannot receive any packets.

Reset vf0 and vf1, send the two packets; the vfs can forward both types of packets.
Test Case 10: vf reset (two vfs passed through to two VMs)¶
Create 2 VFs from 1 PF, and set the VF MAC address at PF:

echo 2 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens5f0 drv=i40e
0000:81:02.0 'XL710/X710 Virtual Function' unused=
0000:81:02.1 'XL710/X710 Virtual Function' unused=
Detach VFs from the host, bind them to pci-stub driver:
modprobe pci-stub
Use lspci -nn|grep -i ethernet to get the VF device id, for example “8086 154c”:

echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:82:02.0" > /sys/bus/pci/drivers/i40evf/unbind
echo "0000:82:02.0" > /sys/bus/pci/drivers/pci-stub/bind
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:82:02.1" > /sys/bus/pci/drivers/i40evf/unbind
echo "0000:82:02.1" > /sys/bus/pci/drivers/pci-stub/bind
Pass through VF0 81:02.0 to vm0, VF1 81:02.1 to vm1:
taskset -c 20-21 qemu-system-x86_64 \
  -enable-kvm -m 2048 -smp cores=2,sockets=1 -cpu host -name dpdk1-vm0 \
  -device pci-assign,host=0000:81:02.0 \
  -drive file=/home/img/vm1/f22.img \
  -netdev tap,id=ipvm0,ifname=tap1,script=/etc/qemu-ifup \
  -device rtl8139,netdev=ipvm0,id=net1,mac=00:11:22:33:44:11 \
  -vnc :1 -daemonize

taskset -c 18-19 qemu-system-x86_64 \
  -enable-kvm -m 2048 -smp cores=2,sockets=1 -cpu host -name dpdk1-vm1 \
  -device pci-assign,host=0000:81:02.1 \
  -drive file=/home/img/vm1/f22.img \
  -netdev tap,id=ipvm1,ifname=tap2,script=/etc/qemu-ifup \
  -device rtl8139,netdev=ipvm1,id=net2,mac=00:11:22:33:44:12 \
  -vnc :2 -daemonize
Login vm0 and get the VF0 pci device id in vm0; assume it’s 00:05.0. Bind the port to igb_uio, then start testpmd on the vf0 port:

./tools/dpdk_nic_bind.py --bind=igb_uio 00:05.0
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf -n 4 \
  -- -i --crc-strip --eth-peer=0,vf1port_macaddr

Login vm1 and get the VF1 pci device id in vm1; assume it’s 00:06.0. Bind the port to igb_uio, then start testpmd on the vf1 port:

./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0
./x86_64-native-linuxapp-gcc/app/testpmd -c 0xf0 -n 4 \
  -- -i --crc-strip
Add vlan on vf0 in vm0, and set fwd mode:
testpmd> rx_vlan add 1 0
testpmd> set fwd mac
testpmd> start

Add vlan on vf1 in vm1 and set rxonly mode:

testpmd> rx_vlan add 1 0
testpmd> set fwd rxonly
testpmd> start
Send packets with scapy from tester:
sendp([Ether(dst="00:11:22:33:44:11")/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000) sendp([Ether(dst="00:11:22:33:44:11")/Dot1Q(vlan=1)/IP()/Raw('x'*1000)], \ iface="ens3f0",count=1000)
vf0 can forward the packets to vf1.
- Reset pf, don’t reset vf0 and vf1, send the two packets, vf0 and vf1 cannot receive any packets.
- Reset vf0 and vf1, send the two packets, vf0 can forward both of the two type packets to VF1.
VF Port Start Stop Tests¶
Prerequisites¶
Create two VF interfaces from the kernel PF interface and then attach them to the VM. Suppose the PF is 0000:04:00.0. Generate the 2 VFs using the commands below and bind them to the pci-stub module.
Get the pci device id of DUT:
./dpdk_nic_bind.py --st
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens261f0 drv=ixgbe unused=igb_uio
Create 2 VFs from 2 PFs:
echo 2 > /sys/bus/pci/devices/0000\:04\:00.0/sriov_numvfs
VFs 04:10.0 & 04:10.1 have been created:
./dpdk_nic_bind.py --st
0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens261f0 drv=ixgbe unused=
0000:04:10.0 '82599 Ethernet Controller Virtual Function' if=enp4s16 drv=ixgbevf unused=
0000:04:10.1 '82599 Ethernet Controller Virtual Function' if=enp4s16f1 drv=ixgbevf unused=
Detach VFs from the host, bind them to pci-stub driver:
/sbin/modprobe pci-stub echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id echo 0000:04:10.0 > /sys/bus/pci/devices/0000\:04\:10.0/driver/unbind echo 0000:04:10.0 > /sys/bus/pci/drivers/pci-stub/bind echo 0000:04:10.1 > /sys/bus/pci/devices/0000\:04\:10.1/driver/unbind echo 0000:04:10.1 > /sys/bus/pci/drivers/pci-stub/bind
or use the following easier way:
./dpdk_nic_bind.py -b pci-stub 04:10.0 04:10.1
it can be seen that VFs 04:10.0 & 04:10.1 ‘s drv is pci-stub:
./dpdk_nic_bind.py --st 0000:04:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection' if=ens261f0 drv=ixgbe unused=vfio-pci 0000:04:10.0 '82599 Ethernet Controller Virtual Function' if= drv=pci-stub unused=ixgbevf,vfio-pci 0000:04:10.1 '82599 Ethernet Controller Virtual Function' if= drv=pci-stub unused=ixgbevf,vfio-pci
Do not forget to bring up the PFs:
ifconfig ens261f0 up
Passthrough VFs 04:10.0 & 04:10.1 to vm0, and start vm0, you can refer to below command:
taskset -c 6-12 qemu-system-x86_64 \ -enable-kvm -m 8192 -smp 6 -cpu host -name dpdk15-vm1 \ -drive file=/home/image/fedora23.img \ -netdev tap,id=hostnet1,ifname=tap1,script=/etc/qemu-ifup,vhost=on \ -device rtl8139,netdev=hostnet1,id=net1,mac=52:54:01:6b:10:61,bus=pci.0,addr=0xa \ -device pci-assign,bus=pci.0,addr=0x6,host=04:10.0 \ -device pci-assign,bus=pci.0,addr=0x7,host=04:10.1 \ -vnc :11 -daemonize
The /etc/qemu-ifup can be the script below; you need to create it first:

#!/bin/sh
set -x
switch=br0
if [ -n "$1" ];then
    /usr/sbin/tunctl -u `whoami` -t $1
    /sbin/ip link set $1 up
    sleep 0.5s
    /usr/sbin/brctl addif $switch $1
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi
Set up bridge br0 before creating /etc/qemu-ifup, for example:

cd /etc/sysconfig/network-scripts
vim ifcfg-enp1s0f0
HWADDR=00:1e:67:fb:0f:d4
TYPE=Ethernet
NAME=enp1s0f0
ONBOOT=yes
DEVICE=enp1s0f0
NM_CONTROLLED=no
BRIDGE=br0
vim ifcfg-br0
TYPE=Bridge
DEVICE=br0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=dhcp
HOSTNAME="dpdk-test58"
Login vm0 and get the VFs' pci device ids in vm0; assume they are 00:06.0 & 00:07.0. Bind them to the igb_uio driver, then start testpmd and set it in mac forward mode:

./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x0f -n 4 -- -i
testpmd-> set fwd mac
testpmd-> start
Test Case: port start/stop¶
Start sending packets from the tester, then start/stop the ports several times and verify that it runs correctly. A scapy sender covering the packet list is sketched after the packet examples below.

The commands used to start/stop ports are shown below:
Start port:
testpmd-> port start all
Stop port:
testpmd-> port stop all
Send IP+UDP packet:
Ether(dst="0E:CB:F8:FF:4E:02", src="0E:CB:F8:FF:4E:02")/IP(src="127.0.0.2")/UDP()/("X"*46)
Send IP+TCP packet:
Ether(dst="0E:CB:F8:FF:4E:02", src="0E:CB:F8:FF:4E:02")/IP(src="127.0.0.2")/TCP()/("X"*46)
Send IP+SCTP packet:
Ether(dst="0E:CB:F8:FF:4E:02", src="0E:CB:F8:FF:4E:02")/IP(src="127.0.0.2")/SCTP()/("X"*46)
Send IPv6+UDP packet:
Ether(dst="0E:CB:F8:FF:4E:02", src="0E:CB:F8:FF:4E:02")/IP(src="::2")/UDP()/("X"*46)
Send IPv6+TCP packet:
Ether(dst="0E:CB:F8:FF:4E:02", src="0E:CB:F8:FF:4E:02")/IP(src="::2")/TCP()/("X"*46)
VF RSS - Configuring Hash Function Tests¶
This document provides test plan for testing the function of Fortville: Support configuring hash functions.
Prerequisites¶
- 2x Intel® 82599 (Niantic) NICs (2x 10GbE full duplex optical ports per NIC)
- 1x Fortville_eagle NIC (4x 10G)
- 1x Fortville_spirit NIC (2x 40G)
- 2x Fortville_spirit_single NIC (1x 40G)
One port of the 82599 connects to the Fortville_eagle; one port of the Fortville_spirit connects to the Fortville_spirit_single. These three kinds of NICs are the target NICs. The connected NICs can send packets to these three NICs using scapy.
Network Traffic¶
The RSS feature is designed to improve networking performance by load balancing the packets received from a NIC port to multiple NIC RX queues, with each queue handled by a different logical core.
- The received packet is parsed into the header fields used by the hash operation (such as IP addresses, TCP port, etc.).
- A hash calculation is performed. The Fortville supports three hash functions: Toeplitz, simple XOR, and their symmetric variants.
- The hash result is used as an index into a 128/512-entry ‘redirection table’.
- Niantic VFs only support the default (simple) hash algorithm. Fortville NICs support all hash algorithms only when the DPDK driver is used on the host; when the kernel driver is used on the host, Fortville NICs only support the default (simple) hash algorithm.
The RSS RETA update feature is designed to make RSS more flexible by allowing users to define the correspondence between the seven LSBs of the hash result and the queue id (RSS output index) by themselves.
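To make the lookup described above concrete, the mapping from hash value to queue can be sketched in a few lines of Python. This is only an illustration of the indexing scheme (the low bits of the hash select a RETA entry, with table sizes that are powers of two), not driver code:

# Sketch of RSS queue selection through the redirection table (RETA).
def rss_queue(hash_value, reta):
    # The low bits of the 32-bit hash index into the RETA:
    # 7 LSBs for a 128-entry table, 9 LSBs for a 512-entry table.
    index = hash_value & (len(reta) - 1)
    return reta[index]

# Example: a 128-entry table spreading flows over 4 RX queues.
reta = [i % 4 for i in range(128)]
print(rss_queue(0x1234abcd, reta))   # prints the queue id chosen for this hash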
Test Case: test_rss_hash¶
The following RX Ports/Queues configurations have to be benchmarked:
- 1 RX port / 4 RX queues (1P/4Q)
Testpmd configuration - 4 RX/TX queues per port¶
testpmd -c 1f -n 3 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff
Testpmd Configuration Options¶
By default, a single logical core runs the test.
The CPU IDs and the number of logical cores running the test in parallel can
be manually set with the set corelist X,Y
and the set nbcore N
interactive commands of the testpmd
application.
Get the pci device id of the DUT, for example:

./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
Create 2 VFs from 2 PFs:
echo 1 > /sys/bus/pci/devices/0000\:81\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:81\:00.1/sriov_numvfs
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
0000:81:02.0 'XL710/X710 Virtual Function' unused=
0000:81:0a.0 'XL710/X710 Virtual Function' unused=
Detach VFs from the host, bind them to pci-stub driver:
/sbin/modprobe pci-stub
Use lspci -nn|grep -i ethernet to get the VF device id, for example “8086 154c”:

echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:81:02.0 > /sys/bus/pci/devices/0000:81:02.0/driver/unbind
echo 0000:81:02.0 > /sys/bus/pci/drivers/pci-stub/bind
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:81:0a.0 > /sys/bus/pci/devices/0000:81:0a.0/driver/unbind
echo 0000:81:0a.0 > /sys/bus/pci/drivers/pci-stub/bind
or use the following easier way:

virsh nodedev-detach pci_0000_81_02_0;
virsh nodedev-detach pci_0000_81_0a_0;
./dpdk_nic_bind.py --st
0000:81:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:81:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
0000:81:02.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=
0000:81:0a.0 'XL710/X710 Virtual Function' if= drv=pci-stub unused=

It can be seen that the drv of VFs 81:02.0 & 81:0a.0 is pci-stub.
Passthrough VFs 81:02.0 & 81:0a.0 to vm0, and start vm0:
/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \ -cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \ -device pci-assign,host=81:02.0,id=pt_0 \ -device pci-assign,host=81:0a.0,id=pt_1
Login vm0, got VFs pci device id in vm0, assume they are 00:06.0 & 00:07.0, bind them to igb_uio driver, and then start testpmd, set it in mac forward mode:
./tools/dpdk_nic_bind.py --bind=igb_uio 00:06.0 00:07.0
Reta Configuration. 128 reta entries configuration:
testpmd command: port config 0 rss reta (hash_index,queue_id)
Pmd fwd only receive the packets:
testpmd command: set fwd rxonly
RSS received packet type configuration (two received packet types configured):
testpmd command: port config 0 rss ip/udp/tcp
Verbose configuration:
testpmd command: set verbose 8
Start packet receive:
testpmd command: start
Send packets and check that the rx port receives them on different queues. For each hash type, send the corresponding packets; for example, when the hash type is ip, send packets with differing src and dst IP addresses:

sendp([Ether(dst="90:e2:ba:36:99:3c")/IP(src="192.168.0.4", dst="192.168.0.5")], iface="eth3")
sendp([Ether(dst="90:e2:ba:36:99:3c")/IP(src="192.168.0.5", dst="192.168.0.4")], iface="eth3")
Test Case: test_reta¶
This case tests the hash reta table; the test steps are the same as test_rss_hash except for configuring the hash reta table.

Before sending packets, configure the hash reta with 512 reta entries (Niantic NICs have 128):
testpmd command: port config 0 rss reta (hash_index,queue_id)
VF to VF Bridge Tests¶
This test suite aims to validate the bridge function on the physical function for virtual function to virtual function communication. The cases of the suite are based on the vm to vm test scenario; each vm needs one vf, and both vfs are generated from the same pf port.
Prerequisites:¶
On host:
- Hugepages: at least 10 G hugepages, 6G(for vm on which run pktgen as stream source end) + 2G(for vm on which run testpmd as receive end) + 2G(for host used)
- Guest: two img with os for kvm qemu
- NIC: one pf port
- pktgen-dpdk: copy $DTS/dep/tgen.tgz to guest from which send the stream
On Guest:
- Stream Source end: scapy pcpay and essential tarballs for compile pktgen-dpdk tools
Set up basic virtual scenario:¶
Step 1: generate two vfs on the target pf port (i.e. 0000:85:00.0):
echo 2 > /sys/bus/pci/devices/0000\:85\:00.0/sriov_numvfs
Step 2: bind the two vfs to pci-stub:
echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:85:10.0 > /sys/bus/pci/devices/0000:85:10.0/driver/unbind
echo 0000:85:10.0 > /sys/bus/pci/drivers/pci-stub/bind
echo 0000:85:10.2 > /sys/bus/pci/devices/0000:85:10.2/driver/unbind
echo 0000:85:10.2 > /sys/bus/pci/drivers/pci-stub/bind
Step 3: passthrough vf 0 to vm0 and start vm0:
taskset -c 20,21,22,23 /usr/local/qemu-2.4.0/x86_64-softmmu/qemu-system-x86_64 \
-name vm0 -enable-kvm -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
-device virtio-serial -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
-daemonize -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
-net nic,vlan=0,macaddr=00:00:00:e2:4f:fb,addr=1f \
-net user,vlan=0,hostfwd=tcp:10.239.128.125:6064-:22 \
-device pci-assign,host=85:10.0,id=pt_0 -cpu host -smp 4 -m 6144 \
-object memory-backend-file,id=mem,size=6144M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/img/vm0.img -vnc :4
Step 4: passthrough vf 1 to vm1 and start vm1:
taskset -c 30,31,32,33 /usr/local/qemu-2.4.0/x86_64-softmmu/qemu-system-x86_64 \
-name vm1 -enable-kvm -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 \
-device virtio-serial -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.0 \
-daemonize -monitor unix:/tmp/vm1_monitor.sock,server,nowait \
-net nic,vlan=0,macaddr=00:00:00:7b:d5:cb,addr=1f \
-net user,vlan=0,hostfwd=tcp:10.239.128.125:6126-:22 \
-device pci-assign,host=85:10.2,id=pt_0 -cpu host -smp 4 -m 6144 \
-object memory-backend-file,id=mem,size=6144M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc -drive file=/home/img/vm1.img -vnc :5
Test Case1: test_2vf_d2d_pktgen_stream¶
Both vfs in the two vms use the dpdk driver. Send a stream from vf1 in vm1 by dpdk pktgen to the vf in vm0, and verify the vf on vm0 can receive the stream.
Step 1: run testpmd on vm0:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -n 1 -- -i --tx-offloads=0x8fff
Step 2: set rxonly and start on vm0:
set fwd rxonly
start
Step 3: copy pktgen-dpdk tarball to vm1:
scp tgen.tgz to vm1
tar xvf tgen.tgz
Step 4: generate pcap file on vm1:
Context: [Ether(dst="52:54:12:45:67:10", src="52:54:12:45:67:11")/IP()/Raw(load='X'\*46)]
Step 5: send stream by pkt-gen on vm1:
./app/app/x86_64-native-linuxapp-gcc/app/pktgen -c 0xf -n 2 --proc-type auto -- -P -T -m '1.0' -s P:flow.pcap
Step 6: verify vf 0 receive status on vm0: Rx-packets equal to send packets count, 100:
show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 100 RX-missed: 0 RX-bytes: 6000
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
Test Case2: test_2vf_d2k_pktgen_stream¶
Step 1: bind vf to kernel driver on vm0
Step 2: start up vf interface and using tcpdump to capture received packets
Step 3: copy pktgen-dpdk tarball to vm1:
scp tgen.tgz to vm1
tar xvf tgen.tgz
Step 4: generate pcap file on vm1:
Context: [Ether(dst="52:54:12:45:67:10", src="52:54:12:45:67:11")/IP()/Raw(load='X'\*46)]
Step 5: send stream by pkt-gen on vm1:
./app/app/x86_64-native-linuxapp-gcc/app/pktgen -c 0xf -n 2 --proc-type auto -- -P -T -m '1.0' -s P:flow.pcap
Step 6: verify vf 0 receive status on vm0: Rx-packets equal to send packets count, 100
Test Case3: test_2vf_k2d_scapy_stream¶
Step 1: run testpmd on vm0:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7 -n 1 -- -i --tx-offloads=0x8fff
Step 2: set rxonly and start on vm0:
set fwd rxonly
start
Step 3: bind vf to kernel driver on vm0
Step 4: using scapy to send packets
Step 5:verify vf 0 receive status on vm0: Rx-packets equal to send packets count, 100:
show port stats 0
######################## NIC statistics for port 0 ########################
RX-packets: 100 RX-missed: 0 RX-bytes: 6000
RX-errors: 0
RX-nombuf: 0
TX-packets: 0 TX-errors: 0 TX-bytes: 0
############################################################################
VF VLAN Tests¶
The support of VLAN offload features by VF device consists in:
- the filtering of received VLAN packets
- VLAN header stripping by hardware in received [VLAN] packets
- VLAN header insertion by hardware in transmitted packets
Prerequisites¶
Create VF device from PF devices:
./dpdk_nic_bind.py --st
0000:87:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:87:00.1 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f1 drv=i40e unused=
echo 1 > /sys/bus/pci/devices/0000\:87\:00.0/sriov_numvfs
echo 1 > /sys/bus/pci/devices/0000\:87\:00.1/sriov_numvfs
./dpdk_nic_bind.py --st
0000:87:00.0 'Ethernet Controller X710 for 10GbE SFP+' if=ens259f0 drv=i40e unused=
0000:87:02.0 'XL710/X710 Virtual Function' unused=
0000:87:0a.0 'XL710/X710 Virtual Function' unused=
Detach VFs from the host, bind them to pci-stub driver:
/sbin/modprobe pci-stub
Use `lspci -nn|grep -i ethernet` to get the VF device id, for example "8086 154c":
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:87:02.0 > /sys/bus/pci/devices/0000:87:02.0/driver/unbind
echo 0000:87:02.0 > /sys/bus/pci/drivers/pci-stub/bind
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo 0000:87:0a.0 > /sys/bus/pci/devices/0000:87:0a.0/driver/unbind
echo 0000:87:0a.0 > /sys/bus/pci/drivers/pci-stub/bind
Passthrough VFs 87:02.0 & 87:02.1 to vm0 and start vm0:
/usr/bin/qemu-system-x86_64 -name vm0 -enable-kvm \ -cpu host -smp 4 -m 2048 -drive file=/home/image/sriov-fc20-1.img -vnc :1 \ -device pci-assign,host=87:02.0,id=pt_0 \ -device pci-assign,host=87:0a.0,id=pt_1
Login vm0 and then bind the VF devices to the igb_uio driver:
./tools/dpdk_nic_bind.py --bind=igb_uio 00:04.0 00:05.0
Start testpmd, set it in rxonly mode and enable verbose output:
testpmd -c 0x0f -n 4 -w 00:04.0 -w 00:05.0 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
Test Case 1: Add port based vlan on VF¶
The Linux network configuration tool only sets the pvid on VF devices.
Add pvid on VF0 from PF device:
ip link set $PF_INTF vf 0 vlan 2
Send packet with same vlan id and check VF can receive
Send packet without vlan and check VF can’t receive
Send packet with a wrong vlan id and check the VF can’t receive it
Check pf device show correct pvid setting:
ip link show ens259f0
...
vf 0 MAC 00:00:00:00:00:00, vlan 1, spoof checking on, link-state auto
Test Case 2: Remove port based vlan on VF¶
Remove added vlan from PF device:
ip link set $PF_INTF vf 0 vlan 0
Restart testpmd and send packet without vlan and check VF can receive
Set packet with vlan id 0 and check VF can receive
Set packet with random id 1-4095 and check VF can’t receive
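The random vlan id step can be driven from the tester with a short scapy loop; a sketch where the VF MAC and the tester interface are placeholders to be replaced by the actual setup values:

import random
from scapy.all import Ether, Dot1Q, IP, Raw, sendp

vf_mac = "00:11:22:33:44:11"   # placeholder: MAC of the VF under test
iface = "ens3f0"               # placeholder: tester port connected to the VF

# Pick a handful of random vlan ids in 1-4095; none should be received by the VF.
for vlan in random.sample(range(1, 4096), 5):
    sendp(Ether(dst=vf_mac)/Dot1Q(vlan=vlan)/IP()/Raw('x'*64), iface=iface)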
Test Case 3: VF port based vlan tx¶
Add pvid on VF0 from PF device:
ip link set $PF_INTF vf 0 vlan 2
Start testpmd with mac forward mode:
testpmd> set fwd mac testpmd> start
Send packet from tester port1 and check packet received by tester port0:
Check port1 received packet with configured vlan 2
Test Case 4: VF tagged vlan tx¶
Start testpmd with full-featured tx code path and with mac forward mode:
testpmd -c f -n 3 -- -i --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> start
Add tx vlan offload on VF0 (note that the first parameter is the port id):

testpmd> tx_vlan set 0 1
Send packet from tester port1 and check packet received by tester port0:
Check that the received packet is configured with vlan 1
Rerun steps 2-3 with a random vlan and the max vlan 4095
Test Case 5: VF tagged vlan rx¶
Make sure port based vlan disabled on VF0 and VF1
Start testpmd with rxonly mode:
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
Send packet without vlan and check packet received
Send packet with vlan 0 and check packet received
Add vlan on VF0 from VF driver:
testpmd> rx_vlan add 1 0
Send packet with vlan0/1 and check packet received
Rerun with step5-6 with random vlan and max vlan 4095
Remove vlan on VF0:
rx_vlan rm 1 0
Send packet with vlan 0 and check packet received
Send packet without vlan and check packet received
Send packet with vlan 1 and check the packet can’t be received
Test Case 6: VF Vlan strip test¶
Start testpmd with mac forward mode:
testpmd> set fwd mac
testpmd> set verbose 1
testpmd> start
Add tagged vlan 1 on VF0:
testpmd> rx_vlan add 1 0
Disable VF0 vlan strip and sniff packet on tester port1:
testpmd> vlan set strip off 0
Send packet from tester port0 with vlan 1 and check the sniffed packet has the vlan
Enable vlan strip on VF0 and sniff packet on tester port1:
testpmd> vlan set strip on 0
Send packet from tester port0 with vlan 1 and check sniffed packet without vlan
Send packet from tester port0 with vlan 0 and check sniffed packet without vlan
Rerun with step 2-8 with random vlan and max vlan 4095
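Whether the sniffed packet still carries the tag can also be checked programmatically; a minimal sketch, where the tester port1 interface name is a placeholder:

from scapy.all import sniff, Dot1Q

# Sniff one forwarded frame on tester port1 and report whether the tag was stripped.
pkts = sniff(iface="ens3f1", timeout=10, count=1)   # placeholder interface name
if pkts:
    print("vlan header stripped:", Dot1Q not in pkts[0])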
Vhost PMD Xstats Tests¶
This test plan will cover the basic vhost pmd xstats case and will be worked as a regression test plan. In the test plan, we will use vhost as a pmd port in testpmd.
Test Case1: xstats based on packet size¶
Flow:
TG-->NIC-->Vhost TX-->Virtio RX-->Virtio TX-->Vhost RX-->NIC-->TG
Bind one physical port to igb_uio, then launch the testpmd
Launch VM1 with using hugepage, 2048M memory, 2 cores, 1 sockets, 1 virtio-net-pci:
taskset -c 6-7 qemu-system-x86_64 -name us-vhost-vm1 \
  -cpu host -enable-kvm -m 2048 \
  -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \
  -numa node,memdev=mem -mem-prealloc -smp cores=2,sockets=1 \
  -drive file=/home/osimg/ubuntu16.img \
  -chardev socket,id=char0,path=./vhost-net \
  -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=1 \
  -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,mq=on \
  -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup \
  -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \
  -localtime -vnc :10 -daemonize
On VM1, run testpmd
On host, testpmd, set ports to the mac forward mode:
testpmd>set fwd mac testpmd>start tx_first
On VM, testpmd, set port to the mac forward mode:
testpmd>set fwd mac testpmd>start
On the host run “show port xstats all” at least twice to check the packet numbers

Let the TG generate packets of different sizes; send 10000 packets for each packet size (64, 128, 255, 512, 1024, 1523) and check that the statistics numbers are correct (a scapy-based generator is sketched below)

On the host run “clear port xstats all”, then all the statistics data should be 0
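If the TG is scapy-based, the different packet sizes can be generated as follows; a sketch where the destination MAC and the TG interface are placeholders, and the size is counted as the wire frame length including the 4-byte CRC added by the NIC:

from scapy.all import Ether, IP, UDP, Raw, sendp

nic_mac = "00:11:22:33:44:55"   # placeholder: MAC of the NIC port facing the TG
iface = "p5p1"                  # placeholder: TG interface

for size in (64, 128, 255, 512, 1024, 1523):
    payload = size - 4 - 14 - 20 - 8        # CRC + Ether + IP + UDP headers
    pkt = Ether(dst=nic_mac)/IP()/UDP()/Raw('x' * payload)
    sendp(pkt, iface=iface, count=10000)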
Test Case2: xstats based on packet types¶
Similar as Test Case1, all steps are similar except step 6, 7:
- On the host run “show port xstats all” at least twice to check the packet types.
- Let the TG generate different types of packets: broadcast, multicast, unicast; check that the statistics numbers are correct (a scapy-based generator is sketched below).
- On the host run “clear port xstats all”, then all the statistics data should be 0.
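A scapy-based TG could generate the three destination types like this; a sketch where the unicast MAC and the TG interface are placeholders:

from scapy.all import Ether, IP, UDP, sendp

iface = "p5p1"                                # placeholder: TG interface
frames = {
    "ucast": Ether(dst="52:54:00:00:00:01"),  # placeholder unicast MAC
    "mcast": Ether(dst="01:00:5e:00:00:01"),
    "bcast": Ether(dst="ff:ff:ff:ff:ff:ff"),
}
for name, eth in frames.items():
    sendp(eth/IP()/UDP(), iface=iface, count=10000)
    print("sent 10000", name, "packets")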
Test Case3: stability case with multiple queues¶
No need to bind any physical port to igb_uio, then launch the testpmd
Launch VM1, set queues=2, vectors=2xqueues+2, mq=on, with using hugepage, 2048M memory, 2 cores, 1 sockets, 1 virtio-net-pci:
taskset -c 6-7 qemu-system-x86_64 -name us-vhost-vm1 -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,\ share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/osimg/ubuntu16.img \ -chardev socket,id=char0,path=./vhost-net \ -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=1 \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=off,mq=on \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139, netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -localtime -vnc :10 -daemonize
On VM1, run testpmd
On host, testpmd, set ports to the mac forward mode:
testpmd>set fwd io retry testpmd>start tx_first 8
On VM, testpmd, set port to the mac forward mode:
testpmd>start
Send packets for 30 minutes, check the Xstats still can work correctly:
testpmd>show port xstats all
Vhost TSO Tests¶
The feature enabled the DPDK Vhost TX offload(checksum and TSO), so that it will let the NIC to do the TX offload, and it can improve performance. The feature added the negotiation between DPDK user space vhost and virtio-net, so we will verify the DPDK Vhost user + virtio-net for the TSO/cksum in the TCP/IP stack enabled environment. DPDK vhost + virtio-pmd will not be covered by this plan since virtio-pmd doesn’t have TCP/IP stack and virtio TSO is not enabled, so it will not be tested.
In the test plan, we will use vhost switch sample to test. When testing vm2vm case, we will only test vm2vm=1(software switch), not test vm2vm=2(hardware switch).
Prerequisites¶
Install iperf on both host and guests.
Test Case1: DPDK vhost user + virtio-net one VM fwd tso¶
HW preparation: Connect 2 ports directly. In our case, connect 81:00.0(port1) and 81:00.1(port2) two ports directly. Port1 is bound to igb_uio for vhost-sample to use, while port2 is in kernel driver.
SW preparation: Change one line of the vhost sample and rebuild:
#In function virtio_tx_route(xxx)
m->vlan_tci = vlan_tag;
#changed to
m->vlan_tci = 1000;
Launch the Vhost sample with the commands below. socket-mem is set for the vhost sample to use; ensure that the socket where the PCI port is located has the memory. In our case, the PCI BDF is 81:00.0, so we need to assign memory for socket1. For the TSO/CSUM test, we need to set “--mergeable 1 --tso 1 --csum 1”:
taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0 --tso 1 --csum 1
Launch VM1:
taskset -c 21-22 \ qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img \ -chardev socket,id=char0,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ecn=on \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic
On the host, configure port2; then you can see there is an interface called ens260f1.1000:

ifconfig ens260f1
vconfig add ens260f1 1000
ifconfig ens260f1.1000 1.1.1.8
On the VM1, set the virtio IP and run iperf:
ifconfig ethX 1.1.1.2
ping 1.1.1.8   # let the virtio and port2 ping each other successfully; the arp table will then be set up automatically.

On the host, run iperf -s -i 1; in the guest, run iperf -c 1.1.1.8 -i 1 -t 60, and check if there are 64K (size: 65160) packets. If there are 64K packets, then TSO is enabled, or else TSO is disabled.
On the VM1, run tcpdump -i ethX -n -e -vv to check if the cksum is correct. You should not see incorrect cksum output.
Test Case2: DPDK vhost user + virtio-net VM2VM=1 fwd tso¶
Launch the Vhost sample with the commands below. socket-mem is set for the vhost sample to use; ensure that the socket where the PCI port is located has the memory. In our case, the PCI BDF is 81:00.0, so we need to assign memory for socket1. For the TSO/CSUM test, we need to set “--mergeable 1 --tso 1 --csum 1 --vm2vm 1”:
taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 1 --tso 1 --csum 1
Launch VM1 and VM2.
taskset -c 21-22 \ qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/img/dpdk-vm1.img \ -chardev socket,id=char0,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,csum=on,gso=on,guest_csum=on,guest_tso4=on,guest_tso6=on,guest_ecn=on \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:01 -nographic taskset -c 23-24 \ qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 1024 -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \ -chardev socket,id=char1,path=/home/qxu10/vhost-tso-test/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce \ -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2 \ -netdev tap,id=ipvm1,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:10:00:00:11:02 -nographic
On VM1, set the virtio IP and run iperf:
ifconfig ethX 1.1.1.2
arp -s 1.1.1.8 52:54:00:00:00:02
arp   # check that the arp table is complete and correct.
On VM2, set the virtio IP and run iperf:
ifconfig ethX 1.1.1.8
arp -s 1.1.1.2 52:54:00:00:00:01
arp   # check that the arp table is complete and correct.
Ensure virtio1 can ping virtio2. Then in VM1, run : iperf -s -i 1 ; In VM2, run iperf -c 1.1.1.2 -i 1 -t 60, check if there is 64K (size: 65160) packet. If there is 64K packet, then TSO is enabled, or else TSO is disabled.
On the VM1, run tcpdump -i ethX -n -e -vv.
Vhost User Live Migration Tests¶
This feature is to make sure vhost user live migration works based on testpmd.
Prerequisites¶
HW setup
- Connect three ports to one switch; these three ports are from the host, the backup host and the tester. Ensure the tester can send packets out and the host/backup server ports can receive these packets.
- It is better to have 2 similar machines with the same OS.
NFS configuration
Make sure the host nfsd module is updated to the v4 version (v2 does not support files > 4G)
Start nfs service and export nfs to backup host IP:
host# service rpcbind start
host# service nfs-server start
host# service nfs-mountd start
host# systemctl stop firewalld.service
host# vim /etc/exports
/home/vm-image backup-host-ip(rw,sync,no_root_squash)
Mount host nfs folder on backup host:
backup# mount -t nfs -o nolock,vers=4 host-ip:/home/vm-image /mnt/nfs
On host server side:
Create enough hugepages for vhost-switch and qemu backend memory:
host# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages host# mount -t hugetlbfs hugetlbfs /mnt/huge
Bind host port to igb_uio and start testpmd with vhost port:
#./tools/dpdk-devbind.py -b igb_uio 83:00.1
#./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
testpmd>start
Start VM on host, here we set 5432 as the serial port, 3333 as the qemu monitor port, 5555 as the SSH port:
taskset -c 22-23 qemu-system-x86_64 -name vm1host \ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \ -numa node,memdev=mem -mem-prealloc -smp 2 -cpu host -drive file=/home/qxu10/img/vm1.img \ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \ -chardev socket,id=char0,path=./vhost-net \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \ -monitor telnet::3333,server,nowait \ -serial telnet:localhost:5432,server,nowait \ -daemonize
On the backup server, run the vhost testpmd on the host and launch VM:
Set huge page, bind one port to igb_uio and run testpmd on the backup server, the command is very similar to host:
backup server# echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
backup server# mount -t hugetlbfs hugetlbfs /mnt/huge
backup server# ./tools/dpdk-devbind.py -b igb_uio 81:00.1
backup server# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xc0000 -n 4 --vdev 'eth_vhost0,iface=./vhost-net,queues=1' --socket-mem 1024,1024 -- -i
testpmd>start
Launch the VM on the backup server; the script is similar to the one on the host, but note these 2 differences:

need to add “ -incoming tcp:0:4444 ” for live migration.

need to make sure the VM image is in the NFS-mounted folder; the VM image is the exact one on the host server:
Backup server # qemu-system-x86_64 -name vm2 \ -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on \ -numa node,memdev=mem -mem-prealloc -smp 4 -cpu host -drive file=/mnt/nfs/vm1.img \ -net nic,model=e1000,addr=1f -net user,hostfwd=tcp:127.0.0.1:5555-:22 \ -chardev socket,id=char0,path=./vhost-net \ -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \ -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01 \ -monitor telnet::3333,server,nowait \ -serial telnet:localhost:5432,server,nowait \ -incoming tcp:0:4444 \ -daemonize
Test Case 1: migrate with virtio-pmd¶
Make sure all Prerequisites have been done
SSH to VM and scp the DPDK folder from host to VM:
host # ssh -p 5555 localhost, then input password to log in.
host # scp -P 5555 -r <dpdk_folder>/ localhost:/root, then input password to let the file transfer.
Telnet the serial port and run testpmd in VM:
host # telnet localhost 5432
Input Enter, then log in to the VM. If you need to leave the session, input "CTRL" + "]", then quit the telnet session.
On the host server VM, run the commands below to launch testpmd:
host vm # cd /root/dpdk
modprobe uio
insmod ./x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py --bind=igb_uio 00:03.0
echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i
> set fwd rxonly
> set verbose 1
> start tx_first
Check host vhost pmd connect with VM’s virtio device:
testpmd> host testpmd message for connection
Send continuous packets with the physical port’s mac(e.g: 90:E2:BA:69:C9:C9) from tester port:
tester# scapy tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20) tester# sendp(p, iface="p5p1", inter=1, loop=1) Then check the host VM can receive the packet: host VM# testpmd> port 0/queue 0: received 1 packets
Start Live migration, ensure the traffic is continuous at the HOST VM side:
host server # telnet localhost 3333
(qemu) migrate -d tcp:backup server:4444
e.g: migrate -d tcp:10.239.129.176:4444
(qemu) info migrate
Check if the migration is active and not failed.
Check host vm can receive packet before migration done
Query stats of migrate in monitor, check status of migration, when the status is completed, then the migration is done:
host# (qemu)info migrate host# (qemu) Migration status: completed
After live migration, go to the backup server and check if the virtio-pmd can continue to receive packets.:
Backup server # telnet localhost 5432
Log in then see the same screen from the host server, and check if the virtio-pmd can continue receive the packets.
Test Case 2: migrate with virtio-net¶
Make sure all Prerequisites have been done.
Telnet the serial port and run testpmd in VM:
host # telnet localhost 5432 Input Enter, then log in to VM
If need leave the session, input “CTRL” + “]”, then quit the telnet session.
Let the virtio-net link up:
host vm # ifconfig eth1 up
Send continuous packets with the physical port’s mac(e.g: 90:E2:BA:69:C9:C9) from tester port:
tester# scapy
tester# p = Ether(dst="90:E2:BA:69:C9:C9")/IP()/UDP()/Raw('x'*20)
tester# sendp(p, iface="p5p1", inter=1, loop=1)
Check the host VM can receive the packet:
host VM# tcpdump -i eth1
Start Live migration, ensure the traffic is continuous at the HOST VM side:
host server # telnet localhost 3333
(qemu) migrate -d tcp:backup server:4444
e.g: migrate -d tcp:10.239.129.176:4444
(qemu) info migrate
Check if the migration is active and not failed.
Check host vm can receive packet before migration done.
Query stats of migrate in monitor, check status of migration, when the status is completed, then the migration is done:
host# (qemu)info migrate host# (qemu) Migration status: completed
After live migration, go to the backup server and check if the virtio-pmd can continue to receive packets.:
Backup server # telnet localhost 5432
Log in then see the same screen from the host server, and check if the virtio-net can continue receive the packets.
Virtio-1.0 Support Tests¶
Virtio 1.0 is a new version of virtio. And the virtio 1.0 spec link is at http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.pdf.
The major difference is at PCI layout. For testing virtio 1.0 pmd, we need test the basic RX/TX, different path(tx-offloads), mergeable on/off, and also test with virtio0.95 to ensure they can co-exist. Besides, we need test virtio 1.0’s performance to ensure it has similar performance as virtio0.95.
Test Case 1: test_func_vhost_user_virtio1.0-pmd with different tx-offloads¶
Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1 or 2.5.0.
Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.:
taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
Start VM with 1 virtio, note: we need add “disable-modern=false” to enable virtio 1.0:
taskset -c 22-23 \ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
In the VM, change the config file common_linuxapp: “CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y”. Run dpdk testpmd in the VM:

./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0
./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x8000 --disable-hw-vlan
$ >set fwd mac
$ >start tx_first

We expect similar output as below, and see the modern virtio pci detected:

PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11
PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len: 0
PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len: 4194304
PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len: 4096
PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len: 4096
PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len: 4096
PMD: virtio_read_caps(): found modern virtio pci device.
PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000
PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000
PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000
PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off multiplier: 4096
PMD: vtpci_init(): modern virtio pci detected.
Send traffic to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check if virtio packet can be RX/TX and also check the TX packet size is same as the RX packet size.
Also run the dpdk testpmd in VM with tx-offloads=0 for the virtio pmd optimization usage:
./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --tx-offloads=0 --disable-hw-vlan $ >set fwd mac $ >start tx_first
Send traffic to virtio1(MAC1=52:54:00:00:00:01) and VLAN ID=1000. Check if virtio packet can be RX/TX and also check the TX packet size is same as the RX packet size. Check the packet content is correct.
Test Case 2: test_func_vhost_user_virtio1.0-pmd for packet sequence check¶
Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1 or 2.5.0.
Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.:
taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
Start VM with 1 virtio, note: we need add “disable-modern=false” to enable virtio 1.0:
taskset -c 22-23 \ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
In the VM, change the config file–common_linuxapp, “CONFIG_RTE_LIBRTE_VIRTIO_DEBUG_INIT=y”; Run dpdk testpmd in VM:
./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x8000 --disable-hw-vlan $ >set fwd mac $ >start tx_first We expect similar output as below, and see modern virtio pci detected. PMD: virtio_read_caps(): [98] skipping non VNDR cap id: 11 PMD: virtio_read_caps(): [84] cfg type: 5, bar: 0, offset: 0000, len: 0 PMD: virtio_read_caps(): [70] cfg type: 2, bar: 4, offset: 3000, len: 4194304 PMD: virtio_read_caps(): [60] cfg type: 4, bar: 4, offset: 2000, len: 4096 PMD: virtio_read_caps(): [50] cfg type: 3, bar: 4, offset: 1000, len: 4096 PMD: virtio_read_caps(): [40] cfg type: 1, bar: 4, offset: 0000, len: 4096 PMD: virtio_read_caps(): found modern virtio pci device. PMD: virtio_read_caps(): common cfg mapped at: 0x7f2c61a83000 PMD: virtio_read_caps(): device cfg mapped at: 0x7f2c61a85000 PMD: virtio_read_caps(): isr cfg mapped at: 0x7f2c61a84000 PMD: virtio_read_caps(): notify base: 0x7f2c61a86000, notify off multiplier: 409 6 PMD: vtpci_init(): modern virtio pci detected.
Send 100 packets at rate 25% at small packet(e.g: 70B) to the virtio with VLAN=1000, and insert the sequence number at byte offset 44 bytes. Make the sequence number starting from 00 00 00 00 and the step 1, first ensure no packet loss at IXIA, then check if the received packets have the same order as sending side.If out of order, then it’s an issue.
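If an IXIA is not available, a sequence-numbered stream matching the description above can be pre-built with scapy and replayed; a sketch assuming 70-byte frames with VLAN 1000 and a 4-byte big-endian counter written at byte offset 44 of the frame:

import struct
from scapy.all import Ether, Dot1Q, IP, Raw, wrpcap

pkts = []
for seq in range(100):
    # 14B Ether + 4B 802.1Q + 20B IP + 32B payload = 70-byte frame.
    base = Ether(dst="52:54:00:00:00:01")/Dot1Q(vlan=1000)/IP()/Raw(b'\x00' * 32)
    frame = bytearray(bytes(base))
    frame[44:48] = struct.pack(">I", seq)   # sequence number at byte offset 44
    pkts.append(Ether(bytes(frame)))
wrpcap("seq_stream.pcap", pkts)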
Test Case 3: test_func_vhost_user_virtio1.0-pmd with mergeable enabled¶
Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.:
taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 1 --zero-copy 0 --vm2vm 0
Start VM with 1 virtio, note: we need add “disable-modern=false” to enable virtio 1.0:
taskset -c 22-23 \ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
Run dpdk testpmd in VM:
./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x8000 --disable-hw-vlan --max-pkt-len=9000 $ >set fwd mac $ >start tx_first
Send traffic to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check if virtio packet can be RX/TX and also check the TX packet size is same as the RX packet size. Check packet size(64-1518) as well as the jumbo frame(3000,9000) can be RX/TX.
Test Case 4: test_func_vhost_user_one-vm-virtio1.0-one-vm-virtio0.95¶
Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.:
taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 1
Start VM1 with 1 virtio, note: we need add “disable-modern=false” to enable virtio 1.0:
taskset -c 22-23 \ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
Start VM2 with 1 virtio, note:
taskset -c 24-25 \ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/img/vm2.img \ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet2,chardev=char0,vhostforce \ -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-modern=true \ -netdev tap,id=ipvm2,ifname=tap4,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm2,id=net1,mac=00:00:00:00:10:02 -nographic
Run dpdk testpmd in VM1 and VM2:
VM1: ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x8000 --disable-hw-vlan --eth-peer=0,52:54:00:00:00:02 $ >set fwd mac $ >start tx_first VM2: ./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x8000 --disable-hw-vlan $ >set fwd mac $ >start tx_first
Send 100 packets at low rate to virtio1, and the expected flow is ixia–>NIC–>VHOST–>Virtio1–>Virtio2–>Vhost–>NIC->ixia port. Check the packet back at ixia port is content correct, no size change and payload change.
Test Case 5: test_perf_vhost_user_one-vm-virtio1.0-pmd¶
Note: For virtio1.0 usage, we need use qemu version >2.4, such as 2.4.1 or 2.5.0.
Launch the Vhost sample by below commands, socket-mem is set for the vhost sample to use, need ensure that the PCI port located socket has the memory. In our case, the PCI BDF is 81:00.0, so we need assign memory for socket1.:
taskset -c 18-20 ./examples/vhost/build/vhost-switch -c 0x1c0000 -n 4 --huge-dir /mnt/huge --socket-mem 0,2048 -- -p 1 --mergeable 0 --zero-copy 0 --vm2vm 0
Start VM with 1 virtio, note: we need add “disable-modern=false” to enable virtio 1.0:
taskset -c 22-23 \ /root/qemu-versions/qemu-2.5.0/x86_64-softmmu/qemu-system-x86_64 -name us-vhost-vm1 \ -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \ -smp cores=2,sockets=1 -drive file=/home/img/vm1.img \ -chardev socket,id=char0,path=/home/qxu10/virtio-1.0/dpdk/vhost-net -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,disable-modern=false \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 -nographic
In the VM, run dpdk testpmd in VM:
./<dpdk_folder>/tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 ./<dpdk_folder>/x86_64-native-linuxapp-gcc/app/test-pmd/testpmd -c 0x3 -n 4 -- -i --tx-offloads=0x8000 --disable-hw-vlan $ >set fwd mac $ >start tx_first
Send traffic at line rate to virtio1(MAC1=52:54:00:00:00:01) with VLAN ID=1000. Check the performance at different packet size(68,128,256,512,1024,1280,1518) and record it as the performance data. The result should be similar as virtio0.95.
VLAN Ethertype Config Tests¶
Description¶
For single vlan the default TPID is 0x8100. For QinQ, the default S-Tag+C-Tag VLAN TPIDs are 0x88A8 + 0x8100. This feature implements configuration of the VLAN ethertype TPID, such as changing the single vlan TPID from 0x8100 to 0xA100, or changing QinQ “0x88A8 + 0x8100” to “0x9100 + 0xA100” or “0x8100 + 0x8100”.
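For reference, packets with a non-default TPID can be crafted on the tester with scapy by setting the outer EtherType explicitly; a minimal sketch where the interface and destination MAC are placeholders:

from scapy.all import Ether, Dot1Q, IP, sendp

iface = "ens3f0"                 # placeholder: tester port connected to DUT port 0
dst = "ff:ff:ff:ff:ff:ff"        # placeholder destination MAC

# Single VLAN with TPID 0xA100 instead of the default 0x8100.
single = Ether(dst=dst, type=0xA100)/Dot1Q(vlan=16)/IP()

# QinQ with the default S-Tag/C-Tag TPIDs 0x88A8 + 0x8100.
qinq = Ether(dst=dst, type=0x88A8)/Dot1Q(type=0x8100, vlan=1)/Dot1Q(vlan=16)/IP()

sendp([single, qinq], iface=iface)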
Prerequisites¶
- Hardware: one Fortville NIC (4x 10G or 2x10G or 2x40G or 1x10G)
- Software:
- Assuming that DUT ports 0 and 1 are connected to the tester’s port A and port B.
Test Case 1: change VLAN TPID¶
Start testpmd and set it to rxonly mode:
./testpmd -c 0xff -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
Change the VLAN TPID to 0xA100:
testpmd> vlan set outer tpid 0xA100 0
Send a packet with VLAN TPID = 0xA100 and verify it is recognized as a VLAN packet (one way to craft such a packet with scapy is sketched below).
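A minimal scapy sketch for crafting a single-VLAN packet with TPID 0xA100, as an alternative to the hexedit approach described in the Notes at the end of this section. The destination MAC and the tester interface name reuse values from the Notes and are assumptions; adapt them to the real setup.

from scapy.all import Ether, Dot1Q, IP, Raw, sendp

# Setting type on the Ether layer overrides the TPID placed in front of the 802.1Q tag;
# 0xA100 must match the value configured with "vlan set outer tpid 0xA100 0".
pkt = (Ether(dst="68:05:CA:3A:2E:58", type=0xA100) /
       Dot1Q(vlan=16) /
       IP(src="192.168.0.1", dst="192.168.0.2") /
       Raw('x' * 20))
sendp(pkt, iface="ens260f0", count=1)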
Test Case 2: test VLAN filtering on/off¶
Start testpmd, enable VLAN filtering, and start in mac forwarding mode:
./testpmd -c 0xff -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> vlan set filter on 0
testpmd> start
Send 1 packet with the VLAN Tag 16 on port A. Verify that the VLAN packet cannot be received on port B.
Disable VLAN filtering on port 0:
testpmd> vlan set filter off 0
Send 1 packet with the VLAN Tag 16 on port A. Verify that the VLAN packet can be received on port B.
Test Case 3: test adding VLAN Tag Identifier with changing VLAN TPID¶
Start testpmd, enable VLAN filtering, and start in mac forwarding mode:
./testpmd -c 0xff -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> vlan set filter on 0
testpmd> vlan set strip off 0
testpmd> start
Add a VLAN Tag Identifier 16 on port 0:
testpmd> rx_vlan add 16 0
Send 1 packet with the VLAN Tag 16 on port A. Verify that the VLAN packet can be received on port B and the TPID is 0x8100.
Change the VLAN TPID to 0xA100 on port 0:
testpmd> vlan set outer tpid 0xA100 0
Send 1 packet with VLAN TPID 0xA100 and VLAN Tag 16 on port A. Verify that the VLAN packet can be received on port B and the TPID is 0xA100.
Remove the VLAN Tag Identifier 16 on port 0:
testpmd> rx_vlan rm 16 0
Send 1 packet with VLAN TPID 0xA100 and VLAN Tag 16 on port A. Verify that the VLAN packet cannot be received on port B.
Test Case 4: test VLAN header striping with changing VLAN TPID¶
Start testpmd, disable VLAN filtering, enable VLAN stripping, and start in mac forwarding mode:
./testpmd -c 0xff -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> vlan set filter off 0
testpmd> vlan set strip on 0
testpmd> start
Send 1 packet with the VLAN Tag 16 on port A. Verify that the packet is received on port B without the VLAN Tag Identifier.
Change the VLAN TPID to 0xA100 on port 0:
testpmd> vlan set outer tpid 0xA100 0
Send 1 packet with VLAN TPID 0xA100 and VLAN Tag 16 on port A. Verify that the packet is received on port B without the VLAN Tag Identifier.
Disable VLAN header stripping on port 0:
testpmd> vlan set strip off 0
Send 1 packet with VLAN TPID 0xA100 and VLAN Tag 16 on port A. Verify that the packet is received on port B with the VLAN Tag Identifier.
Test Case 5: test VLAN header inserting with changing VLAN TPID¶
Start testpmd, enable VLAN packet forwarding, and start in mac forwarding mode:
./testpmd -c 0xff -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> set fwd mac
testpmd> vlan set filter off 0
testpmd> vlan set strip off 0
testpmd> start
Insert VLAN Tag Identifier 16 on port 1:
testpmd> tx_vlan set 1 16
Send 1 packet without a VLAN Tag Identifier on port A. Verify that the packet is received on port B with VLAN Tag Identifier 16 and TPID 0x8100.
Change the VLAN TPID to 0xA100 on port 1:
testpmd> vlan set outer tpid 0xA100 1
Send 1 packet without a VLAN Tag Identifier on port A. Verify that the packet is received on port B with VLAN Tag Identifier 16 and TPID 0xA100.
Delete the VLAN Tag Identifier 16 on port 1:
testpmd> tx_vlan reset 1
Send 1 packet without a VLAN Tag Identifier on port A. Verify that the packet is received on port B without VLAN Tag Identifier 16.
Test Case 6: Change S-Tag and C-Tag within QinQ¶
Start testpmd, enable QinQ, start in rxonly mode:
./testpmd -c 0xff -n 4 -- -i --portmask=0x3 --tx-offloads=0x8fff
testpmd> vlan set qinq on 0
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
Change S-Tag+C-Tag VLAN TPIDs to 0x88A8 + 0x8100:
testpmd> vlan set outer tpid 0x88A8 0
testpmd> vlan set inner tpid 0x8100 0
Send a packet with S-Tag+C-Tag VLAN TPIDs 0x88A8 + 0x8100 and verify it is recognized as a QinQ packet.
Change S-Tag+C-Tag VLAN TPIDs to 0x9100+0xA100:
testpmd> vlan set outer tpid 0x9100 0
testpmd> vlan set inner tpid 0xA100 0
Send a packet with S-Tag+C-Tag VLAN TPIDs 0x9100 + 0xA100 and verify it is recognized as a QinQ packet.
Change S-Tag+C-Tag VLAN TPIDs to 0x8100+0x8100:
testpmd> vlan set outer tpid 0x8100 0
testpmd> vlan set inner tpid 0x8100 0
Send a packet with S-Tag+C-Tag VLAN TPIDs 0x8100 + 0x8100 and verify it is recognized as a QinQ packet.
Note:
Send packet with specific S-Tag+C-Tag VLAN TPID:
wrpcap("qinq.pcap",[Ether(dst="68:05:CA:3A:2E:58")/Dot1Q(type=0x8100,vlan=16)/Dot1Q(type=0x8100,vlan=1006)/IP(src="192.168.0.1", dst="192.168.0.2")])
- hexedit qinq.pcap: change the TPID field, "ctrl+w" to save, "ctrl+x" to exit.
- sendp(rdpcap("qinq.pcap"), iface="ens260f0").
Send packet with specific VLAN TPID:
wrpcap("vlan.pcap",[Ether(dst="68:05:CA:3A:2E:58")/Dot1Q(type=0x8100,vlan=16)/IP(src="192.168.0.1", dst="192.168.0.2")])
- hexedit vlan.pcap: change the TPID field, "ctrl+w" to save, "ctrl+x" to exit.
- sendp(rdpcap("vlan.pcap"), iface="ens260f0").
VLAN Offload Tests¶
The support of VLAN offload features by Poll Mode Drivers consists of:
- the filtering of received VLAN packets,
- VLAN header stripping by hardware in received [VLAN] packets,
- VLAN header insertion by hardware in transmitted packets.
The filtering of VLAN packets is automatically enabled by the testpmd application for each port.
By default, the VLAN filter of each port is empty and all received VLAN packets are dropped by the hardware.
To enable the receipt of VLAN packets tagged with the VLAN tag identifier vlan_id on the port port_id, the following command of the testpmd application must be used:

rx_vlan add vlan_id port_id

In the same way, the insertion of a VLAN header with the VLAN tag identifier vlan_id in packets sent on the port port_id can be enabled with the following command of the testpmd application:

tx_vlan set vlan_id port_id

The transmission of VLAN packets is done with the start tx_first command of the testpmd application that arranges to first send a burst of packets on all configured ports before starting the rxonly packet forwarding mode that has been previously selected.
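As a hedged illustration of a packet that would pass the filter configured with rx_vlan add 1 0, the scapy sketch below builds a packet tagged with vlan_id 1; the DUT port MAC and the tester interface name are assumptions and must be replaced with the real values.

from scapy.all import Ether, Dot1Q, IP, Raw, sendp

DUT_PORT0_MAC = "68:05:CA:3A:2E:58"   # assumption: MAC address of DUT port 0
TESTER_IFACE = "ens260f0"             # assumption: tester port wired to DUT port 0

# VLAN packet with vlan_id = 1; accepted only while "rx_vlan add 1 0" is configured,
# otherwise the hardware VLAN filter drops it.
pkt = (Ether(dst=DUT_PORT0_MAC) / Dot1Q(vlan=1) /
       IP(src="192.168.0.1", dst="192.168.0.2") / Raw('x' * 20))
sendp(pkt, iface=TESTER_IFACE, count=1)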
Prerequisites¶
Assuming that ports 0 and 1 are connected to a traffic generator’s ports A and B, launch the testpmd with the following arguments:
./build/app/testpmd -cffffff -n 3 -- -i --burst=1 --txpt=32 \
--txht=8 --txwt=0 --txfreet=0 --rxfreet=64 --mbcache=250 --portmask=0x3
The -n option selects the number of memory channels. It should match the number of memory channels on the setup.
Set the verbose level to 1 to display information for each received packet:
testpmd> set verbose 1
Test Case: Enable receipt of VLAN packets and VLAN header stripping¶
Setup the mac forwarding mode:
testpmd> set fwd mac
Set mac packet forwarding mode
Enable the receipt of VLAN packets with VLAN Tag Identifier 1 on port 0:
testpmd> rx_vlan add 1 0
testpmd> start
rxonly packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=10
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
- Configure the traffic generator to send VLAN packets with the Tag Identifier 1 and send 1 packet on port A. Verify that the VLAN packet was correctly received on port B with VLAN tag 1.
Test Case: Disable receipt of VLAN packets¶
Disable the receipt of VLAN packets with Tag Identifier 1 on port 0. Send VLAN packets with the Tag Identifier 1 and check that no packet is received on port B, meaning that VLAN packets are now dropped on port 0:
testpmd> rx_vlan rm 1 0
testpmd> start
rxonly packet forwarding - CRC stripping disabled - packets/burst=32
nb forwarding cores=1 - nb forwarding ports=8
RX queues=1 - RX desc=128 - RX free threshold=64
RX threshold registers: pthresh=8 hthresh=8 wthresh=4
TX queues=1 - TX desc=512 - TX free threshold=0
TX threshold registers: pthresh=32 hthresh=8 wthresh=8
testpmd> stop
Verify that no packet was received on port B.
Test Case: Enable VLAN header insertion in transmitted packets¶
Arrange to only send packets on port 0:
testpmd> set nbport 1
Number of forwarding ports set to 1
Arrange to send one VLAN packet with VLAN Tag Identifier 1 on port 0:
testpmd> tx_vlan set 1 0
testpmd> start tx_first
Verify that the packet is correctly received on the traffic generator side (with VLAN Tag Identifier 1).
VMDQ Tests¶
The 1G, 10G 82599 and 40G FVL Network Interface Cards (NICs) support a number of packet filtering functions which can be used to distribute incoming packets into a number of reception (RX) queues. VMDQ is a filtering function which operates on VLAN-tagged packets to distribute those packets among up to 512 RX queues.
The feature itself works by:
- splitting the incoming packets up into different “pools” - each with its own set of RX queues - based upon the MAC address and VLAN ID within the VLAN tag of the packet.
- assigning each packet to a specific queue within the pool, based upon the user priority field within the VLAN tag and MAC address.
The VMDQ features are enabled in the vmdq example application contained in the Intel DPDK, and this application should be used to validate the feature.
Prerequisites¶
- All tests assume a linuxapp setup.
- The port ids of the two 10G or 40G ports to be used for the testing are specified on the command line as a portmask.
- The Intel DPDK is compiled for the appropriate target type in each case, and the VMDQ example application is compiled and linked with that DPDK instance
- Two ports are connected to the test system, one to be used for packet reception, the other for transmission
- The traffic generator being used is configured to send to the application RX port a stream of packets with VLAN tags, where the VLAN IDs increment from 0 to the number of pools (e.g. 63 for FVL spirit, inclusive), the MAC address increments from 52:54:00:12:[port_index]:00 to 52:54:00:12:[port_index]:3e, and the VLAN user priority field increments from 0 to 7 (inclusive) for each VLAN ID. In our case port_index = 0 or 1. A scapy sketch of this traffic pattern follows.
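Purely as an illustration of the pattern described in the last prerequisite (not how the traffic generator is actually driven), a scapy sketch enumerating the same VLAN ID / MAC / user-priority combinations might look like the following; the pool count, port_index and interface name are assumptions.

from scapy.all import Ether, Dot1Q, IP, UDP, sendp

NUM_POOLS = 64              # assumption: 64 pools (VLAN IDs 0..63); adjust per NIC
PORT_INDEX = 0              # assumption: traffic aimed at application port 0
TESTER_IFACE = "ens260f0"   # assumption: tester port wired to the application RX port

pkts = []
for vlan_id in range(NUM_POOLS):
    # pool selection: MAC address and VLAN ID move together, one MAC/VLAN per pool
    dst_mac = "52:54:00:12:%02x:%02x" % (PORT_INDEX, vlan_id)
    for prio in range(8):
        # queue selection inside the pool: VLAN user priority 0..7
        pkts.append(Ether(dst=dst_mac) /
                    Dot1Q(vlan=vlan_id, prio=prio) /
                    IP(src="1.1.1.1", dst="2.1.1.1") / UDP())

sendp(pkts, iface=TESTER_IFACE, verbose=False)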
Test Case: Measure VMDQ pools queues¶
- Run the application with different numbers of pools: 64 for the 10G 82599 NIC, 63 for FVL spirit, and 34 for FVL eagle.
- Start traffic transmission using approx 10% of line rate.
- After a number of seconds, e.g. 15, stop traffic, and ensure no traffic loss (<0.001%) has occurred.
- Send a hangup signal (SIGHUP) to the application to have it print out the statistics of how many packets were received per RX queue
Expected Result:
- No packet loss is expected
- Every RX queue should have received approximately (+/-15%) the same number of incoming packets
Test Case: Measure VMDQ Performance¶
- Compile VMDQ example application as in first test above.
- Run application using a core mask for the appropriate thread and core settings given in the following list:
- 1S/1C/1T
- 1S/2C/1T
- 1S/2C/2T
- 1S/4C/1T
- Measure maximum RFC2544 performance throughput for bi-directional traffic for all standard packet sizes.
Output Format: The output format should be as below, or any similar table-type, with figures given in mpps:
Frame size | 1S/1C/1T | 1S/2C/1T | 1S/2C/2T | 1S/4C/1T |
---|---|---|---|---|
64 | 19.582 | 42.222 | 53.204 | 73.768 |
128 | 20.607 | 42.126 | 52.964 | 67.527 |
256 | 15.614 | 33.849 | 36.232 | 36.232 |
512 | 11.794 | 18.797 | 18.797 | 18.797 |
1024 | 9.568 | 9.579 | 9.579 | 9.579 |
1280 | 7.692 | 7.692 | 7.692 | 7.692 |
1518 | 6.395 | 6.502 | 6.502 | 6.502 |
VM Power Management Tests¶
This test plan is for the test and validation of feature VM Power Management of DPDK 1.8.
The VM Power Manager uses a hint-based mechanism by which a VM can communicate its current processing requirements to a host-based governor. By mapping the VMs' virtual CPUs to physical CPUs, the Power Manager can then make decisions, according to some policy, as to what power state the physical CPUs can transition to.
The VM Agent shall have the ability to send the following hints to the host:
- Scale frequency down
- Scale frequency up
- Reduce frequency to min
- Increase frequency to max
The Power manager is responsible for enabling the Linux userspace power governor and interacting via its sysfs entries to get/set frequencies.
The power manager will manage the file handles for each core (<n>) listed below (a small Python sketch of this interaction follows the list):
/sys/devices/system/cpu/cpu<n>/cpufreq/scaling_governor
/sys/devices/system/cpu/cpu<n>/cpufreq/scaling_available_frequencies
/sys/devices/system/cpu/cpu<n>/cpufreq/scaling_cur_freq
/sys/devices/system/cpu/cpu<n>/cpufreq/scaling_setspeed
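As a rough illustration of the host-side interaction only (not the vm_power_mgr implementation), the Python sketch below reads and sets the per-core cpufreq entries listed above. It assumes the userspace governor is available and that it is run as root.

import os

CPUFREQ = "/sys/devices/system/cpu/cpu{core}/cpufreq/{entry}"

def read(core, entry):
    with open(CPUFREQ.format(core=core, entry=entry)) as f:
        return f.read().strip()

def set_frequency(core, freq_khz):
    # the userspace governor exposes scaling_setspeed for direct frequency control
    with open(CPUFREQ.format(core=core, entry="scaling_governor"), "w") as f:
        f.write("userspace")
    with open(CPUFREQ.format(core=core, entry="scaling_setspeed"), "w") as f:
        f.write(str(freq_khz))

if __name__ == "__main__":
    core = 1
    available = read(core, "scaling_available_frequencies").split()
    print("core %d current frequency: %s kHz" % (core, read(core, "scaling_cur_freq")))
    set_frequency(core, min(available, key=int))   # scale to the minimum listed frequency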
Prerequisites¶
Hardware:
- CPU: Haswell, IVB(CrownPass)
- NIC: Niantic 82599
BIOS:
- Enable VT-d and VT-x
- Enable Enhanced Intel SpeedStep(R) Tech
- Disable Intel(R) Turbo Boost Technology
- Enable Processor C3
- Enable Processor C6
- Enable Intel(R) Hyper-Threading Tech
OS and Kernel:
- Fedora 20
- Enable Kernel features Huge page, UIO, IOMMU, KVM
- Enable Intel IOMMU in kernel command
- Disable Selinux
- Disable intel_pstate
Virtualization:
- QEMU emulator version 1.6.1
- libvirtd (libvirt) 1.1.3.5
- Add virtio-serial port
IXIA Traffic Generator Configuration. The LPM table used for packet routing is:

Entry # | LPM prefix (IP/length) | Output port |
---|---|---|
0 | 1.1.1.0/24 | P0 |
1 | 2.1.1.0/24 | P1 |

The flows should be configured and started by the traffic generator:

Flow | Traffic Gen. Port | IPv4 Src. Address | IPv4 Dst. Address | Port Src. | Port Dest. | L4 Proto. |
---|---|---|---|---|---|---|
1 | TG0 | 0.0.0.0 | 2.1.1.0 | any | any | UDP |
2 | TG1 | 0.0.0.0 | 1.1.1.0 | any | any | UDP |
Test Case 1: VM Power Management Channel¶
Configure VM XML to pin VCPUs/CPUs:
<vcpu placement='static'>5</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='2'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='4'/>
  <vcpupin vcpu='4' cpuset='5'/>
</cputune>
Configure VM XML to set up virtio serial ports
Create temporary folder for vm_power socket.
mkdir /tmp/powermonitor
Set up one serial port for every vcpu in the VM:
<channel type='unix'>
  <source mode='bind' path='/tmp/powermonitor/<vm_name>.<channel_num>'/>
  <target type='virtio' name='virtio.serial.port.poweragent.<channel_num>'/>
  <address type='virtio-serial' controller='0' bus='0' port='4'/>
</channel>
Run power-manager in Host:
./build/vm_power_mgr -c 0x3 -n 4
Startup VM and run guest_vm_power_mgr:
guest_vm_power_mgr -c 0x1f -n 4 -- -i
Add vm in host and check vm_power_mgr can get frequency normally:
vmpower> add_vm <vm_name>
vmpower> add_channels <vm_name> all
vmpower> show_cpu_freq <core_num>
Check vcpu/cpu mapping can be detected normally:
vmpower> show_vm <vm_name>
VM: vCPU Refresh: 1
Channels 5
[0]: /tmp/powermonitor/<vm_name>.0, status = 1
[1]: /tmp/powermonitor/<vm_name>.1, status = 1
[2]: /tmp/powermonitor/<vm_name>.2, status = 1
[3]: /tmp/powermonitor/<vm_name>.3, status = 1
[4]: /tmp/powermonitor/<vm_name>.4, status = 1
Virtual CPU(s): 5
[0]: Physical CPU Mask 0x2
[1]: Physical CPU Mask 0x4
[2]: Physical CPU Mask 0x8
[3]: Physical CPU Mask 0x10
[4]: Physical CPU Mask 0x20
Run vm_power_mgr in vm:
guest_cli/build/vm_power_mgr -c 0x1f -n 4
Check monitor channel for all cores has been connected.
Test Case 2: VM Power Management Numa¶
Get core and socket information by cpu_layout:
./tools/cpu_layout.py
Configure VM XML to pin VCPUs on Socket1:
Repeat Case1 steps 3-7 sequentially
Check vcpu/cpu mapping can be detected normally
Test Case 3: VM Scale CPU Frequency Down¶
Setup VM power management environment
Send cpu frequency down hints to Host:
vmpower(guest)> set_cpu_freq 0 down
Verify the frequency of physical CPU has been set down correctly:
vmpower> show_cpu_freq 1
Core 1 frequency: 2700000
Check other CPUs’ frequency is not affected by change above
check if the other VM works fine (if they use different CPUs)
Repeat step2-5 several times
Test Case 4: VM Scale CPU Frequency UP¶
Setup VM power management environment
Send cpu frequency up hints to Host:
vmpower(guest)> set_cpu_freq 0 up
Verify the frequency of physical CPU has been set up correctly:
vmpower> show_cpu_freq 1
Core 1 frequency: 2800000
Check other CPUs’ frequency is not affected by change above
check if the other VM works fine (if they use different CPUs)
Repeat step2-5 several times
Test Case 5: VM Scale CPU Frequency to Min¶
Setup VM power management environment
Send cpu frequency scale-to-minimum hints to Host:
vmpower(guest)> set_cpu_freq 0 min
Verify the frequency of the physical CPU has been scaled to the minimum correctly:
vmpower> show_cpu_freq 1
Core 1 frequency: 1200000
Check other CPUs’ frequency is not affected by change above
check if the other VM works fine (if they use different CPUs)
Test Case 6: VM Scale CPU Frequency to Max¶
Setup VM power management environment
Send cpu frequency scale-to-maximum hints to Host:
vmpower(guest)> set_cpu_freq 0 max
Verify the frequency of physical CPU has been set to max correctly:
vmpower> show_cpu_freq 1
Core 1 frequency: 2800000
Check other CPUs’ frequency is not affected by change above
check if the other VM works fine (if they use different CPUs)
Test Case 7: VM Power Management Multi VMs¶
Setup VM power management environment for VM1
Setup VM power management environment for VM2
Run power-manager in Host:
./build/vm_power_mgr -c 0x3 -n 4
Startup VM1 and VM2
Add VM1 in host and check vm_power_mgr can get frequency normally:
vmpower> add_vm <vm1_name>
vmpower> add_channels <vm1_name> all
vmpower> show_cpu_freq <core_num>
Add VM2 in host and check vm_power_mgr can get frequency normally:
vmpower> add_vm <vm2_name>
vmpower> add_channels <vm2_name> all
vmpower> show_cpu_freq <core_num>
Run Cases 3-6 and check that the VM1 and VM2 cpu frequencies can be modified by guest_cli
Poweroff VM2 and remove VM2 from host vm_power_mgr:
vmpower> rm_vm <vm2_name>
Test Case 8: VM l3fwd-power Latency¶
Connect two physical ports to IXIA
Start VM and run l3fwd-power:
l3fwd-power -c 6 -n 4 -- -p 0x3 --config '(P0,0,C{1.1.0}),(P1,0,C{1.2.0})'
Configure packet flow in IxiaNetwork
Start to send packets from IXIA and check the receiving packets and latency
Record the latency for frame size 128
Compare latency value with sample l3fwd
Test Case 9: VM l3fwd-power Performance¶
Start VM and run l3fwd-power:
l3fwd-power -c 6 -n 4 -- -p 0x3 --config '(P0,0,C{1.1.0}),(P1,0,C{1.2.0})'
Vary the input traffic line rate from 0 to 100% in order to see the cpu frequency changes.
The test report should provide the throughput rate measurements (in Mpps and % of the line rate for 2x NIC ports) and cpu frequency as listed in the table below:
% Tx linerate | Rx % linerate | Cpu freq |
---|---|---|
0 | | |
20 | | |
40 | | |
60 | | |
80 | | |
100 | | |
Fortville Vxlan Tests¶
Cloud providers build virtual network overlays over existing network infrastructure that provide tenant isolation and scaling. Tunneling layers added to the packets carry the virtual networking frames over existing Layer 2 and IP networks. Conceptually, this is similar to creating virtual private networks over the Internet. Fortville processes these tunneling layers in hardware.
This document provides the test plan for Fortville VXLAN packet detection, checksum computation and filtering.
Prerequisites¶
1x Intel® X710 (Fortville) NICs (2x 40GbE full duplex optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot.
1x Intel® XL710-DA4 (Eagle Fountain) (1x 10GbE full duplex optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot.
The DUT board must be a two-socket system and each CPU must have more than 8 lcores.
Test Case: Vxlan ipv4 packet detect¶
Start testpmd with tunneling packet type to vxlan:
testpmd -c ffff -n 4 -- -i --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2 --tx-offloads=0x8fff
Set rxonly packet forwarding mode and enable verbose log:
set fwd rxonly
set verbose 1
rx_vxlan_port add 4789 0
Send packets as listed in the table and check that the dumped packet type matches the “Rx packet type” column (an example scapy packet for one row is sketched after the table).
Outer Vlan | Outer IP | Outer UDP | Inner Vlan | Inner L3 | Inner L4 | Rx packet type | Pkt Error |
No | Ipv4 | None | None | None | None | PKT_RX_IPV4_HDR | None |
No | Ipv4 | Vxlan | None | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
No | Ipv4 | Vxlan | None | Ipv4 | Tcp | PKT_RX_IPV4_HDR_EXT | None |
No | Ipv4 | Vxlan | None | Ipv4 | Sctp | PKT_RX_IPV4_HDR_EXT | None |
Yes | Ipv4 | Vxlan | None | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
Yes | Ipv4 | Vxlan | Yes | Ipv4 | Udp | PKT_RX_IPV4_HDR_EXT | None |
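As a hedged example of the second row above (outer IPv4 + VXLAN + inner IPv4/UDP, no VLAN), a scapy packet could be built as follows. The destination MAC, addresses, VNI and interface name are assumptions, and the VXLAN layer location may differ in older scapy releases.

from scapy.all import Ether, IP, UDP, Raw, sendp
from scapy.layers.vxlan import VXLAN   # available in recent scapy releases

DUT_MAC = "00:00:00:00:01:00"   # assumption: MAC of the DUT port under test
TESTER_IFACE = "ens260f0"       # assumption: tester port wired to the DUT

pkt = (Ether(dst=DUT_MAC) /
       IP(src="192.168.1.1", dst="192.168.1.2") /
       UDP(sport=4789, dport=4789) /      # 4789 matches "rx_vxlan_port add 4789 0"
       VXLAN(vni=1) /
       Ether(dst="00:11:22:33:44:55") /
       IP(src="10.0.0.1", dst="10.0.0.2") /
       UDP(sport=1000, dport=1001) /
       Raw('x' * 20))
sendp(pkt, iface=TESTER_IFACE, count=1)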
Test Case: Vxlan ipv6 packet detect¶
Start testpmd with tunneling packet type to vxlan:
testpmd -c ffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2 --tx-offloads=0x8fff
Set rxonly packet forwarding mode and enable verbose log:
set fwd rxonly
set verbose 1
rx_vxlan_port add 4789 0
Send ipv6 packets as listed in the table and check that the dumped packet type matches the “Rx packet type” column.
Outer Vlan | Outer IP | Outer UDP | Inner Vlan | Inner L3 | Inner L4 | Rx packet type | Pkt Error |
No | Ipv6 | None | None | None | None | PKT_RX_IPV6_HDR | None |
No | Ipv6 | Vxlan | None | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
No | Ipv6 | Vxlan | None | Ipv6 | Tcp | PKT_RX_IPV6_HDR_EXT | None |
No | Ipv6 | Vxlan | None | Ipv6 | Sctp | PKT_RX_IPV6_HDR_EXT | None |
Yes | Ipv6 | Vxlan | None | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
Yes | Ipv6 | Vxlan | Yes | Ipv6 | Udp | PKT_RX_IPV6_HDR_EXT | None |
Test Case: Vxlan ipv4 checksum offload¶
Start testpmd with tunneling packet type to vxlan:
testpmd -c ffff -n 4 -- -i --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2 --tx-offloads=0x8fff
Set csum packet forwarding mode and enable verbose log:
set fwd csum
set verbose 1
rx_vxlan_port add 4789 0
Enable VXLAN protocol on ports:
rx_vxlan_port add 4789 0
rx_vxlan_port add 4789 1
Enable IP,UDP,TCP,SCTP,OUTER-IP checksum offload:
csum parse_tunnel on 0
csum parse_tunnel on 1
csum set ip hw 0
csum set udp hw 0
csum set tcp hw 0
csum set sctp hw 0
csum set outer-ip hw 0
Send a packet with valid checksums and check that no checksum error counter is increased.
Outer Vlan | Outer IP | Outer UDP | Inner Vlan | Inner L3 | Inner L4 | Pkt Error |
No | Ipv4 | None | None | None | None | None |
Send packets with an invalid l3 checksum first. Then check that the forwarded packets’ checksums are corrected and the corresponding l3 checksum error counter is increased.
Outer Vlan | Outer IP | Outer UDP | Inner Vlan | Inner L3 | Inner L4 |
No | Bad Ipv4 | None | None | None | None |
No | Ipv4 | Vxlan | None | Bad Ipv4 | Udp |
No | Bad Ipv4 | Vxlan | None | Ipv4 | Udp |
No | Bad Ipv4 | Vxlan | None | Bad Ipv4 | Udp |
Send packets with an invalid l4 checksum first. Then check that the forwarded packets’ checksums are corrected and the corresponding l4 checksum error counter is increased (see the sketch after the table).
Outer Vlan | Outer IP | Outer UDP | Inner Vlan | Inner L3 | Inner L4 |
No | Ipv4 | Vxlan | None | Ipv4 | Bad Udp |
No | Ipv4 | Vxlan | None | Ipv4 | Bad Tcp |
No | Ipv4 | Vxlan | None | Ipv4 | Bad Sctp |
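A hedged scapy sketch for the first row above (inner UDP checksum corrupted inside the VXLAN tunnel); setting chksum to a fixed wrong value keeps scapy from recomputing it. The MACs, addresses and interface name are assumptions.

from scapy.all import Ether, IP, UDP, Raw, sendp
from scapy.layers.vxlan import VXLAN   # available in recent scapy releases

DUT_MAC = "00:00:00:00:01:00"   # assumption: MAC of the DUT port under test
TESTER_IFACE = "ens260f0"       # assumption: tester port wired to the DUT

bad_l4 = (Ether(dst=DUT_MAC) /
          IP(src="192.168.1.1", dst="192.168.1.2") /
          UDP(dport=4789) /
          VXLAN(vni=1) /
          Ether(dst="00:11:22:33:44:55") /
          IP(src="10.0.0.1", dst="10.0.0.2") /
          UDP(sport=1000, dport=1001, chksum=0xffff) /   # deliberately wrong inner UDP checksum
          Raw('x' * 20))
sendp(bad_l4, iface=TESTER_IFACE, count=1)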
Send vlan packets with an invalid l3 checksum first. Then check that the forwarded packets’ checksums are corrected and the corresponding l3 checksum error counter is increased.
Outer Vlan | Outer IP | Outer UDP | Inner Vlan | Inner L3 | Inner L4 |
Yes | Bad Ipv4 | Vxlan | None | Ipv4 | Udp |
Yes | Ipv4 | Vxlan | None | Bad Ipv4 | Udp |
Yes | Bad Ipv4 | Vxlan | None | Bad Ipv4 | Udp |
Yes | Bad Ipv4 | Vxlan | Yes | Ipv4 | Udp |
Yes | Ipv4 | Vxlan | Yes | Bad Ipv4 | Udp |
Yes | Bad Ipv4 | Vxlan | Yes | Bad Ipv4 | Udp |
Send vlan packets with an invalid l4 checksum first. Then check that the forwarded packets’ checksums are corrected and the corresponding l4 checksum error counter is increased.
Outer Vlan | Outer IP | Outer UDP | Inner Vlan | Inner L3 | Inner L4 |
Yes | Ipv4 | Vxlan | None | Ipv4 | Bad Udp |
Yes | Ipv4 | Vxlan | None | Ipv4 | Bad Tcp |
Yes | Ipv4 | Vxlan | None | Ipv4 | Bad Sctp |
Test Case: Vxlan ipv6 checksum offload¶
Start testpmd with tunneling packet type:
testpmd -c ffff -n 4 -- -i --tunnel-type=1 --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2
Set csum packet forwarding mode and enable verbose log:
set fwd csum
set verbose 1
Enable VXLAN protocol on ports:
rx_vxlan_port add 4789 0
rx_vxlan_port add 4789 1
Enable IP,UDP,TCP,SCTP,VXLAN checksum offload:
csum parse_tunnel on 0
csum parse_tunnel on 1
csum set ip hw 0
csum set udp hw 0
csum set tcp hw 0
csum set sctp hw 0
csum set outer-ip hw 0
Send ipv6 packets with valid checksums and check that no checksum error counter is increased.
Outer Vlan | Outer IP | Outer UDP | Inner Vlan | Inner L3 | Inner L4 | Pkt Error |
No | Ipv6 | None | None | None | None | None |
Send ipv6 packets with an invalid l3 checksum first. Then check that the forwarded packets’ checksums are corrected and the corresponding l3 checksum error counter is increased.
Outer Vlan | Outer IP | Outer UDP | Inner Vlan | Inner L3 | Inner L4 |
No | Ipv6 | Vxlan | None | Ipv4 | None |
No | Ipv6 | Vxlan | None | Bad Ipv4 | Udp |
Send vlan+ipv6 packets with an invalid l4 checksum first. Then check that the forwarded packets’ checksums are corrected and the corresponding l4 checksum error counter is increased.
Outer Vlan | Outer IP | Outer UDP | Inner Vlan | Inner L3 | Inner L4 |
Yes | Ipv6 | Vxlan | None | Ipv4 | Bad Udp |
Yes | Ipv6 | Vxlan | None | Ipv4 | Bad Tcp |
Yes | Ipv6 | Vxlan | None | Ipv4 | Bad Sctp |
Yes | Ipv6 | Vxlan | Yes | Ipv4 | Bad Udp |
Yes | Ipv6 | Vxlan | Yes | Ipv4 | Bad Tcp |
Yes | Ipv6 | Vxlan | Yes | Ipv4 | Bad Sctp |
Test Case: Cloud Filter¶
Start testpmd with the tunneling packet type set to vxlan and disable receive side scaling due to a hardware limitation:
testpmd -c ffff -n 4 -- -i --disable-rss --rxq=4 --txq=4 --nb-cores=8 --nb-ports=2 --tx-offloads=0x8fff
Set rxonly packet forwarding mode and enable verbose log:
set fwd rxonly
set verbose 1
Add one new cloud filter as listed in the table first:
tunnel_filter add 0 11:22:33:44:55:66 00:00:20:00:00:01 192.168.2.2 1 vxlan imac-ivlan 1 3
Then send one packet and check that the packet is forwarded to the right queue (a scapy sketch of a matching packet follows the table).
Outer Mac | Inner Mac | Inner Vlan | Outer Ip | Inner Ip | Vni ID | Queue |
No | Yes | Yes | No | No | No | 1 |
No | Yes | Yes | No | No | Yes | 1 |
No | Yes | No | No | No | Yes | 1 |
No | Yes | No | No | No | No | 1 |
Yes | Yes | No | No | Yes | Yes | 1 |
No | No | No | No | Yes | No | 1 |
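As a hedged illustration only, a packet matching the imac-ivlan filter added above (inner MAC 00:00:20:00:00:01, inner VLAN 1, tenant id 1) could be built with scapy like this; the outer addresses and interface name are assumptions.

from scapy.all import Ether, Dot1Q, IP, UDP, Raw, sendp
from scapy.layers.vxlan import VXLAN   # available in recent scapy releases

TESTER_IFACE = "ens260f0"              # assumption: tester port wired to the DUT

pkt = (Ether(dst="11:22:33:44:55:66") /          # outer MAC from the tunnel_filter command
       IP(src="192.168.2.1", dst="192.168.2.2") /
       UDP(dport=4789) /
       VXLAN(vni=1) /                            # tenant id from the tunnel_filter command
       Ether(dst="00:00:20:00:00:01") /          # inner MAC the imac-ivlan filter matches
       Dot1Q(vlan=1) /                           # inner VLAN the imac-ivlan filter matches
       IP(src="10.0.0.1", dst="10.0.0.2") /
       UDP(sport=1000, dport=1001) / Raw('x' * 20))
sendp(pkt, iface=TESTER_IFACE, count=1)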
Adding cloud filters beyond the maximum number will fail.
Remove the cloud filter which has been added. Then send one packet and check that the packet is received in queue 0.
Adding a cloud filter with invalid MAC address “00:00:00:00:01” will fail.
Adding a cloud filter with invalid IP address “192.168.1.256” will fail.
Adding a cloud filter with invalid VLAN “4097” will fail.
Adding a cloud filter with invalid VNI “16777216” will fail.
Adding a cloud filter with invalid queue id “64” will fail.
Test Case: Vxlan Checksum Offload Performance Benchmarking¶
The throughput is measured for each of these cases for vxlan tx checksum offload of “all by software”, “L3 offload by hardware”, “L4 offload by hardware”, “l3&l4 offload by hardware”.
The results are printed in the following table:
Calculate Type | Queues | Mpps | % linerate |
---|---|---|---|
SOFTWARE ALL | Single | ||
HW L4 | Single | ||
HW L3&L4 | Single | ||
SOFTWARE ALL | Multi | ||
HW L4 | Multi | ||
HW L3&L4 | Multi |
Test Case: Vxlan Tunnel filter Performance Benchmarking¶
The throughput is measured for different Vxlan tunnel filter types. Queue single mean there’s only one flow and forwarded to the first queue. Queue multi mean there are two flows and configure to different queues.
Packet | Filter | Queue | Mpps | % linerate |
---|---|---|---|---|
Normal | None | Single | ||
Vxlan | None | Single | ||
Vxlan | imac-ivlan | Single | ||
Vxlan | imac-ivlan-tenid | Single | ||
Vxlan | imac-tenid | Single | ||
Vxlan | imac | Single | ||
Vxlan | omac-imac-tenid | Single | ||
Vxlan | imac-ivlan | Multi | ||
Vxlan | imac-ivlan-tenid | Multi | ||
Vxlan | imac-tenid | Multi | ||
Vxlan | imac | Multi | ||
Vxlan | omac-imac-tenid | Multi |
Niantic ixgbe_get_vf_queue Include Extra Information Tests¶
Description¶
The VF can get the following information from the ixgbe driver:
- Get the TC’s configured by PF for a given VF.
- Get the User priority to TC mapping information for a given VF.
Prerequisites¶
Hardware: ixgbe NIC; connect the tester to the PF port with a cable.
software: dpdk: http://dpdk.org/git/dpdk scapy: http://www.secdev.org/projects/scapy/
bind the pf to dpdk driver:
./usertools/dpdk-devbind.py -b igb_uio 05:00.0
the mac address of 05:00.0 is 00:00:00:00:01:00
create 1 vf from pf:
echo 1 >/sys/bus/pci/devices/0000:05:00.0/max_vfs
Detach the VF from the host and bind it to the pci-stub driver:
modprobe pci-stub
Using lspci -nn|grep -i ethernet, get the VF device id “8086 10ed”, then:
echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id echo "0000:05:10.0" > /sys/bus/pci/drivers/ixgbevf/unbind echo "0000:05:10.0" > /sys/bus/pci/drivers/pci-stub/bind
Launch the VM with VF PCI passthrough:
taskset -c 2-5 qemu-system-x86_64 \
-enable-kvm -m 8192 -smp cores=4,sockets=1 -cpu host -name dpdk1-vm1 \
-drive file=/home/VM/centOS7_1.img \
-device pci-assign,host=05:10.0 \
-netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01 \
-localtime -vnc :2 -daemonize
Log in to the VM; the VF’s MAC address is 2e:ae:7f:16:6f:e7.
Test case 1: DPDK PF, kernel VF, enable DCB mode with TC=4¶
start the testpmd on PF:
./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=4 --txq=4 --nb-cores=16
testpmd> port stop 0
testpmd> port config 0 dcb vt on 4 pfc off
testpmd> port start 0
Check if the VF port is linked. If the VF port is down, reload the ixgbevf driver:
rmmod ixgbevf
modprobe ixgbevf
Then you can see the VF information on the PF side:
PMD: VF 0: enabling multicast promiscuous
PMD: VF 0: disabling multicast promiscuous
Check the VF’s queue number:
ethtool -S ens3
There is 1 tx queue and there are 4 rx queues, which equals the TC number.
send packet from tester to VF:
pkt1 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=0, vlan=0)/IP()/Raw('x'*20) pkt2 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=1, vlan=0)/IP()/Raw('x'*20) pkt3 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=2, vlan=0)/IP()/Raw('x'*20) pkt4 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=3, vlan=0)/IP()/Raw('x'*20) pkt5 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=4, vlan=0)/IP()/Raw('x'*20) pkt6 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=5, vlan=0)/IP()/Raw('x'*20) pkt7 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=6, vlan=0)/IP()/Raw('x'*20) pkt8 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=7, vlan=0)/IP()/Raw('x'*20) pkt9 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/Dot1Q(prio=0, vlan=1)/IP()/Raw('x'*20) pkt10 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/IP()/Raw('x'*20)
Check the mapping of packets with different User Priorities to TCs:
ethtool -S ens3
Check the NIC statistics to see the packet counters of the different rx queues increasing: pkt1 goes to queue 0, pkt2 to queue 1, pkt3 to queue 2, pkt4 to queue 3, pkt5-pkt8 to queue 0, the VF can’t get pkt9, and pkt10 goes to queue 0 (a helper sketch for reading the per-queue counters follows).
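A hedged helper for this step: the Python sketch below snapshots the ethtool -S counters before and after sending and prints the per-queue rx deltas. The interface name and the counter naming (rx_queue_<n>_packets, as commonly reported by ixgbevf) are assumptions.

import re
import subprocess

IFACE = "ens3"   # assumption: VF interface name inside the VM

def queue_counters(iface):
    out = subprocess.check_output(["ethtool", "-S", iface]).decode()
    # parse lines of the form "rx_queue_<n>_packets: <count>"
    return {m.group(1): int(m.group(2))
            for m in re.finditer(r"rx_queue_(\d+)_packets:\s+(\d+)", out)}

before = queue_counters(IFACE)
input("send the test packets now, then press Enter...")
after = queue_counters(IFACE)

for q in sorted(after, key=int):
    print("rx queue %s: +%d packets" % (q, after[q] - before.get(q, 0)))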
Test case 2: DPDK PF, kernel VF, disable DCB mode¶
start the testpmd on PF:
./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --nb-cores=16 --rxq=2 --txq=2
Check if the VF port is linked. If the VF port is down, reload the ixgbevf driver:
rmmod ixgbevf
modprobe ixgbevf
Then you can see the VF information on the PF side:
PMD: VF 0: enabling multicast promiscuous
PMD: VF 0: disabling multicast promiscuous
Set VLAN insert for the VF:
set vf vlan insert 0 0 1
Check the VF’s queue number:
ethtool -S ens3
There are 2 tx queues and 2 rx queues as the default number.
send packet from tester to VF:
pkt1 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/IP()/Raw('x'*20) pkt2 = Ether(dst="2e:ae:7f:16:6f:e7", src="00:02:00:00:00:01")/IP(src="192.168.0.1", dst="192.168.0.3")/UDP(sport=23,dport=24)/Raw('x'*20)
Check the NIC statistics to verify that the different packets map to different queues according to the RSS rules:
ethtool -S ens3
Send 100 pkt1 to the VF; all the packets are received by queue 0. Then send 100 pkt2 to the VF; all the packets are received by queue 1.
Fortville Configure RSS Queue Regions Tests¶
Description¶
FVL/FPK and future CVL/CPK NICs support queue region configuration for RSS in PF/VF, so different traffic classes or different packet classification types can be separated into different queue regions, each of which includes several queues. However, traffic classes and packet classification cannot co-exist with the queue region functionality. Different PCTYPE packets take the RSS algorithm in different queue regions.
Examples:
- all TCP packets with the SYN flag set can be sent to queue A, while TCP packets without the SYN flag will be distributed to queues B-F.
- IPv4 and IPv6 packets distributed to different queue regions
- UDP and TCP packets distributed to different queue regions
- Different tunnels distributed to different queue regions (requires tunnels PCTYPEs creation using personalization profiles)
- different traffic classes defined in VLAN PCP bits distributed to different queue regions
For FVL see chapter 7.1.7 of the latest datasheet. For FPK/CPK see corresponding EAS sections.
Prerequisites¶
Hardware: Fortville
software: dpdk: http://dpdk.org/git/dpdk scapy: http://www.secdev.org/projects/scapy/
bind the port to dpdk driver:
./usertools/dpdk-devbind.py -b igb_uio 05:00.0
the mac address of 05:00.0 is 00:00:00:00:01:00
start the testpmd:
./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16
testpmd> port config all rss all
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
Test case 1: different pctype packet can enter the expected queue region¶
Set queue region on a port:
testpmd> set port 0 queue-region region_id 0 queue_start_index 1 queue_num 1
testpmd> set port 0 queue-region region_id 1 queue_start_index 3 queue_num 2
testpmd> set port 0 queue-region region_id 2 queue_start_index 6 queue_num 2
testpmd> set port 0 queue-region region_id 3 queue_start_index 8 queue_num 2
testpmd> set port 0 queue-region region_id 4 queue_start_index 11 queue_num 4
testpmd> set port 0 queue-region region_id 5 queue_start_index 15 queue_num 1
Set the mapping of flowtype to region index on a port:
testpmd> set port 0 queue-region region_id 0 flowtype 31
testpmd> set port 0 queue-region region_id 1 flowtype 32
testpmd> set port 0 queue-region region_id 2 flowtype 33
testpmd> set port 0 queue-region region_id 3 flowtype 34
testpmd> set port 0 queue-region region_id 4 flowtype 35
testpmd> set port 0 queue-region region_id 5 flowtype 45
testpmd> set port 0 queue-region region_id 2 flowtype 41
testpmd> set port 0 queue-region flush on
send packet:
pkt1 = Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(sport=23,dport=24)/Raw('x'*20) pkt2 = Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=33,dport=34,flags="S")/Raw('x'*20) pkt3 = Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=33,dport=34,flags="PA")/Raw('x' * 20) pkt4 = Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/IP(src="192.168.0.1", dst="192.168.0.2")/SCTP(sport=44,dport=45,tag=1)/SCTPChunkData(data="X" * 20) pkt5 = Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x'*20) pkt6 = Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/IPv6(src="2001::1", dst="2001::2")/Raw('x' * 20) pkt7 = Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/IPv6(src="2001::1", dst="2001::2")/UDP(sport=24,dport=25)/Raw('x'*20) pkt8 = Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/Dot1Q(prio=1)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x'*20) pkt9 = Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/IPv6(src="2001::1", dst="2001::2")/TCP(sport=24,dport=25)/Raw('x'*20)
Verify that pkt1 goes to queue 1, pkt2 to queue 3 or queue 4, pkt3 to queue 6 or queue 7, pkt4 to queue 8 or queue 9, pkt5 to queue 11, 12, 13 or 14, pkt6 to queue 15, pkt7 to queue 6 or queue 7, pkt8 enters the same queue as pkt5, and pkt9 goes to queue 1.
Notes: if the packet type doesn’t match any queue region rule, it will be distributed to the queues of queue region 0, regardless of the rule defined for queue region 0.
Verify the rules can be listed and flushed:
testpmd> show port 0 queue-region testpmd> set port 0 queue-region flush off
Send pkt1-pkt9 again; the packets no longer enter the queues defined in the queue region rules. They are distributed to queues according to the RSS rules.
Notes: Fortville can’t parse the TCP SYN packet type while Fortpark can, so on Fortville pkt2 goes to queue 6 or queue 7.
Test case 2: different user priority packet can enter the expected queue region¶
Set queue region on a port:
testpmd> set port 0 queue-region region_id 0 queue_start_index 14 queue_num 2
testpmd> set port 0 queue-region region_id 7 queue_start_index 0 queue_num 8
testpmd> set port 0 queue-region region_id 2 queue_start_index 10 queue_num 4
Set the mapping of User Priority to Traffic Classes on a port:
testpmd> set port 0 queue-region UP 3 region_id 0
testpmd> set port 0 queue-region UP 1 region_id 7
testpmd> set port 0 queue-region UP 2 region_id 2
testpmd> set port 0 queue-region UP 7 region_id 2
testpmd> set port 0 queue-region flush on
send packet:
pkt1=Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/Dot1Q(prio=3)/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(sport=22, dport=23)/Raw('x'*20) pkt2=Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/Dot1Q(prio=1)/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(sport=22, dport=23)/Raw('x'*20) pkt3=Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/Dot1Q(prio=2)/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=32, dport=33)/Raw('x'*20) pkt4=Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/Dot1Q(prio=7)/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=32, dport=33)/Raw('x'*20) pkt5=Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/Dot1Q(prio=7)/IP(src="192.168.0.3", dst="192.168.0.4")/UDP(sport=22, dport=23)/Raw('x'*20) pkt6=Ether(dst="00:00:00:00:01:00", src="00:02:00:00:00:01")/IP(src="192.168.0.3", dst="192.168.0.4")/UDP(sport=22, dport=23)/Raw('x'*20)
Verify that pkt1 goes to queue 14 or 15, pkt2 to queue 0, 1, 2, 3, 4, 5, 6 or 7, pkt3 to queue 10, 11, 12 or 13, pkt4 enters the same queue as pkt3, pkt5 goes to queue 10, 11, 12 or 13, and pkt6 to queue 14 or 15.
Notes: if the packet UP doesn’t match any queue region rule, it will be distributed to the queues of queue region 0, regardless of the rule defined for queue region 0.
Verify the rules can be listed and flushed:
testpmd> show port 0 queue-region testpmd> set port 0 queue-region flush off
Send pkt1-pkt6 again; the packets no longer enter the queues defined in the queue region rules. They are distributed to queues according to the RSS rules.
Test case 3: boundary value testing¶
boundary value testing of “Set a queue region on a port”
the following three rules are set successfully:
testpmd> set port 0 queue-region region_id 0 queue_start_index 0 queue_num 16
testpmd> set port 0 queue-region flush on
testpmd> set port 0 queue-region flush off
testpmd> set port 0 queue-region region_id 0 queue_start_index 15 queue_num 1
testpmd> set port 0 queue-region flush on
testpmd> set port 0 queue-region flush off
testpmd> set port 0 queue-region region_id 7 queue_start_index 2 queue_num 8
testpmd> set port 0 queue-region flush on
all the three rules can be listed:
testpmd> show port 0 queue-region testpmd> set port 0 queue-region flush off
The following four rules can’t be set successfully:
testpmd> set port 0 queue-region region_id 8 queue_start_index 2 queue_num 2
testpmd> set port 0 queue-region region_id 1 queue_start_index 16 queue_num 1
testpmd> set port 0 queue-region region_id 2 queue_start_index 15 queue_num 2
testpmd> set port 0 queue-region region_id 3 queue_start_index 2 queue_num 3
no rules can be listed:
testpmd> show port 0 queue-region testpmd> set port 0 queue-region flush off
boundary value testing of “Set the mapping of flowtype to region index on a port”:
testpmd> set port 0 queue-region region_id 0 queue_start_index 2 queue_num 2 testpmd> set port 0 queue-region region_id 7 queue_start_index 4 queue_num 4
the first two rules can be set successfully:
testpmd> set port 0 queue-region region_id 0 flowtype 63 testpmd> set port 0 queue-region region_id 7 flowtype 0
the first two rules can be listed:
testpmd> show port 0 queue-region
The last two rules can’t be set successfully:
testpmd> set port 0 queue-region region_id 0 flowtype 64
testpmd> set port 0 queue-region region_id 2 flowtype 34
testpmd> set port 0 queue-region flush on
the last two rules can’t be listed:
testpmd> show port 0 queue-region testpmd> set port 0 queue-region flush off
boundary value testing of “Set the mapping of UP to region index on a port”:
testpmd> set port 0 queue-region region_id 0 queue_start_index 2 queue_num 2 testpmd> set port 0 queue-region region_id 7 queue_start_index 4 queue_num 4
the first two rules can be set successfully:
testpmd> set port 0 queue-region UP 7 region_id 0 testpmd> set port 0 queue-region UP 0 region_id 7
the first two rules can be listed:
testpmd> show port 0 queue-region
The last two rules can’t be set successfully:
testpmd> set port 0 queue-region UP 8 region_id 0
testpmd> set port 0 queue-region UP 1 region_id 2
testpmd> set port 0 queue-region flush on
the last two rules can’t be listed:
testpmd> show port 0 queue-region testpmd> set port 0 queue-region flush off
Niantic Inline IPsec Tests¶
This test plan describes the method of validating inline hardware acceleration of symmetric crypto processing of IPsec flows on the Intel® 82599 10 GbE Controller (IXGBE) within the cryptodev framework.
*Limitation: AES-GCM 128 ESP Tunnel/Transport mode and Authentication only mode are supported.*
Ref links: https://tools.ietf.org/html/rfc4301
https://tools.ietf.org/html/rfc4302
https://tools.ietf.org/html/rfc4303
http://dpdk.org/doc/guides/sample_app_ug/ipsec_secgw.html
Abbr: ESP: Encapsulating Security Payload:
0 1 2 3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ----
| Security Parameters Index (SPI) | ^Int.
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |Cov-
| Sequence Number | |ered
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | ----
| Payload Data* (variable) | | ^
~ ~ | |
| | |Conf.
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |Cov-
| | Padding (0-255 bytes) | |ered*
+-+-+-+-+-+-+-+-+ +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | |
| | Pad Length | Next Header | v v
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ ------
| Integrity Check Value-ICV (variable) |
~ ~
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
SPI: Security Parameters Index
The SPI is an arbitrary 32-bit value that is used by a receiver to identify the SA to which an incoming packet is bound.
Sequence Number:
This unsigned 32-bit field contains a counter value that increases by one for each packet sent
AES: Advanced Encryption Standard
GCM: Galois Counter Mode
Prerequisites¶
2 * 10Gb Ethernet ports of the DUT are directly connected in full-duplex to different ports of the peer traffic generator.
Bind two ports to vfio-pci:
modprobe vfio-pci
Test Case: Inline cfg parsing¶
Create inline ipsec configuration file like below:
#SP IPv4 rules
sp ipv4 out esp protect 1005 pri 1 dst 192.168.105.0/24 sport 0:65535 dport 0:65535
#SA rules
sa out 1005 aead_algo aes-128-gcm aead_key 2b:7e:15:16:28:ae:d2:a6:ab:f7:15:88:09:cf:4f:3d:de:ad:be:ef \
mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
port_id 1 \
type inline-crypto-offload \
sa in 5 aead_algo aes-128-gcm aead_key 2b:7e:15:16:28:ae:d2:a6:ab:f7:15:88:09:cf:4f:3d:de:ad:be:ef \
mode ipv4-tunnel src 172.16.1.5 dst 172.16.2.5 \
port_id 1 \
type inline-crypto-offload \
#Routing rules
rt ipv4 dst 172.16.2.5/32 port 1
rt ipv4 dst 192.168.105.10/32 port 0
Start the ipsec-secgw sample and make sure the SP/SA/RT rules are loaded successfully.
Check ipsec-secgw can detect invalid cipher algo.
Check ipsec-secgw can detect invalid auth algo.
Check ipsec-secgw can detect invalid key format.
Test Case: IPSec Encryption¶
Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:
sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev
"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
0x2 --config="(0,0,20),(1,0,21)" -f ./enc.cfg
Use scapy to listen on unprotected port:
sniff(iface='%s',count=1,timeout=10)
Use scapy to send a burst (32) of normal packets with dst ip (192.168.105.0) to the protected port (see the sketch below).
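A minimal scapy sketch for this step, assuming the tester interface wired to the protected port is ens802f0 and that the DUT port MAC shown is correct; both are assumptions to adapt.

from scapy.all import Ether, IP, Raw, sendp

PROTECTED_IFACE = "ens802f0"    # assumption: tester port wired to the protected port
DUT_MAC = "00:00:00:00:01:00"   # assumption: MAC of the protected DUT port

# 32 plain packets whose destination falls in the "dst 192.168.105.0/24" SP rule,
# so ipsec-secgw should encrypt them and emit ESP on the unprotected port
pkts = [Ether(dst=DUT_MAC) /
        IP(src="192.168.105.10", dst="192.168.105.10") /
        Raw('test-' * 8)
        for _ in range(32)]
sendp(pkts, iface=PROTECTED_IFACE, verbose=False)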
Check burst esp packets received from unprotected port:
tcpdump -Xvvvi ens802f1
tcpdump: listening on ens802f1, link-type EN10MB (Ethernet), capture size 262144 bytes
06:10:25.674233 IP (tos 0x0, ttl 64, id 0, offset 0, flags [none], proto ESP (50), length 108)
172.16.1.5 > 172.16.2.5: ESP(spi=0x000003ed,seq=0x9), length 88
0x0000: 4500 006c 0000 0000 4032 1f36 ac10 0105 E..l....@2.6....
0x0010: ac10 0205 0000 03ed 0000 0009 0000 0000 ................
0x0020: 0000 0009 4468 a4af 5853 7545 b21d 977c ....Dh..XSuE...|
0x0030: b911 7ec6 74a0 3349 b986 02d2 a322 d050 ..~.t.3I.....".P
0x0040: 8a0d 4ffc ef4d 6246 86fe 26f0 9377 84b5 ..O..MbF..&..w..
0x0050: 8b06 c7e0 05d3 1ac5 1a30 1a93 8660 4292 .........0...`B.
0x0060: 999a c84d 49ed ff95 89a1 6917 ...MI.....i.
Check esp packets’ format is correct.
See decrypted packets on scapy output:
###[ IP ]###
version = 4
ihl = 5
tos = 0x0
len = 52
id = 1
flags =
frag = 0
ttl = 63
proto = ip
chksum = 0x2764
src = 192.168.105.10
dst = 192.168.105.10
\options \
###[ Raw ]###
load = '|->test-test-test-test-test-t<-|'
Test Case: IPSec Encryption with Jumboframe¶
Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:
sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev
"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
0x2 --config="(0,0,20),(1,0,21)" -f ./enc.cfg
Use scapy to listen on unprotected port
Default frame size is 1518, send burst(1000) packets with dst ip (192.168.105.0) to protected port.
Check burst esp packets received from unprotected port.
Check esp packets’ format is correct.
See decrypted packets on scapy output
Send burst(8192) jumbo packets with dst ip (192.168.105.0) to protected port.
Check burst esp packets can’t be received from unprotected port.
Set jumbo frames size as 9000, start it with port 1 assigned to unprotected mode:
sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev
"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
0x2 -j 9000 --config="(0,0,20),(1,0,21)" -f ./enc.cfg
Use scapy to listen on unprotected port
Send burst(8192) jumbo packets with dst ip (192.168.105.0) to protected port.
Check burst jumbo packets received from unprotected port.
Check esp packets’ format is correct.
See decrypted packets on scapy output
Send burst(9000) jumbo packets with dst ip (192.168.105.0) to protected port.
Check burst jumbo packets can’t be received from unprotected port.
Test Case: IPSec Encryption with RSS¶
Create configuration file with multiple SP/SA/RT rules for different ip address.
Start ipsec-secgw with two queues enabled on each port and port 1 assigned to unprotected mode:
sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev
"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
0x2 --config="(0,0,20),(0,1,20),(1,0,21),(1,1,21)" -f ./enc_rss.cfg
Use scapy to listen on unprotected port
Send burst(32) packets with different dst ip to protected port.
Check burst esp packets received from queue 0 and queue 1 on unprotected port:
tcpdump -Xvvvi ens802f1
Check esp packets’ format is correct.
See decrypted packets on scapy output
Test Case: IPSec Decryption¶
Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:
sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev
"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
0x2 --config="(0,0,20),(1,0,21)" -f ./dec.cfg
Send two burst(32) esp packets to unprotected port.
First one will produce an error “IPSEC_ESP: failed crypto op” in the IPsec application, but it will setup the SA. Second one will decrypt and send back the decrypted packet.
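A hedged sketch of how such ESP bursts could be generated with scapy's IPsec helpers. The SPI, AES-GCM key (16-byte key + 4-byte salt) and tunnel endpoints are taken from the inline configuration example shown earlier and are assumptions here; they must match the dec.cfg actually used, and the interface and MAC names are assumptions as well.

from scapy.all import Ether, IP, UDP, Raw, sendp
from scapy.layers.ipsec import SecurityAssociation, ESP

UNPROTECTED_IFACE = "ens802f1"   # assumption: tester port wired to the unprotected port
DUT_MAC = "00:00:00:00:01:00"    # assumption: MAC of the unprotected DUT port

# 16-byte AES-GCM key followed by the 4-byte salt, matching the aead_key in the cfg example
key = bytes.fromhex("2b7e151628aed2a6abf7158809cf4f3d" "deadbeef")
sa = SecurityAssociation(ESP, spi=5,
                         crypt_algo="AES-GCM", crypt_key=key,
                         tunnel_header=IP(src="172.16.1.5", dst="172.16.2.5"))

inner = IP(src="192.168.105.10", dst="192.168.105.10") / UDP() / Raw("test" * 8)
# sa.encrypt() increments the sequence number for every call
pkts = [Ether(dst=DUT_MAC) / sa.encrypt(inner) for _ in range(32)]
sendp(pkts, iface=UNPROTECTED_IFACE, verbose=False)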
Check burst packets which have been decapsulated received from protected port:
tcpdump -Xvvvi ens802f0
Test Case: IPSec Decryption with wrong key¶
Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:
sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev
"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
0x2 --config="(0,0,20),(1,0,21)" -f ./dec.cfg
Change the key in dec.cfg so that it is not the same as the key used to encrypt the sent packets.
Send one burst(32) esp packets to unprotected port.
IPsec application will produce an error “IPSEC_ESP: failed crypto op” , but it will setup the SA.
Send one burst(32) esp packets to unprotected port.
Check burst packets which have been decapsulated can’t be received from protected port, IPsec application will produce error “IPSEC_ESP: failed crypto op”.
Test Case: IPSec Decryption with Jumboframe¶
Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:

sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev
"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
0x2 --config="(0,0,20),(1,0,21)" -f ./dec.cfg
Default frame size is 1518, Send two burst(1000) esp packets to unprotected port.
First one will produce an error “IPSEC_ESP: failed crypto op” in the IPsec application, but it will setup the SA. Second one will decrypt and send back the decrypted packet.
Check burst(1000) packets which have been decapsulated received from protected port.
Send burst(8192) esp packets to unprotected port.
Check burst(8192) packets which have been decapsulated can’t be received from protected port.
Set jumbo frames size as 9000, start it with port 1 assigned to unprotected mode:
sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev
"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
0x2 -j 9000 --config="(0,0,20),(1,0,21)" -f ./dec.cfg
Send two burst(8192) esp packets to unprotected port.
First one will produce an error “IPSEC_ESP: failed crypto op” in the IPsec application, but it will setup the SA. Second one will decrypt and send back the decrypted packet.
Check burst(8192) packets which have been decapsulated received from protected port.
Send burst(9000) esp packets to unprotected port.
Check burst(9000) packets which have been decapsulated can’t be received from protected port.
Test Case: IPSec Decryption with RSS¶
Create configuration file with multiple SA rule for different ip address.
Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:
sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1 --vdev
"crypto_null" --log-level 8 --socket-mem 1024,1 -- -p 0xf -P -u
0x2 --config="(0,0,20),(0,1,20),(1,0,21),(1,1,21)" -f ./dec_rss.cfg
Send two burst(32) esp packets with different ip to unprotected port.
First one will produce an error “IPSEC_ESP: failed crypto op” in the IPsec application, but it will setup the SA. Second one will decrypt and send back the decrypted packet.
Check burst(32) packets which have been decapsulated received from queue 0 and 1 on protected port.
Test Case: IPSec Encryption/Decryption simultaneously¶
Start ipsec-secgw with two 82599 ports and assign port 1 to unprotected mode:
sudo ./build/ipsec-secgw -l 20,21 -w 83:00.0 -w 83:00.1
--vdev "crypto_null" --log-level 8 --socket-mem 1024,1
-- -p 0xf -P -u 0x2 --config="(0,0,20),(1,0,21)" -f ./enc_dec.cfg
Send normal and esp packets to protected and unprotected ports simultaneously.
Note when testing inbound IPSec, first one will produce an error “IPSEC_ESP: invalid padding” in the IPsec application, but it will setup the SA. Second one will decrypt and send back the decrypted packet.
Check esp and normal packets received from unprotected and protected ports.
Eventdev Pipeline SW PMD Tests¶
Prerequisites¶
Test Case 1: Keep the packets order with one ordered stage in single-flow and multi-flow¶
Description: the sample only guarantees packet order with a single stage.
1. Run the sample with the below command:
# ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -o -D
Parameters:
-r2, -t4, -e8: allocate cores to rx, tx and the scheduler
-w: allocate cores to the workers
-s1: the sample only contains 1 stage
-n0: the sample will run forever without a packet count limit
- Send traffic from the ixia device with the same 5-tuple (single-flow) and with different 5-tuples (multi-flow).
- Observe the packets received by the ixia device and check the packet order (a scapy-based order check is sketched below).
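Where packet order has to be checked without the IXIA analyzer, one hedged approach is to embed a sequence number in each payload and verify monotonicity on the receive side. The interface names and the DUT MAC are assumptions; this assumes a two-port tester wired to the DUT rx and tx ports.

import struct
import time
from scapy.all import AsyncSniffer, Ether, IP, UDP, Raw, sendp

TX_IFACE = "ens260f0"            # assumption: tester port wired to the DUT rx port
RX_IFACE = "ens260f1"            # assumption: tester port wired to the DUT tx port
DST_MAC = "00:00:00:00:01:00"    # assumption: MAC of the DUT rx port

N = 1000
# same 5-tuple for every packet => a single flow; the 4-byte prefix is a sequence number
pkts = [Ether(dst=DST_MAC) / IP(src="1.1.1.1", dst="2.2.2.2") /
        UDP(sport=1024, dport=1024) / Raw(struct.pack(">I", i) + b"x" * 60)
        for i in range(N)]

sniffer = AsyncSniffer(iface=RX_IFACE, lfilter=lambda p: UDP in p and Raw in p)
sniffer.start()
time.sleep(1)
sendp(pkts, iface=TX_IFACE, verbose=False)
time.sleep(5)
rx = sniffer.stop()

seqs = [struct.unpack(">I", bytes(p[Raw])[:4])[0] for p in rx]
print("received %d packets, in order: %s" % (len(seqs), seqs == sorted(seqs)))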
Test Case 2: Keep the packets order with atomic stage in single-flow and multi-flow¶
Description: the order of packets that pass through the same flow should be guaranteed.
1. Run the sample with the below command:
# ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s2 -n0 -c32 -W1000 -a -D
- Send traffic from the ixia device with the same 5-tuple (single-flow) and with different 5-tuples (multi-flow).
- Observe the packets received by the ixia device; ensure packets in each flow remain in order, but note that flows may be re-ordered relative to each other.
Test Case 3: Check load-balance behavior with atomic type in single-flow and multi-flow situations¶
Description: in the multi-flow situation, the sample should show good load-balanced behavior; in the single-flow situation, load-balanced behavior is not guaranteed.
1. Run the sample with the below command:
# ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -a -D
2. Use a traffic generator to send a huge number of packets: in the single-flow situation, the traffic generator sends packets with the same 5-tuple, which is used to calculate the RSS value; in the multi-flow situation, the traffic generator sends packets with different 5-tuples.
- Check the load-balance behavior by the workload of every worker.
Test Case 4: Check load-balance behavior with order type stage in single-flow and multi-flow situations¶
Description: A good load-balanced behavior should be guaranteed in both single-flow and multi-flow situations.
1. Run the sample with the below command:
# ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -o -D
2. Use a traffic generator to send a huge number of packets: in the single-flow situation, the traffic generator sends packets with the same 5-tuple, which is used to calculate the RSS value; in the multi-flow situation, the traffic generator sends packets with different 5-tuples.
- Check the load-balance behavior by the workload of every worker.
Test Case 5: Check load-balance behavior with parallel type stage in single-flow and multi-flow situations¶
Description: A good load-balanced behavior should be guaranteed in both single-flow and multi-flow situations.
1. Run the sample with the below command:
# ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32 -W1000 -p -D
2. Use a traffic generator to send a huge number of packets: in the single-flow situation, the traffic generator sends packets with the same 5-tuple, which is used to calculate the RSS value; in the multi-flow situation, the traffic generator sends packets with different 5-tuples.
- Check the load-balance behavior by the workload of every worker.
Test Case 6: Performance test for atomic type of stage¶
Description: execute the performance test with the atomic type of stage in the single-flow and multi-flow situations. We use 4 workers and 2 stages as the test background.
1. Run the sample with the below command:
# ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32
- Use a traffic generator to send a huge number of packets (with the same 5-tuple and with different 5-tuples).
- Observe the rate of packets received.
Test Case 7: Performance test for parallel type of stage¶
Description: execute the performance test with the parallel type of stage in the single-flow and multi-flow situations. We use 4 workers and 2 stages as the test background.
1. Run the sample with the below command:
# ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32
- Use a traffic generator to send a huge number of packets (with the same 5-tuple and with different 5-tuples).
- Observe the rate of packets received.
Test Case 8: Performance test for ordered type of stage¶
Description: execute the performance test with the ordered type of stage in the single-flow and multi-flow situations. We use 4 workers and 2 stages as the test background.
1. Run the sample with the below command:
# ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32
- Use a traffic generator to send a huge number of packets (with the same 5-tuple and with different 5-tuples).
- Observe the rate of packets received.
Test Case 9: Basic forward test for all types of stage¶
Description: Execute the basic forward test with all types of stage.
1. Run the sample with the below command: # ./build/eventdev_pipeline_sw_pmd --vdev event_sw0 -- -r2 -t4 -e8 -w F0 -s1 -n0 -c32
- Use the traffic generator to send some packets and verify that the sample forwards them normally.
Fortville Dynamic Mapping of Flow Types to PCTYPEs Tests¶
More protocols can be added dynamically using dynamic device personalization profiles (DDP).
A packet can be identified by hardware as a particular flow type. Different NIC hardware may support different flow types. Basically, the NIC hardware identifies the flow type by parsing as deeply into the protocol stack as possible, and each packet matches exactly one flow type. To address the requirement of configuring new PCTYPEs for post filters (RSS/FDIR), a set of functions providing dynamic HW PCTYPE to SW RTE_ETH_FLOW type mapping is proposed.
Dynamic flow type mapping eliminates the use of hard-coded flow types in bulky if-else statements, for instance when configuring the hash enable flags for RSS in the i40e_config_hena() function, and makes partitioning FVL support in the i40e PMD more scalable.
I40e PCTYPEs are statically mapped to RTE_ETH_FLOW_* types in DPDK, defined in rte_eth_ctrl.h, and flow types are used to define the ETH_RSS_* offload types in rte_ethdev.h. RTE_ETH_FLOW_MAX is currently defined as 22, which leaves 42 flow types unassigned.
The new GTP protocol can be decomposed into separate protocols, GTP-C and GTP-U. According to the DDP profile, the GTP PCTYPEs are listed below:
22 - GTP-U IPv4
23 - GTP-U IPv6
24 - GTP-U PAY4
25 - GTP-C PAY4
Select flow type values between 23 and 63; the PCTYPE to flow type mapping is as below:
+-------------+------------+------------+
| Packet Type | PCTypes | Flow Types |
+-------------+------------+------------+
| GTP-U IPv4 | 22 | 26 |
+-------------+------------+------------+
| GTP-U IPv6 | 23 | 23 |
+-------------+------------+------------+
| GTP-U PAY4 | 24 | 24 |
+-------------+------------+------------+
| GTP-C PAY4 | 25 | 25 |
+-------------+------------+------------+
Prerequisites¶
Host PF in DPDK driver:
./tools/dpdk-devbind.py -b igb_uio 81:00.0
Start testpmd on host, set chained port topology mode, add txq/rxq to enable multi-queues. In general, PF’s max queue is 64:
./testpmd -c f -n 4 -- -i --port-topology=chained --txq=64 --rxq=64
Set rxonly forwarding and enable output
Test Case: Load dynamic device personalization¶
Stop testpmd port before loading profile:
testpmd > port stop all
Load profile gtp.pkgo which is a binary file:
testpmd > ddp add (port_id) (profile_path)
Start testpmd port:
testpmd > port start all
Note:
The gtp.pkgo profile has not been released publicly yet; only an engineering version is available for internal use so far. The plan is to keep public reference profiles at the Intel Developer Zone and to supply links to released versions of the profiles later.
Loading the DDP profile is a prerequisite for the dynamic-mapping-related cases below. Use a global reset or the lanconf tool to recover the original setting. The global reset trigger register is 0xb8190; the first command is a core reset, the second is a global reset:
testpmd> write reg 0 0xb8190 1
testpmd> write reg 0 0xb8190 2
Test Case: Check profile info correctness¶
Check the profile information correctness, including used protocols, packet classification types, defined packet types and so on; there should be no core dump or crash issue:
testpmd> ddp get info <profile_path>
Test Case: Reset flow type to pctype mapping¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update the GTP-U IPv4 flow type id 26 to pctype id 22 mapping item:
testpmd> port config 0 pctype mapping update 22 26
Check that the mapping table now contains the flow type 26 mapping:
testpmd> show port 0 pctype mapping
Reset flow type to pctype mapping to default value:
testpmd> port config 0 pctype mapping reset
Check that the mapping table no longer contains the flow type 26 mapping:
testpmd> show port 0 pctype mapping
Start testpmd
Send a normal packet to the port, check that RSS works and PKT_RX_RSS_HASH is printed:
>>> p=Ether()/IP()/Raw('x'*20)
Test Case: Update flow type to GTP-U IPv4 pctype mapping item¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update the GTP-U IPv4 flow type id 26 to pctype id 22 mapping item:
testpmd> port config 0 pctype mapping update 22 26
Check that the flow type to pctype mapping now contains the flow type 26 mapping:
testpmd> show port 0 pctype mapping
Add udp key to hash input set for flow type id 26 on port 0:
testpmd> set_hash_input_set 0 26 udp-key add
Enable flow type id 26’s RSS:
testpmd> port config all rss 26
Start testpmd
Send GTP-U IPv4 packets, check RSS could work, print PKT_RX_RSS_HASH:
>>> p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/Raw('x'*20)
>>> p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/Raw('x'*20)
Send GTP-U IPv6, GTP-U PAY4 and GTP-C PAY4 packets; check that the packets are received on queue 0 and do not have the PKT_RX_RSS_HASH print.
Test Case: Update flow type to GTP-U IPv6 pctype mapping item¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update the GTP-U IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Check that the flow type to pctype mapping now contains the flow type 23 mapping:
testpmd> show port 0 pctype mapping
Add udp key to hash input set for flow type id 23 on port 0:
testpmd> set_hash_input_set 0 23 udp-key add
Enable flow type id 23’s RSS:
testpmd> port config all rss 23
Start testpmd
Send GTP-U IPv6 packets, check RSS could work, print PKT_RX_RSS_HASH:
>>> p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/Raw('x'*20)
>>> p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/Raw('x'*20)
Send GTP-U IPv4, GTP-U PAY4 and GTP-C PAY4 packets; check that the packets are received on queue 0 and do not have the PKT_RX_RSS_HASH print.
Test Case: Update flow type to GTP-U PAY4 pctype mapping item¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update the GTP-U PAY4 flow type id 24 to pctype id 24 mapping item:
testpmd> port config 0 pctype mapping update 24 24
Check that the flow type to pctype mapping now contains the flow type 24 mapping:
testpmd> show port 0 pctype mapping
Add udp key to hash input set for flow type id 24 on port 0:
testpmd> set_hash_input_set 0 24 udp-key add
Enable flow type id 24’s RSS:
testpmd> port config all rss 24
Start testpmd
Send GTP-U PAY4 packets, check that RSS works and PKT_RX_RSS_HASH is printed:
>>> p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/Raw('x'*20)
>>> p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/Raw('x'*20)
Send GTP-U IPv4, GTP-U IPv6 and GTP-C PAY4 packets; check that the packets are received on queue 0 and do not have the PKT_RX_RSS_HASH print.
Test Case: Update flow type to GTP-C PAY4 pctype mapping item¶
Check flow ptype to pctype mapping:
testpmd> show port 0 pctype mapping
Update the GTP-C PAY4 flow type id 25 to pctype id 25 mapping item:
testpmd> port config 0 pctype mapping update 25 25
Check that the flow type to pctype mapping now contains the flow type 25 mapping.
Add udp key to hash input set for flow type id 25 on port 0:
testpmd> set_hash_input_set 0 25 udp-key add
Enable flow type id 25’s RSS:
testpmd> port config all rss 25
Start testpmd
Send GTP-C PAY4 packets, check RSS could work, print PKT_RX_RSS_HASH:
>>> p=Ether()/IP()/UDP(dport=2123)/GTP_U_Header()/Raw('x'*20)
>>> p=Ether()/IPv6()/UDP(dport=2123)/GTP_U_Header()/Raw('x'*20)
Send GTP-U IPv4, GTP-U IPv6 and GTP-U PAY4 packets; check that the packets are received on queue 0 and do not have the PKT_RX_RSS_HASH print.
GTP packet¶
Note:
All GTP packets supported by the profile are listed below; you can also use "ddp get info gtp.pkgo" to check the profile information. The number on the left is the ptype value, and the entries on the right are the layer types:
167: IPV4, GTP-C, PAY4
Scapy 2.3.3+ supports sending GTP packets. Please check that your scapy installation can send the different GTP packet types below successfully before running the above tests.
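As a quick sanity check, the sketch below (assuming the interface name "eth17" used in the examples in this plan) loads scapy's GTP contrib layer, which provides GTP_U_Header, and sends one GTP-U packet:
from scapy.all import Ether, IP, UDP, Raw, load_contrib, sendp
load_contrib("gtp")                          # provides GTP_U_Header (scapy >= 2.3.3)
from scapy.contrib.gtp import GTP_U_Header

p = Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0x123456)/IP()/Raw('x'*20)
p.show2()                                    # verify the layers dissect as expected
sendp(p, iface="eth17")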
GTP-C packet types¶
167: IPV4, GTP-C, PAY4:
p=Ether()/IP()/UDP(dport=2123)/GTP_U_Header()/Raw('x'*20)
168: IPV6, GTP-C, PAY4:
p=Ether()/IPv6()/UDP(dport=2123)/GTP_U_Header()/Raw('x'*20)
GTP-U data packet types, IPv4 transport, IPv4 payload¶
169: IPV4 GTPU IPV4 PAY3:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/Raw('x'*20)
170: IPV4 GTPU IPV4FRAG PAY3:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP(frag=5)/Raw('x'*20)
171: IPV4 GTPU IPV4 UDP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/UDP()/Raw('x'*20)
172: IPV4 GTPU IPV4 TCP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/TCP()/Raw('x'*20)
173: IPV4 GTPU IPV4 SCTP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/SCTP()/Raw('x'*20)
174: IPV4 GTPU IPV4 ICMP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/ICMP()/Raw('x'*20)
GTP-U data packet types, IPv6 transport, IPv4 payload¶
175: IPV6 GTPU IPV4 PAY3:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/Raw('x'*20)
176: IPV6 GTPU IPV4FRAG PAY3:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(frag=5)/Raw('x'*20)
177: IPV6 GTPU IPV4 UDP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/UDP()/Raw('x'*20)
178: IPV6 GTPU IPV4 TCP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/TCP()/Raw('x'*20)
179: IPV6 GTPU IPV4 SCTP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/SCTP()/Raw('x'*20)
180: IPV6 GTPU IPV4 ICMP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/ICMP()/Raw('x'*20)
GTP-U control packet types¶
181: IPV4, GTP-U, PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/Raw('x'*20)
182: IPV6, GTP-U, PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/Raw('x'*20)
GTP-U data packet types, IPv4 transport, IPv6 payload¶
183: IPV4 GTPU IPV6FRAG PAY3:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/IPv6ExtHdrFragment()/Raw('x'*20)
184: IPV4 GTPU IPV6 PAY3:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/Raw('x'*20)
185: IPV4 GTPU IPV6 UDP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/UDP()/Raw('x'*20)
186: IPV4 GTPU IPV6 TCP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/TCP()/Raw('x'*20)
187: IPV4 GTPU IPV6 SCTP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/SCTP()/Raw('x'*20)
188: IPV4 GTPU IPV6 ICMPV6 PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6(nh=58)/ICMP()/Raw('x'*20)
GTP-U data packet types, IPv6 transport, IPv6 payload¶
189: IPV6 GTPU IPV6 PAY3:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/Raw('x'*20)
190: IPV6 GTPU IPV6FRAG PAY3:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/IPv6ExtHdrFragment()/Raw('x'*20)
191: IPV6 GTPU IPV6 UDP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/UDP()/Raw('x'*20)
113: IPV6 GTPU IPV6 TCP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/TCP()/Raw('x'*20)
120: IPV6 GTPU IPV6 SCTP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/SCTP()/Raw('x'*20)
128: IPV6 GTPU IPV6 ICMPV6 PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6(nh=58)/ICMP()/Raw('x'*20)
VFD as SRIOV Policy Manager Tests¶
VFD is an SRIOV Policy Manager (daemon) running on the host that allows configuration not supported by the kernel NIC driver; it supports ixgbe and i40e NICs. It runs on the host to make policy decisions about what a VF can and cannot do to the PF. Only the DPDK PF provides a callback to implement these features; the normal kernel drivers do not have the callback and so do not support the features. Information is passed to the application controlling the PF when a VF mailbox event is received, such as those listed below, so that action can be taken based on host policy, e.g. stopping VM1 from asking for something that compromises VM2. DPDK PF + kernel VF mode is used to verify the features below.
Test Case 1: Set up environment and load driver¶
Get the pci device id of DUT, load ixgbe driver to required version, take Niantic for example:
rmmod ixgbe
insmod ixgbe.ko
Host PF in DPDK driver. Create VFs from PF with dpdk driver:
./tools/dpdk-devbind.py -b igb_uio 05:00.0
echo 2 >/sys/bus/pci/devices/0000\:05\:00.0/max_vfs
Check ixgbevf version and update ixgbevf to required version
Detach VFs from the host:
rmmod ixgbevf
Pass through VF 05:10.0 and 05:10.2 to VM0, then start and log in to VM0
Check ixgbevf version in VM and update to required version
Test Case 2: Link¶
Pre-environment:
(1)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0, start VM0
(2)Load host DPDK driver and VM0 kernel driver
Steps:
Enable multi-queues to start DPDK PF:
./testpmd -c f -n 4 -- -i --rxq=4 --txq=4
Link up kernel VF and expect VF link up
Link down kernel VF and expect VF link down
Repeat the above steps 2-3 100 times; expect no crash or core dump issues.
Test Case 3: ping¶
Pre-environment:
(1)Establish link with link partner.
(2)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0, start VM0
(3)Load host DPDK driver and VM0 kernel driver
Steps:
- Ifconfig IP on VF0 and VF1
- Ifconfig IP on link partner PF, name as tester PF
- Start inbound and outbound pings, check ping successfully.
- Link down the devx, stop the pings, link up the devx, then restart the pings; check that the port can ping successfully.
- Repeat steps 3-4 five times
Test Case 4: reset¶
Pre-environment:
(1)Establish link with link partner.
(2)Host one DPDK PF and create two VFs, pass through VF0 to VM0 and VF1 to VM1, start VM0 and VM1
(3)Load host DPDK driver and VM kernel driver
Steps:
- Check host testpmd and PF at link up status
- Link up VF0 in VM0 and VF1 in VM1
- Link down VF1 in VM1 and check no impact on VF0 status
- Unload VF1 kernel driver and expect no impact on VF0
- Use tcpdump to dump packet on VF0
- Send packets to VF0 using IXIA or scapy tool, expect RX successfully
- Link down and up the DPDK PF, ensure that the VF recovers and continues to receive packets.
- Load VF1 kernel driver and expect no impact on VF0
- Send packets to VF0 using IXIA or scapy tool, expect RX successfully
Test Case 5: add/delete IP/MAC address¶
Pre-environment:
(1)Establish link with link partner.
(2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
(3)Load host DPDK driver and VM0 kernel driver
Steps:
Ifconfig IP on kernel VF0
Ifconfig IP on link partner PF, name as tester PF
Kernel VF0 ping tester PF, tester PF ping kernel VF0
Add an IPv6 address on kernel VF0 (e.g. ens3):
ifconfig ens3 add efdd::9fc8:6a6d:c232:f1c0
Delete IPv6 on kernel VF:
ifconfig ens3 del efdd::9fc8:6a6d:c232:f1c0
Modify MAC address on kernel VF:
ifconfig ens3 hw ether 00:AA:BB:CC:dd:EE
Send a packet to the modified MAC address and expect the VF to receive it successfully.
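A minimal scapy example of this step, assuming a placeholder tester interface name:
from scapy.all import Ether, IP, Raw, sendp
# Destination is the MAC configured on the kernel VF in the previous step.
sendp(Ether(dst="00:AA:BB:CC:dd:EE")/IP()/Raw('x'*20), iface="tester_iface0")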
Test Case 6: add/delete vlan¶
Pre-environment:
(1)Establish link with link partner.
(2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
(3)Load host DPDK driver and VM0 kernel driver
Steps:
Add a random vlan id (0~4095) on kernel VF0 (e.g. ens3), taking vlan id 51 as an example:
modprobe 8021q
vconfig add ens3 51
Check add vlan id successfully, expect to have ens3.51 device:
ls /proc/net/vlan
Send a packet from the tester to the VF MAC with a non-matching vlan id and check that the packet can't be received at the vlan device.
Send a packet from the tester to the VF MAC with the matching vlan id and check that the packet can be received at the vlan device (see the scapy sketch after this test case).
Delete configured vlan device:
vconfig rem ens3.51
Check delete vlan id 51 successfully
Send a packet from the tester to the VF MAC with vlan id 51 and check that the packet can't be received at the VF.
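A minimal scapy sketch of the matching / non-matching vlan steps above (the tester interface and VF MAC are placeholders):
from scapy.all import Ether, Dot1Q, IP, Raw, sendp

iface = "tester_iface0"          # hypothetical tester-side interface
vf_mac = "00:11:22:33:44:55"     # hypothetical kernel VF MAC

# Matching vlan id 51: should be received at the ens3.51 vlan device.
sendp(Ether(dst=vf_mac)/Dot1Q(vlan=51)/IP()/Raw('x'*20), iface=iface)
# Non-matching vlan id: should not be seen at the vlan device.
sendp(Ether(dst=vf_mac)/Dot1Q(vlan=52)/IP()/Raw('x'*20), iface=iface)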
Test Case 7: Get packet statistic¶
Pre-environment:
(1)Establish link with link partner.
(2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
(3)Load host DPDK driver and VM0 kernel driver
Steps:
Send packet to kernel VF0 mac
Check that the packet statistics increase correctly:
ethtool -S ens3
Test Case 8: MTU¶
Pre-environment:
(1)Establish link with link partner.
(2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
(3)Load host DPDK driver and VM0 kernel driver
Steps:
Check the DPDK PF and kernel VF MTU; the default is 1500
Use scapy to send one packet of length 2000 with the DPDK PF MAC as the destination MAC, and check that the DPDK PF can't receive the packet (see the scapy sketch after this test case)
Use scapy to send one packet of length 2000 with the kernel VF MAC as the destination MAC, and check that the kernel VF can't receive the packet
Change the DPDK PF MTU to 3000 and check there is no confusion/crash on the kernel VF:
testpmd> port stop all
testpmd> port config mtu 0 3000
testpmd> port start all
Use scapy to send one packet of length 2000 with the DPDK PF MAC as the destination MAC, and check that the DPDK PF can receive the packet
Change the kernel VF MTU to 3000 and check there is no confusion/crash on the DPDK PF:
ifconfig eth0 mtu 3000
Use scapy to send one packet of length 2000 with the kernel VF MAC as the destination MAC, and check that the kernel VF can receive the packet
Note: due to a HW limitation on 82599, "--max-pkt-len=<length>" needs to be added to testpmd to set the MTU value; all the VFs and the PF share the same MTU, and the largest one takes effect.
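A minimal scapy sketch for the oversized-packet steps above (interface name and MACs are placeholders); the payload is padded so the whole frame is about 2000 bytes, above the default 1500 MTU but below 3000:
from scapy.all import Ether, IP, Raw, sendp

iface = "tester_iface0"          # hypothetical tester-side interface
pf_mac = "00:11:22:33:44:55"     # hypothetical DPDK PF MAC
vf_mac = "00:11:22:33:44:66"     # hypothetical kernel VF MAC

payload = Raw('x' * (2000 - 14 - 20))   # 2000-byte frame = Ether(14) + IP(20) + payload
sendp(Ether(dst=pf_mac)/IP()/payload, iface=iface)
sendp(Ether(dst=vf_mac)/IP()/payload, iface=iface)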
Test Case 9: Enable/disable promisc mode¶
Pre-environment:
(1)Establish link with link partner.
(2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
(3)Load host DPDK driver and VM0 kernel driver
Steps:
Start DPDK PF, enable promisc mode, set rxonly forwarding
Set up tcpdump on the kernel VF without the -p parameter; running tcpdump without/with the -p parameter enables/disables promiscuous mode:
sudo tcpdump -i ens3 -n -e -vv
Send a packet from the tester with a random destination MAC (see the scapy sketch after this test case), and check the packet can be received by both the DPDK PF and the kernel VF
Disable DPDK PF promisc mode
Set up kernel VF tcpdump with -p parameter, which means disable promisc mode:
sudo tcpdump -i ens3 -n -e -vv -p
Send packet from tester with random DST MAC, check the packet can’t be received by DPDK PF and kernel VF
Send packet from tester to VF with correct DST MAC, check the packet can be received by kernel VF
Send packet from tester to PF with correct DST MAC, check the packet can be received by DPDK PF
Note: Niantic NICs do not support this case.
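A minimal scapy sketch for the random destination MAC used above (the interface name is a placeholder); RandMAC() yields an address that matches neither the PF nor the VF, so the packets are only visible while promiscuous mode is enabled:
from scapy.all import Ether, IP, Raw, RandMAC, sendp

sendp(Ether(dst=RandMAC())/IP()/Raw('x'*20), iface="tester_iface0", count=10)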
Test Case 10: RSS¶
Pre-environment:
(1)Establish link with link partner.
(2)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
(3)Load host DPDK driver and VM0 kernel driver
Steps:
- Verify kernel VF RSS using "ethtool -l <devx>" (lower-case L): the default RSS setting should equal the number of CPUs in the system, and the maximum number of RSS queues displayed should be correct for the DUT
- Run "ethtool -S <devx> | grep rx_bytes | column" to see the current queue counters and verify that they are consistent with step 1
- Send multi-threaded traffic to the DUT with a number of threads
- Check that each kernel VF queue can receive packets
Note: Niantic NICs do not support this case.
Test Case 11: DPDK PF + kernel VF + DPDK VF¶
Pre-environment:
(1)Establish link with IXIA.
(2)Host one DPDK PF and create two VFs, pass through VF0 and VF1 to VM0, start VM0
(3)Load host DPDK driver, VM0 DPDK driver and kernel driver
Steps:
- Check DPDK testpmd and PF at link up status
- Bind kernel VF0 to igb_uio
- Link up DPDK VF0
- Link up kernel VF1
- Start DPDK VF0, enable promisc mode and set rxonly forwarding
- Set up kernel VF1 tcpdump without -p parameter on promisc mode
- Create 2 streams on IXIA, set DST MAC as each VF MAC, transmit these 2 streams at the same time, check DPDK VF0 and kernel VF1 can receive packet successfully
- Check that DPDK VF0 and kernel VF1 do not impact each other and there is no performance drop over 10 minutes
Test Case 12: DPDK PF + 2 kernel VFs + 2 DPDK VFs + 2 VMs¶
Pre-environment:
(1)Establish link with IXIA.
(2)Host one DPDK PF and create 6 VFs, pass through VF0, VF1, VF2 and VF3 to VM0, pass through VF4, VF5 to VM1, start VM0 and VM1
(3)Load host DPDK driver, VM DPDK driver and kernel driver
Steps:
- Check DPDK testpmd and PF at link up status
- Bind kernel VF0, VF1 to igb_uio in VM0, bind kernel VF4 to igb_uio in VM1
- Link up DPDK VF0,VF1 in VM0, link up DPDK VF4 in VM1
- Link up kernel VF2, VF3 in VM0, link up kernel VF5 in VM1
- Start DPDK VF0, VF1 in VM0 and VF4 in VM1, enable promisc mode and set rxonly forwarding
- Set up kernel VF2, VF3 in VM0 and VF5 in VM1 tcpdump without -p parameter on promisc mode
- Create 6 streams on IXIA, set DST MAC as each VF MAC, transmit 6 streams at the same time, expect RX successfully
- Link down DPDK VF0 and expect no impact on other VFs
- Link down kernel VF2 and expect no impact on other VFs
- Quit VF4 DPDK testpmd and expect no impact on other VFs
- Unload VF5 kernel driver and expect no impact on other VFs
- Reboot VM1 and expect no impact on VM0’s VFs
Test Case 13: Load kernel driver stress¶
Pre-environment:
(1)Host one DPDK PF and create one VF, pass through VF0 to VM0, start VM0
(2)Load host DPDK driver and VM0 kernel driver
Steps:
- Check DPDK testpmd and PF at link up status
- Unload kernel VF0 driver
- Load kernel VF0 driver
- Write a script to repeat steps 2 and 3 100 times as a stress test
- Check that there is no error/crash and the system works normally
Multiple Pthread Test¶
Description¶
This test is a basic multiple-pthread test which demonstrates the basics of control groups. Cgroup is a Linux kernel feature that limits, accounts for and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. Here the focus is on CPU usage.
Prerequisites¶
The igb_uio driver must be supported and the kernel must be 3.11+. Use "modprobe uio" and "modprobe igb_uio", then use "./tools/dpdk_nic_bind.py --bind=igb_uio device_bus_id" to bind the ports.
Assuming that an Intel DPDK build has been set up and the testpmd application has been built.
OS required: Linux and FreeBSD. The commands used in this test plan are for Linux only.
The format pattern:
--lcores='<lcore_set>[@cpu_set][,<lcore_set>[@cpu_set],...]'
'lcore_set' and 'cpu_set' can be a single number, a range or a group. A number is a digit ([0-9]+); a range is "<number>-<number>"; a group is "(<number|range>[,<number|range>,...])". If a '@cpu_set' value is not supplied, the value of 'cpu_set' defaults to the value of 'lcore_set'. For example, "--lcores='1,2@(5-7),(3-5)@(0,2),(0,6),7-8'" starts 9 EAL threads (see the mask sketch after this list):
lcore 0 runs on cpuset 0x41 (cpu 0,6);
lcore 1 runs on cpuset 0x2 (cpu 1);
lcore 2 runs on cpuset 0xe0 (cpu 5,6,7);
lcore 3,4,5 runs on cpuset 0x5 (cpu 0,2);
lcore 6 runs on cpuset 0x41 (cpu 0,6);
lcore 7 runs on cpuset 0x80 (cpu 7);
lcore 8 runs on cpuset 0x100 (cpu 8).
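The cpuset masks above are just bit masks of the CPU numbers; the small illustrative Python sketch below shows how they are derived:
def cpuset_mask(cpus):
    # Each CPU n contributes bit (1 << n); the cpuset is the OR of all bits.
    mask = 0
    for c in cpus:
        mask |= 1 << c
    return hex(mask)

print(cpuset_mask([0, 6]))      # 0x41 -> lcores 0 and 6
print(cpuset_mask([5, 6, 7]))   # 0xe0 -> lcore 2
print(cpuset_mask([0, 2]))      # 0x5  -> lcores 3,4,5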
Test Case 1: Basic operation¶
To run the application, start testpmd with the lcores all running as threads on the assigned CPUs, with the command as follows:
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='0@8,(4-5)@9' -n 4 -- -i
Use the following command to make sure the lcores are initialized on the correct CPUs:
ps -C testpmd -L -opid,tid,%cpu,psr,args
Result as follows:
PID TID %CPU PSR COMMAND
31038 31038 22.5 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
31038 31039 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
31038 31040 0.0 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
31038 31041 0.0 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
31038 31042 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
The TIDs correspond to these threads as below:
+-------+-----------------+
| TID   | THREAD          |
+-------+-----------------+
| 31038 | Master thread   |
+-------+-----------------+
| 31039 | Eal-intr-thread |
+-------+-----------------+
| 31040 | Lcore-slave-4   |
+-------+-----------------+
| 31041 | Lcore-slave-5   |
+-------+-----------------+
| 31042 | Pdump-thread    |
+-------+-----------------+
Before running the test, make sure the cores are unique ones; otherwise the throughput will float across different cores. Configure lcores 4 and 5 to be used for packet forwarding, with the following command:
testpmd>set corelist 4,5
Note that "set corelist" needs to be configured before start; otherwise it will not take effect:
testpmd>start
Check forward configuration:
testpmd>show config fwd
Logical Core 4 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
Logical Core 5 (socket 0) forwards packets on 1 streams:
RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
Send packets continuously, then check the thread CPU usage again:
PID TID %CPU PSR COMMAND
31038 31038 0.6 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
31038 31039 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
31038 31040 1.5 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
31038 31041 1.5 9 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
31038 31042 0.0 8 ./x86_64-native-linuxapp-gcc/app/testpmd --lcores=0@8,(4-5)@9 -n 4 -- -i
You can see that TID 31040 (Lcore 4) and TID 31041 (Lcore 5) are running.
Test Case 2: Positive Test¶
Input random valid commands to make sure the commands work. Examples are given below, assuming the DUT has 128 CPU cores.
Case 1:
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='0@8,(4-5)@(8-11)' -n 4 -- -i
It starts 3 EAL threads:
lcore 0 runs on cpuset 0x100 (cpu 8);
lcore 4,5 runs on cpuset 0xf00 (cpu 8,9,10,11).
Case 2:
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='1,2@(0-4,6),(3-4,6)@5,(7,8)' -n 4 -- -i
It starts 7 EAL threads:
lcore 1 runs on cpuset 0x2 (cpu 1);
lcore 2 runs on cpuset 0x5f (cpu 0,1,2,3,4,6);
lcore 3,4,6 runs on cpuset 0x20 (cpu 5);
lcore 7 runs on cpuset 0x80 (cpu 7);
lcore 8 runs on cpuset 0x100 (cpu 8).
Case 3:
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,CONFIG_RTE_MAX_LCORE-1)@(4,5)' -n 4 -- -i
(default CONFIG_RTE_MAX_LCORE=128). It starts 2 EAL threads:
lcore 0,127 runs on cpuset 0x30 (cpu 4,5).
Case 4:
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,64-66)@(4,5)' -n 4 -- -i
It starts 4 EAL threads:
lcore 0,64,65,66 runs on cpuset 0x30 (cpu 4,5).
Case 5:
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2-5,6,7-9' -n 4 -- -i
It starts 8 EAL threads:
lcore 2 runs on cpuset 0x4 (cpu 2);
lcore 3 runs on cpuset 0x8 (cpu 3);
lcore 4 runs on cpuset 0x10 (cpu 4);
lcore 5 runs on cpuset 0x20 (cpu 5);
lcore 6 runs on cpuset 0x40 (cpu 6);
lcore 7 runs on cpuset 0x80 (cpu 7);
lcore 8 runs on cpuset 0x100 (cpu 8);
lcore 9 runs on cpuset 0x200 (cpu 9).
Case 6:
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,(3-5)@3' -n 4 -- -i
It starts 4 EAL threads:
lcore 2 runs on cpuset 0x4 (cpu 2);
lcore 3,4,5 runs on cpuset 0x8 (cpu 3).
Case 7:
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,7-4)@(4,5)' -n 4 -- -i
It starts 5 EAL threads:
lcore 0,4,5,6,7 runs on cpuset 0x30 (cpu 4,5)
Test Case 3: Negative Test¶
Input invalid commands to make sure the commands can’t work:
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0-,4-7)@(4,5)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(-1,4-7)@(4,5)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7-9)@(4,5)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,abcd)@(4,5)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(1-,5)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(-1,5)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(4,5-8-9)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(abc,5)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)@(4,xyz)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0,4-7)=(8,9)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,3@4,(0-1,,4))' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='[0-,4-7]@(4,5)' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='(0-,4-7)@[4,5]' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='3-4@3,2@5-6' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,,3''2--3' -n 4 -- -i
./x86_64-native-linuxapp-gcc/app/testpmd --lcores='2,,,3''2--3' -n 4 -- -i
Fortville Cloud filters for QinQ steering Tests¶
This document provides the test plan for testing the Fortville QinQ filter function.
Prerequisites¶
- 1. Hardware:
- Fortville HarborChannel_DP_OEMGEN_8MB_J24798-001_0.65_80002DA4 firmware-version: 5.70 0x80002da4 1.3908.0 (fortville 25G) or 6.0.0+
- 2. Software:
- dpdk: http://dpdk.org/git/dpdk
- scapy: http://www.secdev.org/projects/scapy/
- disable vector mode when building dpdk
Test Case 1: test qinq packet type¶
Testpmd configuration - 4 RX/TX queues per port¶
set up testpmd with fortville NICs:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff --disable-rss
enable qinq:
testpmd command: vlan set qinq on 0
PMD fwd only receive the packets:
testpmd command: set fwd rxonly
verbose configuration:
testpmd command: set verbose 1
start packet receive:
testpmd command: start
tester Configuration¶
send dual vlan packet with scapy, verify it can be recognized as qinq packet:
sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=2)/Dot1Q(type=0x8100,vlan=3)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17")
Test Case 2: qinq packet filter to PF queues¶
Testpmd configuration - 4 RX/TX queues per port¶
set up testpmd with fortville NICs:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff --disable-rss
enable qinq:
testpmd command: vlan set qinq on 0
PMD fwd only receive the packets:
testpmd command: set fwd rxonly
verbose configuration:
testpmd command: set verbose 1
start packet receive:
testpmd command: start
create filter rules:
testpmd command: flow create 0 ingress pattern eth / vlan tci is 1 / vlan tci is 4093 / end actions pf / queue index 1 / end
testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions pf / queue index 2 / end
tester Configuration¶
send dual vlan packet with scapy, verify packets can filter to queues:
sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=1)/Dot1Q(type=0x8100,vlan=4093)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17") sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=2)/Dot1Q(type=0x8100,vlan=4093)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17")
Test Case 3: qinq packet filter to VF queues¶
create VF on dut:
linux cmdline: echo 2 > /sys/bus/pci/devices/0000:81:00.0/max_vfs
bind igb_uio to vfs
linux cmdline: ./usertools/dpdk-devbind.py -b igb_uio 81:02.0 81:02.1
set up testpmd with fortville PF NICs:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -w 81:00.0 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff
enable qinq:
testpmd command: vlan set qinq on 0
PMD fwd only receive the packets:
testpmd command: set fwd rxonly
verbose configuration:
testpmd command: set verbose 1
start packet receive:
testpmd command: start
create filter rules:
testpmd command: flow create 0 ingress pattern eth / vlan tci is 1 / vlan tci is 4093 / end actions vf id 0 / queue index 2 / end
testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions vf id 1 / queue index 3 / end
testpmd command: flow create 0 ingress pattern eth / vlan tci is 3 / vlan tci is 4094 / end actions pf / queue index 1 / end
set up testpmd with fortville VF0 NICs:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -w 81:02.0 -- -i --rxq=4 --txq=4 --rss-udp
PMD fwd only receive the packets:
testpmd command: set fwd rxonly
verbose configuration:
testpmd command: set verbose 1
start packet receive:
testpmd command: start
set up testpmd with fortville VF1 NICs:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -w 81:02.1 -- -i --rxq=4 --txq=4 --rss-udp
PMD fwd only receive the packets:
testpmd command: set fwd rxonly
verbose configuration:
testpmd command: set verbose 1
start packet receive:
testpmd command: start
tester Configuration¶
send dual vlan packet with scapy, verify packets can filter to the corresponding PF and VF queues:
sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=1)/Dot1Q(type=0x8100,vlan=4094)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17") sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=2)/Dot1Q(type=0x8100,vlan=4094)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17") sendp([Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=3)/Dot1Q(type=0x8100,vlan=4094)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)], iface="eth17")
Test Case 4: qinq packet filter with different tpid¶
create VF on dut:
linux cmdline: echo 2 > /sys/bus/pci/devices/0000:81:00.0/max_vfs
bind igb_uio to vfs
linux cmdline: ./usertools/dpdk-devbind.py -b igb_uio 81:02.0 81:02.1
set up testpmd with fortville PF NICs:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x1f -n 4 --socket-mem=1024,1024 --file-prefix=pf -w 81:00.0 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff
enable qinq:
testpmd command: vlan set qinq on 0
PMD fwd only receive the packets:
testpmd command: set fwd rxonly
verbose configuration:
testpmd command: set verbose 1
start packet receive:
testpmd command: start
change S-Tag+C-Tag VLAN TPIDs to 0x88A8 + 0x8100:
testpmd command: vlan set outer tpid 0x88a8 0
create filter rules:
testpmd command: flow create 0 ingress pattern eth / vlan tci is 1 / vlan tci is 4093 / end actions vf id 0 / queue index 2 / end
testpmd command: flow create 0 ingress pattern eth / vlan tci is 2 / vlan tci is 4094 / end actions vf id 1 / queue index 3 / end
testpmd command: flow create 0 ingress pattern eth / vlan tci is 3 / vlan tci is 4094 / end actions pf / queue index 1 / end
set up testpmd with fortville VF0 NICs:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3e0 -n 4 --socket-mem=1024,1024 --file-prefix=vf0 -w 81:02.0 -- -i --rxq=4 --txq=4 --rss-udp
PMD fwd only receive the packets:
testpmd command: set fwd rxonly
verbose configuration:
testpmd command: set verbose 1
start packet receive:
testpmd command: start
set up testpmd with fortville VF1 NICs:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x7c0 -n 4 --socket-mem=1024,1024 --file-prefix=vf1 -w 81:02.1 -- -i --rxq=4 --txq=4 --rss-udp
PMD fwd only receive the packets:
testpmd command: set fwd rxonly
verbose configuration:
testpmd command: set verbose 1
start packet receive:
testpmd command: start
tester Configuration¶
send dual vlan packets with scapy, verify the packets can be filtered to the corresponding VF queues.
send qinq packets with the traffic generator, verify the packets can be filtered to the corresponding VF queues.
Note¶
How to send packet with specific TPID with scapy:
1. wrpcap("qinq.pcap",[Ether(dst="3C:FD:FE:A3:A0:AE")/Dot1Q(type=0x8100,vlan=1)/Dot1Q(type=0x8100,vlan=4092)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)]).
2. hexedit qinq.pcap; change the tpid field, "ctrl+w" to save, "ctrl+x" to exit.
3. sendp(rdpcap("qinq.pcap"), iface="eth17").
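Alternatively, a minimal scapy sketch (using the same interface "eth17" as above) can set the outer TPID directly by forcing the Ethernet type to 0x88A8, which avoids the hexedit step; this is an illustrative sketch, not part of the original procedure:
from scapy.all import Ether, Dot1Q, IP, Raw, sendp

# Outer (S-Tag) TPID 0x88A8, inner (C-Tag) TPID 0x8100.
p = Ether(dst="3C:FD:FE:A3:A0:AE", type=0x88A8)/Dot1Q(type=0x8100, vlan=1)/Dot1Q(vlan=4092)/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)
sendp(p, iface="eth17")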
Fortville DDP GTP-C/GTP-U Tests¶
FVL6 supports DDP (Dynamic Device Personalization) to program the analyzer/parser via AdminQ. A profile can be used to update FVL configuration tables via the MMIO configuration space, not the microcode or firmware itself. For microcode/FW changes, a new HW/FW/NVM image must be uploaded to the NIC. Profiles are stored in binary files and need to be passed to the AQ to program FVL during the initialization stage.
GPRS Tunneling Protocol (GTP) is a group of IP-based communications protocols used to carry general packet radio service (GPRS) within GSM, UMTS and LTE networks. GTP can be decomposed into separate protocols, GTP-C and GTP-U. With DDP, new tunnel types such as GTP-C/GTP-U can be supported. To make this scalable, it is preferable to use the DDP API to get information about the new PCTYPEs/PTYPEs defined in a profile, instead of hard-coding the i40e PCTYPE/PTYPE mapping to DPDK FlowType/PacketType.
The below features have been enabled for GTP-C/GTP-U:
- FDIR for GTP-C/GTP-U to direct different TEIDs to different queues
- Tunnel filters for GTP-C/GTP-U to direct different TEIDs to different VFs
Prerequisites¶
Host PF in DPDK driver:
./tools/dpdk-devbind.py -b igb_uio 81:00.0
Create 1 VF from 1 PF with DPDK driver:
echo 1 > /sys/bus/pci/devices/0000:81:00.0/max_vfs
Detach VF from the host:
rmmod i40evf
Pass through VF 81:10.0 to vm0, start vm0.
Login vm0, then bind VF0 device to igb_uio driver.
Start testpmd on the host and on vm0; the host supports flow director and cloud filter, the VM supports cloud filter. To test PF flow director, --pkt-filter-mode=perfect needs to be added to testpmd to enable flow director. Set chained port topology mode, and add txq/rxq to enable multi-queues. In general, the PF's max queue is 64 and the VF's max queue is 4:
./testpmd -c f -n 4 -- -i --pkt-filter-mode=perfect --port-topology=chained --tx-offloads=0x8fff --txq=64 --rxq=64
Test Case: Load dynamic device personalization¶
Stop testpmd port before loading profile:
testpmd > port stop all
Load the gtp.pkgo file into the memory buffer, and save the original configuration (returned in the same buffer) to the gtp.bak file:
testpmd > ddp add (port_id) /tmp/gtp.pkgo,/tmp/gtp.bak
Check profile information successfully:
testpmd > ddp get list (port_id)
Start testpmd port:
testpmd > port start all
Test Case: Delete dynamic device personalization¶
Remove profile from the network adapter and restore original configuration:
testpmd > ddp del (port_id) /tmp/gtp.bak
Note:
- The gtp.pkgo profile has not been released publicly yet; only an engineering version is available for internal use so far. The plan is to keep public reference profiles at the Intel Developer Zone and to supply links to released versions of the profiles later.
- Loading the DDP profile is a prerequisite for the GTP-related cases below. Load the profile again after restarting testpmd so that the software detects this event, even though a "profile has already existed" reminder is shown.
Test Case: GTP-C FDIR packet for PF¶
Add GTP-C flow director rule for PF, set TEID as random 20 bits, port is 2123, queue should be among configured queue number:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpc teid is 0x3456 / end actions queue index 12 / end
Set fwd rxonly, enable output and start PF and VF testpmd.
Send GTP-C packet with good checksum, dport is 2123, TEID is same as configured rule:
p=Ether()/IP()/UDP(dport=2123)/GTP_U_Header(teid=0x3456)/Raw('x'*20)
Check PF could receive configured TEID GTP-C packet, checksum is good, queue is configured queue, ptypes are correct, check PKT_RX_FDIR print.
Send GTP-C packet with bad checksum, dport is 2123, TEID is same as configured rule:
p=Ether()/IP()/UDP(chksum=0x1234,dport=2123)/GTP_U_Header(teid=0x3456)/Raw('x'*20)
Check PF could receive configured TEID GTP packet, checksum is good, queue is configured queue, ptypes are correct, check PKT_RX_FDIR print.
Send packets whose TEIDs are not the same as the configured rule, or other packet types; check that the checksums are good, the queue is 0, the ptypes are correct, and there is no PKT_RX_FDIR print.
Test Case: GTP-C Cloud filter packet for PF¶
Add GTP-C cloud filter rule for PF, set TEID as random 20 bits, port is 2123, queue should be among configured queue number:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpc teid is 0x12345678 / end actions pf / queue index 3 / end
Set fwd rxonly, enable output and start PF and VF testpmd.
Send GTP-C packet with good checksum, dport is 2123, TEID is same as configured rule:
p=Ether()/IP()/UDP(dport=2123)/GTP_U_Header(teid=0x12345678)/Raw('x'*20)
Check PF could receive configured TEID GTP-C packet, checksum is good, queue is configured queue, ptypes are correct, check no PKT_RX_FDIR print.
Send GTP-C packet with bad checksum, dport is 2123, TEID is same as configured rule:
p=Ether()/IP()/UDP(chksum=0x1234,dport=2123)/GTP_U_Header(teid=0x12345678)/Raw('x'*20)
Check PF could receive configured TEID GTP packet, checksum is good, queue is configured queue, ptypes are correct, check no PKT_RX_FDIR print.
Send packets whose TEIDs are not the same as the configured rule, or other packet types; check that the checksums are good, the queue is 0, the ptypes are correct, and there is no PKT_RX_FDIR print.
Test Case: GTP-U FDIR packet for PF¶
Add GTP-U flow director rule for PF, set TEID as random 20 bits, port is 2152, queue should be among configured queue number:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x123456 / end actions queue index 18 / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x123456 / ipv4 / end actions queue index 58 / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x123456 / ipv6 / end actions queue index 33 / end
Set fwd rxonly, enable output and start PF and VF testpmd.
Send GTP-U packet with good checksum, dport is 2152, TEID is same as configured rule:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0x123456)/Raw('x'*20)
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0x123456)/IP()/Raw('x'*20)
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0x123456)/IPv6()/Raw('x'*20)
Check PF could receive configured TEID GTP-U packet, checksum is good, queue is configured queue, ptypes are correct, check PKT_RX_FDIR print.
Send GTP-U packet with bad checksum, dport is 2152, TEID is same as configured rule:
p=Ether()/IP()/UDP(chksum=0x1234,dport=2152)/GTP_U_Header(teid=0x123456)/Raw('x'*20)
p=Ether()/IP()/UDP(chksum=0x1234,dport=2152)/GTP_U_Header(teid=0x123456)/IP()/Raw('x'*20)
p=Ether()/IP()/UDP(chksum=0x1234,dport=2152)/GTP_U_Header(teid=0x123456)/IPv6()/Raw('x'*20)
Check that the PF can receive the configured TEID GTP packets, the checksum is good, the queue is the configured queue, the ptypes are correct, and the PKT_RX_FDIR print is shown.
Send packets whose TEIDs are not the same as the configured rule, or other packet types; check that the checksums are good, the queue is 0, the ptypes are correct, and there is no PKT_RX_FDIR print.
Test Case: GTP-U Cloud filter packet for PF¶
Add GTP-U cloud filter rule for PF, set TEID as random 20 bits, port is 2152, queue should be among configured queue number:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x12345678 / end actions pf / queue index 3 / end
Set fwd rxonly, enable output and start PF and VF testpmd.
Send GTP-U packet with good checksum, dport is 2152, TEID is same as configured rule:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0x12345678)/Raw('x'*20)
Check PF could receive configured TEID GTP-U packet, checksum is good, queue is configured queue, ptypes are correct, check no PKT_RX_FDIR print.
Send GTP-U packet with bad checksum, dport is 2152, TEID is same as configured rule:
p=Ether()/IP()/UDP(chksum=0x1234,dport=2152)/GTP_U_Header(teid=0x12345678)/Raw('x'*20)
Check PF could receive configured TEID GTP packet, checksum is good, queue is configured queue, ptypes are correct, check no PKT_RX_FDIR print.
Send packets whose TEIDs are not the same as the configured rule, or other packet types; check that the checksums are good, the queue is 0, the ptypes are correct, and there is no PKT_RX_FDIR print.
Test Case: GTP-C Cloud filter packet for VF¶
Add GTP-C cloud filter rule for VF, set TEID as random 20 bits, port is 2123, queue should be among configured queue number:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpc teid is 0x1678 / end actions vf id 0 / queue index 3 / end
Set fwd rxonly, enable output and start PF and VF testpmd.
Send GTP-C packet with good checksum, dport is 2123, TEID is same as configured rule:
p=Ether()/IPv6()/UDP(dport=2123)/GTP_U_Header(teid=0x1678)/Raw('x'*20)
Check VF could receive configured teid GTP-C packet, checksum is good, queue is configured queue.
Send GTP-C packet with bad checksum, dport is 2123, TEID is same as configured rule:
p=Ether()/IPv6()/UDP(chksum=0x1234,dport=2123)/GTP_U_Header(teid=0x1678)/Raw('x'*20)
Check VF could receive configured TEID GTP packet, checksum is good, queue is configured queue.
Test Case: GTP-U Cloud filter packet for VF¶
Add GTP-U cloud filter rule for VF, set TEID as random 20 bits, port is 2152, queue should be among configured queue number:
testpmd > flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x178 / end actions vf id 0 / queue index 1 / end
Set fwd rxonly, enable output and start PF and VF testpmd.
Send GTP-U packet with good checksum, dport is 2152, TEID is same as configured rule:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0x178)/Raw('x'*20)
Check VF could receive configured TEID GTP-U packet, checksum is good, queue is configured queue.
Send GTP-U packet with bad checksum, GTP-U dport is 2152, TEID is same as configured rule:
p=Ether()/IPv6()/UDP(chksum=0x1234,dport=2152)/GTP_U_Header(teid=0x178)/Raw('x'*20)
Check VF could receive configured TEID GTP packet, checksum is good, queue is configured queue.
GTP packet¶
Note:
- All GTP packets supported by the profile are listed below; you can also use "ddp get info gtp.pkgo" to check the profile information. The number on the left is the ptype value, and the entries on the right are the layer types, e.g. 167: IPV4, GTP-C, PAY4.
- Scapy 2.3.3+ supports sending GTP packets. Please check that your scapy installation can send the different GTP packet types below successfully before running the above tests.
GTP-C packet types¶
167: IPV4, GTP-C, PAY4:
p=Ether()/IP()/UDP(dport=2123)/GTP_U_Header()/Raw('x'*20)
168: IPV6, GTP-C, PAY4:
p=Ether()/IPv6()/UDP(dport=2123)/GTP_U_Header()/Raw('x'*20)
GTP-U data packet types, IPv4 transport, IPv4 payload¶
169: IPV4 GTPU IPV4 PAY3:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/Raw('x'*20)
170: IPV4 GTPU IPV4FRAG PAY3:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP(frag=5)/Raw('x'*20)
171: IPV4 GTPU IPV4 UDP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/UDP()/Raw('x'*20)
172: IPV4 GTPU IPV4 TCP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/TCP()/Raw('x'*20)
173: IPV4 GTPU IPV4 SCTP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/SCTP()/Raw('x'*20)
174: IPV4 GTPU IPV4 ICMP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IP()/ICMP()/Raw('x'*20)
GTP-U data packet types, IPv6 transport, IPv4 payload¶
175: IPV6 GTPU IPV4 PAY3:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/Raw('x'*20)
176: IPV6 GTPU IPV4FRAG PAY3:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(frag=5)/Raw('x'*20)
177: IPV6 GTPU IPV4 UDP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/UDP()/Raw('x'*20)
178: IPV6 GTPU IPV4 TCP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/TCP()/Raw('x'*20)
179: IPV6 GTPU IPV4 SCTP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/SCTP()/Raw('x'*20)
180: IPV6 GTPU IPV4 ICMP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP()/ICMP()/Raw('x'*20)
GTP-U control packet types¶
181: IPV4, GTP-U, PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/Raw('x'*20)
182: IPV6, GTP-U, PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/Raw('x'*20)
GTP-U data packet types, IPv4 transport, IPv6 payload¶
183: IPV4 GTPU IPV6FRAG PAY3:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/IPv6ExtHdrFragment()/Raw('x'*20)
184: IPV4 GTPU IPV6 PAY3:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/Raw('x'*20)
185: IPV4 GTPU IPV6 UDP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/UDP()/Raw('x'*20)
186: IPV4 GTPU IPV6 TCP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/TCP()/Raw('x'*20)
187: IPV4 GTPU IPV6 SCTP PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6()/SCTP()/Raw('x'*20)
188: IPV4 GTPU IPV6 ICMPV6 PAY4:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header()/IPv6(nh=58)/ICMP()/Raw('x'*20)
GTP-U data packet types, IPv6 transport, IPv6 payload¶
189: IPV6 GTPU IPV6 PAY3:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/Raw('x'*20)
190: IPV6 GTPU IPV6FRAG PAY3:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/IPv6ExtHdrFragment()/Raw('x'*20)
191: IPV6 GTPU IPV6 UDP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/UDP()/Raw('x'*20)
113: IPV6 GTPU IPV6 TCP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/TCP()/Raw('x'*20)
120: IPV6 GTPU IPV6 SCTP PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6()/SCTP()/Raw('x'*20)
128: IPV6 GTPU IPV6 ICMPV6 PAY4:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IPv6(nh=58)/ICMP()/Raw('x'*20)
Generic filter/flow api¶
Prerequisites¶
Hardware: Fortville and Niantic
software:
dpdk: http://dpdk.org/git/dpdk
scapy: http://www.secdev.org/projects/scapy/
bind the pf to dpdk driver:
./usertools/dpdk-devbind.py -b igb_uio 05:00.0
Test case: Fortville ethertype¶
Launch the testpmd app with the following arguments:
./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules:
testpmd> flow validate 0 ingress pattern eth type is 0x0806 / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern eth type is 0x0806 / end actions queue index 2 / end
testpmd> flow validate 0 ingress pattern eth type is 0x08bb / end actions queue index 16 / end
testpmd> flow create 0 ingress pattern eth type is 0x88bb / end actions queue index 3 / end
testpmd> flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 type is 0x88e5 / end actions queue index 4 / end
testpmd> flow create 0 ingress pattern eth type is 0x8864 / end actions drop / end
testpmd> flow validate 0 ingress pattern eth type is 0x88cc / end actions queue index 5 / end
testpmd> flow create 0 ingress pattern eth type is 0x88cc / end actions queue index 6 / end
The i40e does not support the 0x88cc ether type packet, so the last two commands fail.
send packets:
pkt1 = Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst="192.168.1.1") pkt2 = Ether(dst="00:11:22:33:44:55", type=0x88BB)/Raw('x' * 20) pkt3 = Ether(dst="00:11:22:33:44:55", type=0x88e5)/Raw('x' * 20) pkt4 = Ether(dst="00:11:22:33:44:55", type=0x8864)/Raw('x' * 20)
verify pkt1 to queue 2, and pkt2 to queue 3, pkt3 to queue 4, pkt4 dropped.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
verify pkt1 to queue 0, and pkt2 to queue 3, pkt3 to queue 4
testpmd> flow list 0
testpmd> flow flush 0
verify pkt1 to queue 0, and pkt2 to queue 0, pkt3 to queue 0, pkt4 to queue 0.
testpmd> flow list 0
Test case: Fortville fdir for L2 payload¶
Launch the testpmd app with the following arguments:
./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules:
testpmd> flow create 0 ingress pattern eth / vlan tci is 1 / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern eth type is 0x0807 / end actions queue index 2 / end
send packets:
pkt1 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/Raw('x' * 20) pkt2 = Ether(dst="00:11:22:33:44:55", type=0x0807)/Dot1Q(vlan=1)/Raw('x' * 20) pkt3 = Ether(dst="00:11:22:33:44:55", type=0x0807)/IP(src="192.168.0.5", dst="192.168.0.6")/Raw('x' * 20)
check pkt1 to queue 1, pkt2 to queue 2, pkt3 to queue 2.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: Fortville fdir for flexbytes¶
Launch the testpmd app with the following arguments:
./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
l2-payload:
testpmd> flow create 0 ingress pattern eth type is 0x0807 / raw relative is 1 pattern is ab / end actions queue index 1 / end
ipv4-other:
testpmd> flow create 0 ingress pattern eth / vlan tci is 4095 / ipv4 proto is 255 ttl is 40 / raw relative is 1 offset is 2 pattern is ab / raw relative is 1 offset is 10 pattern is abcdefghij / raw relative is 1 offset is 0 pattern is abcd / end actions queue index 2 / end
ipv4-udp:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.4 dst is 2.2.2.5 / udp src is 22 dst is 23 / raw relative is 1 offset is 2 pattern is fhds / end actions queue index 3 / end
ipv4-tcp:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.4 dst is 2.2.2.5 tos is 4 ttl is 3 / tcp src is 32 dst is 33 / raw relative is 1 offset is 2 pattern is hijk / end actions queue index 4 / end
ipv4-sctp:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.4 dst is 2.2.2.5 / sctp src is 42 / raw relative is 1 offset is 2 pattern is abcdefghijklmnop / end actions queue index 5 / end
ipv6-tcp:
testpmd> flow create 0 ingress pattern eth / vlan tci is 1 / ipv6 src is 2001::1 dst is 2001::2 tc is 3 hop is 30 / tcp src is 32 dst is 33 / raw relative is 1 offset is 0 pattern is hijk / raw relative is 1 offset is 8 pattern is abcdefgh / end actions queue index 6 / end
spec-mask (not supported now, 6WIND will update it later); restart testpmd and create the new rule:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.4 dst is 2.2.2.5 / tcp src is 32 dst is 33 / raw relative is 1 offset is 2 pattern spec \x61\x62\x63\x64 pattern mask \x00\x00\xff\x01 / end actions queue index 7 / end
send packets:
pkt1 = Ether(dst="00:11:22:33:44:55", type=0x0807)/Raw(load="\x61\x62\x63\x64") pkt2 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=4095)/IP(src="192.168.0.1", dst="192.168.0.2", proto=255, ttl=40)/Raw(load="xxabxxxxxxxxxxabcdefghijabcdefg") pkt3 = Ether(dst="00:11:22:33:44:55")/IP(src="2.2.2.4", dst="2.2.2.5")/UDP(sport=22,dport=23)/Raw(load="fhfhdsdsfwef") pkt4 = Ether(dst="00:11:22:33:44:55")/IP(src="2.2.2.4", dst="2.2.2.5", tos=4, ttl=3)/TCP(sport=32,dport=33)/Raw(load="fhhijk") pkt5 = Ether(dst="00:11:22:33:44:55")/IP(src="2.2.2.4", dst="2.2.2.5")/SCTP(sport=42,dport=43,tag=1)/Raw(load="xxabcdefghijklmnopqrst") pkt6 = Ether(dst="00:11:22:33:44:55")/IP(src="2.2.2.4", dst="2.2.2.5")/SCTP(sport=42,dport=43,tag=1)/Raw(load="xxabxxxabcddxxabcdefghijklmn") pkt7 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IPv6(src="2001::1", dst="2001::2", tc=3, hlim=30)/TCP(sport=32,dport=33)/Raw(load="hijkabcdefghabcdefghijklmn")
pkt8-pkt10 are not supported now:
pkt8 = Ether(dst="00:11:22:33:44:55")/IP(src="2.2.2.4", dst="2.2.2.5")/TCP(sport=32,dport=33)/Raw(load="\x68\x69\x61\x62\x63\x64") pkt9 = Ether(dst="00:11:22:33:44:55")/IP(src="2.2.2.4", dst="2.2.2.5")/TCP(sport=32,dport=33)/Raw(load="\x68\x69\x68\x69\x63\x74") pkt10 = Ether(dst="00:11:22:33:44:55")/IP(src="2.2.2.4", dst="2.2.2.5")/TCP(sport=32,dport=33)/Raw(load="\x68\x69\x61\x62\x63\x65")
Check pkt1 to pkt5 are received on queue 1 to queue 5, pkt6 on queue 0, pkt7 on queue 6; pkt8 on queue 7, pkt9 and pkt10 on queue 0.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: Fortville fdir for ipv4¶
Prerequisites:
add two vfs on dpdk pf, then bind the vfs to vfio-pci:
echo 2 >/sys/bus/pci/devices/0000:05:00.0/max_vfs
./usertools/dpdk-devbind.py -b vfio-pci 05:02.0 05:02.1
Launch the testpmd apps with the following arguments:
./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -w 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -w 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
ipv4-other:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 proto is 3 / end actions queue index 1 / end
ipv4-udp:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 ttl is 3 / udp src is 22 dst is 23 / end actions queue index 2 / end
ipv4-tcp:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 tos is 3 / tcp src is 32 dst is 33 / end actions queue index 3 / end
ipv4-sctp:
testpmd> flow create 0 ingress pattern eth / vlan tci is 1 / ipv4 src is 192.168.0.1 dst is 192.168.0.2 tos is 3 ttl is 3 / sctp src is 44 dst is 45 tag is 1 / end actions queue index 4 / end
ipv4-other-vf0:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 proto is 3 / vf id is 0 / end actions queue index 1 / end
ipv4-sctp-vf1:
testpmd> flow create 0 ingress pattern eth / vlan tci is 2 / ipv4 src is 192.168.0.1 dst is 192.168.0.2 tos is 4 ttl is 4 / sctp src is 46 dst is 47 tag is 1 / vf id is 1 / end actions queue index 2 / end
ipv4-sctp drop:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.5 dst is 192.168.0.6 tos is 3 ttl is 3 / sctp src is 44 dst is 45 tag is 1 / end actions drop / end
ipv4-sctp passthru-flag:
testpmd> flow create 0 ingress pattern eth / vlan tci is 3 / ipv4 src is 192.168.0.1 dst is 192.168.0.2 tos is 4 ttl is 4 / sctp src is 44 dst is 45 tag is 1 / end actions passthru / flag / end
ipv4-udp queue-flag:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 ttl is 4 / udp src is 22 dst is 23 / end actions queue index 5 / flag / end
ipv4-tcp queue-mark:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 tos is 4 / tcp src is 32 dst is 33 / end actions queue index 6 / mark id 3 / end
ipv4-other passthru-mark:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.3 dst is 192.168.0.4 proto is 3 / end actions passthru / mark id 4 / end
send packets:
pkt1 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2", proto=3)/Raw('x' * 20)
pkt2 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2", ttl=3)/UDP(sport=22,dport=23)/Raw('x' * 20)
pkt3 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2", tos=3)/TCP(sport=32,dport=33)/Raw('x' * 20)
pkt4 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IP(src="192.168.0.1", dst="192.168.0.2", tos=3, ttl=3)/SCTP(sport=44,dport=45,tag=1)/SCTPChunkData(data="X" * 20)
pkt5 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=2)/IP(src="192.168.0.1", dst="192.168.0.2", tos=4, ttl=4)/SCTP(sport=46,dport=47,tag=1)/Raw('x' * 20)
pkt6 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.5", dst="192.168.0.6", tos=3, ttl=3)/SCTP(sport=44,dport=45,tag=1)/SCTPChunkData(data="X" * 20)
pkt7 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=3)/IP(src="192.168.0.1", dst="192.168.0.2", tos=4, ttl=4)/SCTP(sport=44,dport=45,tag=1)/Raw('x' * 20)
pkt8 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2", ttl=4)/UDP(sport=22,dport=23)/Raw('x' * 20)
pkt9 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2", tos=4)/TCP(sport=32,dport=33)/Raw('x' * 20)
pkt10 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.3", dst="192.168.0.4", proto=3)/Raw('x' * 20)
verify pkt1 goes to queue 1 and vf0 queue 1, pkt2 to queue 2, pkt3 to queue 3, pkt4 to queue 4 and pkt5 to vf1 queue 2; pkt6 can't be received by the pf. Without "--disable-rss": pkt7 to queue 0 (FDIR matched hash 0 ID 0), pkt8 to queue 5 (FDIR matched hash 0 ID 0), pkt9 to queue 6 (FDIR matched ID 3), pkt10 to a queue determined by the rss rule (FDIR matched ID 4). With "--disable-rss": pkt7 to pkt9 give the same result as above, and pkt10 goes to queue 0 (FDIR matched ID 4).
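When checking the expectations above, the receive queue and the FDIR mark can be pulled out of testpmd's verbose output with a small helper like the following (a sketch; the exact output format can vary slightly between DPDK versions):

import re

def parse_rx(block):
    """Return (queue, fdir_id) parsed from one received-packet block of
    testpmd verbose output; fields that are absent come back as None."""
    queue = re.search(r"queue (\d+): received", block)
    fdir = re.search(r"FDIR matched ID=0x([0-9a-fA-F]+)", block)
    return (int(queue.group(1)) if queue else None,
            int(fdir.group(1), 16) if fdir else None)

# Example (shortened) block for pkt9, expected on queue 6 with mark id 3.
sample = ("port 0/queue 6: received 1 packets\n"
          "  src=00:11:22:33:44:66 - dst=00:11:22:33:44:55 - type=0x0800 - "
          "length=60 - FDIR matched ID=0x3")
print(parse_rx(sample))  # -> (6, 3)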
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: Fortville fdir for ipv6¶
Prerequisites:
add two vfs on dpdk pf, then bind the vfs to vfio-pci:
echo 2 >/sys/bus/pci/devices/0000:05:00.0/max_vfs
./usertools/dpdk-devbind.py -b vfio-pci 05:02.0 05:02.1
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start

./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -w 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start

./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -w 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
ipv6-other:
testpmd> flow create 0 ingress pattern eth / vlan tci is 1 / ipv6 src is 2001::1 dst is 2001::2 tc is 1 proto is 5 hop is 10 / end actions queue index 1 / end
ipv6-udp:
testpmd> flow create 0 ingress pattern eth / vlan tci is 2 / ipv6 src is 2001::1 dst is 2001::2 tc is 2 hop is 20 / udp src is 22 dst is 23 / end actions queue index 2 / end
ipv6-tcp:
testpmd> flow create 0 ingress pattern eth / vlan tci is 3 / ipv6 src is 2001::1 dst is 2001::2 tc is 3 hop is 30 / tcp src is 32 dst is 33 / end actions queue index 3 / end
ipv6-sctp:
testpmd> flow create 0 ingress pattern eth / vlan tci is 4 / ipv6 src is 2001::1 dst is 2001::2 tc is 4 hop is 40 / sctp src is 44 dst is 45 tag is 1 / end actions queue index 4 / end
ipv6-other-vf0:
testpmd> flow create 0 ingress pattern eth / vlan tci is 5 / ipv6 src is 2001::3 dst is 2001::4 tc is 5 proto is 5 hop is 50 / vf id is 0 / end actions queue index 1 / end
ipv6-tcp-vf1:
testpmd> flow create 0 ingress pattern eth / vlan tci is 4095 / ipv6 src is 2001::3 dst is 2001::4 tc is 6 hop is 60 / tcp src is 32 dst is 33 / vf id is 1 / end actions queue index 3 / end
ipv6-sctp-drop:
testpmd> flow create 0 ingress pattern eth / vlan tci is 7 / ipv6 src is 2001::1 dst is 2001::2 tc is 7 hop is 70 / sctp src is 44 dst is 45 tag is 1 / end actions drop / end
ipv6-tcp-vf1-drop:
testpmd> flow create 0 ingress pattern eth / vlan tci is 8 / ipv6 src is 2001::3 dst is 2001::4 tc is 8 hop is 80 / tcp src is 32 dst is 33 / vf id is 1 / end actions drop / end
send packets:
pkt1 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=1)/IPv6(src="2001::1", dst="2001::2", tc=1, nh=5, hlim=10)/Raw('x' * 20)
pkt2 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=2)/IPv6(src="2001::1", dst="2001::2", tc=2, hlim=20)/UDP(sport=22,dport=23)/Raw('x' * 20)
pkt3 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=3)/IPv6(src="2001::1", dst="2001::2", tc=3, hlim=30)/TCP(sport=32,dport=33)/Raw('x' * 20)
pkt4 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=4)/IPv6(src="2001::1", dst="2001::2", tc=4, nh=132, hlim=40)/SCTP(sport=44,dport=45,tag=1)/SCTPChunkData(data="X" * 20)
pkt5 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=5)/IPv6(src="2001::3", dst="2001::4", tc=5, nh=5, hlim=50)/Raw('x' * 20)
pkt6 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=4095)/IPv6(src="2001::3", dst="2001::4", tc=6, hlim=60)/TCP(sport=32,dport=33)/Raw('x' * 20)
pkt7 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=7)/IPv6(src="2001::1", dst="2001::2", tc=7, nh=132, hlim=70)/SCTP(sport=44,dport=45,tag=1)/SCTPChunkData(data="X" * 20)
pkt8 = Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=8)/IPv6(src="2001::3", dst="2001::4", tc=8, hlim=80)/TCP(sport=32,dport=33)/Raw('x' * 20)
verify packet pkt1 to queue 1 and vf queue 1, pkt2 to queue 2, pkt3 to queue 3, pkt4 to queue 4, pkt5 to vf0 queue 1, pkt6 to vf1 queue 3, pkt7 can’t be received by pf, pkt8 can’t be received by vf1.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: Fortville fdir wrong parameters¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
Exceeds maximum payload limit:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.4 dst is 2.2.2.5 / sctp src is 42 / raw relative is 1 offset is 2 pattern is abcdefghijklmnopq / end actions queue index 5 / end
it shows “Caught error type 9 (specific pattern item): cause: 0x7fd87ff60160 exceeds maximum payload limit”.
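For reference, the pattern in the rule above is 17 bytes long, while the flexible payload window that Fortville FDIR can match on is limited to 16 bytes (an assumption stated here, not taken from the error text), which is why the rule is rejected; a quick check:

# The raw pattern used in the rejected rule above.
pattern = "abcdefghijklmnopq"
# With a 16-byte flexible-payload limit (assumption noted in the lead-in),
# a 17-byte pattern triggers "exceeds maximum payload limit".
print(len(pattern))        # 17
print(len(pattern) <= 16)  # False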
can’t set mac_addr when setting fdir filter:
testpmd> flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 / vlan tci is 4095 / ipv6 src is 2001::3 dst is 2001::4 tc is 6 hop is 60 / tcp src is 32 dst is 33 / end actions queue index 3 / end
it shows “Caught error type 9 (specific pattern item): cause: 0x7f463ff60100 Invalid MAC_addr mask”.
can’t change the configuration of the same packet type:
testpmd> flow create 0 ingress pattern eth / vlan tci is 3 / ipv4 src is 192.168.0.1 dst is 192.168.0.2 tos is 4 ttl is 4 / sctp src is 44 dst is 45 tag is 1 / end actions passthru / flag / end
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 tos is 4 ttl is 4 / sctp src is 34 dst is 35 tag is 1 / end actions passthru / flag / end
it shows “Caught error type 9 (specific pattern item): cause: 0x7feabff60120 Conflict with the first rule’s input set”.
invalid queue ID:
testpmd> flow create 0 ingress pattern eth / ipv6 src is 2001::3 dst is 2001::4 tc is 6 hop is 60 / tcp src is 32 dst is 33 / end actions queue index 16 / end
it shows “Caught error type 11 (specific action): cause: 0x7ffc7bb9a338, Invalid queue ID for FDIR”.
If create a rule on vf that has invalid queue ID:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 proto is 3 / vf id is 0 / end actions queue index 4 / end
it shows “Caught error type 11 (specific action): cause: 0x7ffc7bb9a338, Invalid queue ID for FDIR”.
Note:
IP fragment packets are not supported.
Test case: Fortville tunnel vxlan¶
Prerequisites:
add a vf on dpdk pf, then bind the vf to vfio-pci:
echo 1 >/sys/bus/pci/devices/0000:05:00.0/max_vfs
./usertools/dpdk-devbind.py -b vfio-pci 05:02.0
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --tx-offloads=0x8fff --disable-rss
testpmd> rx_vxlan_port add 4789 0
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> set promisc all off
testpmd> start

the pf's mac address is 00:00:00:00:01:00

./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -w 05:02.0 --file-prefix=vf --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff --disable-rss
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> set promisc all off
testpmd> start
the vf’s mac address is D2:8C:1A:50:2A:78
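The packet lists in this and the next case use a Vxlan() constructor from the DTS scapy environment; with an upstream scapy the equivalent layer is named VXLAN (scapy.layers.vxlan). A sketch of how pkt2 below could be built and sent that way, assuming upstream scapy and a hypothetical tester interface:

from scapy.all import Ether, IP, UDP, TCP, Raw, sendp
from scapy.layers.vxlan import VXLAN  # upstream name for the Vxlan() layer used below

TESTER_IFACE = "ens785f1"  # hypothetical tester port

# Equivalent of pkt2 below: VXLAN vni 2 with inner dst MAC 00:11:22:33:44:55,
# expected to hit pf queue 2 via the "vni + inner mac" rule; dport 4789 matches
# the rx_vxlan_port added above.
pkt2 = (Ether(dst="00:11:22:33:44:66")/IP()/UDP(dport=4789)/VXLAN(vni=2)/
        Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20))
sendp(pkt2, iface=TESTER_IFACE, count=1)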
create filter rules
inner mac + actions pf:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth dst is 00:11:22:33:44:55 / end actions pf / queue index 1 / end
vni + inner mac + actions pf:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / vxlan vni is 2 / eth dst is 00:11:22:33:44:55 / end actions pf / queue index 2 / end
inner mac + inner vlan +actions pf:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / vxlan / eth dst is 00:11:22:33:44:55 / vlan tci is 10 / end actions pf / queue index 3 / end
vni + inner mac + inner vlan + actions pf:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / vxlan vni is 4 / eth dst is 00:11:22:33:44:55 / vlan tci is 20 / end actions pf / queue index 4 / end
inner mac + outer mac + vni + actions pf:
testpmd> flow create 0 ingress pattern eth dst is 00:11:22:33:44:66 / ipv4 / udp / vxlan vni is 5 / eth dst is 00:11:22:33:44:55 / end actions pf / queue index 5 / end
vni + inner mac + inner vlan + actions vf:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / vxlan vni is 6 / eth dst is 00:11:22:33:44:55 / vlan tci is 30 / end actions vf id 0 / queue index 1 / end
inner mac + outer mac + vni + actions vf:
testpmd> flow create 0 ingress pattern eth dst is 00:11:22:33:44:66 / ipv4 / udp / vxlan vni is 7 / eth dst is 00:11:22:33:44:55 / end actions vf id 0 / queue index 3 / end
send packets:
pkt1 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan()/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt2 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan(vni=2)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt31 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan()/Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=10)/IP()/TCP()/Raw('x' * 20)
pkt32 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan()/Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=11)/IP()/TCP()/Raw('x' * 20)
pkt4 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan(vni=4)/Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=20)/IP()/TCP()/Raw('x' * 20)
pkt51 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan(vni=5)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt52 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan(vni=4)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt53 = Ether(dst="00:00:00:00:01:00")/IP()/UDP()/Vxlan(vni=5)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt54 = Ether(dst="00:11:22:33:44:77")/IP()/UDP()/Vxlan(vni=5)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt55 = Ether(dst="00:00:00:00:01:00")/IP()/UDP()/Vxlan(vni=5)/Ether(dst="00:11:22:33:44:77")/IP()/TCP()/Raw('x' * 20)
pkt56 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan(vni=5)/Ether(dst="00:11:22:33:44:77")/IP()/TCP()/Raw('x' * 20)
pkt61 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan(vni=6)/Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=30)/IP()/TCP()/Raw('x' * 20)
pkt62 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan(vni=6)/Ether(dst="00:11:22:33:44:77")/Dot1Q(vlan=30)/IP()/TCP()/Raw('x' * 20)
pkt63 = Ether(dst="D2:8C:1A:50:2A:78")/IP()/UDP()/Vxlan(vni=6)/Ether(dst="00:11:22:33:44:77")/Dot1Q(vlan=30)/IP()/TCP()/Raw('x' * 20)
pkt64 = Ether(dst="00:00:00:00:01:00")/IP()/UDP()/Vxlan(vni=6)/Ether(dst="00:11:22:33:44:77")/Dot1Q(vlan=30)/IP()/TCP()/Raw('x' * 20)
pkt71 = Ether(dst="00:11:22:33:44:66")/IP()/UDP()/Vxlan(vni=7)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt72 = Ether(dst="D2:8C:1A:50:2A:78")/IP()/UDP()/Vxlan(vni=7)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt73 = Ether(dst="D2:8C:1A:50:2A:78")/IP()/UDP()/Vxlan(vni=7)/Ether(dst="00:11:22:33:44:77")/IP()/TCP()/Raw('x' * 20)
pkt74 = Ether(dst="00:00:00:00:01:00")/IP()/UDP()/Vxlan(vni=7)/Ether(dst="00:11:22:33:44:77")/IP()/TCP()/Raw('x' * 20)
verify pkt1 received by pf queue 1, pkt2 to pf queue 2, pkt31 to pf queue 3, pkt32 to pf queue 1, pkt4 to pf queue 4, pkt51 to pf queue 5, pkt52 to pf queue 1, pkt53 to pf queue 1, pkt54 to pf queue 1, pkt55 to pf queue 0, pf can’t receive pkt56. pkt61 to vf queue 1 and pf queue 1, pf and vf can’t receive pkt62, pkt63 to vf queue 0, pkt64 to pf queue 0, vf can’t receive pkt64, pkt71 to vf queue 3 and pf queue 1, pkt72 to pf queue 1, vf can’t receive pkt72, pkt73 to vf queue 0, pkt74 to pf queue 0, vf can’t receive pkt74.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
verify pkt51 to pf queue 5, pkt53 and pkt55 to pf queue 0, pf can’t receive pkt52,pkt54 and pkt56. pkt71 to vf queue 3, pkt72 and pkt73 to vf queue 0, pkt74 to pf queue 0, vf can’t receive pkt74. Then:
testpmd> flow flush 0
testpmd> flow list 0
Test case: Fortville tunnel nvgre¶
Prerequisites:
add two vfs on dpdk pf, then bind the vfs to vfio-pci:
echo 2 >/sys/bus/pci/devices/0000:05:00.0/max_vfs
./usertools/dpdk-devbind.py -b vfio-pci 05:02.0 05:02.1
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --tx-offloads=0x8fff
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> set promisc all off
testpmd> start

./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -w 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> set promisc all off
testpmd> start

./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -w 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --tx-offloads=0x8fff
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> set promisc all off
testpmd> start
the pf's mac address is 00:00:00:00:01:00
the vf0's mac address is 54:52:00:00:00:01
the vf1's mac address is 54:52:00:00:00:02
create filter rules
inner mac + actions pf:
testpmd> flow create 0 ingress pattern eth / ipv4 / nvgre / eth dst is 00:11:22:33:44:55 / end actions pf / queue index 1 / end
tni + inner mac + actions pf:
testpmd> flow create 0 ingress pattern eth / ipv4 / nvgre tni is 2 / eth dst is 00:11:22:33:44:55 / end actions pf / queue index 2 / end
inner mac + inner vlan + actions pf:
testpmd> flow create 0 ingress pattern eth / ipv4 / nvgre / eth dst is 00:11:22:33:44:55 / vlan tci is 30 / end actions pf / queue index 3 / end
tni + inner mac + inner vlan + actions pf:
testpmd> flow create 0 ingress pattern eth / ipv4 / nvgre tni is 0x112244 / eth dst is 00:11:22:33:44:55 / vlan tci is 40 / end actions pf / queue index 4 / end
inner mac + outer mac + tni + actions pf:
testpmd> flow create 0 ingress pattern eth dst is 00:11:22:33:44:66 / ipv4 / nvgre tni is 0x112255 / eth dst is 00:11:22:33:44:55 / end actions pf / queue index 5 / end
tni + inner mac + inner vlan + actions vf:
testpmd> flow create 0 ingress pattern eth / ipv4 / nvgre tni is 0x112266 / eth dst is 00:11:22:33:44:55 / vlan tci is 60 / end actions vf id 0 / queue index 1 / end
inner mac + outer mac + tni + actions vf:
testpmd> flow create 0 ingress pattern eth dst is 00:11:22:33:44:66 / ipv4 / nvgre tni is 0x112277 / eth dst is 00:11:22:33:44:55 / end actions vf id 1 / queue index 3 / end
send packets:
pkt1 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE()/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt2 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE(TNI=2)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt31 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE()/Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=30)/IP()/TCP()/Raw('x' * 20)
pkt32 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE()/Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=31)/IP()/TCP()/Raw('x' * 20)
pkt4 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE(TNI=0x112244)/Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=40)/IP()/TCP()/Raw('x' * 20)
pkt51 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE(TNI=0x112255)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt52 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE(TNI=0x112256)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt53 = Ether(dst="00:00:00:00:01:00")/IP()/NVGRE(TNI=0x112255)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt54 = Ether(dst="00:11:22:33:44:77")/IP()/NVGRE(TNI=0x112255)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt55 = Ether(dst="00:00:00:00:01:00")/IP()/NVGRE(TNI=0x112255)/Ether(dst="00:11:22:33:44:77")/IP()/TCP()/Raw('x' * 20)
pkt56 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE(TNI=0x112255)/Ether(dst="00:11:22:33:44:77")/IP()/TCP()/Raw('x' * 20)
pkt61 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE(TNI=0x112266)/Ether(dst="00:11:22:33:44:55")/Dot1Q(vlan=60)/IP()/TCP()/Raw('x' * 20)
pkt62 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE(TNI=0x112266)/Ether(dst="00:11:22:33:44:77")/Dot1Q(vlan=60)/IP()/TCP()/Raw('x' * 20)
pkt63 = Ether(dst="54:52:00:00:00:01")/IP()/NVGRE(TNI=0x112266)/Ether(dst="00:11:22:33:44:77")/Dot1Q(vlan=60)/IP()/TCP()/Raw('x' * 20)
pkt64 = Ether(dst="00:00:00:00:01:00")/IP()/NVGRE(TNI=0x112266)/Ether(dst="00:11:22:33:44:77")/Dot1Q(vlan=60)/IP()/TCP()/Raw('x' * 20)
pkt71 = Ether(dst="00:11:22:33:44:66")/IP()/NVGRE(TNI=0x112277)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt72 = Ether(dst="54:52:00:00:00:02")/IP()/NVGRE(TNI=0x112277)/Ether(dst="00:11:22:33:44:55")/IP()/TCP()/Raw('x' * 20)
pkt73 = Ether(dst="54:52:00:00:00:02")/IP()/NVGRE(TNI=0x112277)/Ether(dst="00:11:22:33:44:77")/IP()/TCP()/Raw('x' * 20)
pkt74 = Ether(dst="00:00:00:00:01:00")/IP()/NVGRE(TNI=0x112277)/Ether(dst="00:11:22:33:44:77")/IP()/TCP()/Raw('x' * 20)
verify pkt1 received by pf queue 1, pkt2 to pf queue 2, pkt31 to pf queue 3, pkt32 to pf queue 1, pkt4 to pf queue 4, pkt51 to pf queue 5, pkt52 to pf queue 1, pkt53 to pf queue 1, pkt54 to pf queue 1, pkt55 to pf queue 0, pf can’t receive pkt56. pkt61 to vf0 queue 1 and pf queue 1, pf and vf0 can’t receive pkt62, pkt63 to vf0 queue 0, pkt64 to pf queue 0, vf0 can’t receive pkt64, pkt71 to vf1 queue 3 and pf queue 1, pkt72 to pf queue 1, vf1 can’t receive pkt72, pkt73 to vf1 queue 0, pkt74 to pf queue 0, vf1 can’t receive pkt74.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
verify pkt51 to pf queue 5, pkt53 and pkt55 to pf queue 0, pf can’t receive pkt52,pkt54 and pkt56. pkt71 to vf1 queue 3, pkt72 and pkt73 to vf1 queue 0, pkt74 to pf queue 0, vf1 can’t receive pkt74. Then:
testpmd> flow flush 0
testpmd> flow list 0
Test case: IXGBE SYN¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16 --disable-rss
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
ipv4:
testpmd> flow create 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 3 / end
ipv6:
testpmd> flow destroy 0 rule 0
testpmd> flow create 0 ingress pattern eth / ipv6 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 4 / end
send packets:
pkt1 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=80,flags="S")/Raw('x' * 20)
pkt2 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=80,flags="PA")/Raw('x' * 20)
pkt3 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/TCP(dport=80,flags="S")/Raw('x' * 20)
pkt4 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/TCP(dport=80,flags="PA")/Raw('x' * 20)
with the ipv4 rule: verify pkt1 to queue 3, pkt2 to queue 0, pkt3 to queue 3, pkt4 to queue 0. with the ipv6 rule: verify pkt1 to queue 4, pkt2 to queue 0, pkt3 to queue 4, pkt4 to queue 0.
notes: the default TCP flags of an outgoing packet is [S], so if the flags field is omitted in the sent packet, the packet will still land in queue 3 or queue 4.
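The note above relies on scapy's TCP defaults; a quick check (a sketch, not part of the plan itself):

from scapy.all import TCP

# scapy's TCP layer defaults to the SYN flag, so a packet built without an
# explicit flags= argument is still a SYN packet and matches the rules above.
print(TCP().flags)       # 'S' (0x02)
print(int(TCP().flags))  # 2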
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: IXGBE n-tuple(supported by x540 and 82599)¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16 --disable-rss
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
ipv4-other:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / end actions queue index 1 / end
ipv4-udp:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 proto is 17 / udp src is 22 dst is 23 / end actions queue index 2 / end
ipv4-tcp:
testpmd> flow create 0 ingress pattern ipv4 src is 192.168.0.2 dst is 192.168.0.3 proto is 6 / tcp src is 32 dst is 33 / end actions queue index 3 / end
ipv4-sctp:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.2 dst is 192.168.0.3 proto is 132 / sctp src is 44 dst is 45 / end actions queue index 4 / end
send packets:
pkt11 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)
pkt12 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2", dst="192.168.0.3")/Raw('x' * 20)
pkt21 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2", dst="192.168.0.3")/UDP(sport=22,dport=23)/Raw('x' * 20)
pkt22 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2", dst="192.168.0.3")/UDP(sport=22,dport=24)/Raw('x' * 20)
pkt31 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2", dst="192.168.0.3")/TCP(sport=32,dport=33)/Raw('x' * 20)
pkt32 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2", dst="192.168.0.3")/TCP(sport=34,dport=33)/Raw('x' * 20)
pkt41 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2", dst="192.168.0.3")/SCTP(sport=44,dport=45)/Raw('x' * 20)
pkt42 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.2", dst="192.168.0.3")/SCTP(sport=44,dport=46)/Raw('x' * 20)
pkt5 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/SCTP(sport=44,dport=45)/Raw('x' * 20)
pkt6 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/TCP(sport=32,dport=33)/Raw('x' * 20)
verify pkt11 to queue 1, pkt12 to queue 0, pkt21 to queue 2, pkt22 to queue 0, pkt31 to queue 3, pkt32 to queue 0, pkt41 to queue 4, pkt42 to queue 0, pkt5 to queue 1, pkt6 to queue 0,
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: IXGBE ethertype¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules:
testpmd> flow validate 0 ingress pattern eth type is 0x0806 / end actions queue index 3 / end
testpmd> flow validate 0 ingress pattern eth type is 0x86DD / end actions queue index 5 / end
testpmd> flow create 0 ingress pattern eth type is 0x0806 / end actions queue index 3 / end
testpmd> flow create 0 ingress pattern eth type is 0x88cc / end actions queue index 4 / end
the ixgbe doesn't support the 0x86DD (IPv6) ether type, so the second command fails.
send packets:
pkt1 = Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst="192.168.1.1")
pkt2 = Ether(dst="00:11:22:33:44:55", type=0x88CC)/Raw('x' * 20)
pkt3 = Ether(dst="00:11:22:33:44:55", type=0x86DD)/Raw('x' * 20)
verify pkt1 to queue 3, and pkt2 to queue 4, pkt3 to queue 0.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
verify pkt1 to queue 0, and pkt2 to queue 4. Then:
testpmd> flow list 0
testpmd> flow flush 0
verify pkt1 to queue 0, and pkt2 to queue 0. Then:
testpmd> flow list 0
Test case: IXGBE L2-tunnel(supported by x552 and x550)¶
Prerequisites:
add two vfs on dpdk pf, then bind the vfs to vfio-pci:
echo 2 >/sys/bus/pci/devices/0000:05:00.0/max_vfs
./usertools/dpdk-devbind.py -b vfio-pci 05:02.0 05:02.1
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -w 05:00.0 --file-prefix=pf --socket-mem=1024,1024 -- -i --rxq=16 --txq=16 --disable-rss
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start

./x86_64-native-linuxapp-gcc/app/testpmd -c 1e0000 -n 4 -w 05:02.0 --file-prefix=vf0 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start

./x86_64-native-linuxapp-gcc/app/testpmd -c 1e00000 -n 4 -w 05:02.1 --file-prefix=vf1 --socket-mem=1024,1024 -- -i --rxq=4 --txq=4 --disable-rss
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
Enabling ability of parsing E-tag packet, set on pf:
testpmd> port config 0 l2-tunnel E-tag enable
Enable E-tag packet forwarding, set on pf:
testpmd> E-tag set forwarding on port 0
create filter rules:
testpmd> flow create 0 ingress pattern e_tag grp_ecid_b is 0x1309 / end actions queue index 0 / end
testpmd> flow create 0 ingress pattern e_tag grp_ecid_b is 0x1308 / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern e_tag grp_ecid_b is 0x1307 / end actions queue index 2 / end
send packets:
pkt1 = Ether(dst="00:11:22:33:44:55")/Dot1BR(GRP=0x1, ECIDbase=0x309)/Raw('x' * 20)
pkt2 = Ether(dst="00:11:22:33:44:55")/Dot1BR(GRP=0x1, ECIDbase=0x308)/Raw('x' * 20)
pkt3 = Ether(dst="00:11:22:33:44:55")/Dot1BR(GRP=0x1, ECIDbase=0x307)/Raw('x' * 20)
pkt4 = Ether(dst="00:11:22:33:44:55")/Dot1BR(GRP=0x2, ECIDbase=0x309)/Raw('x' * 20)
verify pkt1 to vf0 queue0, pkt2 to vf1 queue0, pkt3 to pf queue0, pkt4 can’t received by pf and vfs.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: IXGBE fdir for ipv4¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
ipv4-other (only support by 82599 and x540, this rule matches the n-tuple):
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / end actions queue index 1 / end
ipv4-udp:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.3 dst is 192.168.0.4 / udp src is 22 dst is 23 / end actions queue index 2 / end
ipv4-tcp:
testpmd> flow create 0 ingress pattern ipv4 src is 192.168.0.3 dst is 192.168.0.4 / tcp src is 32 dst is 33 / end actions queue index 3 / end
ipv4-sctp (x550/x552, 82599 can support this format, because it matches n-tuple):
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.3 dst is 192.168.0.4 / sctp src is 44 dst is 45 / end actions queue index 4 / end
ipv4-sctp(82599/x540):
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.3 dst is 192.168.0.4 / sctp / end actions queue index 4 / end
ipv4-sctp-drop(x550/x552):
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.3 dst is 192.168.0.4 / sctp src is 46 dst is 47 / end actions drop / end
ipv4-sctp-drop(82599/x540):
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.5 dst is 192.168.0.6 / sctp / end actions drop / end
notes: 82599 don’t support the sctp port match drop, x550 and x552 support it.
ipv4-udp-flexbytes:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / udp src is 24 dst is 25 / raw relative is 0 search is 0 offset is 44 limit is 0 pattern is 86 / end actions queue index 5 / end

ipv4-tcp-flexbytes:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.3 dst is 192.168.0.4 / tcp src is 22 dst is 23 / raw relative spec 0 relative mask 1 search spec 0 search mask 1 offset spec 54 offset mask 0xffffffff limit spec 0 limit mask 0xffff pattern is ab pattern is cd / end actions queue index 6 / end
notes: the second pattern will overlap the first pattern. Rules 6 and 7 should be created after a testpmd reset, because the flexbytes rules use global bit masks.
invalid queue id:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / udp src is 32 dst is 33 / end actions queue index 16 / end
notes: the rule can’t be created successfully because the queue id exceeds the max queue id.
send packets:
pkt1 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)
pkt2 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.3", dst="192.168.0.4")/UDP(sport=22,dport=23)/Raw('x' * 20)
pkt3 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.3", dst="192.168.0.4")/TCP(sport=32,dport=33)/Raw('x' * 20)
for x552/x550:
pkt41 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.3", dst="192.168.0.4")/SCTP(sport=44,dport=45)/Raw('x' * 20)
pkt42 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.3", dst="192.168.0.4")/SCTP(sport=42,dport=43)/Raw('x' * 20)
for 82599/x540:
pkt41 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.3", dst="192.168.0.4")/SCTP()/Raw('x' * 20)
pkt42 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.3", dst="192.168.0.5")/SCTP()/Raw('x' * 20)
for x552/x550:
pkt5 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.3", dst="192.168.0.4")/SCTP(sport=46,dport=47)/Raw('x' * 20)
for 82599/x540:
pkt5 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.5", dst="192.168.0.6")/SCTP()/Raw('x' * 20)
pkt6 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(sport=24,dport=25)/Raw(load="xx86ddef")
pkt7 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.3", dst="192.168.0.4")/TCP(sport=22,dport=23)/Raw(load="abcdxxx")
pkt8 = Ether(dst="A0:36:9F:7B:C5:A9")/IP(src="192.168.0.3", dst="192.168.0.4")/TCP(sport=22,dport=23)/Raw(load="cdcdxxx")
verify pkt1 to pkt3 can be received by queue 1 to queue 3 correctly. pkt41 to queue 4, pkt42 to queue 0, pkt5 couldn’t be received. pkt6 to queue 5, pkt7 to queue 0, pkt8 to queue 6.
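The flexbytes offsets above (44 for the udp rule, 54 for the tcp rule) count from the first byte of the Ethernet header; a quick scapy sketch of the arithmetic, assuming default header sizes:

from scapy.all import Ether, IP, UDP, TCP

# 14 (Ether) + 20 (IPv4) + 8 (UDP) = 42, so offset 44 is byte 2 of the UDP
# payload -- the "86" in pkt6's load "xx86ddef".
print(len(Ether()/IP()/UDP()))  # 42

# 14 (Ether) + 20 (IPv4) + 20 (TCP) = 54, so offset 54 is byte 0 of the TCP
# payload -- where pkt8's "cd" matches the second (overlapping) pattern.
print(len(Ether()/IP()/TCP()))  # 54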
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: IXGBE fdir for signature(ipv4/ipv6)¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=signature
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
ipv6-other (82599 support this rule,x552 and x550 don’t support this rule):
testpmd> flow create 0 ingress pattern fuzzy thresh is 1 / ipv6 src is 2001::1 dst is 2001::2 / end actions queue index 1 / end
ipv6-udp:
testpmd> flow create 0 ingress pattern fuzzy thresh spec 2 thresh last 5 thresh mask 0xffffffff / ipv6 src is 2001::1 dst is 2001::2 / udp src is 22 dst is 23 / end actions queue index 2 / end
ipv6-tcp:
testpmd> flow create 0 ingress pattern fuzzy thresh is 3 / ipv6 src is 2001::1 dst is 2001::2 / tcp src is 32 dst is 33 / end actions queue index 3 / end
ipv6-sctp (x552 and x550):
testpmd> flow create 0 ingress pattern fuzzy thresh is 4 / ipv6 src is 2001::1 dst is 2001::2 / sctp src is 44 dst is 45 / end actions queue index 4 / end
(82599 and x540):
testpmd> flow create 0 ingress pattern fuzzy thresh is 4 / ipv6 src is 2001::1 dst is 2001::2 / sctp / end actions queue index 4 / end
ipv6-other-flexbytes (just for 82599/x540):
testpmd> flow create 0 ingress pattern fuzzy thresh is 6 / ipv6 src is 2001::1 dst is 2001::2 / raw relative is 0 search is 0 offset is 56 limit is 0 pattern is 86 / end actions queue index 5 / end
notes: this rule can be created successfully on 82599/x540, but can't be created successfully on x552/x550, because it's an ipv4-other rule there. On x552/x550 the offset must be <= 62: the mac header is 14 bytes, the ipv6 header is 40 bytes and the shortest L4 header (udp) is 8 bytes, so the headers already total 62 bytes and there is no payload left to set an offset on. Therefore we don't test the ipv6 flexbytes case on x550/x552. According to a hardware limitation, signature mode does not support the drop action; since IPv6 relies on signature mode, it is the expected result that an IPv6 flow with a drop action can't be created.
ipv4-other (82599 support this rule,x552 and x550 don’t support this rule):
testpmd> flow create 0 ingress pattern fuzzy thresh is 1 / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / end actions queue index 6 / end

ipv4-udp:
testpmd> flow create 0 ingress pattern fuzzy thresh is 2 / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / udp src is 22 dst is 23 / end actions queue index 7 / end

ipv4-tcp:
testpmd> flow create 0 ingress pattern fuzzy thresh is 3 / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / tcp src is 32 dst is 33 / end actions queue index 8 / end

ipv4-sctp(x550/x552):
testpmd> flow create 0 ingress pattern fuzzy thresh is 4 / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / sctp src is 44 dst is 45 / end actions queue index 9 / end

ipv4-sctp(82599/x540):
testpmd> flow create 0 ingress pattern fuzzy thresh is 5 / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / sctp / end actions queue index 9 / end
notes: if set the ipv4-sctp rule with sctp ports on 82599, it will fail to create the rule.
ipv4-sctp-flexbytes(x550/x552):
testpmd> flow create 0 ingress pattern fuzzy thresh is 6 / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / sctp src is 24 dst is 25 / raw relative is 0 search is 0 offset is 48 limit is 0 pattern is ab / end actions queue index 10 / end

ipv4-sctp-flexbytes(82599/x540):
testpmd> flow create 0 ingress pattern fuzzy thresh is 6 / eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 / sctp / raw relative is 0 search is 0 offset is 48 limit is 0 pattern is ab / end actions queue index 10 / end
notes: you need to reset testpmd before creating this rule, because it conflicts with rule 9.
send packets
ipv6 packets:
pkt1 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/Raw('x' * 20)
pkt2 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/UDP(sport=22,dport=23)/Raw('x' * 20)
pkt3 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/TCP(sport=32,dport=33)/Raw(load="xxxxabcd")
for x552/x550:
pkt4 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2",nh=132)/SCTP(sport=44,dport=45,tag=1)/SCTPChunkData(data="cdxxxx")
pkt5 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2",nh=132)/SCTP(sport=46,dport=47,tag=1)/SCTPChunkData(data="cdxxxx")
for 82599/x540:
pkt41 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2",nh=132)/SCTP(sport=44,dport=45,tag=1)/SCTPChunkData(data="cdxxxx")
pkt42 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2",nh=132)/SCTP()/SCTPChunkData(data="cdxxxx")
pkt51 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2",nh=132)/SCTP(sport=46,dport=47,tag=1)/SCTPChunkData(data="cdxxxx")
pkt52 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::3", dst="2001::4",nh=132)/SCTP(sport=46,dport=47,tag=1)/SCTPChunkData(data="cdxxxx")
pkt6 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/Raw(load="xx86abcd")
pkt7 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/Raw(load="xxx86abcd")
ipv4 packets:
pkt1 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/Raw('x' * 20)
pkt2 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(sport=22,dport=23)/Raw('x' * 20)
pkt3 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=32,dport=33)/Raw('x' * 20)
for x552/x550:
pkt41 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/SCTP(sport=44,dport=45)/Raw('x' * 20)
pkt42 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/SCTP(sport=42,dport=43)/Raw('x' * 20)
for 82599/x540:
pkt41 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/SCTP()/Raw('x' * 20)
pkt42 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.3")/SCTP()/Raw('x' * 20)
pkt51 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/SCTP(sport=24,dport=25)/Raw(load="xxabcdef")
pkt52 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/SCTP(sport=24,dport=25)/Raw(load="xxaccdef")
verify ipv6 packets: for x552/x550: pkt1 to queue 0, pkt2 to queue 2, pkt3 to queue 3. pkt4 to queue 4, pkt5 to queue 0.
for 82599/x540: packet pkt1 to pkt3 can be received by queue 1 to queue 3 correctly. pkt41 and pkt42 to queue 4, pkt51 to queue 4, pkt52 to queue 0. pkt6 to queue 5, pkt7 to queue 0.
verify ipv4 packets: for x552/x550: pkt1 to queue 0, pkt2 to queue 7, pkt3 to queue 8. pkt41 to queue 9, pkt42 to queue 0, pkt51 to queue 10, pkt52 to queue 0.
for 82599/x540: pkt1 to pkt3 can be received by queue 6 to queue 8 correctly. pkt41 to queue 9, pkt42 to queue 0, pkt51 to queue 10, pkt52 to queue 0.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: IXGBE fdir for mac/vlan(support by x540, x552, x550)¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect-mac-vlan
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
testpmd> vlan set strip off 0
testpmd> vlan set filter off 0
create filter rules:
testpmd> flow create 0 ingress pattern eth dst is A0:36:9F:7B:C5:A9 / vlan tpid is 0x8100 tci is 1 / end actions queue index 9 / end
testpmd> flow create 0 ingress pattern eth dst is A0:36:9F:7B:C5:A9 / vlan tpid is 0x8100 tci is 4095 / end actions queue index 10 / end
send packets:
pkt1 = Ether(dst="A0:36:9F:7B:C5:A9")/Dot1Q(vlan=1)/IP()/TCP()/Raw('x' * 20)
pkt2 = Ether(dst="A0:36:9F:7B:C5:A9")/Dot1Q(vlan=4095)/IP()/UDP()/Raw('x' * 20)

verify pkt1 to queue 9 and pkt2 to queue 10.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: IXGBE fdir for tunnel (vxlan and nvgre)(support by x540, x552, x550)¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=16 --txq=16 --disable-rss --pkt-filter-mode=perfect-tunnel
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
vxlan:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / vxlan vni is 8 / eth dst is A0:36:9F:7B:C5:A9 / vlan tci is 2 tpid is 0x8100 / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern eth / ipv6 / udp / vxlan vni is 9 / eth dst is A0:36:9F:7B:C5:A9 / vlan tci is 4095 tpid is 0x8100 / end actions queue index 2 / end
nvgre:
testpmd> flow create 0 ingress pattern eth / ipv4 / nvgre tni is 0x112244 / eth dst is A0:36:9F:7B:C5:A9 / vlan tci is 20 / end actions queue index 3 / end
testpmd> flow create 0 ingress pattern eth / ipv6 / nvgre tni is 0x112233 / eth dst is A0:36:9F:7B:C5:A9 / vlan tci is 21 / end actions queue index 4 / end
send packets
vxlan:
pkt1=Ether(dst="A0:36:9F:7B:C5:A9")/IP()/UDP()/Vxlan(vni=8)/Ether(dst="A0:36:9F:7B:C5:A9")/Dot1Q(vlan=2)/IP()/TCP()/Raw('x' * 20)
pkt2=Ether(dst="A0:36:9F:7B:C5:A9")/IPv6()/UDP()/Vxlan(vni=9)/Ether(dst="A0:36:9F:7B:C5:A9")/Dot1Q(vlan=4095)/IP()/TCP()/Raw('x' * 20)
nvgre:
pkt3 = Ether(dst="A0:36:9F:7B:C5:A9")/IP()/NVGRE(TNI=0x112244)/Ether(dst="A0:36:9F:7B:C5:A9")/Dot1Q(vlan=20)/IP()/TCP()/Raw('x' * 20)
pkt4 = Ether(dst="A0:36:9F:7B:C5:A9")/IPv6()/NVGRE(TNI=0x112233)/Ether(dst="A0:36:9F:7B:C5:A9")/Dot1Q(vlan=21)/IP()/TCP()/Raw('x' * 20)
verify pkt1 to pkt4 are into queue 1 to queue 4.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: igb SYN¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=8 --txq=8 --disable-rss
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
ipv4:
testpmd> flow create 0 ingress pattern eth / ipv4 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 3 / end
ipv6:
testpmd> flow destroy 0 rule 0
testpmd> flow create 0 ingress pattern eth / ipv6 / tcp flags spec 0x02 flags mask 0x02 / end actions queue index 4 / end
send packets:
pkt1 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=80,flags="S")/Raw('x' * 20)
pkt2 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/TCP(dport=80,flags="S")/Raw('x' * 20)
pkt3 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(dport=80,flags="PA")/Raw('x' * 20)
pkt4 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/TCP(dport=80,flags="PA")/Raw('x' * 20)
with the ipv4 rule: verify pkt1 to queue 3, pkt2 to queue 0, pkt3 to queue 0. with the ipv6 rule: verify pkt2 to queue 4, pkt1 to queue 0, pkt4 to queue 0.
notes: the out packet default is Flags [S], so if the flags is omitted in sent pkt, the pkt will be into queue 3 or queue 4.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: igb n-tuple(82576)¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=8 --txq=8 --disable-rss
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules:
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 proto is 17 / udp src is 22 dst is 23 / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern eth / ipv4 src is 192.168.0.1 dst is 192.168.0.2 proto is 6 / tcp src is 22 dst is 23 / end actions queue index 2 / end
send packets:
pkt1 = Ether(dst="%s")/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(sport=22,dport=23)/Raw('x' * 20)
pkt2 = Ether(dst="%s")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=32,dport=33)/Raw('x' * 20)
verify pkt1 to queue 1 and pkt2 to queue 2.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: igb n-tuple(i350 or 82580)¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=8 --txq=8 --disable-rss
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules:
testpmd> flow create 0 ingress pattern eth / ipv4 proto is 17 / udp dst is 23 / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern eth / ipv4 proto is 6 / tcp dst is 33 / end actions queue index 2 / end
send packets:
pkt1 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(sport=22,dport=23)/Raw('x' * 20)
pkt2 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/UDP(sport=22,dport=24)/Raw('x' * 20)
pkt3 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=32,dport=33)/Raw('x' * 20)
pkt4 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=32,dport=34)/Raw('x' * 20)
verify pkt1 to queue 1, pkt2 to queue 0. pkt3 to queue 2, pkt4 to queue 0.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
Test case: igb ethertype¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=8 --txq=8
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules:
testpmd> flow validate 0 ingress pattern eth type is 0x0806 / end actions queue index 3 / end
testpmd> flow validate 0 ingress pattern eth type is 0x86DD / end actions queue index 5 / end
testpmd> flow create 0 ingress pattern eth type is 0x0806 / end actions queue index 3 / end
testpmd> flow create 0 ingress pattern eth type is 0x88cc / end actions queue index 4 / end
testpmd> flow create 0 ingress pattern eth type is 0x88cc / end actions queue index 8 / end
the igb doesn't support the 0x86DD (IPv6) ether type, so the second command fails; queue id 8 exceeds the max queue id, so the last command fails.
send packets:
pkt1 = Ether(dst="ff:ff:ff:ff:ff:ff")/ARP(pdst="192.168.1.1")
pkt2 = Ether(dst="00:11:22:33:44:55", type=0x88CC)/Raw('x' * 20)
verify pkt1 to queue 3, and pkt2 to queue 4.
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0

verify pkt1 to queue 0, and pkt2 to queue 4.

testpmd> flow list 0
testpmd> flow flush 0

verify pkt1 to queue 0, and pkt2 to queue 0. Then:
testpmd> flow list 0
Test case: igb flexbytes¶
Launch the app testpmd with the following arguments:

./x86_64-native-linuxapp-gcc/app/testpmd -c 1ffff -n 4 -- -i --rxq=8 --txq=8 --disable-rss
testpmd> set fwd rxonly
testpmd> set verbose 1
testpmd> start
create filter rules
l2 packet:
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 14 pattern is fhds / end actions queue index 1 / end
l2 packet relative is 1 (the first relative must be 0, so this rule won’t work):
testpmd> flow create 0 ingress pattern raw relative is 1 offset is 2 pattern is fhds / end actions queue index 2 / end
ipv4 packet:
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 34 pattern is ab / end actions queue index 3 / end
ipv6 packet:
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 58 pattern is efgh / end actions queue index 4 / end
3 fields relative is 0:
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 38 pattern is ab / raw relative is 0 offset is 34 pattern is cd / raw relative is 0 offset is 42 pattern is efgh / end actions queue index 5 / end
4 fields relative is 0 and 1:
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 48 pattern is ab / raw relative is 1 offset is 0 pattern is cd / raw relative is 0 offset is 44 pattern is efgh / raw relative is 1 offset is 10 pattern is hijklmnopq / end actions queue index 6 / end
3 fields offset conflict:
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 64 pattern is ab / raw relative is 1 offset is 4 pattern is cdefgh / raw relative is 0 offset is 68 pattern is klmn / end actions queue index 7 / end
1 field 128bytes
flush the rules:
testpmd> flow flush 0
then create the rule:
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 128 pattern is ab / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 126 pattern is abcd / end actions queue index 1 / end
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 126 pattern is ab / end actions queue index 1 / end
the first two rules failed to create, only the last flow rule is created successfully.
2 field 128bytes:
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 68 pattern is ab / raw relative is 1 offset is 58 pattern is cd / end actions queue index 2 / end
testpmd> flow create 0 ingress pattern raw relative is 0 offset is 68 pattern is ab / raw relative is 1 offset is 56 pattern is cd / end actions queue index 2 / end
the first rule failed to create, only the last flow rule is created successfully.
send packets:
pkt11 = Ether(dst="00:11:22:33:44:55")/Raw(load="fhdsab") pkt12 = Ether(dst="00:11:22:33:44:55")/Raw(load="afhdsb") pkt2 = Ether(dst="00:11:22:33:44:55")/Raw(load="abfhds") pkt3 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/Raw(load="abcdef") pkt41 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/Raw(load="xxxxefgh") pkt42 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2")/TCP(sport=32,dport=33)/Raw(load="abcdefgh") pkt5 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/Raw(load="cdxxabxxefghxxxx") pkt6 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2", tos=4, ttl=3)/UDP(sport=32,dport=33)/Raw(load="xxefghabcdxxxxxxhijklmnopqxxxx") pkt71 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=22,dport=23)/Raw(load="xxxxxxxxxxabxxklmnefgh") pkt72 = Ether(dst="00:11:22:33:44:55")/IPv6(src="2001::1", dst="2001::2", tc=3, hlim=30)/Raw(load="xxxxxxxxxxabxxklmnefgh") pkt73 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=22,dport=23)/Raw(load="xxxxxxxxxxabxxklcdefgh") pkt81 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=22,dport=23)/Raw(load="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxab") pkt82 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=22,dport=23)/Raw(load="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxcb") pkt91 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=22,dport=23)/Raw(load="xxxxxxxxxxxxxxabxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxcd") pkt92 = Ether(dst="00:11:22:33:44:55")/IP(src="192.168.0.1", dst="192.168.0.2")/TCP(sport=22,dport=23)/Raw(load="xxxxxxxxxxxxxxabxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxce")
verify pkt11 to queue 1, pkt12 to queue 0. pkt2 to queue 0. pkt3 to queue 3. pkt41 to queue 4, pkt42 to queue 0, // tcp header has 20 bytes. pkt5 to queue 5. pkt6 to queue 6. pkt71 to queue 7, pkt72 to queue 7, pkt73 to queue 0. pkt81 to queue 1, pkt82 to queue 0. pkt91 to queue 2, pkt92 to queue 0.
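The same offset bookkeeping applies to the igb flexbytes rules above; a short sketch of where offsets 14, 34 and 58 land, assuming default header sizes:

from scapy.all import Ether, IP, IPv6

print(len(Ether()))         # 14 -> offset 14 is the first byte after the L2 header (pkt11's "fhds")
print(len(Ether()/IP()))    # 34 -> offset 34 is the first byte of the IPv4 payload (pkt3's "ab")
print(len(Ether()/IPv6()))  # 54 -> offset 58 is byte 4 of the IPv6 payload (pkt41's "efgh")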
verify rules can be listed and destroyed:
testpmd> flow list 0
testpmd> flow destroy 0 rule 0
testpmd> flow list 0
testpmd> flow flush 0
testpmd> flow list 0
DDP GTP Qregion¶
DDP profile 0x80000008 adds support for GTP with IPv4 or IPv6 payload. This test plan focuses on the additional DDP GTP requirements listed below. For an introduction to DDP GTP, please refer to the DDP GTP test plan.
Requirements¶
- GTP-C distributed to control plane queues region using outer IP destination address as hash input set (there is no inner IP headers for GTP-C packets)
- GTP-U distributed to data plane queues region using inner IP source address as hash input set.
- GTP-C distributed to control plane queues region using TEID as hash input set.
- GTP-U distributed to data plane queues region using TEID and inner packet 5-tuple as hash input set.
- For requirements 1 and 2, it should be possible to use 64-, 48- or 32-bit IPv6 prefixes instead of the full address.
FVL supports queue region configuration for RSS, so different traffic classes or packet classification types can be separated into different queue regions, each of which includes several queues. It also supports setting the hash input set for RSS flexible payload, which enables RSS for new protocols. The dynamic flow type feature introduces GTP pctypes and flow types; the queue region/queue range mapping is designed and added as in the table below. For more detailed and related information, please refer to the dynamic flow type and queue region test plan:
+-------------+------------+------------+--------------+-------------+
| Packet Type | PCTypes | Flow Types | Queue region | Queue range |
+-------------+------------+------------+--------------+-------------+
| GTP-U IPv4 | 22 | 26 | 0 | 1~8 |
+-------------+------------+------------+--------------+-------------+
| GTP-U IPv6 | 23 | 23 | 1 | 10~25 |
+-------------+------------+------------+--------------+-------------+
| GTP-U PAY4 | 24 | 24 | 2 | 30~37 |
+-------------+------------+------------+--------------+-------------+
| GTP-C PAY4 | 25 | 25 | 3 | 40~55 |
+-------------+------------+------------+--------------+-------------+
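The same mapping, written out as a small lookup table that a test script could use when checking which queue a flow type is expected to land in (values copied from the table above):

# flow_type: (pctype, queue_region, first_queue, last_queue)
GTP_QUEUE_REGIONS = {
    26: (22, 0, 1, 8),    # GTP-U IPv4
    23: (23, 1, 10, 25),  # GTP-U IPv6
    24: (24, 2, 30, 37),  # GTP-U PAY4
    25: (25, 3, 40, 55),  # GTP-C PAY4
}

def queue_in_expected_region(flow_type, queue):
    """True if the received queue falls inside the region for this flow type."""
    _, _, first, last = GTP_QUEUE_REGIONS[flow_type]
    return first <= queue <= last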
Prerequisites¶
Host PF in DPDK driver:
./tools/dpdk-devbind.py -b igb_uio 81:00.0
Start testpmd on the host, set chained port topology mode, and add txq/rxq to enable multi-queues. To test PF flow director, add --pkt-filter-mode=perfect to the testpmd command line to enable flow director. In general, the PF's max queue number is 64:
./testpmd -c f -n 4 -- -i --pkt-filter-mode=perfect --port-topology=chained --txq=64 --rxq=64
Load/delete dynamic device personalization¶
Stop testpmd port before loading profile:
testpmd > port stop all
Load the gtp.pkgo file into the memory buffer, save the original configuration, and write it back from the same buffer to the gtp.bak file:
testpmd > ddp add (port_id) /tmp/gtp.pkgo,/tmp/gtp.bak
Remove profile from the network adapter and restore original configuration:
testpmd > ddp del (port_id) /tmp/gtp.bak
Start testpmd port:
testpmd > port start all
Note:
- The gtp.pkgo profile has not been released by ND yet; only an engineering version for internal use is available so far. The plan is to keep public reference profiles at the Intel Developer Zone and to supply a link once release versions of the profiles are published.
- Loading the DDP profile is a prerequisite for the GTP-related cases below. Load the profile again after restarting testpmd so that the software detects this event, even though a “profile has already existed” reminder is shown.
Flow type and queue region mapping setting¶
As above mapping table, set queue region on a port:
testpmd> set port 0 queue-region region_id 0 queue_start_index 1 queue_num 8
testpmd> set port 0 queue-region region_id 1 queue_start_index 10 queue_num 16
testpmd> set port 0 queue-region region_id 2 queue_start_index 30 queue_num 8
testpmd> set port 0 queue-region region_id 3 queue_start_index 40 queue_num 16
Set the mapping of flow type to region index on a port:
testpmd> set port 0 queue-region region_id 0 flowtype 26
testpmd> set port 0 queue-region region_id 1 flowtype 23
testpmd> set port 0 queue-region region_id 2 flowtype 24
testpmd> set port 0 queue-region region_id 3 flowtype 25
testpmd> set port 0 queue-region flush on
flush all queue regions:
testpmd> set port 0 queue-region flush off
Test Case: Outer IPv6 dst controls GTP-C queue in queue region¶
Check flow ptype to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-C flow type id 25 to pctype id 25 mapping item:
testpmd> port config 0 pctype mapping update 25 25
Check that the flow type to pctype mapping now includes this 25 to 25 entry.
Reset the GTP-C hash configuration:
testpmd> port config 0 pctype 25 hash_inset clear all
Outer dst address words are 50~57, enable hash input set for outer dst:
testpmd> port config 0 pctype 25 hash_inset set field 50
testpmd> port config 0 pctype 25 hash_inset set field 51
testpmd> port config 0 pctype 25 hash_inset set field 52
testpmd> port config 0 pctype 25 hash_inset set field 53
testpmd> port config 0 pctype 25 hash_inset set field 54
testpmd> port config 0 pctype 25 hash_inset set field 55
testpmd> port config 0 pctype 25 hash_inset set field 56
testpmd> port config 0 pctype 25 hash_inset set field 57
Enable flow type id 25’s RSS:
testpmd> port config all rss 25
Start testpmd, set fwd rxonly, enable output print
Send outer dst GTP-C packet, check RSS could work, verify the queue is between 40 and 55, print PKT_RX_RSS_HASH:
p=Ether()/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP(dport=2123)/ GTP_U_Header()/Raw('x'*20)
Send different outer dst GTP-C packet, check pmd receives packet from different queue but between 40 and 55:
p=Ether()/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP(dport=2123)/ GTP_U_Header()/Raw('x'*20)
Send different outer src GTP-C packet, check pmd receives packet from same queue:
p=Ether()/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(dport=2123)/GTP_U_Header()/Raw('x'*20)
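A minimal scapy sketch of the three sends above, assuming the GTP layer comes from scapy.contrib.gtp (as used by the DTS environment) and a hypothetical tester interface:

from scapy.all import Ether, IPv6, UDP, Raw, sendp
from scapy.contrib.gtp import GTP_U_Header

TESTER_IFACE = "ens785f1"  # hypothetical tester port

SRC1 = "1001:0db8:85a3:0000:0000:8a2e:0370:0001"
SRC2 = "1001:0db8:85a3:0000:0000:8a2e:0370:0002"
DST1 = "2001:0db8:85a3:0000:0000:8a2e:0370:0001"
DST2 = "2001:0db8:85a3:0000:0000:8a2e:0370:0002"

# Only the outer IPv6 dst is in the hash input set, so packets 1 and 2 may land
# on different queues (both inside 40-55), while packet 3 (different src, same
# dst) must land on the same queue as packet 1.
for src, dst in ((SRC1, DST1), (SRC1, DST2), (SRC2, DST1)):
    sendp(Ether()/IPv6(src=src, dst=dst)/UDP(dport=2123)/GTP_U_Header()/Raw('x' * 20),
          iface=TESTER_IFACE, count=1)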
Test Case: TEID controls GTP-C queue in queue region¶
Check flow ptype to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-C flow type id 25 to pctype id 25 mapping item:
testpmd> port config 0 pctype mapping update 25 25
Check that the flow type to pctype mapping now includes this 25 to 25 entry.
Reset the GTP-C hash configuration:
testpmd> port config 0 pctype 25 hash_inset clear all
Teid words are 44 and 45, enable hash input set for teid:
testpmd> port config 0 pctype 25 hash_inset set field 44
testpmd> port config 0 pctype 25 hash_inset set field 45
Enable flow type id 25’s RSS:
testpmd> port config all rss 25
Start testpmd, set fwd rxonly, enable output print
Send teid GTP-C packet, check RSS could work, verify the queue is between 40 and 55, print PKT_RX_RSS_HASH:
p=Ether()/IPv6()/UDP(dport=2123)/GTP_U_Header(teid=0xfe)/Raw('x'*20)
Send different teid GTP-C packet, check receive packet from different queue but between 40 and 55:
p=Ether()/IPv6()/UDP(dport=2123)/GTP_U_Header(teid=0xff)/Raw('x'*20)
Test Case: TEID controls GTP-U IPv4 queue in queue region¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv4 flow type id 26 to pctype id 22 mapping item:
testpmd> port config 0 pctype mapping update 22 26
Check that the flow type to pctype mapping now includes the new 26 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv4 hash configure:
testpmd> port config 0 pctype 22 hash_inset clear all
Teid words are 44 and 45, enable hash input set for teid:
testpmd> port config 0 pctype 22 hash_inset set field 44
testpmd> port config 0 pctype 22 hash_inset set field 45
Enable flow type id 26’s RSS:
testpmd> port config all rss 26
Start testpmd, set fwd rxonly, enable output print
Send teid GTP-U IPv4 packet, check RSS could work, verify the queue is between 1 and 8, print PKT_RX_RSS_HASH:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IP()/Raw('x'*20)
Send different teid GTP-U IPv4 packet, check receive packet from different queue but between 1 and 8:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xff)/IP()/Raw('x'*20)
Test Case: Sport controls GTP-U IPv4 queue in queue region¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv4 flow type id 26 to pctype id 22 mapping item:
testpmd> port config 0 pctype mapping update 22 26
Check that the flow type to pctype mapping now includes the new 26 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv4 hash configure:
testpmd> port config 0 pctype 22 hash_inset clear all
Sport words are 29 and 30, enable hash input set for sport:
testpmd> port config 0 pctype 22 hash_inset set field 29
testpmd> port config 0 pctype 22 hash_inset set field 30
Enable flow type id 26’s RSS:
testpmd> port config all rss 26
Start testpmd, set fwd rxonly, enable output print
Send sport GTP-U IPv4 packet, check RSS could work, verify the queue is between 1 and 8, print PKT_RX_RSS_HASH:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=30)/IP()/ UDP(sport=100,dport=200)/Raw('x'*20)
Send different sport GTP-U IPv4 packet, check pmd receives packet from different queue but between 1 and 8:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=30)/IP()/ UDP(sport=101,dport=200)/Raw('x'*20)
Test Case: Dport controls GTP-U IPv4 queue in queue region¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv4 flow type id 26 to pctype id 22 mapping item:
testpmd> port config 0 pctype mapping update 22 26
Check that the flow type to pctype mapping now includes the new 26 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv4 hash configure:
testpmd> port config 0 pctype 22 hash_inset clear all
Dport words are 29 and 30, enable hash input set for dport:
testpmd> port config 0 pctype 22 hash_inset set field 29
testpmd> port config 0 pctype 22 hash_inset set field 30
Enable flow type id 26’s RSS:
testpmd> port config all rss 26
Start testpmd, set fwd rxonly, enable output print
Send dport GTP-U IPv4 packet, check RSS could work, verify the queue is between 1 and 8, print PKT_RX_RSS_HASH:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=30)/IP()/ UDP(sport=100,dport=200)/Raw('x'*20)
Send different dport GTP-U IPv4 packet, check receive packet from different queue but between 1 and 8:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=30)/IP()/ UDP(sport=100,dport=201)/Raw('x'*20)
Test Case: Inner IP src controls GTP-U IPv4 queue in queue region¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv4 flow type id 26 to pctype id 22 mapping item:
testpmd> port config 0 pctype mapping update 22 26
Check that the flow type to pctype mapping now includes the new 26 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv4 hash configure:
testpmd> port config 0 pctype 22 hash_inset clear all
Inner source words are 15 and 16, enable hash input set for inner src:
testpmd> port config 0 pctype 22 hash_inset set field 15
testpmd> port config 0 pctype 22 hash_inset set field 16
Enable flow type id 26’s RSS:
testpmd> port config all rss 26
Start testpmd, set fwd rxonly, enable output print
Send inner src GTP-U IPv4 packet, check RSS could work, verify the queue is between 1 and 8, print PKT_RX_RSS_HASH:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IP(src="1.1.1.1",dst="2.2.2.2")/UDP()/Raw('x'*20)
Send different src GTP-U IPv4 packet, check pmd receives packet from different queue but between 1 and 8:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IP(src="1.1.1.2",dst="2.2.2.2")/UDP()/Raw('x'*20)
Send different dst GTP-U IPv4 packet, check pmd receives packet from same queue:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IP(src="1.1.1.1",dst="2.2.2.3")/UDP()/Raw('x'*20)
Test Case: Inner IP dst controls GTP-U IPv4 queue in queue region¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv4 flow type id 26 to pctype id 22 mapping item:
testpmd> port config 0 pctype mapping update 22 26
Check that the flow type to pctype mapping now includes the new 26 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv4 hash configure:
testpmd> port config 0 pctype 22 hash_inset clear all
Inner dst words are 27 and 28, enable hash input set for inner dst:
testpmd> port config 0 pctype 22 hash_inset set field 27
testpmd> port config 0 pctype 22 hash_inset set field 28
Enable flow type id 26’s RSS:
testpmd> port config all rss 26
Start testpmd, set fwd rxonly, enable output print
Send inner dst GTP-U IPv4 packet, check RSS could work, verify the queue is between 1 and 8, print PKT_RX_RSS_HASH:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IP(src="1.1.1.1",dst="2.2.2.2")/UDP()/Raw('x'*20)
Send different dst address GTP-U IPv4 packet, check pmd receives packet from different queue but between 1 and 8:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IP(src="1.1.1.1",dst="2.2.2.3")/UDP()/Raw('x'*20)
Send different src address, check pmd receives packet from same queue:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IP(src="1.1.1.2",dst="2.2.2.2")/UDP()/Raw('x'*20)
Test Case: TEID controls GTP-U IPv6 queue in queue region¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Check that the flow type to pctype mapping now includes the new 23 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv6 hash configure:
testpmd> port config 0 pctype 23 hash_inset clear all
Teid words are 44 and 45, enable hash input set for teid:
testpmd> port config 0 pctype 23 hash_inset set field 44
testpmd> port config 0 pctype 23 hash_inset set field 45
Enable flow type id 23’s RSS:
testpmd> port config all rss 23
Start testpmd, set fwd rxonly, enable output print
Send teid GTP-U IPv6 packet, check RSS could work, verify the queue is between 10 and 25, print PKT_RX_RSS_HASH:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6()/ UDP(sport=100,dport=200)/Raw('x'*20)
Send different teid GTP-U IPv6 packet, check pmd receives packet from different queue but between 10 and 25:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xff)/IPv6()/ UDP(sport=100,dport=200)/Raw('x'*20)
Test Case: Sport controls GTP-U IPv6 queue in queue region¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Check that the flow type to pctype mapping now includes the new 23 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv6 hash configure:
testpmd> port config 0 pctype 23 hash_inset clear all
Sport words are 29 and 30, enable hash input set for sport:
testpmd> port config 0 pctype 23 hash_inset set field 29
testpmd> port config 0 pctype 23 hash_inset set field 30
Enable flow type id 23’s RSS:
testpmd> port config all rss 23
Start testpmd, set fwd rxonly, enable output print
Send sport GTP-U IPv6 packet, check RSS could work, verify the queue is between 10 and 25, print PKT_RX_RSS_HASH:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/IPv6()/ UDP(sport=100,dport=200)/Raw('x'*20)
Send different sport GTP-U IPv6 packet, check pmd receives packet from different queue but between 10 and 25:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/IPv6()/ UDP(sport=101,dport=200)/Raw('x'*20)
Test Case: Dport controls GTP-U IPv6 queue in queue region¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Check that the flow type to pctype mapping now includes the new 23 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv6 hash configure:
testpmd> port config 0 pctype 23 hash_inset clear all
Dport words are 29 and 30, enable hash input set for dport:
testpmd> port config 0 pctype 23 hash_inset set field 29
testpmd> port config 0 pctype 23 hash_inset set field 30
Enable flow type id 23’s RSS:
testpmd> port config all rss 23
Start testpmd, set fwd rxonly, enable output print
Send dport GTP-U IPv6 packet, check RSS could work, verify the queue is between 10 and 25, print PKT_RX_RSS_HASH:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/IPv6()/ UDP(sport=100,dport=200)/Raw('x'*20)
Send different dport GTP-U IPv6 packet, check pmd receives packet from different queue but between 10 and 25:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/IPv6()/ UDP(sport=100,dport=201)/Raw('x'*20)
Test Case: Inner IPv6 src controls GTP-U IPv6 queue in queue region¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Check that the flow type to pctype mapping now includes the new 23 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv6 hash configure:
testpmd> port config 0 pctype 23 hash_inset clear all
Inner IPv6 src words are 13~20, enable hash input set for inner src:
testpmd> port config 0 pctype 23 hash_inset set field 13
testpmd> port config 0 pctype 23 hash_inset set field 14
testpmd> port config 0 pctype 23 hash_inset set field 15
testpmd> port config 0 pctype 23 hash_inset set field 16
testpmd> port config 0 pctype 23 hash_inset set field 17
testpmd> port config 0 pctype 23 hash_inset set field 18
testpmd> port config 0 pctype 23 hash_inset set field 19
testpmd> port config 0 pctype 23 hash_inset set field 20
Enable flow type id 23’s RSS:
testpmd> port config all rss 23
Start testpmd, set fwd rxonly, enable output print
Send inner src address GTP-U IPv6 packets, check RSS could work, verify the queue is between 10 and 25, print PKT_RX_RSS_HASH:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
Send different inner src GTP-U IPv6 packet, check pmd receives packet from different queue but between 10 and 25:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
Send different inner dst GTP-U IPv6 packet, check pmd receives packet from same queue:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP()/Raw('x'*20)
Test Case: Inner IPv6 dst controls GTP-U IPv6 queue in queue region¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Check that the flow type to pctype mapping now includes the new 23 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv6 hash configure:
testpmd> port config 0 pctype 23 hash_inset clear all
Inner IPv6 dst words are 21~28, enable hash input set for inner dst:
testpmd> port config 0 pctype 23 hash_inset set field 21
testpmd> port config 0 pctype 23 hash_inset set field 22
testpmd> port config 0 pctype 23 hash_inset set field 23
testpmd> port config 0 pctype 23 hash_inset set field 24
testpmd> port config 0 pctype 23 hash_inset set field 25
testpmd> port config 0 pctype 23 hash_inset set field 26
testpmd> port config 0 pctype 23 hash_inset set field 27
testpmd> port config 0 pctype 23 hash_inset set field 28
Enable flow type id 23’s RSS:
testpmd> port config all rss 23
Start testpmd, set fwd rxonly, enable output print
Send inner dst GTP-U IPv6 packets, check RSS could work, verify the queue is between 10 and 25, print PKT_RX_RSS_HASH:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
Send different inner dst GTP-U IPv6 packets, check pmd receives packet from different queue but between 10 and 25:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP()/Raw('x'*20)
Send different inner src GTP-U IPv6 packets, check pmd receives packet from same queue:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
Test Case: Flow director for GTP IPv4 with default fd input set¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP IPv4 flow type id 26 to pctype id 22 mapping item:
testpmd> port config 0 pctype mapping update 22 26
Default flow director input set is teid, start testpmd, set fwd rxonly, enable output print
Send GTP IPv4 packets, check to receive packet from queue 0:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=40, dport=50)/Raw('x'*20)
Use scapy to generate GTP IPv4 raw packet test_gtp.raw, source/destination address and port should be swapped in the template and traffic packets:
a=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IP(dst="1.1.1.1", src="2.2.2.2")/UDP(dport=40, sport=50)/Raw('x'*20)
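One hedged way to produce the test_gtp.raw file referenced below is to write the raw bytes of the swapped template packet; a minimal Python sketch (assuming the raw-mode filter file simply contains the packet bytes):

from scapy.contrib.gtp import GTP_U_Header
from scapy.all import Ether, IPv6, IP, UDP, Raw

# Template packet: src/dst address and port swapped relative to the traffic packet
a = Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ \
    IP(dst="1.1.1.1", src="2.2.2.2")/UDP(dport=40, sport=50)/Raw('x'*20)

# Assumption: the file holds the raw packet bytes used by the raw flow type filter.
with open("test_gtp.raw", "wb") as f:
    f.write(bytes(a))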
Setup raw flow type filter for flow director, configured queue is random queue between 1~63, such as 36:
testpmd> flow_director_filter 0 mode raw add flow 26 fwd queue 36 fd_id 1 packet test_gtp.raw
Send matched swapped traffic packet, check to receive packet from configured queue 36:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=40, dport=50)/Raw('x'*20)
Send non-matched inner src IPv4/dst IPv4/sport/dport packets, check to receive packets from queue 36:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IP(src="1.1.1.2", dst="2.2.2.2")/UDP(sport=40, dport=50)/Raw('x'*20)
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IP(src="1.1.1.1", dst="2.2.2.3")/UDP(sport=40, dport=50)/Raw('x'*20)
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=41, dport=50)/Raw('x'*20)
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=40, dport=51)/Raw('x'*20)
Send non-matched teid GTP IPv4 packets, check to receive packet from queue 0:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header(teid=0xff)/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=40, dport=50)/Raw('x'*20)
Test Case: Flow director for GTP IPv4 according to inner dst IPv4¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP IPv4 flow type id 26 to pctype id 22 mapping item:
testpmd> port config 0 pctype mapping update 22 26
Reset GTP IPv4 flow director configure:
testpmd> port config 0 pctype 22 fdir_inset clear all
Inner dst IPv4 words are 27 and 28, enable flow director input set for them:
testpmd> port config 0 pctype 22 fdir_inset set field 27
testpmd> port config 0 pctype 22 fdir_inset set field 28
Start testpmd, set fwd rxonly, enable output print
Send GTP IPv4 packets, check to receive packet from queue 0:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=40, dport=50)/Raw('x'*20)
Use scapy to generate GTP IPv4 raw packet test_gtp.raw, source/destination address and port should be swapped in the template and traffic packets:
a=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(dst="1.1.1.1", src="2.2.2.2")/UDP(dport=40, sport=50)/Raw('x'*20)
Setup raw flow type filter for flow director, configured queue is random queue between 1~63, such as 36:
testpmd> flow_director_filter 0 mode raw add flow 26 fwd queue 36 fd_id 1 packet test_gtp.raw
Send matched swapped traffic packet, check to receive packet from configured queue 36:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=40, dport=50)/Raw('x'*20)
Send non-matched inner src IPv4/sport/dport packets, check to receive packets from queue 36:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.2", dst="2.2.2.2")/UDP(sport=40, dport=50)/Raw('x'*20)
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=41, dport=50)/Raw('x'*20)
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=40, dport=51)/Raw('x'*20)
Send non-matched inner dst IPv4 packets, check to receive packet from queue 0:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.1", dst="2.2.2.3")/UDP(sport=40, dport=50)/Raw('x'*20)
Test Case: Flow director for GTP IPv4 according to inner src IPv4¶
Check flow ptype to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP IPv4 flow type id 26 to pctype id 22 mapping item:
testpmd> port config 0 pctype mapping update 22 26
Reset GTP IPv4 flow director configure:
testpmd> port config 0 pctype 22 fdir_inset clear all
Inner src IPv4 words are 15 and 16, enable flow director input set for them:
testpmd> port config 0 pctype 22 fdir_inset set field 15
testpmd> port config 0 pctype 22 fdir_inset set field 16
Start testpmd, set fwd rxonly, enable output print
Send GTP IPv4 packets, check to receive packet from queue 0:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=40, dport=50)/Raw('x'*20)
Use scapy to generate GTP IPv4 raw packet test_gtp.raw, source/destination address and port should be swapped in the template and traffic packets:
a=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(dst="1.1.1.1", src="2.2.2.2")/UDP(dport=40, sport=50)/Raw('x'*20)
Setup raw flow type filter for flow director, configured queue is random queue between 1~63, such as 36:
testpmd> flow_director_filter 0 mode raw add flow 26 fwd queue 36 fd_id 1 packet test_gtp.raw
Send matched swapped traffic packet, check to receive packet from configured queue 36:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=40, dport=50)/Raw('x'*20)
Send non-matched inner dst IPv4/sport/dport packets, check to receive packets from queue 36:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.1", dst="2.2.2.3")/UDP(sport=40, dport=50)/Raw('x'*20)
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=41, dport=50)/Raw('x'*20)
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.1", dst="2.2.2.2")/UDP(sport=40, dport=51)/Raw('x'*20)
Send non-matched inner src IPv4 packets, check to receive packet from queue 0:
p=Ether()/IPv6()/UDP(dport=2152)/GTP_U_Header()/IP(src="1.1.1.2", dst="2.2.2.2")/UDP(sport=40, dport=50)/Raw('x'*20)
Test Case: Flow director for GTP IPv6 with default fd input set¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Default flow director input set is teid, start testpmd, set fwd rxonly, enable output print
Send GTP IPv6 packets, check to receive packet from queue 0:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(sport=40,dport=50)/Raw('x'*20)
Use scapy to generate GTP IPv6 raw packet test_gtp.raw, source/destination address and port should be swapped in the template and traffic packets:
a=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(dst="1001:0db8:85a3:0000:0000:8a2e:0370:0001", src="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(dport=40,sport=50)/Raw('x'*20)
Setup raw flow type filter for flow director, configured queue is random queue between 1~63, such as 36:
testpmd> flow_director_filter 0 mode raw add flow 23 fwd queue 36 fd_id 1 packet test_gtp.raw
Send matched swapped traffic packet, check to receive packet from configured queue 36:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(sport=40,dport=50)/Raw('x'*20)
Send non-matched inner src IPv6/dst IPv6/sport/dport packets, check to receive packets from queue 36:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP(sport=40,dport=50)/Raw('x'*20)
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP(sport=40,dport=50)/Raw('x'*20)
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP(sport=41,dport=50)/Raw('x'*20)
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP(sport=40,dport=51)/Raw('x'*20)
Send non-matched teid packets, check to receive packet from queue 0:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xff)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(sport=40,dport=50)/Raw('x'*20)
Test Case: Flow director for GTP IPv6 according to inner dst IPv6¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Reset GTP IPv6 flow director configure:
testpmd> port config 0 pctype 23 fdir_inset clear all
Inner dst IPv6 words are 21~28, enable flow director input set for them:
testpmd> port config 0 pctype 23 fdir_inset set field 21
testpmd> port config 0 pctype 23 fdir_inset set field 22
testpmd> port config 0 pctype 23 fdir_inset set field 23
testpmd> port config 0 pctype 23 fdir_inset set field 24
testpmd> port config 0 pctype 23 fdir_inset set field 25
testpmd> port config 0 pctype 23 fdir_inset set field 26
testpmd> port config 0 pctype 23 fdir_inset set field 27
testpmd> port config 0 pctype 23 fdir_inset set field 28
Start testpmd, set fwd rxonly, enable output print
Send GTP IPv6 packets, check to receive packet from queue 0:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(sport=40,dport=50)/Raw('x'*20)
Use scapy to generate GTP IPv6 raw packet test_gtp.raw, source/destination address and port should be swapped in the template and traffic packets:
a=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(dst="1001:0db8:85a3:0000:0000:8a2e:0370:0001", src="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(dport=40,sport=50)/Raw('x'*20)
Setup raw flow type filter for flow director, configured queue is random queue between 1~63, such as 36:
testpmd> flow_director_filter 0 mode raw add flow 23 fwd queue 36 fd_id 1 packet test_gtp.raw
Send matched swapped traffic packet, check to receive packet from configured queue 36:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(sport=40,dport=50)/Raw('x'*20)
Send non-matched inner src IPv6/sport/dport packets, check to receive packets from queue 36:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP(sport=40,dport=50)/Raw('x'*20)
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP(sport=41,dport=50)/Raw('x'*20)
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP(sport=40,dport=51)/Raw('x'*20)
Send non-matched inner dst IPv6 packets, check to receive packet from queue 0:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/ UDP(sport=40,dport=50)/Raw('x'*20)
Test Case: Flow director for GTP IPv6 according to inner src IPv6¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Reset GTP IPv6 flow director configure:
testpmd> port config 0 pctype 23 fdir_inset clear all
Inner src IPv6 words are 13~20, enable flow director input set for them:
testpmd> port config 0 pctype 23 fdir_inset set field 13
testpmd> port config 0 pctype 23 fdir_inset set field 14
testpmd> port config 0 pctype 23 fdir_inset set field 15
testpmd> port config 0 pctype 23 fdir_inset set field 16
testpmd> port config 0 pctype 23 fdir_inset set field 17
testpmd> port config 0 pctype 23 fdir_inset set field 18
testpmd> port config 0 pctype 23 fdir_inset set field 19
testpmd> port config 0 pctype 23 fdir_inset set field 20
Start testpmd, set fwd rxonly, enable output print
Send GTP IPv6 packets, check to receive packet from queue 0:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(sport=40,dport=50)/Raw('x'*20)
Use scapy to generate GTP IPv6 raw packet test_gtp.raw, source/destination address and port should be swapped in the template and traffic packets:
a=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(dst="1001:0db8:85a3:0000:0000:8a2e:0370:0001", src="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(dport=40,sport=50)/Raw('x'*20)
Setup raw flow type filter for flow director, configured queue is random queue between 1~63, such as 36:
testpmd> flow_director_filter 0 mode raw add flow 23 fwd queue 36 fd_id 1 packet test_gtp.raw
Send matched swapped traffic packet, check to receive packet from configured queue 36:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(sport=40,dport=50)/Raw('x'*20)
Send non-matched inner dst IPv6/sport/dport packets, check to receive packets from queue 36:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP(sport=40,dport=50)/Raw('x'*20)
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP(sport=41,dport=50)/Raw('x'*20)
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP(sport=40,dport=51)/Raw('x'*20)
Send non-matched inner src IPv6 packets, check to receive packet from queue 0:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=0xfe)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(sport=40,dport=50)/Raw('x'*20)
Test Case: Outer 64 bit prefix dst controls GTP-C queue¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-C flow type id 25 to pctype id 25 mapping item:
testpmd> port config 0 pctype mapping update 25 25
Check that the flow type to pctype mapping now includes the new 25 mapping
Reset GTP-C hash configure:
testpmd> port config 0 pctype 25 hash_inset clear all
Outer dst address words are 50~57, only setting 50~53 words means 64 bits prefixes, enable hash input set for outer dst:
testpmd> port config 0 pctype 25 hash_inset set field 50
testpmd> port config 0 pctype 25 hash_inset set field 51
testpmd> port config 0 pctype 25 hash_inset set field 52
testpmd> port config 0 pctype 25 hash_inset set field 53
Enable flow type id 25’s RSS:
testpmd> port config all rss 25
Start testpmd, set fwd rxonly, enable output print
Send outer dst GTP-C packet, check RSS could work, verify the queue is between 40 and 55, print PKT_RX_RSS_HASH:
p=Ether()/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP(dport=2123)/ GTP_U_Header()/Raw('x'*20)
Send different outer dst 64 bit prefixes GTP-C packet, check pmd receives packet from different queue but between 40 and 55:
p=Ether()/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0001:0000:8a2e:0370:0001")/UDP(dport=2123)/ GTP_U_Header()/Raw('x'*20)
Send a GTP-C packet with a different outer dst 64-bit suffix, check pmd receives the packet from the same queue:
p=Ether()/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP(dport=2123)/ GTP_U_Header()/Raw('x'*20)
Send different outer src GTP-C packet, check pmd receives packet from same queue:
p=Ether()/IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/ UDP(dport=2123)/GTP_U_Header()/Raw('x'*20)
Test Case: Inner 48 bit prefix src controls GTP-U IPv6 queue¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Check that the flow type to pctype mapping now includes the new 23 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv6 hash configure:
testpmd> port config 0 pctype 23 hash_inset clear all
Inner IPv6 src words are 13~20, only setting 13~15 words means 48 bit prefixes, enable hash input set for inner src:
testpmd> port config 0 pctype 23 hash_inset set field 13
testpmd> port config 0 pctype 23 hash_inset set field 14
testpmd> port config 0 pctype 23 hash_inset set field 15
Enable flow type id 23’s RSS:
testpmd> port config all rss 23
Start testpmd, set fwd rxonly, enable output print
Send inner src address GTP-U IPv6 packets, check RSS could work, verify the queue is between 10 and 25, print PKT_RX_RSS_HASH:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
Send different inner src 48 bit prefixes GTP-U IPv6 packet, check pmd receives packet from different queue but between 10 and 25:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a4:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
Send a GTP-U IPv6 packet with a different inner src 48-bit suffix, check pmd receives the packet from the same queue:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
Send different inner dst GTP-U IPv6 packet, check pmd receives packet from same queue:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP()/Raw('x'*20)
Test Case: Inner 32 bit prefix dst controls GTP-U IPv6 queue¶
Check flow type to pctype mapping:
testpmd> show port 0 pctype mapping
Update GTP-U IPv6 flow type id 23 to pctype id 23 mapping item:
testpmd> port config 0 pctype mapping update 23 23
Check that the flow type to pctype mapping now includes the new 23 mapping:
testpmd> show port 0 pctype mapping
Reset GTP-U IPv6 hash configure:
testpmd> port config 0 pctype 23 hash_inset clear all
Inner IPv6 dst words are 21~28, only setting 21~22 words means 32 bit prefixes, enable hash input set for inner dst:
testpmd> port config 0 pctype 23 hash_inset set field 21
testpmd> port config 0 pctype 23 hash_inset set field 22
Enable flow type id 23’s RSS:
testpmd> port config all rss 23
Start testpmd, set fwd rxonly, enable output print
Send inner dst GTP-U IPv6 packets, check RSS could work, verify the queue is between 10 and 25, print PKT_RX_RSS_HASH:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
Send different inner dst 32 bit prefixes GTP-U IPv6 packets, check pmd receives packet from different queue but between 10 and 25:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db9:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
Send a GTP-U IPv6 packet with a different inner dst 32-bit suffix, check pmd receives the packet from the same queue:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0001", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0002")/UDP()/Raw('x'*20)
Send different inner src GTP-U IPv6 packets, check pmd receives packet from same queue:
p=Ether()/IP()/UDP(dport=2152)/GTP_U_Header(teid=30)/ IPv6(src="1001:0db8:85a3:0000:0000:8a2e:0370:0002", dst="2001:0db8:85a3:0000:0000:8a2e:0370:0001")/UDP()/Raw('x'*20)
Dynamically Configure VF Queue Number¶
Description¶
Currently RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF is used to determine the max queue number per VF. This is not friendly to users because it means the max queue number must be decided at compile time, with no chance to change it when deploying the application. It is better to make the queue number configurable so users can change it when launching the application. This requirement is meaningless for ixgbe since the queue number is fixed on ixgbe. The number of queues per i40e VF can be determined at run time. For example, if the PCI address of an i40e PF is aaaa:bb.cc, with the EAL parameter -w aaaa:bb.cc,queue-num-per-vf=8, the number of queues per VF created from this PF is 8. Set the VF max queue number with the PF EAL parameter "queue-num-per-vf". The valid values are 1, 2, 4, 8 and 16; if the value given after "queue-num-per-vf" is invalid, it is forced to 4; if there is no "queue-num-per-vf" setting in the EAL parameters, it defaults to 4 as before.
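A minimal Python sketch of the value handling described above (an illustrative model only, not the i40e driver code):

VALID_QUEUE_NUMS = {1, 2, 4, 8, 16}

def effective_queue_num_per_vf(value=None):
    """Return the VF queue number the PF will actually use."""
    if value is None:                  # "queue-num-per-vf" not given in EAL parameters
        return 4                       # default, as before
    if value not in VALID_QUEUE_NUMS:  # invalid values are forced back to 4
        return 4
    return value

assert effective_queue_num_per_vf() == 4
assert effective_queue_num_per_vf(8) == 8
assert effective_queue_num_per_vf(6) == 4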
Prerequisites¶
Hardware: Fortville
Software: dpdk: http://dpdk.org/git/dpdk scapy: http://www.secdev.org/projects/scapy/
Bind the pf port to dpdk driver:
./usertools/dpdk-devbind.py -b igb_uio 05:00.0
Set up two vfs from the pf with DPDK driver:
echo 2 > /sys/bus/pci/devices/0000\:05\:00.0/max_vfs
Bind the two vfs to DPDK driver:
./usertools/dpdk-devbind.py -b vfio-pci 05:02.0 05:02.1
Test case: set valid VF max queue number¶
Try the valid values 1:
./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=1 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i
Testpmd starts normally without any warning or error.
Start VF testpmd with "--rxq=1 --txq=1", the number of rxq and txq is consistent with the configured VF max queue number:
./testpmd -c 0xf0 -n 4 -w 05:02.0 \ --file-prefix=test2 --socket-mem 1024,1024 -- -i --rxq=1 --txq=1
Check the Max possible RX queues and TX queues is 1:
testpmd> show port info all
Max possible RX queues: 1
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 64
RXDs number alignment: 32
Max possible TX queues: 1
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 64
TXDs number alignment: 32
Start forwarding, you can see the actual queue number is 1:
testpmd> start RX queues=1 - RX desc=128 - RX free threshold=32 TX queues=1 - TX desc=512 - TX free threshold=32
Repeat step1-2 with “queue-num-per-vf=2/4/8/16”, and start VF testpmd with consistent rxq and txq number. check the max queue num and actual queue number is 2/4/8/16.
Test case: set invalid VF max queue number¶
Try the invalid value 0:
./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=0 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i
Testpmd started with “i40e_pf_parse_vf_queue_number_handler(): Wrong VF queue number = 0, it must be power of 2 and equal or less than 16 !, Now it is kept the value = 4”
Start VF testpmd with "--rxq=4 --txq=4", the number of rxq and txq is consistent with the default VF max queue number:
./testpmd -c 0xf0 -n 4 -w 05:02.0 --file-prefix=test2 \ --socket-mem 1024,1024 -- -i --rxq=4 --txq=4
Check the Max possible RX queues and TX queues is 4:
testpmd> show port info all Max possible RX queues: 4 Max possible TX queues: 4
Start forwarding, you can see the actual queue number is 4:
testpmd> start RX queues=4 - RX desc=128 - RX free threshold=32 TX queues=4 - TX desc=512 - TX free threshold=32
Repeat step1-2 with “queue-num-per-vf=6/17/32”, and start VF testpmd with default max rxq and txq number. check the max queue num and actual queue number is 4.
Test case: set VF queue number in testpmd command-line options¶
Set VF max queue number:
./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=8 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i
Start VF testpmd with "--rxq=3 --txq=3":
./testpmd -c 0xf0 -n 4 -w 05:02.0 --file-prefix=test2 \ --socket-mem 1024,1024 -- -i --rxq=3 --txq=3
Check the Max possible RX queues and TX queues is 8:
testpmd> show port info all Max possible RX queues: 8 Max possible TX queues: 8
Start forwarding, you can see the actual queue number is 3:
testpmd> start RX queues=3 - RX desc=128 - RX free threshold=32 TX queues=3 - TX desc=512 - TX free threshold=32
Quit the VF testpmd, then restart VF testpmd with "--rxq=9 --txq=9":
./testpmd -c 0xf0 -n 4 -w 05:02.0 --file-prefix=test2 \ --socket-mem 1024,1024 -- -i --rxq=9 --txq=9
VF testpmd failed to start with the print:
Fail: nb_rxq(9) is greater than max_rx_queues(8)
Test case: set VF queue number with testpmd function command¶
Set VF max queue number:
./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=8 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i
Start VF testpmd without setting “rxq” and “txq”:
./testpmd -c 0xf0 -n 4 -w 05:02.0 --file-prefix=test2 \ --socket-mem 1024,1024 -- -i
Check the Max possible RX queues and TX queues is 8, and actual RX queue number and TX queue number is 1:
testpmd> show port info all Current number of RX queues: 1 Max possible RX queues: 8 Current number of TX queues: 1 Max possible TX queues: 8
Set rx queue number and tx queue number with testpmd function command:
testpmd> port stop all
testpmd> port config all rxq 8
testpmd> port config all txq 8
testpmd> port start all
Start forwarding, you can see the actual queue number is 8:
testpmd> show port info all Current number of RX queues: 8 Max possible RX queues: 8 Current number of TX queues: 8 Max possible TX queues: 8
Reset rx queue number and tx queue number to 7:
testpmd> port stop all
testpmd> port config all rxq 7
testpmd> port config all txq 7
testpmd> port start all
Start forwarding, you can see the actual queue number is 7:
testpmd> show port info all Current number of RX queues: 7 Max possible RX queues: 8 Current number of TX queues: 7 Max possible TX queues: 8
Reset rx queue number and tx queue number to 9:
testpmd> port stop all
testpmd> port config all txq 9
Fail: nb_txq(9) is greater than max_tx_queues(8)
testpmd> port config all rxq 9
Fail: nb_rxq(9) is greater than max_rx_queues(8)
testpmd> port start all
Start forwarding, you can see the actual queue number is still 7:
testpmd> show port info all Current number of RX queues: 7 Max possible RX queues: 8 Current number of TX queues: 7 Max possible TX queues: 8
Test case: VF max queue number when VF bound to kernel driver¶
Set VF max queue number by PF:
./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=2 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i
Check the VF0 rxq and txq number is 2:
# ethtool -S enp5s2
NIC statistics:
rx_bytes: 0
rx_unicast: 0
rx_multicast: 0
rx_broadcast: 0
rx_discards: 0
rx_unknown_protocol: 0
tx_bytes: 0
tx_unicast: 0
tx_multicast: 0
tx_broadcast: 0
tx_discards: 0
tx_errors: 0
tx-0.packets: 0
tx-0.bytes: 0
tx-1.packets: 0
tx-1.bytes: 0
rx-0.packets: 0
rx-0.bytes: 0
rx-1.packets: 0
rx-1.bytes: 0
Check the VF1 rxq and txq number is 2 too.
Repeat step1-2 with “queue-num-per-vf=1/4/8/16”, check the rxq and txq number is 1/4/8/16.
Test case: set VF max queue number with max VFs on one PF port¶
Set up the max number of VFs from one PF with the DPDK driver. Create 32 VFs on a four-port Fortville NIC:
echo 32 > /sys/bus/pci/devices/0000\:05\:00.0/max_vfs
Create 64 vfs on two ports fortville NIC:
echo 64 > /sys/bus/pci/devices/0000\:05\:00.0/max_vfs
Bind the two of the VFs to DPDK driver:
./usertools/dpdk-devbind.py -b vfio-pci 05:02.0 05:05.7
Set VF max queue number to 16:
./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=16 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i
The PF port fails to start with "i40e_pf_parameter_init(): Failed to allocate 577 queues, which exceeds the hardware maximum 384". If 64 VFs are created, the hardware maximum is 768.
Set VF max queue number to 8:
./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=8 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i
Start testpmd on the two VFs with "--rxq=8 --txq=8" and "--rxq=6 --txq=6":
./testpmd -c 0xf0 -n 4 -w 05:02.0 --file-prefix=test2 \
--socket-mem 1024,1024 -- -i --rxq=8 --txq=8
./testpmd -c 0xf00 -n 4 -w 05:05.7 --file-prefix=test3 \
--socket-mem 1024,1024 -- -i --rxq=6 --txq=6
Check the Max possible RX queues and TX queues of the two VFs are both 8:
testpmd> show port info all Max possible RX queues: 8 Max possible TX queues: 8
Start forwarding, you can see the actual queue number VF0:
testpmd> start RX queues=8 - RX desc=128 - RX free threshold=32 TX queues=8 - TX desc=512 - TX free threshold=32
VF1:
testpmd> start RX queues=6 - RX desc=128 - RX free threshold=32 TX queues=6 - TX desc=512 - TX free threshold=32
Modify the queue number of VF1:
testpmd> stop
testpmd> port stop all
testpmd> port config all rxq 8
testpmd> port config all txq 7
testpmd> port start all
Start forwarding, you can see the VF1 actual queue number is 8 and 7:
testpmd> start RX queues=8 - RX desc=128 - RX free threshold=32 TX queues=7 - TX desc=512 - TX free threshold=32
Send 256 packets to VF0 and VF1, make sure packets can be distributed to all the queues.
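A hedged scapy sketch for generating the 256 flows (the destination IP is varied so RSS spreads the packets across the queues; the interface name and MAC are assumptions):

from scapy.all import Ether, IP, UDP, Raw, sendp

TESTER_IFACE = "ens785f0"        # assumption: tester port facing the VF under test
VF_MAC = "00:11:22:33:44:55"     # assumption: MAC address of the VF under test

pkts = [Ether(dst=VF_MAC)/IP(src="192.168.0.1", dst="192.168.1.%d" % i)/
        UDP(sport=1024, dport=1024)/Raw('x'*20) for i in range(256)]
sendp(pkts, iface=TESTER_IFACE)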
Test case: pass through VF to VM¶
Bind the pf to dpdk driver:
./usertools/dpdk-devbind.py -b igb_uio 05:00.0
Create 1 vf from pf:
echo 1 >/sys/bus/pci/devices/0000:05:00.0/max_vfs
Detach VF from the host, bind them to pci-stub driver:
modprobe pci-stub
echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:05:02.0" > /sys/bus/pci/drivers/i40evf/unbind
echo "0000:05:02.0" > /sys/bus/pci/drivers/pci-stub/bind
Launch the VM with VF PCI passthrough:
taskset -c 5-20 qemu-system-x86_64 \ -enable-kvm -m 8192 -smp cores=16,sockets=1 -cpu host -name dpdk1-vm1 \ -drive file=/home/VM/ubuntu-14.04.img \ -device pci-assign,host=0000:05:02.0 \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:00:01 \ -localtime -vnc :2 -daemonize
Set VF Max possible RX queues and TX queues to 8 by PF:
./testpmd -c f -n 4 -w 05:00.0,queue-num-per-vf=8 \ --file-prefix=test1 --socket-mem 1024,1024 -- -i
Testpmd starts normally without any warning or error.
Start VF testpmd with "--rxq=6 --txq=6"; the number of rxq and txq is within the configured VF max queue number:
./testpmd -c 0xf -n 4 -- -i --rxq=6 --txq=6
Check the Max possible RX queues and TX queues is 8:
testpmd> show port info all Max possible RX queues: 8 Max possible TX queues: 8
Start forwarding, you can see the actual queue number is 6:
testpmd> start RX queues=6 - RX desc=128 - RX free threshold=32 TX queues=6 - TX desc=512 - TX free threshold=32
Modify the queue number of VF:
testpmd> stop
testpmd> port stop all
testpmd> port config all rxq 8
testpmd> port config all txq 8
testpmd> port start all
Start forwarding, you can see the VF1 actual queue number is 8:
testpmd> start RX queues=8 - RX desc=128 - RX free threshold=32 TX queues=8 - TX desc=512 - TX free threshold=32
Repeat step2-3 with “queue-num-per-vf=1/2/4/16”, and start VF testpmd with consistent rxq and txq number. check the max queue num and actual queue number is 1/2/4/16.
Bind the VF to the kernel driver i40evf and check the rxq and txq number. If the VF max possible RX queues and TX queues are set to 2 by the PF, the VF rxq and txq number is 2:
#ethtool -S eth0
NIC statistics:
rx_bytes: 0
rx_unicast: 0
rx_multicast: 0
rx_broadcast: 0
rx_discards: 0
rx_unknown_protocol: 0
tx_bytes: 70
tx_unicast: 0
tx_multicast: 1
tx_broadcast: 0
tx_discards: 0
tx_errors: 0
tx-0.packets: 2
tx-0.bytes: 140
tx-1.packets: 6
tx-1.bytes: 1044
rx-0.packets: 0
rx-0.bytes: 0
rx-1.packets: 0
rx-1.bytes: 0
Try to set the VF max possible RX queues and TX queues to 1/4/8/16 by the PF; the VF rxq and txq number should be 1/4/8/16 accordingly.
Vhost/Virtio multiple queue qemu test plan¶
This test plan covers the vhost/virtio-pmd multiple queue qemu test cases. Testpmd is used as the test application.
Test Case: vhost pmd/virtio-pmd PVP 2queues mergeable path performance¶
flow: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
- Bind one port to igb_uio, then launch testpmd by below command:
rm -rf vhost-net*
./testpmd -c 0xe -n 4 --socket-mem 1024,1024 --vdev 'eth_vhost0,iface=vhost-net,queues=2' -- -i --nb-cores=2 --rxq=2 --txq=2
testpmd>set fwd mac
testpmd>start
Launch VM with vectors=2*queue_num+2 and mrg_rxbuf/mq feature on:
qemu-system-x86_64 -name vm1 -cpu host -enable-kvm \ -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \ -mem-prealloc -smp cores=3,sockets=1 -drive file=/home/osimg/ubuntu16.img \ -chardev socket,id=char0,path=./vhost-net \ -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6 \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \ -vnc :2 -daemonize
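Note the vectors arithmetic: with queue_num=2, vectors=2*2+2=6, which is why the command above uses vectors=6.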
- On VM, bind virtio net to igb_uio and run testpmd:
./testpmd -c 0x07 -n 3 -- -i --rxq=2 --txq=2 --txqflags=0xf01 --rss-ip --nb-cores=2
testpmd>set fwd mac
testpmd>start
Check the performance for the 2core/2queue for vhost/virtio.
Test Case: PVP virtio-pmd queue number dynamic change¶
This case checks that virtio-pmd works well when the queue number changes dynamically. In this case, set both the vhost-pmd and virtio-pmd max queue number to 2. Launch vhost-pmd with 2 queues. Launch virtio-pmd with 1 queue first, then in testpmd change the number to 2 queues. Expect no crash. After the queue number changes, virtio-pmd can use 2 queues to RX/TX packets normally.
flow: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
Bind one port to igb_uio, then launch testpmd by below command, ensure the vhost using 2 queues:
rm -rf vhost-net*
./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
--vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
-i --nb-cores=2 --rxq=2 --txq=2
testpmd>set fwd mac
testpmd>start
testpmd>clear port stats all
Launch VM with vectors=2*queue_num+2 and mrg_rxbuf/mq feature on:
qemu-system-x86_64 -name vm1 -cpu host -enable-kvm \ -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \ -mem-prealloc -smp cores=3,sockets=1 -drive file=/home/osimg/ubuntu16.img \ -chardev socket,id=char0,path=./vhost-net \ -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6 \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \ -vnc :2 -daemonize
On VM, bind virtio net to igb_uio and run testpmd, using one queue for testing at first:
./testpmd -c 0x7 -n 3 -- -i --rxq=1 --txq=1 --tx-offloads=0x0 \
--rss-ip --nb-cores=1
testpmd>set fwd mac
testpmd>start
Use scapy send packet:
#scapy
>>>pk1= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.1")/UDP()/("X"*64)]
>>>pk2= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.7")/UDP()/("X"*64)]
>>>pk3= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.8")/UDP()/("X"*64)]
>>>pk4= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.20")/UDP()/("X"*64)]
>>>pk= pk1 + pk2 + pk3 + pk4
>>>sendp(pk, iface="ens785f1",count=10)
Check each queue's RX/TX packet numbers.
On VM, dynamic change queue numbers at virtio-pmd side from 1 queue to 2 queues, then ensure virtio-pmd RX/TX can work normally. The expected behavior is that both queues can RX/TX traffic:
testpmd>stop
testpmd>port stop all
testpmd>port config all rxq 2
testpmd>port config all txq 2
testpmd>port start all
testpmd>start
Use scapy to send packets like step 4.
testpmd>stop
Then check each queue's RX/TX packet numbers.
There should be no core dump or unexpected crash happened during the queue number changes.
Test Case: PVP Vhost-pmd queue number dynamic change¶
This case checks that the vhost-pmd queue number can be changed dynamically. In this case, set the vhost-pmd and virtio-pmd max queue number to 2. Launch vhost-pmd with 1 queue first, then in testpmd change the queue number to 2 queues. At the virtio-pmd side, launch it with 2 queues. Expect no crash. After the dynamic change, vhost-pmd can use 2 queues to RX/TX packets.
flow: TG --> NIC --> Vhost --> Virtio --> Vhost --> NIC --> TG
Bind one port to igb_uio, then launch testpmd by the below command; the vhost vdev is created with 2 queues but only 1 queue is used at first:
rm -rf vhost-net*
./testpmd -c 0xe -n 4 --socket-mem 1024,1024 \
--vdev 'eth_vhost0,iface=vhost-net,queues=2' -- \
-i --nb-cores=1 --rxq=1 --txq=1
testpmd>set fwd mac
testpmd>start
testpmd>clear port stats all
Launch VM with vectors=2*queue_num+2 and mrg_rxbuf/mq feature on:
qemu-system-x86_64 -name vm1 -cpu host -enable-kvm \ -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem \ -mem-prealloc -smp cores=3,sockets=1 -drive file=/home/osimg/ubuntu16.img \ -chardev socket,id=char0,path=./vhost-net \ -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce,queues=2 \ -device virtio-net-pci,mac=52:54:00:00:00:01,netdev=mynet1,mrg_rxbuf=on,mq=on,vectors=6 \ -netdev tap,id=ipvm1,ifname=tap3,script=/etc/qemu-ifup -device rtl8139,netdev=ipvm1,id=net0,mac=00:00:00:00:10:01 \ -vnc :2 -daemonize
On VM, bind virtio net to igb_uio and run testpmd with 2 queues:
./testpmd -c 0x7 -n 4 -- -i --rxq=2 --txq=2 \
--tx-offloads=0x0 --rss-ip --nb-cores=2
testpmd>set fwd mac
testpmd>start
Use scapy send packet:
#scapy
>>>pk1= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.1")/UDP()/("X"*64)]
>>>pk2= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.7")/UDP()/("X"*64)]
>>>pk3= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.8")/UDP()/("X"*64)]
>>>pk4= [Ether(dst="52:54:00:00:00:01")/IP(dst="1.1.1.20")/UDP()/("X"*64)]
>>>pk= pk1 + pk2 + pk3 + pk4
>>>sendp(pk, iface="ens785f1", count=10)
Check each queue's RX/TX packet numbers.
On host, dynamic change queue numbers at vhost-pmd side from 1 queue to 2 queues, then ensure vhost-pmd RX/TX can work normally. The expected behavior is that both queues can RX/TX traffic:
testpmd>stop
testpmd>port stop all
testpmd>port config all rxq 2
testpmd>port config all txq 2
testpmd>port start all
testpmd>start
Use scapy to send packets like step 4.
testpmd>stop
Then check each queue's RX/TX packet numbers.
There should be no core dump or unexpected crash happened during the queue number changes.
Vhost MTU Test Plan¶
This feature tests the setting of the MTU value for virtio-net and the kernel driver.
Prerequisites:¶
The guest kernel version should be greater than 4.10. The QEMU version should be greater than or equal to 2.9.
Test Case: Test the MTU in virtio-net¶
Launch the testpmd by below commands on host, and config mtu:
./testpmd -c 0xc -n 4 --socket-mem 2048,2048 \
--vdev 'net_vhost0,iface=vhost-net,queues=1' \
-- -i --txd=512 --rxd=128 --nb-cores=1 --port-topology=chained
testpmd> set fwd mac
testpmd> start
Launch VM:
Use qemu 2.9 or qemu 2.10 to start the VM (the VM kernel version should be greater than 4.10), and set the MTU value to 9000:
qemu-system-x86_64 \
-chardev socket,id=char0,path=./vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mrg_rxbuf=on,host_mtu=9000
Check the MTU value in VM:
Use the ifconfig command to check the MTU value of virtio kernel driver is 9000 in VM.
Bind the virtio driver to igb_uio, launch testpmd in VM, and verify the mtu in port info is 9000:
./testpmd -c 0x03 -n 3 \
-- -i --txd=512 --rxd=128 --tx-offloads=0x0 --enable-hw-vlan-strip
testpmd> set fwd mac
testpmd> start
testpmd> show port info 0
Check that the MTU value of virtio in testpmd on the host is 9000:
testpmd> show port info 1
Repeat steps 2 ~ 5, changing the MTU value to 68 and 65535 (the minimum and maximum values), and verify that the value is changed.
Unit Tests: Cmdline¶
This is the test plan for the Intel® DPDK command line (cmdline) library.
This section explains how to run the unit tests for cmdline. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> cmdline_autotest
The final output of the test has to be “Test OK”
Unit Tests: CRC¶
The unit test compares the results of the scalar and SSE4.2 versions individually with known CRC results. Some of these CRC results and the corresponding test vectors are based on the test string mentioned in the Ethernet specification and the X.25 specification.
This section explains how to run the unit tests for crc computation. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# cd ~/dpdk
# make config T=x86_64-native-linuxapp-gcc
# make test
# ./build/build/test/test/test -n 1 -c ffff
RTE>> crc_autotest
The final output of the test has to be "Test OK".
Algorithm Description¶
In some applications, CRC (Cyclic Redundancy Check) needs to be computed or updated during packet processing operations. This patchset adds software implementation of some common standard CRCs (32-bit Ethernet CRC as per Ethernet/[ISO/IEC 8802-3] and 16-bit CCITT-CRC [ITU-T X.25]). Two versions of each 32-bit and 16-bit CRC calculation are proposed.
The first version presents a fast and efficient CRC generation on IA processors by using the carry-less multiplication instruction PCLMULQDQ (i.e SSE4.2 intrinsics). In this implementation, a parallelized folding approach has been used to first reduce an arbitrary length buffer to a small fixed size length buffer (16 bytes) with the help of precomputed constants. The resultant single 16-bytes chunk is further reduced by Barrett reduction method to generate final CRC value. For more details on the implementation, see reference [1].
The second version is the fallback solution to support CRC generation without needing any specific support from the CPU (for example, SSE4.2 intrinsics). It is based on a generic look-up table (LUT) algorithm that uses a precomputed 256-element table, as explained in reference [2].
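As an illustration of the table-driven approach (not the DPDK implementation itself), a minimal Python sketch of a reflected CRC-32 (Ethernet polynomial) using a precomputed 256-entry table:

def make_crc32_table(poly=0xEDB88320):          # reflected Ethernet CRC-32 polynomial
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ poly if crc & 1 else crc >> 1
        table.append(crc)
    return table

CRC32_TABLE = make_crc32_table()

def crc32_lut(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for b in data:
        crc = (crc >> 8) ^ CRC32_TABLE[(crc ^ b) & 0xFF]
    return crc ^ 0xFFFFFFFF

# Standard check value: CRC-32 of "123456789" is 0xCBF43926
assert crc32_lut(b"123456789") == 0xCBF43926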
During initialization, all the data structures required for CRC computation are initialized. Also, x86 specific crc implementation (if supported by the platform) or scalar version is enabled.
References: [1] Fast CRC Computation for Generic Polynomials Using PCLMULQDQ Instruction http://www.intel.com/content/dam/www/public/us/en/documents/white-papers /fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf [2] A PAINLESS GUIDE TO CRC ERROR DETECTION ALGORITHMS http://www.ross.net/crc/download/crc_v3.txt
Unit Tests: Cryptodev¶
Description¶
This document provides the plan for testing the Cryptodev API via Cryptodev unit tests. Unit tests include supported hardware and software PMDs (poll mode drivers) and supported algorithms. The Cryptodev API provides the ability to do encryption/decryption by integrating QAT (Intel® QuickAssist Technology) into DPDK. The QAT provides poll mode crypto driver support for the Intel® QuickAssist Adapter 8950 hardware accelerator.
The testing of Crytpodev API should be tested under either Intel QuickAssist Technology DH895xxC hardware accelerator or AES-NI library.
This test suite will run all cryptodev related unit test cases. Alternatively, you could execute the unit tests manually with the app/test DPDK application.
Unit Test List¶
- cryptodev_qat_autotest
- cryptodev_qat_perftest
- cryptodev_aesni_mb_perftest
- cryptodev_sw_snow3g_perftest
- cryptodev_qat_snow3g_perftest
- cryptodev_aesni_gcm_perftest
- cryptodev_openssl_perftest
- cryptodev_qat_continual_perftest
- cryptodev_aesni_mb_autotest
- cryptodev_openssl_autotest
- cryptodev_aesni_gcm_autotest
- cryptodev_null_autotest
- cryptodev_sw_snow3g_autotest
- cryptodev_sw_kasumi_autotest
- cryptodev_sw_zuc_autotest
Test Case Setup¶
Build DPDK and app/test app
Bind cryptodev devices to igb_uio driver
Manually verify the app/test application with this command, for example, in your build folder:
./app/test -c 1 -n 1
RTE>> cryptodev_qat_autotest
All unit test cases are listed above.
All tests are expected to pass.
Unit Tests: Dump Log History¶
This is the test plan for dumping the log history of Intel® DPDK.
This section explains how to run the unit tests for dumping the log history. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> dump_log_history
The final output of the test will be the initial log of DPDK.
Unit Tests: Dump Ring¶
This is the test plan for dumping the elements of an Intel® DPDK ring.
This section explains how to run the unit tests for dumping the elements of a ring. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> dump_ring
The final output of the test will be the detailed elements of the DPDK ring.
Unit Tests: Dump Mempool¶
This is the test plan for dumping the elements of an Intel® DPDK mempool.
This section explains how to run the unit tests for dumping the elements of a mempool. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> dump_mempool
The final output of the test will be the detailed elements of the DPDK mempool.
Unit Tests: Dump Physical Memory¶
This is the test plan for dumping the elements of Intel® DPDK physical memory.
This section explains how to run the unit tests for dumping the elements of physical memory. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> dump_physmem
The final output of the test will be the detailed elements of the DPDK physical memory.
Unit Tests: Dump Memzone¶
This is the test plan for dumping the elements of an Intel® DPDK memzone.
This section explains how to run the unit tests for dumping the elements of a memzone. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> dump_memzone
The final output of the test will be the detailed elements of the DPDK memzone.
Unit Tests: Dump Struct Size¶
This is the test plan for dumping the sizes of Intel® DPDK structures.
This section explains how to run the unit tests for dumping structure sizes. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> dump_struct_sizes
The final output of the test will be the sizes of the DPDK structures.
Unit Tests: EAL¶
This section describes the tests that are done to validate the EAL. Each test can be launched independently using the command line interface. These tests are implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
Version¶
To Be Filled
Common¶
To Be Filled
Eal_fs¶
To Be Filled
Memory¶
- Dump the mapped memory. The python-expect script checks that at least one line is dumped.
- Check that memory size is different than 0.
- Try to read all memory; it should not segfault.
PCI¶
- Register a driver with a devinit() function.
- Dump all PCI devices.
- Check that the devinit() function is called at least once.
Per-lcore Variables and lcore Launch¶
- Use rte_eal_mp_remote_launch() to call assign_vars() on every available lcore. In this function, a per-lcore variable is assigned to the lcore_id.
- Use rte_eal_mp_remote_launch() to call display_vars() on every available lcore. The function checks that the variable is correctly set, or returns -1.
- If at least one per-core variable was not correct, the test function returns -1 (a sketch of this pattern is shown below).
Spinlock¶
- There is a global spinlock and a table of spinlocks (one per lcore).
- The test function takes all of these locks and launches the test_spinlock_per_core() function on each core (except the master).
- The function takes the global lock, displays something, then releases the global lock.
- The function takes the per-lcore lock, displays something, then releases the per-core lock.
- The main function unlocks the per-lcore locks sequentially and waits between each lock. This triggers the display of a message for each core, in the correct order. The autotest script checks that this order is correct (see the sketch below).
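The locking pattern described above looks roughly like the simplified sketch below; the printed messages and names are illustrative, not the actual test source:
#include <stdio.h>
#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_spinlock.h>

static rte_spinlock_t global_lock = RTE_SPINLOCK_INITIALIZER;
static rte_spinlock_t per_lcore_lock[RTE_MAX_LCORE];

static int
test_spinlock_per_core(__rte_unused void *arg)
{
    unsigned id = rte_lcore_id();

    /* Blocks until the main function releases the global lock. */
    rte_spinlock_lock(&global_lock);
    printf("global lock taken on core %u\n", id);
    rte_spinlock_unlock(&global_lock);

    /* Blocks until the main function releases this core's lock. */
    rte_spinlock_lock(&per_lcore_lock[id]);
    printf("per-lcore lock taken on core %u\n", id);
    rte_spinlock_unlock(&per_lcore_lock[id]);

    return 0;
}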
Rwlock¶
There is a global rwlock and a table of rwlocks (one per lcore).
The test function takes all of these locks and launches the test_rwlock_per_core() function on each core (except the master).
- The function takes the global write lock, displays something, then releases the global lock.
- Then, it takes the per-lcore write lock, displays something, and releases the per-core lock.
- Finally, a read lock is taken for 100 ms, then released.
The main function unlocks the per-lcore locks sequentially and waits between each lock. This triggers the display of a message for each core, in the correct order.
Then, it tries to take the global write lock and display the last message. The autotest script checks that the message order is correct.
Atomic Variables¶
The main test function performs three subtests. The first test checks that the usual inc/dec/add/sub functions are working correctly:
- Initialize 32-bit and 64-bit atomic variables to specific values.
- These variables are incremented and decremented on each core at the same time in test_atomic_usual().
- The function checks that once all lcores finish their functions, the value of the atomic variables is still the same.
The second test verifies the behavior of “test and set” functions.
- Initialize 32-bit and 64-bit atomic variables to zero.
- Invoke test_atomic_tas() on each lcore before doing anything else. The cores are awaiting synchronization using the while (rte_atomic32_read(&val) == 0) statement, which is triggered by the main test function. Then all cores do an rte_atomicXX_test_and_set() at the same time. If it is successful, it increments another atomic counter.
- The main function checks that the atomic counter was incremented twice only (once for 32-bit and once for 64-bit values).
The third test verifies the “add/sub and return” functions.
- Initialize 32-bit and 64-bit atomic variables to zero.
- Invoke test_atomic_addsub_return() on each lcore. Before doing anything else, the cores wait for a synchronization signal. Each lcore then does this operation several times:
tmp = atomic_add_return(&a, 1);
atomic_add(&count, tmp);
tmp = atomic_sub_return(&a, 1);
atomic_sub(&count, tmp+1);
At the end of the test, the count value must be 0.
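A simplified sketch of the 32-bit half of this subtest, using the rte_atomic32 API (loop count and names are illustrative):
#include <rte_atomic.h>
#include <rte_common.h>

static rte_atomic32_t a32;      /* bounces between 0 and small values */
static rte_atomic32_t count32;  /* must read back as 0 at the end */

static int
test_atomic_addsub_return(__rte_unused void *arg)
{
    int32_t tmp;
    int i;

    for (i = 0; i < 1000; i++) {
        /* add_return/sub_return give the value *after* the operation. */
        tmp = rte_atomic32_add_return(&a32, 1);
        rte_atomic32_add(&count32, tmp);
        tmp = rte_atomic32_sub_return(&a32, 1);
        rte_atomic32_sub(&count32, tmp + 1);
    }
    return 0;
}

/* After every lcore has run the loop, the additions of tmp and the
 * subtractions of tmp + 1 cancel out exactly, so rte_atomic32_read(&count32)
 * must be 0 regardless of how the cores interleave. */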
Prefetch¶
Just test that the macro can be called and validate the compilation. The test always returns success.
Byteorder functions¶
Check the result of optimized byte swap functions for each size (16-, 32- and 64-bit).
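For example, a check along these lines could be used (the constants are arbitrary examples, not the actual test values):
#include <rte_byteorder.h>

static int
test_byteorder(void)
{
    /* A known 16-bit swap result catches a broken implementation... */
    if (rte_bswap16(0x1234) != 0x3412)
        return -1;
    /* ...and applying each swap twice must give back the original value. */
    if (rte_bswap32(rte_bswap32(0xdeadbeefUL)) != 0xdeadbeefUL)
        return -1;
    if (rte_bswap64(rte_bswap64(0x0123456789abcdefULL)) != 0x0123456789abcdefULL)
        return -1;
    return 0;
}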
Cycles Test¶
- Loop N times and check that the timer always increments and never decrements during this loop.
- Wait one second using rte_usleep() and check that the increment of cycles is correct with regard to the frequency of the timer.
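A sketch of the one-second check, using rte_delay_us() for the wait here; the 10% tolerance is an arbitrary example, not the value used by the real test:
#include <rte_cycles.h>

static int
test_cycles_one_second(void)
{
    const uint64_t hz = rte_get_timer_hz();
    uint64_t start, end, elapsed;

    start = rte_get_timer_cycles();
    rte_delay_us(1000000);              /* busy-wait roughly one second */
    end = rte_get_timer_cycles();

    if (end <= start)                   /* the counter must only increase */
        return -1;

    /* The elapsed cycle count should be close to the timer frequency. */
    elapsed = end - start;
    if (elapsed < hz - hz / 10 || elapsed > hz + hz / 10)
        return -1;
    return 0;
}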
Logs¶
- Enable log types.
- Set log level.
- Execute logging functions with different types and levels; some should not be displayed.
Memzone¶
- Search for three reserved zones or reserve them if they do not exist:
- One is on any socket id.
- The second is on socket 0.
- The last one is on socket 1 (if socket 1 exists).
- Check that the zones exist.
- Check that the zones are cache-aligned.
- Check that zones do not overlap.
- Check that the zones are on the correct socket id.
- Check that a lookup of the first zone returns the same pointer.
- Check that it is not possible to create another zone with the same name as an existing zone.
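A minimal sketch of a few of these checks, using rte_memzone_reserve() and rte_memzone_lookup(); the zone name and size are placeholders:
#include <rte_memzone.h>

static int
test_memzone_basic(void)
{
    const struct rte_memzone *mz, *lookup;

    /* Reserve a named zone on any socket. */
    mz = rte_memzone_reserve("testzone", 2048, SOCKET_ID_ANY, 0);
    if (mz == NULL)
        return -1;

    /* A lookup by name must return the very same descriptor. */
    lookup = rte_memzone_lookup("testzone");
    if (lookup != mz)
        return -1;

    /* Reserving the same name again must fail. */
    if (rte_memzone_reserve("testzone", 1024, SOCKET_ID_ANY, 0) != NULL)
        return -1;

    return 0;
}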
Memcpy¶
Create two buffers, and initialize one with random values. These are copied to the second buffer and then compared to see if the copy was successful. The bytes outside the copied area are also checked to make sure they were not changed.
This is repeated for a number of different sizes and offsets, with the second buffer being cleared before each test.
Debug test¶
- Call rte_dump_stack() and rte_dump_registers().
Alarm¶
- Check that the callback for the alarm can be called.
- Check that it is not possible to set alarm with invalid time value.
- Check that it is not possible to set alarm without a callback.
- Check that it is not possible to cancel alarm without a callback pointer.
- Check that multiple callbacks for the alarm can be called.
- Check that the numbers of removed and unremoved alarms are correct.
- Check that no callback is called if all alarms are removed.
- Check that it is not possible to cancel an alarm within the callback itself.
- Check that the callback at the head of the alarm list can be removed.
- Check that all alarms for the same callback can be canceled.
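A sketch of the first two of these checks, using rte_eal_alarm_set()/rte_eal_alarm_cancel(); the delays are arbitrary examples:
#include <rte_alarm.h>
#include <rte_common.h>
#include <rte_cycles.h>

static volatile int triggered;

static void
alarm_cb(__rte_unused void *arg)
{
    triggered = 1;
}

static int
test_alarm_basic(void)
{
    triggered = 0;

    /* A valid expiry time and callback must be accepted... */
    if (rte_eal_alarm_set(10 * 1000 /* us */, alarm_cb, NULL) < 0)
        return -1;
    /* ...while a NULL callback must be rejected. */
    if (rte_eal_alarm_set(10 * 1000, NULL, NULL) == 0)
        return -1;

    rte_delay_us(50 * 1000);            /* give the alarm time to fire */
    if (!triggered)
        return -1;

    /* Cancel any remaining alarms registered with this callback/argument. */
    rte_eal_alarm_cancel(alarm_cb, NULL);
    return 0;
}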
CPU flags¶
- Use rte_cpu_get_flag_enabled() to check for CPU features from different CPUID tables.
- Check that rte_cpu_get_flag_enabled() properly fails when asked to check for an invalid feature (see the sketch below).
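A minimal sketch of these two checks; RTE_CPUFLAG_SSE4_2 is an x86 example flag, and RTE_CPUFLAG_NUMFLAGS is used as an out-of-range value:
#include <rte_cpuflags.h>

static int
test_cpu_flags(void)
{
    int ret;

    /* Querying a real feature returns 0 (absent) or 1 (present). */
    ret = rte_cpu_get_flag_enabled(RTE_CPUFLAG_SSE4_2);
    if (ret != 0 && ret != 1)
        return -1;

    /* Querying past the end of the flag table must return an error. */
    ret = rte_cpu_get_flag_enabled(RTE_CPUFLAG_NUMFLAGS);
    if (ret >= 0)
        return -1;

    return 0;
}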
Errno¶
Performs validation on the error message strings provided by the rte_strerror() call, to ensure that suitable strings are returned for the rte-specific error codes, as well as ensuring that for standard error codes the correct error message is returned.
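A sketch of this comparison, assuming rte_strerror() falls back to the libc strerror() text for standard codes; E_RTE_SECONDARY is one of the RTE-specific codes:
#include <errno.h>
#include <string.h>
#include <rte_errno.h>

static int
test_errno_strings(void)
{
    /* Standard errno values must map to the usual libc message... */
    if (strcmp(rte_strerror(EINVAL), strerror(EINVAL)) != 0)
        return -1;
    /* ...while RTE-specific codes get their own, non-empty strings. */
    if (strlen(rte_strerror(E_RTE_SECONDARY)) == 0)
        return -1;
    return 0;
}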
Interrupts¶
- Check that the callback for the specific interrupt can be called.
- Check that it is not possible to register a callback to an invalid interrupt handle.
- Check that it is not possible to register a NULL callback to an interrupt handle.
- Check that it is not possible to unregister a callback to an invalid interrupt handle.
- Check that multiple callbacks can be registered to the same interrupt handle.
- Check that it is not possible to unregister a callback with an invalid parameter.
- Check that it is not possible to enable an interrupt with invalid handle or wrong handle type.
- Check that it is not possible to disable an interrupt with invalid handle or wrong handle type.
Multiprocess¶
Validates that a secondary Intel DPDK instance can be run alongside a primary when the appropriate EAL command-line flags are passed. Also validates that secondary processes cannot interfere with primary processes by creating memory objects, such as mempools or rings.
String¶
Performs validation on the new string functions provided in rte_string_fns.h, ensuring that all values returned are NULL terminated, and that suitable errors are returned when called with invalid parameters.
Tailq¶
Validates that we can create and perform lookups on named tail queues within the EAL for various object types. Also ensures appropriate error codes are returned from the functions if invalid parameters are passed.
Devargs¶
To Be Filled
Kvargs¶
To Be Filled
Acl¶
To Be Filled
Link_bonding¶
To Be Filled
Unit Tests: KNI¶
This is the test plan for the Intel® DPDK KNI library.
This section explains how to run the unit tests for KNI. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# insmod ./<TARGET>/kmod/igb_uio.ko
# insmod ./<TARGET>/kmod/rte_kni.ko
# ./app/test/test -n 1 -c ffff
RTE>> kni_autotest
RTE>> quit
# rmmod rte_kni
# rmmod igb_uio
The final output of the test has to be “Test OK”
Unit Tests: LPM¶
This is the test plan for the Intel® DPDK LPM Method.
This section explains how to run the unit tests for LPM. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> lpm_autotest
The final output of the test has to be “Test OK”
Unit Tests: LPM_ipv6¶
This is the test plan for the Intel® DPDK LPM Method in IPv6.
This section explains how to run the unit tests for LPM in IPv6. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> lpm6_autotest
The final output of the test has to be “Test OK”
Unit Tests: Mbuf¶
This is the test plan for the Intel® DPDK mbuf library.
Description¶
- Allocate a mbuf pool.
- The pool contains NB_MBUF elements, where each mbuf is MBUF_SIZE bytes long.
- Test multiple allocations of mbufs from this pool.
- Allocate NB_MBUF and store pointers in a table.
- If an allocation fails, return an error.
- Free all these mbufs.
- Repeat the same test to check that mbufs were freed correctly.
- Test data manipulation in pktmbuf.
- Alloc an mbuf.
- Append data using rte_pktmbuf_append().
- Test for error in rte_pktmbuf_append() when len is too large.
- Trim data at the end of mbuf using rte_pktmbuf_trim().
- Test for error in rte_pktmbuf_trim() when len is too large.
- Prepend a header using rte_pktmbuf_prepend().
- Test for error in rte_pktmbuf_prepend() when len is too large.
- Remove data at the beginning of mbuf using rte_pktmbuf_adj().
- Test for error in rte_pktmbuf_adj() when len is too large.
- Check that appended data is not corrupt.
- Free the mbuf.
- Between all these tests, check data_len and pkt_len, and that the mbuf is contiguous.
- Repeat the test to check that allocation operations reinitialize the mbuf correctly.
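A condensed sketch of the data-manipulation steps is shown below. It assumes a pktmbuf pool created elsewhere (for example with rte_pktmbuf_pool_create()); the sizes are illustrative, not the values used by the real test:
#include <stdint.h>
#include <rte_mbuf.h>

static int
test_pktmbuf_data(struct rte_mempool *pool)
{
    struct rte_mbuf *m;

    m = rte_pktmbuf_alloc(pool);
    if (m == NULL)
        return -1;

    /* Append 64 bytes of payload; pkt_len and data_len grow together. */
    if (rte_pktmbuf_append(m, 64) == NULL)
        goto fail;
    /* An oversized append must be refused and return NULL. */
    if (rte_pktmbuf_append(m, UINT16_MAX) != NULL)
        goto fail;
    /* Trim 16 bytes from the tail and prepend an 8-byte header. */
    if (rte_pktmbuf_trim(m, 16) != 0)
        goto fail;
    if (rte_pktmbuf_prepend(m, 8) == NULL)
        goto fail;
    if (rte_pktmbuf_pkt_len(m) != 64 - 16 + 8)
        goto fail;

    rte_pktmbuf_free(m);
    return 0;
fail:
    rte_pktmbuf_free(m);
    return -1;
}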
Unit Tests: Mempool¶
This is the test plan for the Intel® DPDK mempool library.
Description¶
Basic tests: done on one core with and without cache:
- Get one object, put one object
- Get two objects, put two objects
- Get all objects, test that their content is not modified and put them back in the pool.
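These basic get/put operations map directly onto the mempool API, as in the sketch below (the pool mp is assumed to be created elsewhere, e.g. with rte_mempool_create()):
#include <rte_mempool.h>

static int
test_mempool_basic(struct rte_mempool *mp)
{
    void *obj = NULL;
    void *objs[2];

    /* Get one object and put it back. */
    if (rte_mempool_get(mp, &obj) < 0)
        return -1;
    rte_mempool_put(mp, obj);

    /* Get two objects and put them back in one bulk operation. */
    if (rte_mempool_get_bulk(mp, objs, 2) < 0)
        return -1;
    rte_mempool_put_bulk(mp, objs, 2);

    return 0;
}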
Performance tests:
Each core gets n_keep objects per bulk of n_get_bulk. Then, the objects are put back in the pool per bulk of n_put_bulk.
This sequence is done during TIME_S seconds.
This test is done on the following configurations:
- Cores configuration (cores)
- One core with cache
- Two cores with cache
- Max. cores with cache
- One core without cache
- Two cores without cache
- Max. cores without cache
- Bulk size (n_get_bulk, n_put_bulk)
- Bulk get from 1 to 32
- Bulk put from 1 to 32
- Number of kept objects (n_keep)
- 32
- 128
Unit Tests: PMD Performance¶
Prerequisites¶
One 10Gb Ethernet port of the DUT is directly connected and link is up.
Continuous Mode Performance¶
This is the test plan for unit test to measure cycles/packet in NIC loopback mode.
This section explains how to run the unit tests for PMD performance with continuous stream control mode. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The final output of the test will be average cycles of IO used per packet.
Burst Mode Performance¶
This is the test plan for unit test to measure cycles/packet in NIC loopback mode.
This section explains how to run the unit tests for PMD performance with burst stream control mode. To get accurate scalar fast-path performance, disable INC_VECTOR in the configuration file first.
The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The final output of the test will be matrix of average cycles of IO used per packet.
Mode   | rxtx | rxonly | txonly
vector | 58   | 34     | 23
scalar | 89   | 51     | 38
full   | 73   | 31     | 42
hybrid | 59   | 35     | 23
Unit Tests: Power Library¶
This is the test plan for the Intel® DPDK Power library.
Description¶
This section explains how to run the unit tests for Power features. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> power_autotest
The final output of the test has to be “Test OK”
ACPI CPU Frequency¶
This section explains how to run the unit tests for Power features. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> power_acpi_cpufreq_autotest
The final output of the test has to be “Test OK”
Unit Tests: Random Early Detection (RED)¶
This is the test plan for the Intel® DPDK Random Early Detection feature.
This section explains how to run the unit tests for RED. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> red_autotest
The final output of the test has to be “Test OK”
Unit Tests: Metering¶
This is the test plan for the Intel® DPDK Metering feature.
This section explains how to run the unit tests for Meter. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> meter_autotest
The final output of the test has to be “Test OK”
Unit tests: Scheduler¶
This is the test plan for the Intel® DPDK Scheduler feature.
This section explains how to run the unit tests for Scheduler. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> sched_autotest
The final output of the test has to be “Test OK”
Unit Tests: Ring Pmd¶
This is the test plan for the Intel® DPDK Ring poll mode driver feature.
This section explains how to run the unit tests for the ring PMD. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application, and the config option RTE_LIBRTE_PMD_RING should be set to ‘Y’.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The ring PMD unit test requires two pairs of virtual Ethernet devices and one virtual Ethernet device with full RX/TX functions.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff --vdev='net_ring0' --vdev='net_ring1'
RTE>> ring_pmd_autotest
The final output of the test has to be “Test OK”
Unit Tests: Ring¶
This is the test plan for the Intel® DPDK ring library.
Description¶
Basic tests (done on one core)
- Using single producer/single consumer functions:
- Enqueue one object, two objects, MAX_BULK objects
- Dequeue one object, two objects, MAX_BULK objects
- Check that dequeued pointers are correct
- Using multi producers/multi consumers functions:
- Enqueue one object, two objects, MAX_BULK objects
- Dequeue one object, two objects, MAX_BULK objects
- Check that dequeued pointers are correct
- Test watermark and default bulk enqueue/dequeue:
- Set watermark
- Set default bulk value
- Enqueue objects, check that -EDQUOT is returned when watermark is exceeded
- Check that dequeued pointers are correct
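A minimal sketch of the single-producer/single-consumer case, using rte_ring_create() and the default enqueue/dequeue calls (the ring name and size are placeholders):
#include <rte_lcore.h>
#include <rte_ring.h>

static int
test_ring_basic(void)
{
    struct rte_ring *r;
    void *obj = NULL;
    static int dummy = 42;

    /* SP/SC ring with 1024 slots on the caller's socket. */
    r = rte_ring_create("test_ring", 1024, rte_socket_id(),
                        RING_F_SP_ENQ | RING_F_SC_DEQ);
    if (r == NULL)
        return -1;

    /* Enqueue one pointer and check that the same pointer is dequeued. */
    if (rte_ring_enqueue(r, &dummy) != 0)
        return -1;
    if (rte_ring_dequeue(r, &obj) != 0 || obj != &dummy)
        return -1;

    return 0;
}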
Check quota and watermark
- Start a loop on another lcore that will enqueue and dequeue objects in a ring. It will monitor the value of quota (default bulk count) and watermark.
- At the same time, change the quota and the watermark on the master lcore.
- The slave lcore will check that bulk count changes from 4 to 8, and watermark changes from 16 to 32.
Performance tests
This test is done on the following configurations:
- One core enqueuing, one core dequeuing
- One core enqueuing, other cores dequeuing
- One core dequeuing, other cores enqueuing
- Half of the cores enqueuing, the other half dequeuing
When only one core enqueues/dequeues, the test is done with the SP/SC functions in addition to the MP/MC functions.
The test is done with different bulk sizes.
On each core, the test enqueues or dequeues objects during TIME_S seconds. The number of successes and failures are stored on each core, then summed and displayed.
The test checks that the number of enqueues is equal to the number of dequeues.
Change watermark and quota
Use the command line to change the value of quota and watermark. Then dump the status of ring to check that the values are correctly updated in the ring structure.
Unit Tests: Ring Performance¶
This is the test plan for Intel® DPDK ring performance.
This section explains how to run the unit tests for ring performance. The test can be launched independently using the command line interface. This test is implemented as a linuxapp environment application.
The complete test suite is launched automatically using a python-expect script (launched using make test) that sends commands to the application and checks the results. A test report is displayed on stdout.
The steps to run the unit test manually are as follows:
# make -C ./app/test/
# ./app/test/test -n 1 -c ffff
RTE>> ring_perf_autotest
The final output of the test has to be “Test OK”
Unit tests: Timer¶
This section describes the test plan for the timer library.
Description¶
Stress tests.
The objective of the timer stress tests is to check that there are no race conditions in list and status management. This test launches, resets and stops the timer very often on many cores at the same time.
- Only one timer is used for this test.
- On each core, the rte_timer_manage() function is called from the main loop every 3 microseconds.
- In the main loop, the timer may be reset (randomly, with a probability of 0.5 %) 100 microseconds later on a random core, or stopped (with a probability of 0.5 % also).
- In the callback, the timer can be reset (randomly, with a probability of 0.5 %) 100 microseconds later on the same core or on another core (same probability), or stopped (same probability).
Basic test.
This test performs basic functional checks of the timers. The test uses four different timers that are loaded and stopped under specific conditions in specific contexts.
- Four timers are used for this test.
- On each core, the rte_timer_manage() function is called from main loop every 3 microseconds.
The autotest python script checks that the behavior is correct:
- timer0
- At initialization, timer0 is loaded by the master core, on master core in “single” mode (time = 1 second).
- In the first 19 callbacks, timer0 is reloaded on the same core, then, it is explicitly stopped at the 20th call.
- At t=25s, timer0 is reloaded once by timer2.
- timer1
- At initialization, timer1 is loaded by the master core, on the master core in “single” mode (time = 2 seconds).
- In the first 9 callbacks, timer1 is reloaded on another core. After the 10th callback, timer1 is not reloaded anymore.
- timer2
- At initialization, timer2 is loaded by the master core, on the master core in “periodical” mode (time = 1 second).
- In the callback, when t=25s, it stops timer3 and reloads timer0 on the current core.
- timer3
- At initialization, timer3 is loaded by the master core, on another core in “periodical” mode (time = 1 second).
- It is stopped at t=25s by timer2.
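The timer0 behaviour described above corresponds roughly to the simplified sketch below; it omits the rte_timer_manage() main loop that each core must run, and the counter handling is illustrative:
#include <rte_common.h>
#include <rte_cycles.h>
#include <rte_lcore.h>
#include <rte_timer.h>

static struct rte_timer timer0;

static void
timer0_cb(struct rte_timer *tim, __rte_unused void *arg)
{
    static unsigned calls;
    uint64_t hz = rte_get_timer_hz();

    /* Reload on the same core for the first 19 calls; after the 20th
     * call the SINGLE timer is simply not re-armed. */
    if (++calls < 20)
        rte_timer_reset(tim, hz, SINGLE, rte_lcore_id(), timer0_cb, NULL);
}

static void
setup_timer0(void)
{
    rte_timer_subsystem_init();
    rte_timer_init(&timer0);
    /* Arm timer0 for one second on the current (master) lcore, SINGLE mode. */
    rte_timer_reset(&timer0, rte_get_timer_hz(), SINGLE,
                    rte_lcore_id(), timer0_cb, NULL);
    /* Each core's main loop must then call rte_timer_manage() periodically. */
}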
Sample Application Tests: Cmdline Example¶
The cmdline example is a demo example of command line interface in RTE. This library is a readline-like interface that can be used to debug your RTE application.
It supports some features of GNU readline like completion, cut/paste, and some other special bindings that makes configuration and debug faster and easier.
This demo shows how rte_cmdline library can be extended to handle a list of objects. There are 3 simple commands:
- add obj_name IP: add a new object with an IP/IPv6 address associated with it.
- del obj_name: delete the specified object.
- show obj_name: show the IP address associated with the specified object.
Refer to programmer’s guide in ${RTE_SDK}/doc/rst
for details.
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Launch the cmdline example with 24 logical cores in the linuxapp environment:
$ ./build/app/cmdline -cffffff
Test the 3 simple commands at the prompt below:
example>
Test Case: cmdline sample commands test¶
Add a test object with an IP address associated to it:
example>add object 192.168.0.1
Object object added, ip=192.168.0.1
Verify the object existence:
example>add object 192.168.0.1
Object object already exist
Show the object with the show command:
example>show object
Object object, ip=192.168.0.1
Verify the output matches the configuration.
Delete the object in cmdline and show the result again:
example>del object
Object object removed, ip=192.168.0.1
Double delete the object to verify the correctness:
example>del object
Bad arguments
Verify that no such object exists now:
example>show object
Bad arguments
Verify the hidden command ? and help command:
example>help
Demo example of command line interface in RTE
This is a readline-like interface that can be used to
debug your RTE application. It supports some features
of GNU readline like completion, cut/paste, and some
other special bindings.
This demo shows how rte_cmdline library can be
extended to handle a list of objects. There are
3 commands:
- add obj_name IP
- del obj_name
- show obj_name
example>?
show [Mul-choice STRING]: Show/del an object
del [Mul-choice STRING]: Show/del an object
add [Fixed STRING]: Add an object (name, val)
help [Fixed STRING]: show help
Sample Application Tests: Hello World Example¶
This example is one of the simplest RTE applications that can be written. The program just prints a “helloworld” message on every enabled lcore.
Command Usage:
./helloworld -c COREMASK [-m NB] [-r NUM] [-n NUM]
EAL option list:
-c COREMASK: hexadecimal bitmask of cores we are running on
-m MB : memory to allocate (default = size of hugemem)
-n NUM : force number of memory channels (don't detect)
-r NUM : force number of memory ranks (don't detect)
--huge-file: base filename for hugetlbfs entries
debug options:
--no-huge : use malloc instead of hugetlbfs
--no-pci : disable pci
--no-hpet : disable hpet
--no-shconf: no shared config (mmap'd files)
Prerequisites¶
The igb_uio and vfio drivers are supported. If using vfio, the kernel must be 3.6+ and VT-d must be enabled in the BIOS. When using vfio, load the driver with “modprobe vfio” and “modprobe vfio-pci”, then bind it to the device under test with “./tools/dpdk_nic_bind.py --bind=vfio-pci device_bus_id”.
To find out the mapping of lcores (processor) to core id and socket (physical id), the command below can be used:
$ grep "processor\|physical id\|core id\|^$" /proc/cpuinfo
The total number of logical cores will be used as the helloworld input parameter.
Test Case: run hello world on a single lcore¶
To run the example on a single lcore:
$ ./helloworld -c 1
hello from core 0
Check that the output comes from lcore 0 only.
Test Case: run hello world on every lcore¶
To run the example on all enabled lcores:
$ ./helloworld -cffffff
hello from core 1
hello from core 2
hello from core 3
...
...
hello from core 0
Verify the output according to the core mask: there should be one message from each enabled core.
Sample Application Tests: Keep Alive Example¶
The Keep Alive application is a simple example of a heartbeat/watchdog for packet processing cores. It demonstrates how to detect ‘failed’ DPDK cores and notify a fault management entity of this failure. Its purpose is to ensure the failure of the core does not result in a fault that is not detectable by a management entity.
Overview¶
The application demonstrates how to protect against ‘silent outages’ on packet processing cores. A Keep Alive Monitor Agent Core (master) monitors the state of packet processing cores (worker cores) by dispatching pings at a regular time interval (default is 5ms) and monitoring the state of the cores. Cores states are: Alive, MIA, Dead or Buried. MIA indicates a missed ping, and Dead indicates two missed pings within the specified time interval. When a core is Dead, a callback function is invoked to restart the packet processing core; A real life application might use this callback function to notify a higher level fault management entity of the core failure in order to take the appropriate corrective action.
Note: Only the worker cores are monitored. A local (on the host) mechanism or agent to supervise the Keep Alive Monitor Agent Core DPDK core is required to detect its failure.
Note: This application is based on the L2 Forwarding Sample Application (in Real and Virtualized Environments). As such, the initialization and run-time paths are very similar to those of the L2 forwarding application.
Compiling the Application¶
To compile the application:
Go to the sample application directory:
export RTE_SDK=/path/to/rte_sdk
cd ${RTE_SDK}/examples/keep_alive
Set the target (a default target is used if not specified). For example:
export RTE_TARGET=x86_64-native-linuxapp-gcc
See the DPDK Getting Started Guide for possible RTE_TARGET values. Build the application:
make
Running the Application¶
The application has a number of command line options:
./build/l2fwd-keepalive [EAL options] -- -p PORTMASK [-q NQ] [-K PERIOD] [-T PERIOD]
where,
- p PORTMASK: A hexadecimal bitmask of the ports to configure
- q NQ: A number of queues (=ports) per lcore (default is 1)
- K PERIOD: Heartbeat check period in ms(5ms default; 86400 max)
- T PERIOD: statistics will be refreshed each PERIOD seconds (0 to disable, 10 default, 86400 maximum).
To run the application in a linuxapp environment with 4 lcores, 16 ports, 8 RX queues per lcore and a ping interval of 10ms, issue the command:
./build/l2fwd-keepalive -c f -n 4 -- -q 8 -p ffff -K 10
Refer to the DPDK Getting Started Guide for general information on running applications and the Environment Abstraction Layer (EAL) options.
Sample Application Tests: Multi-Process¶
Simple MP Application Test¶
Description¶
This test is a basic multi-process test which demonstrates the basics of sharing information between Intel DPDK processes. The same application binary is run twice - once as a primary instance, and once as a secondary instance. Messages are sent from primary to secondary and vice versa, demonstrating the processes are sharing memory and can communicate using rte_ring structures.
Prerequisites¶
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
Assuming that an Intel DPDK build has been set up and the multi-process sample applications have been built.
Test Case: Basic operation¶
To run the application, start one copy of the simple_mp binary in one terminal, passing at least two cores in the coremask, as follows:
./build/simple_mp -c 3 --proc-type=primary
The process should start successfully and display a command prompt as follows:
$ ./build/simple_mp -c 3 --proc-type=primary
EAL: coremask set to 3
EAL: Detected lcore 0 on socket 0
EAL: Detected lcore 1 on socket 0
EAL: Detected lcore 2 on socket 0
EAL: Detected lcore 3 on socket 0
...
EAL: Requesting 2 pages of size 1073741824
EAL: Requesting 768 pages of size 2097152
EAL: Ask a virtual area of 0x40000000 bytes
EAL: Virtual area found at 0x7ff200000000 (size = 0x40000000)
...
EAL: check igb_uio module
EAL: check module finished
EAL: Master core 0 is ready (tid=54e41820)
EAL: Core 1 is ready (tid=53b32700)
Starting core 1
simple_mp >
To run the secondary process to communicate with the primary process, again run the same binary setting at least two cores in the coremask.:
./build/simple_mp -c C --proc-type=secondary
Once the process type is specified correctly, the process starts up, displaying largely similar status messages to the primary instance as it initializes. Once again, you will be presented with a command prompt.
Once both processes are running, messages can be sent between them using the send command. At any stage, either process can be terminated using the quit command.
Validate that this is working by sending a message between each process, both from primary to secondary and back again. This is shown below.
Transcript from the primary - text entered by the user is shown in {}:
EAL: Master core 10 is ready (tid=b5f89820)
EAL: Core 11 is ready (tid=84ffe700)
Starting core 11
simple_mp > {send hello_secondary}
simple_mp > core 11: Received 'hello_primary'
simple_mp > {quit}
Transcript from the secondary - text entered by the user is shown in {}:
EAL: Master core 8 is ready (tid=864a3820)
EAL: Core 9 is ready (tid=85995700)
Starting core 9
simple_mp > core 9: Received 'hello_secondary'
simple_mp > {send hello_primary}
simple_mp > {quit}
Test Case: Load test of Simple MP application¶
- Start up the sample application using the commands outlined in steps 1 & 2 above.
- To load test, send a large number of strings (>5000), from the primary instance to the secondary instance, and then from the secondary instance to the primary. [NOTE: A good source of strings to use is /usr/share/dict/words which contains >400000 ascii strings on Fedora 14]
Test Case: Test use of Auto for Application Startup¶
- Start the primary application as in Test 1, Step 1, except replace --proc-type=primary with --proc-type=auto.
- Validate that the application prints the line “EAL: Auto-detected process type: PRIMARY” on startup.
- Start the secondary application as in Test 1, Step 2, except replace --proc-type=secondary with --proc-type=auto.
- Validate that the application prints the line “EAL: Auto-detected process type: SECONDARY” on startup.
- Verify that processes can communicate by sending strings, as in Test 1, Step 3.
Test Case: Test running multiple processes without “--proc-type” flag¶
- Start up the primary process as in Test 1, Step 1, except omit the --proc-type flag completely.
- Validate that the process starts up as normal, and returns the simple_mp> prompt.
- Start up the secondary process as in Test 1, Step 2, except omit the --proc-type flag.
- Verify that the process fails to start and prints an error message as below:
"PANIC in rte_eal_config_create(): Cannot create lock on '/path/to/.rte_config'. Is another primary process running?"
Symmetric MP Application Test¶
Description¶
This test is a multi-process test which demonstrates how multiple processes can work together to perform packet I/O and packet processing in parallel, much as other example application work by using multiple threads. In this example, each process reads packets from all network ports being used - though from a different RX queue in each case. Those packets are then forwarded by each process which sends them out by writing them directly to a suitable TX queue.
Prerequisites¶
Assuming that an Intel® DPDK build has been set up and the multi-process sample applications have been built. It is also assumed that a traffic generator has been configured and plugged in to the NIC ports 0 and 1.
Test Methodology¶
As with the simple_mp example, the first instance of the symmetric_mp process must be run as the primary instance, though with a number of other application specific parameters also provided after the EAL arguments. These additional parameters are:
- -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system are to be used. For example: -p 3 to use ports 0 and 1 only.
- --num-procs <N>, where N is the total number of symmetric_mp instances that will be run side-by-side to perform packet processing. This parameter is used to configure the appropriate number of receive queues on each network port.
- --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of processes, specified above). This identifies which symmetric_mp instance is being run, so that each process can read a unique receive queue on each network port.
The secondary symmetric_mp instances must also have these parameters specified, and the first two must be the same as those passed to the primary instance, or errors result.
For example, to run a set of four symmetric_mp instances, running on lcores 1-4, all performing level-2 forwarding of packets between ports 0 and 1, the following commands can be used (assuming run as root):
./build/symmetric_mp -c 2 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=0
./build/symmetric_mp -c 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=1
./build/symmetric_mp -c 8 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=2
./build/symmetric_mp -c 10 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=3
To run only 1 or 2 instances, the above parameters to the 1 or 2 instances being
run should remain the same, except for the num-procs
value, which should be
adjusted appropriately.
Test Case: Performance Tests¶
Run the multiprocess application using standard IP traffic - varying source and destination address information to allow RSS to evenly distribute packets among RX queues. Record traffic throughput results as below.
Num-procs | 1 | 2 | 2 | 4 | 4 | 8 |
Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
Num Ports | 2 | 2 | 2 | 2 | 2 | 2 |
Packet Size | 64 | 64 | 64 | 64 | 64 | 64 |
%-age Line Rate | X | X | X | X | X | X |
Packet Rate(mpps) | X | X | X | X | X | X |
Client Server Multiprocess Tests¶
Description¶
The client-server sample application demonstrates the ability of Intel® DPDK to use multiple processes in which a server process performs packet I/O and one or multiple client processes perform packet processing. The server process controls load balancing on the traffic received from a number of input ports to a user-specified number of clients. The client processes forward the received traffic, outputting the packets directly by writing them to the TX rings of the outgoing ports.
Prerequisites¶
Assuming that an Intel® DPDK build has been set up and the multi-process sample application has been built. Also assuming a traffic generator is connected to the ports “0” and “1”.
It is important to run the server application before the client application, as the server application manages both the NIC ports with packet transmission and reception, as well as shared memory areas and client queues.
Run the Server Application:
- Provide the core mask on which the server process is to run using -c, e.g. -c 3 (bitmask number).
- Set the number of ports to be engaged using -p, e.g. -p 3 refers to ports 0 & 1.
- Define the maximum number of clients using -n, e.g. -n 8.
The command line below is an example on how to start the server process on logical core 2 to handle a maximum of 8 client processes configured to run on socket 0 to handle traffic from NIC ports 0 and 1:
root@host:mp_server# ./build/mp_server -c 2 -- -p 3 -n 8
NOTE: If an additional second core is given in the coremask to the server process that second core will be used to print statistics. When benchmarking, only a single lcore is needed for the server process
Run the Client application:
- In another terminal run the client application.
- Give each client a distinct core mask with -c.
- Give each client a unique client-id with -n.
Example commands to run 8 client processes are as follows:
root@host:mp_client# ./build/mp_client -c 40 --proc-type=secondary -- -n 0 &
root@host:mp_client# ./build/mp_client -c 100 --proc-type=secondary -- -n 1 &
root@host:mp_client# ./build/mp_client -c 400 --proc-type=secondary -- -n 2 &
root@host:mp_client# ./build/mp_client -c 1000 --proc-type=secondary -- -n 3 &
root@host:mp_client# ./build/mp_client -c 4000 --proc-type=secondary -- -n 4 &
root@host:mp_client# ./build/mp_client -c 10000 --proc-type=secondary -- -n 5 &
root@host:mp_client# ./build/mp_client -c 40000 --proc-type=secondary -- -n 6 &
root@host:mp_client# ./build/mp_client -c 100000 --proc-type=secondary -- -n 7 &
Test Case: Performance Measurement¶
- On the traffic generator set up a traffic flow in both directions specifying IP traffic.
- Run the server and client applications as above.
- Start the traffic and record the throughput for transmitted and received packets.
An example set of results is shown below.
Server threads | 1 | 1 | 1 | 1 | 1 | 1 |
Server Cores/Threads | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 | 1/1 |
Num-clients | 1 | 2 | 2 | 4 | 4 | 8 |
Client Cores/Threads | 1/1 | 1/2 | 2/1 | 2/2 | 4/1 | 4/2 |
Num Ports | 2 | 2 | 2 | 2 | 2 | 2 |
Packet Size | 64 | 64 | 64 | 64 | 64 | 64 |
%-age Line Rate | X | X | X | X | X | X |
Packet Rate(mpps) | X | X | X | X | X | X |
Sample Application Tests: Netmap Compatibility¶
Introduction¶
The Netmap compatibility library provides a minimal set of APIs to give programs written against the Netmap APIs the ability to be run, with minimal changes to their source code, using the DPDK to perform the actual packet I/O.
Since Netmap applications use regular system calls, like open(), ioctl() and mmap(), to communicate with the Netmap kernel module performing the packet I/O, the compat_netmap library provides a set of similar APIs to use in place of those system calls, effectively turning a Netmap application into a DPDK application.
The provided library is currently minimal and doesn’t support all the features that Netmap supports, but is enough to run simple applications, such as the bridge example detailed below.
Knowledge of Netmap is required to understand the rest of this section. Please refer to the Netmap distribution for details about Netmap.
Running the “bridge” Sample Application¶
The application requires a single command line option:
./build/bridge [EAL options] -- -i INTERFACE_A [-i INTERFACE_B]
Where:
-i INTERFACE: Interface (DPDK port number) to use.
If a single -i parameter is given, the interface will send back all the traffic it receives. If two -i parameters are given, the two interfaces form a bridge, where traffic received on one interface is replicated and sent to the other interface.
For example, to run the application in a linuxapp environment using port 0 and 2:
./build/bridge [EAL options] -- -i 0 -i 2
Refer to the DPDK Getting Started Guide for Linux for general information on running applications and the Environment Abstraction Layer (EAL) options.
Test Case1: netmap compat with one port¶
Run bridge with one port:
./examples/netmap_compat/build/bridge -c 0x1e -n 4 -- -i 0
After start-up, the output shows:
Port 0 now in Netmap mode
Bridge up and running!
Send one packet to Port0 and check that this port receives a packet. It should receive the one packet that it sent.
Test Case2: netmap compat with two ports¶
Run bridge with two ports:
./examples/netmap_compat/build/bridge -c 0x1e -n 4 -- -i 0 -i 1
After start-up, the output shows:
Port 0 now in Netmap mode
Port 1 now in Netmap mode
Bridge up and running!
Send one packet to Port0 and check that Port1 receives a packet. Port1 should receive the one packet that Port0 sent.
Sample Application Tests: Quota and Water-mark¶
This document provides the test plan for benchmarking the Quota and Water-mark sample application. This is a simple example app featuring packet processing using the Intel® Data Plane Development Kit (Intel® DPDK) that showcases the use of a quota as the maximum number of packets enqueued/dequeued at a time, and low and high water-marks to signal low and high ring usage respectively. Additionally, it shows how ring water-marks can be used to feed back congestion notifications to data producers, by temporarily stopping processing of overloaded rings and sending Ethernet flow control frames.
Prerequisites¶
2x Intel® 82599 (Niantic) NICs (2x 10GbE full duplex optical ports per NIC) plugged into the available PCIe Gen2 8-lane slots in two different configurations:
- card0 and card1 attached to socket0.
- card0 attached to socket0 and card1 to socket1.
Test cases¶
The idea behind the testing process is to send a fixed number of frames from the traffic generator to the DUT while these are being forwarded back by the app, and to measure some statistics. The configurable parameters exposed by the control app will be modified to see how they affect the app’s performance. The functional test is only used for checking the packet transfer flow with a low-watermark number of packets.
The statistics to be measured are explained below. A table will be presented showing all the different permutations.
Ring size
- Size of the rings that interconnect two adjacent cores within the pipeline.
Quota
- Value controls how many packets are being moved through the pipeline per en-queue and de-queue.
Low water-mark
- Global threshold that will resume en-queuing on a ring once its usage goes below it.
High water-mark
- Threshold that will stop en-queuing on rings for which the usage has exceeded it.
Frames sent
- Number of frames sent from the traffic generator.
Frames received
- Number of frames received on the traffic generator once they were forwarded back by the app.
Control flow frames received
- Number of Control flow frames (PAUSE frame defined by the IEEE 802.3x standard) received on the traffic generator TX port.
Transmit rate (Mpps)
- Rate of transmission. It is calculated dividing the number of sent packets over the time it took the traffic generator to send them.
Ring size | Quota | Low water-mark | High water-mark | Frames sent | Frames received | Control flow frames received | Transmit rate (Mpps)
64        | 5     | 1              | 5               | 15000000    |                 |                              |
64        | 5     | 10             | 20              | 15000000    |                 |                              |
64        | 5     | 10             | 99              | 15000000    |                 |                              |
64        | 5     | 60             | 99              | 15000000    |                 |                              |
64        | 5     | 90             | 99              | 15000000    |                 |                              |
64        | 5     | 10             | 80              | 15000000    |                 |                              |
64        | 5     | 50             | 80              | 15000000    |                 |                              |
Test Case 1: Quota and Water-mark one socket (functional)¶
Using No.1 card configuration.
This test case calls the application using cores and ports masks similar to the ones shown below.
- Core mask
0xFF00
- Port mask
0x280
This core mask will make use of eight physical cores within the same socket. The used ports belong to different NIC’s attached to the same socket.
Sample command:
./examples/quota_watermark/qw/build/qw -c 0xFF00 -n 4 -- -p 0x280
After starting qw and qwctl, send IP packets with scapy, using the low water-mark value as the packet count. Command format:
sendp([Ether()/IP()/("X"*26)]*<low watermark value>, iface="<port name>")
Sample command:
sendp([Ether()/IP()/("X"*26)]*10, iface="p785p1")
Test Case 2: Quota and Water-mark one socket (performance)¶
This test case calls the application using cores and ports masks similar to the ones shown below.
- Core mask
0xFF00
- Port mask
0x280
This core mask will make use of eight physical cores within the same socket. The used ports belong to different NIC’s attached to the same socket.
Sample command:
./examples/quota_watermark/qw/build/qw -c 0xFF00 -n 4 -- -p 0x280
Test Case 3: Quota and Water-mark two sockets (performance)¶
This test case calls the application using a core and port mask similar to the ones shown below.
- Core mask
0x0FF0
- Port mask
0x202
This core mask will make use of eight physical cores; four within the first socket and four on the second one. The RX port will be attached to the first socket whereas the TX port is attached to the second. This configuration will force the traffic going through the pipeline to pass through the QPI channel.
Sample command:
./examples/quota_watermark/qw/build/qw -c 0x8180706 -n 4 -- -p 0x202
Sample Application Tests: RX/TX Callbacks¶
The RX/TX Callbacks sample application is a packet forwarding application that demonstrates the use of user defined callbacks on received and transmitted packets. The application performs a simple latency check, using callbacks, to determine the time packets spend within the application.
In the sample application a user defined callback is applied to all received packets to add a timestamp. A separate callback is applied to all packets prior to transmission to calculate the elapsed time, in CPU cycles.
Running the Application¶
Open common_base and set CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y.
To run the example in a linuxapp
environment:
./build/rxtx_callbacks -c 2 -n 4
Refer to DPDK Getting Started Guide for general information on running applications and the Environment Abstraction Layer (EAL) options.
test_rxtx_callbacks¶
Running:
./examples/rxtx_callbacks/build/rxtx_callbacks -c 2 -n 4
After start-up, the output shows:
Core X forwarding packets.
Send one packet to Port0 and check that Port1 receives a packet. Port1 should receive the one packet that Port0 sent.
Sample Application Tests: Basic Forwarding/Skeleton Application¶
The Basic Forwarding sample application is a simple skeleton example of a forwarding application.
It is intended as a demonstration of the basic components of a DPDK forwarding application. For more detailed implementations see the L2 and L3 forwarding sample applications.
Running the Application¶
To run the example in a linuxapp
environment:
./build/basicfwd -c 2 -n 4
Refer to DPDK Getting Started Guide for general information on running applications and the Environment Abstraction Layer (EAL) options.
test_skeleton¶
Running:
./examples/skeleton/build/basicfwd -c 2 -n 4
After start-up, the output shows:
Core X forwarding packets.
Send one packet to Port0 and check that Port1 receives a packet. Port1 should receive the one packet that Port0 sent.
Sample Application Tests: Timer Example¶
This example shows how timers can be used in an RTE application. The program prints some messages from different lcores regularly, demonstrating how to use timers.
If using vfio, the kernel must be >= 3.6 and VT-d must be enabled in the BIOS. When using vfio, use the following commands to load the vfio driver and bind it to the device under test:
modprobe vfio
modprobe vfio-pci
usertools/dpdk-devbind.py --bind=vfio-pci device_bus_id
In the timer example there are two timers.
Timer 0 is periodical, running on the master lcore, reloaded automatically every second.
Timer 1 is a single-shot timer, reloaded manually every second/3; on each manual reload it moves to the next lcore.
Usage of application:
./timer [EAL options]
Where the EAL options are:
EAL option list:
-c COREMASK: hexadecimal bitmask of cores we are running on
-m MB : memory to allocate (default = size of hugemem)
-n NUM : force number of memory channels (don't detect)
-r NUM : force number of memory ranks (don't detect)
--huge-file: base filename for hugetlbfs entries
debug options:
--no-huge : use malloc instead of hugetlbfs
--no-pci : disable pci
--no-hpet : disable hpet
--no-shconf: no shared config (mmap'd files)
Prerequisites¶
To find out the mapping of lcores (processor) to core id and socket (physical id), the command below can be used:
$ grep "processor\|physical id\|core id\|^$" /proc/cpuinfo
The number of logical cores will be used as the parameter to the timer example.
Test Case: timer callbacks running on targeted cores¶
To run the example in linuxapp environment:
./timer -c ffffff
Timer0, every second, on the master lcore, reloaded automatically. Check that the output below appears every second on the master lcore:
timer0_cb() on lcore 0
Timer1, every second/3, on the next lcore, reloaded manually. Check that the output below appears every second/3, moving to the next lcore each time:
timer1_cb() on lcore 1
timer1_cb() on lcore 2
timer1_cb() on lcore 3
timer1_cb() on lcore 4
...
...
...
timer1_cb() on lcore 23
Verify that timer0_cb and timer1_cb are called properly on the target cores.
Sample Application Tests: Vxlan Example¶
The Vxlan sample simulates a VXLAN Tunnel Endpoint (VTEP) termination in DPDK. It is used to demonstrate the offload and filtering capabilities of the i40e NIC for VXLAN packets.
Vxlan sample uses the basic virtio devices management function from vHOST example, and the US-vHost interface and tunnel filtering mechanism to direct the traffic to/from a specific VM.
Vxlan sample is also designed to show how tunneling protocols can be handled.
Prerequisites¶
1x Intel® X710 (Fortville) NICs (2x 40GbE full duplex optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot.
2x Intel® XL710-DA4 (Eagle Fountain) (1x 10GbE full duplex optical ports per NIC) plugged into the available PCIe Gen3 8-lane slot.
The DUT board must be a two-socket system and each CPU must have more than 8 lcores.
Update qemu-system-x86_64 to version 2.2.0, which supports hugepage based memory. Prepare the modules required by vhost-user:
modprobe fuse
modprobe cuse
insmod lib/librte_vhost/eventfd_link/eventfd_link.ko
Allocate 4096*2M hugepages for vm and dpdk:
echo 4096 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
Test Case: Vxlan Sample Encap packet¶
Start the vxlan sample with only encapsulation enabled:
tep_termination -c 0xf -n 3 --socket-mem 2048,2048 -- -p 0x1 \
--udp-port 4789 --nb-devices 2 --filter-type 3 --tx-checksum 0 \
--encap 1 --decap 0
Wait until the vhost-net socket device has been created and the following message is printed:
VHOST_CONFIG: bind to vhost-net
Start virtual machine with hugepage based memory and two vhost-user devices:
qemu-system-x86_64 -name vm0 -enable-kvm -daemonize \
-cpu host -smp 4 -m 4096 \
-object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on \
-numa node,memdev=mem -mem-prealloc \
-chardev socket,id=char0,path=./dpdk/vhost-net \
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
-device virtio-net-pci,netdev=netdev0,mac=00:00:20:00:00:20 \
-chardev socket,id=char1,path=./dpdk/vhost-net \
-netdev type=vhost-user,id=netdev1,chardev=char1,vhostforce \
-device virtio-net-pci,netdev=netdev1,mac=00:00:20:00:00:21 \
-drive file=/storage/vm-image/vm0.img -vnc :1
Log in to the virtual machine and start testpmd with additional arguments:
testpmd -c f -n 3 -- -i --tx-offloads=0x8000 --disable-hw-vlan
Start packet forwarding in testpmd and transmit several packets for MAC learning:
testpmd> set fwd mac
testpmd> start tx_first
Make sure the virtIO ports registered normally:
VHOST_CONFIG: virtio is now ready for processing.
VHOST_DATA: (1) Device has been added to data core 56
VHOST_DATA: (1) MAC_ADDRESS 00:00:20:00:00:21 and VNI 1000 registered
VHOST_DATA: (0) MAC_ADDRESS 00:00:20:00:00:20 and VNI 1000 registered
Send a normal UDP packet to the PF device with the destination MAC matching the PF device. Verify that the packet has been received on virtIO port0 and forwarded by port1:
testpmd> show port stats all
Verify that an encapsulated packet is received on the PF device.
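A possible Scapy sketch of this step follows; the interface name and PF MAC address are placeholders to be replaced with the values of the actual setup, and the 4789 check simply confirms the VXLAN UDP port configured on the sample's command line.
# Scapy sketch (placeholder interface/MAC): send a plain UDP frame into the PF
# link and confirm a VXLAN-encapsulated frame (UDP dport 4789) comes back out.
import time
from scapy.all import Ether, IP, UDP, AsyncSniffer, sendp
from scapy.layers.vxlan import VXLAN

PF_IFACE = "enp1s0f0"             # tester interface connected to the PF (placeholder)
PF_MAC = "68:05:ca:00:00:01"      # PF MAC address (placeholder)

plain = Ether(dst=PF_MAC) / IP(dst="192.168.1.2") / UDP(dport=1024)

sniffer = AsyncSniffer(iface=PF_IFACE, lfilter=lambda p: VXLAN in p, timeout=10)
sniffer.start()
time.sleep(0.5)
sendp(plain, iface=PF_IFACE, verbose=False)
time.sleep(1)
encapsulated = sniffer.stop()
assert encapsulated, "no VXLAN-encapsulated packet seen on the PF link"
assert encapsulated[0][UDP].dport == 4789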
Test Case: Vxlan Sample Decap packet¶
Start vxlan sample with only decapsulation enabled:
tep_termination -c 0xf -n 3 --socket-mem 2048,2048 -- -p 0x1 \
--udp-port 4789 --nb-devices 2 --filter-type 3 --tx-checksum 0 \
--encap 0 --decap 1
Start the vhost-user test environment as in case vxlan_sample_encap.
Send a vxlan packet to the PF device:
Ether(dst=PF mac)/IP/UDP/vni(1000)/Ether(dst=virtIO port0)/IP/UDP
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1:
testpmd> show port stats all
Verify that the packet received on the PF is the same as the inner packet.
Send a vxlan packet to the PF device:
Ether(dst=PF mac)/IP/UDP/vni(1000)/Ether(dst=virtIO port1)/IP/UDP
Verify that the packet is received by virtIO port1 and forwarded by virtIO port0:
testpmd> show port stats all
Make sure that the packet received on the PF is the inner packet with its MAC addresses reversed.
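For reference, the two vxlan packets above can be crafted with Scapy as in the sketch below; the interface name and MAC addresses are placeholders for the PF and virtIO port values of the actual setup.
# Scapy sketch of the decap test packets (all addresses are placeholders):
#   Ether(dst=PF mac)/IP/UDP/VXLAN(vni=1000)/Ether(dst=virtIO port)/IP/UDP
from scapy.all import Ether, IP, UDP, sendp
from scapy.layers.vxlan import VXLAN

PF_IFACE = "enp1s0f0"                # tester interface connected to the PF (placeholder)
PF_MAC = "68:05:ca:00:00:01"         # PF MAC address (placeholder)
VIRTIO0_MAC = "00:00:20:00:00:20"    # virtIO port0 MAC (from the qemu command line)
VIRTIO1_MAC = "00:00:20:00:00:21"    # virtIO port1 MAC (from the qemu command line)

def vxlan_pkt(inner_dst_mac):
    return (Ether(dst=PF_MAC) / IP(dst="192.168.1.1") / UDP(dport=4789) /
            VXLAN(vni=1000) /
            Ether(dst=inner_dst_mac) / IP(dst="192.168.2.1") / UDP(dport=1024))

sendp(vxlan_pkt(VIRTIO0_MAC), iface=PF_IFACE, verbose=False)   # first step of this case
sendp(vxlan_pkt(VIRTIO1_MAC), iface=PF_IFACE, verbose=False)   # second step of this case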
Test Case: Vxlan Sample Encap and Decap¶
Start vxlan sample with both encapsulation and decapsulation enabled:
tep_termination -c 0xf -n 3 --socket-mem 2048,2048 -- -p 0x1 \
--udp-port 4789 --nb-devices 2 --filter-type 3 --tx-checksum 0 \
--encap 1 --decap 1
Start the vhost-user test environment as in case vxlan_sample_encap.
Send a vxlan packet to the PF device:
Ether(dst=PF mac)/IP/UDP/vni(1000)/Ether(dst=virtIO port0)/IP/UDP
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1:
testpmd> show port stats all
Verify that an encapsulated packet is received on the PF device. Verify that the inner packet's source and destination MAC addresses have been swapped.
Test Case: Vxlan Sample Checksum¶
Start vxlan sample with checksum offload enabled (encapsulation and decapsulation both enabled):
tep_termination -c 0xf -n 3 --socket-mem 2048,2048 -- -p 0x1 \
--udp-port 4789 --nb-devices 2 --filter-type 3 --tx-checksum 1 \
--encap 1 --decap 1
Start the vhost-user test environment as in case vxlan_sample_encap.
Send a vxlan packet with a wrong inner IP checksum:
Ether(dst = PF mac)/IP/UDP/vni(1000)/Ether(dst = virtIO port0)/IP wrong chksum/UDP
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1:
testpmd> show port stats all
Verify that an encapsulated packet is received on the PF device. Verify that the inner packet's source and destination MAC addresses have been swapped. Verify that the inner packet's IP and UDP checksums were corrected.
Send a vxlan packet with a wrong inner IP checksum:
Ether(dst = PF mac)/IP/UDP/vni(1000)/Ether(dst = virtIO port0)/IP wrong chksum/TCP
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1:
testpmd> show port stats all
Verify that an encapsulated packet is received on the PF device. Verify that the inner packet's source and destination MAC addresses have been swapped. Verify that the inner packet's IP and TCP checksums were corrected.
Send a vxlan packet with a wrong inner IP checksum:
Ether(dst = PF mac)/IP/UDP/vni(1000)/Ether(dst = virtIO port0)/IP wrong chksum/SCTP
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1:
testpmd> show port stats all
Verify that an encapsulated packet is received on the PF device. Verify that the inner packet's source and destination MAC addresses have been swapped. Verify that the inner packet's IP and SCTP checksums were corrected.
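The "wrong chksum" packets above can be generated with Scapy by forcing a bogus inner IP checksum; the sketch below uses placeholder addresses and interface names.
# Scapy sketch (placeholder addresses): force a wrong inner IP checksum so the
# sample's checksum offload has something to correct.
from scapy.all import Ether, IP, UDP, TCP, SCTP, sendp
from scapy.layers.vxlan import VXLAN

PF_IFACE = "enp1s0f0"                # placeholder
PF_MAC = "68:05:ca:00:00:01"         # placeholder
VIRTIO0_MAC = "00:00:20:00:00:20"

def bad_csum_pkt(inner_l4):
    return (Ether(dst=PF_MAC) / IP(dst="192.168.1.1") / UDP(dport=4789) /
            VXLAN(vni=1000) /
            Ether(dst=VIRTIO0_MAC) /
            IP(dst="192.168.2.1", chksum=0x1234) /   # deliberately wrong checksum
            inner_l4)

for inner_l4 in (UDP(dport=1024), TCP(dport=1024), SCTP(dport=1024)):
    sendp(bad_csum_pkt(inner_l4), iface=PF_IFACE, verbose=False)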
Test Case: Vxlan Sample TSO¶
Start vxlan sample with TSO enabled; TX checksum must be enabled as well. Due to a hardware limitation, the TSO segment size must be at least 256:
tep_termination -c 0xf -n 3 --socket-mem 2048,2048 -- -p 0x1 \
--udp-port 4789 --nb-devices 2 --filter-type 3 --tx-checksum 1 \
--encap 1 --decap 1 --tso-segsz 256
Start the vhost-user test environment as in case vxlan_sample_encap.
Send a vxlan packet with 892 bytes of data, so the total length will be 1000:
Ether(dst = PF mac)/IP/UDP/vni(1000)/Ether(dst = virtIO port0)/TCP
Verify that the packet is received by virtIO port0 and forwarded by virtIO port1:
testpmd> show port stats all
Verify that four separate vxlan packets are received on the PF device. Make sure the TCP payload sizes are 256, 256, 256 and 124 bytes.
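A Scapy sketch of the oversized packet follows (placeholder addresses and interface); the 892-byte payload is what should be split into 256/256/256/124-byte TCP segments.
# Scapy sketch (placeholder addresses): inner TCP frame with 892 bytes of payload.
from scapy.all import Ether, IP, UDP, TCP, Raw, sendp
from scapy.layers.vxlan import VXLAN

PF_IFACE = "enp1s0f0"                # placeholder
PF_MAC = "68:05:ca:00:00:01"         # placeholder
VIRTIO0_MAC = "00:00:20:00:00:20"

pkt = (Ether(dst=PF_MAC) / IP(dst="192.168.1.1") / UDP(dport=4789) /
       VXLAN(vni=1000) /
       Ether(dst=VIRTIO0_MAC) / IP(dst="192.168.2.1") /
       TCP(dport=1024) / Raw(b"x" * 892))
sendp(pkt, iface=PF_IFACE, verbose=False)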
Test Case: Vxlan Sample Performance Benchmarking¶
The throughput is measured for the different operations performed by the vxlan sample. "Single" means there is only one flow, forwarded by a single port in the VM. "Two Ports" means there are two flows, forwarded by both ports in the VM.
+----------------+-----------+------+------------+
| Function       | VirtIO    | Mpps | % linerate |
+================+===========+======+============+
| Decap          | Single    |      |            |
+----------------+-----------+------+------------+
| Encap          | Single    |      |            |
+----------------+-----------+------+------------+
| Decap&Encap    | Single    |      |            |
+----------------+-----------+------+------------+
| Checksum       | Single    |      |            |
+----------------+-----------+------+------------+
| Checksum&Decap | Single    |      |            |
+----------------+-----------+------+------------+
| Decap          | Two Ports |      |            |
+----------------+-----------+------+------------+
| Encap          | Two Ports |      |            |
+----------------+-----------+------+------------+
| Decap&Encap    | Two Ports |      |            |
+----------------+-----------+------+------------+
| Checksum       | Two Ports |      |            |
+----------------+-----------+------+------------+
| Checksum&Decap | Two Ports |      |            |
+----------------+-----------+------+------------+
Sample Application Tests: IEEE1588¶
The PTP (Precision Time Protocol) client sample application is a simple example of using the DPDK IEEE1588 API to communicate with a PTP master clock to synchronize the time on the NIC and, optionally, on the Linux system.
Prerequisites¶
Assume one port is connected to the tester and the "linuxptp.x86_64" package has been installed on the tester. The sample should be validated on Fortville, Niantic and i350 NICs.
Test case: ptp client¶
Start ptp server on tester with IEEE 802.3 network transport:
ptp4l -i p785p1 -2 -m
Start ptp client on DUT and wait a few seconds:
./examples/ptpclient/build/ptpclient -c f -n 3 -- -T 0 -p 0x1
Check that the output messages contain the T1, T2, T3 and T4 clock values and that the time difference between master and slave is about 10us on Niantic, 20us on Fortville and 8us on i350.
Test case: update system¶
Reset DUT clock to initial time and make sure system time has been changed:
date -s "1970-01-01 00:00:00"
Capture the DUT and tester board system times:
date +"%s.%N"
Start ptp server on tester with IEEE 802.3 network transport:
ptp4l -i p785p1 -2 -m -S
Start ptp client on DUT and wait a few seconds:
./examples/ptpclient/build/ptpclient -c f -n 3 -- -T 1 -p 0x1
Make sure the DUT system time has been changed to the same as the tester's. Check that the output messages contain the T1, T2, T3 and T4 clock values and that the time difference between master and slave is about 10us on Niantic, 20us on Fortville and 8us on i350.
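A small sketch of the system-time comparison follows; the two file names are placeholders for wherever the date +"%s.%N" output from each board is saved, and the 1-second margin is an assumed sanity threshold rather than a value mandated by this test plan.
# Sketch: compare the timestamps captured on the DUT and the tester.
dut_ts = float(open("dut_time.txt").read())        # `date +"%s.%N"` output from the DUT (placeholder file)
tester_ts = float(open("tester_time.txt").read())  # `date +"%s.%N"` output from the tester (placeholder file)

offset = abs(dut_ts - tester_ts)
print("system time offset: %.6f s" % offset)
assert offset < 1.0, "DUT system time was not updated to match the tester"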
Sample Application Tests: Packet distributor¶
The Packet Distributor library is designed for dynamic load balancing of traffic while supporting single-packet-at-a-time operation. When using this library, the logical cores in use fill several roles:
rx lcore: responsible for receiving packets from different ports and enqueuing them
distributor lcore: responsible for load balancing, i.e. distributing packets
worker lcores: responsible for receiving packets from the distributor and operating on them
tx lcore: responsible for dequeuing packets from the distributor and transmitting them
Test Case: Distributor unit test¶
Start the test application and run the distributor unit test:
test -c f -n 4 -- -i
RTE>>distributor_autotest
Verify that the burst distributor API unit test passed.
Test Case: Distributor performance unit test¶
Start the test application and run the distributor performance unit test:
test -c f -n 4 -- -i
RTE>>distributor_perf_autotest
Compare the CPU cycles for the normal distributor and the burst API.
Verify that the burst distributor API costs far fewer cycles than the normal version.
Test Case: Distributor packet check¶
Start distributor sample with one worker:
distributor_app -c 0x7c -n 4 -- -p 0x1
Send a few packets (fewer than the burst size) with a sequence index indicated in the IP destination address.
Check that the forwarded packets are all in sequence and their content is not changed.
Send a number of packets equal to the burst size with a sequence index.
Check that the forwarded packets are all in sequence and their content is not changed.
Send more packets than the burst size with a sequence index.
Check that the forwarded packets are all in sequence and their content is not changed.
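The sequence check can be scripted with Scapy along the lines of the sketch below; the interface names and the burst size value are placeholders/assumptions, not values defined by this test plan.
# Scapy sketch (placeholder interfaces and burst size): encode a sequence number
# in the last octet of the IP destination address and in the payload, then check
# that the forwarded packets come back in the same order with unchanged content.
import time
from scapy.all import Ether, IP, UDP, Raw, AsyncSniffer, sendp

TX_IFACE = "enp1s0f0"   # tester interface towards the distributor rx port (placeholder)
RX_IFACE = "enp1s0f1"   # tester interface where forwarded packets arrive (placeholder;
                        # if it is the same port as TX_IFACE, filter out the sent copies)
BURST = 32              # assumed burst size for the fewer/equal/more-than-burst steps

pkts = [Ether() / IP(dst="10.0.0.%d" % i) / UDP(dport=1024) / Raw(b"seq%03d" % i)
        for i in range(BURST)]

sniffer = AsyncSniffer(iface=RX_IFACE,
                       lfilter=lambda p: Raw in p and p[Raw].load.startswith(b"seq"),
                       timeout=10)
sniffer.start()
time.sleep(0.5)
sendp(pkts, iface=TX_IFACE, verbose=False)
time.sleep(1)
received = sniffer.stop()

seqs = [int(p[Raw].load[3:6]) for p in received]
assert seqs == sorted(seqs), "forwarded packets arrived out of sequence"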
Test Case: Distributor with workers¶
Start distributor sample with two workers:
distributor_app -c 0xfc -n 4 -- -p 0x1
Send several packets with increasing IP addresses.
Check that the packets are distributed to different workers.
Check that all packets have been sent back from the tx lcore.
Repeat steps 1 to 4 with 4 (0x3fc), 8 (0x3ffc), 16 (0x3ffffc) and 32 (0xffff0003ffffc) workers.
Test case: Distributor with maximum workers¶
Start the distributor sample with 63 (0xeffffffffffffffff0) workers.
Send several packets with increasing IP addresses.
Check that the packets are distributed to different workers.
Check that all packets have been sent back from the tx lcore.
Test Case: Distributor with multiple input ports¶
Start distributor sample with two workers and two ports:
distributor_app -c 0x7c -n 4 -- -p 0x3
Send packets with a sequence index indicated in the UDP port ID.
Check that the forwarded packets are all in sequence and their content is not changed.
Test case: Distributor performance¶
The number of workers is configured through the command-line interface of the application.
The test report should provide the measurements (Mpps and % of line rate) for each action in the lcores as listed in the table below:
+----+---------+------------------+------------------+------------------+------------------+------------------+------------------+
| # |Number of| Throughput Rate | Throughput Rate | Throughput Rate | Throughput Rate | Throughput Rate | Throughput Rate |
| |workers | Rx received | Rx core enqueued | Distributor sent | Tx core dequeued | Tx transmitted | Pkts out |
| | +------------------+------------------+------------------+------------------+------------------+------------------+
| | | mpps | % | mpps | % | mpps | % | mpps | % | mpps | % | mpps | % |
+----+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+
| 1 | 1 | | | | | | | | | | | | |
+----+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+
| 2 | 2 | | | | | | | | | | | | |
+----+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+
| 3 | 3 | | | | | | | | | | | | |
+----+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+
| 4 | 4 | | | | | | | | | | | | |
+----+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+
| 5 | 8 | | | | | | | | | | | | |
+----+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+
| 6 | 16 | | | | | | | | | | | | |
+----+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+
| 7 | 32 | | | | | | | | | | | | |
+----+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+
| 8 | 63 | | | | | | | | | | | | |
+----+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+--------+---------+
Sample Application Tests: Elastic Flow Distributor¶
Description¶
EFD is a distributor library that uses perfect hashing to determine a target/value for a given incoming flow key. It has the following advantages:
1. Because it uses perfect hashing, it does not store the key itself, so lookup performance does not depend on the key size.
2. The target/value can be any arbitrary value, so the system designer and/or operator can better optimize service rates and inter-cluster network traffic locating.
3. Since the storage requirement is much smaller than that of a hash-based flow table (i.e. a better fit for the CPU cache), EFD can scale to millions of flow keys.
4. With the current optimized library implementation, performance is fully scalable with any number of CPU cores.
For more details, please refer to the DPDK online programming guide.
Prerequisites¶
Two ports connected to the packet generator.
The DUT board must be a two-socket system and each CPU must have more than 16 lcores.
Test Case: EFD function unit test¶
Start the test application and run the EFD unit test:
test> efd_autotest
Verify that every function passes the unit test.
Test Case: EFD performance unit test¶
Start the test application and run the EFD performance unit test:
test> efd_perf_autotest
Verify that the lookup and lookup-bulk CPU cycles are reasonable. Verify that increasing the key size causes no significant increase in CPU cycles. Verify that increasing the number of value bits causes no significant increase in CPU cycles. Compared with the cuckoo hash performance results, the lookup cycles should be lower.
Test Case: Load balancer performance based on EFD¶
In the EFD sample, EFD works as a flow-level load balancer: flows are received at a front-end server before being forwarded to the target back-end server for processing. This case measures the performance of flow distribution with different parameters.
Value bits: number of bits of the value stored in the EFD table
Nodes: number of back-end nodes
Entries: number of flows added to the EFD table
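To make the parameters concrete, the toy sketch below models what the table stores conceptually: a mapping from a flow key to a small bounded value identifying a back-end node. It is an illustration only, not the DPDK EFD API, and the hash used is a stand-in for EFD's perfect hashing.
# Conceptual model only, not the DPDK EFD API: an EFD table maps a flow key to a
# small bounded value, here interpreted as the index of a back-end node, so the
# front end can pick a target without storing the key itself.
import hashlib

NODES = 2          # number of back-end nodes (matches the tables below)
VALUE_BITS = 8     # number of bits used to store the target value

def target_node(flow_key: bytes) -> int:
    # Real EFD uses perfect hashing; an ordinary hash is used here only to
    # illustrate the key -> bounded-value mapping.
    value = int.from_bytes(hashlib.sha256(flow_key).digest()[:4], "big")
    return (value & ((1 << VALUE_BITS) - 1)) % NODES

print(target_node(b"10.0.0.1:1024->10.0.0.2:80/udp"))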
+------------+-------+---------+------------+
| Value Bits | Nodes | Entries | Throughput |
+============+=======+=========+============+
| 8          | 2     | 2M      |            |
+------------+-------+---------+------------+
| 16         | 2     | 2M      |            |
+------------+-------+---------+------------+
| 24         | 2     | 2M      |            |
+------------+-------+---------+------------+
| 32         | 2     | 2M      |            |
+------------+-------+---------+------------+
+------------+-------+---------+------------+
| Value Bits | Nodes | Entries | Throughput |
+============+=======+=========+============+
| 8          | 1     | 2M      |            |
+------------+-------+---------+------------+
| 8          | 2     | 2M      |            |
+------------+-------+---------+------------+
| 8          | 3     | 2M      |            |
+------------+-------+---------+------------+
| 8          | 4     | 2M      |            |
+------------+-------+---------+------------+
| 8          | 5     | 2M      |            |
+------------+-------+---------+------------+
| 8          | 6     | 2M      |            |
+------------+-------+---------+------------+
| 8          | 7     | 2M      |            |
+------------+-------+---------+------------+
| 8          | 8     | 2M      |            |
+------------+-------+---------+------------+
+------------+-------+---------+------------+
| Value Bits | Nodes | Entries | Throughput |
+============+=======+=========+============+
| 8          | 2     | 1M      |            |
+------------+-------+---------+------------+
| 8          | 2     | 2M      |            |
+------------+-------+---------+------------+
| 8          | 2     | 4M      |            |
+------------+-------+---------+------------+
| 8          | 2     | 8M      |            |
+------------+-------+---------+------------+
| 8          | 2     | 16M     |            |
+------------+-------+---------+------------+
| 8          | 2     | 32M     |            |
+------------+-------+---------+------------+