FD.io VPP

This is beta VPP documentation; it is not yet complete or accurate!

FD.io Vector Packet Processing (VPP) is a fast, scalable and multi-platform network stack.

FD.io VPP is, at its core, a scalable layer 2-4 network stack. It supports integration into both OpenStack and Kubernetes environments. It supports network management features including configuration, counters and sampling. It supports extension with plugins, as well as tracing and debugging. It supports use cases such as vSwitch, vRouter, Gateways, Firewalls and Load Balancers, to name but a few. Finally, it is useful both as a software development kit and as an out-of-the-box appliance.

Overview

What is VPP?

FD.io’s Vector Packet Processing (VPP) technology is a Fast, Scalable and Deterministic packet processing stack that runs on commodity CPUs. It provides out-of-the-box production-quality switch/router functionality and much, much more. FD.io VPP is, at the same time, an Extensible and Modular framework that is Developer Friendly, capable of bootstrapping the development of packet-processing applications. The benefits of FD.io VPP are its high performance, proven technology, modularity and flexibility, integrations and rich feature set.

FD.io VPP is vector packet processing software. To learn more about what that means, see the What is vector packet processing? section.

For more detailed information on FD.io features, see the following sections:

Packet Processing

  • Layer 2 - 4 Network Stack
    • Fast lookup tables for routes, bridge entries
    • Arbitrary n-tuple classifiers
    • Control Plane, Traffic Management and Overlays
  • Linux and FreeBSD support
    • Wide support for standard Operating System Interfaces such as AF_Packet, Tun/Tap & Netmap.
  • Wide network and cryptographic hardware support with DPDK.
  • Container and Virtualization support
    • Para-virtualized interfaces: Vhost and Virtio
    • Network Adapters over PCI passthrough
    • Native container interfaces: MemIF
  • Universal Data Plane: one code base, for many use cases
  • Out of the box production quality, thanks to CSIT.

For more information, please see Features for the complete list.

Fast, Scalable and Deterministic

  • Continuous integration and system testing
    • Including continuous and extensive latency and throughput testing
  • Layer 2 Cross Connect (L2XC) typically achieves 15+ Mpps per core.
  • Tested to achieve zero packet drops and ~15µs latency.
  • Performance scales linearly with core/thread count
  • Supporting millions of concurrent lookup table entries

Please see Performance for more information.

Developer Friendly

  • Extensive runtime counters: throughput, instructions per cycle, errors, events, etc.
  • Integrated pipeline tracing facilities
  • Multi-language API bindings
  • Integrated command line for debugging (see the example after this list)
  • Fault-tolerant and upgradable
    • Runs as a standard user-space process for fault tolerance; software crashes seldom require more than a process restart.
    • Improved fault-tolerance and upgradability compared to running similar packet processing in the kernel; software updates never require system reboots.
    • Development is easier than for comparable kernel code
    • Hardware isolation and protection (iommu)
    • Hardware isolation and protection (iommu)
  • Built for security
    • Extensive white-box testing
    • Image segment base address randomization
    • Shared-memory segment base address randomization
    • Stack bounds checking
    • Static analysis with Coverity
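
For example, the integrated debug CLI can show runtime counters, capture packet traces and report per-node errors. A brief session might look like the following (dpdk-input assumes packets arrive through a DPDK interface):

    vpp# show runtime
    vpp# trace add dpdk-input 10
    vpp# show trace
    vpp# show errors

show runtime reports per-node statistics such as calls, vectors and clocks per packet; trace add captures the next ten vectors entering the graph at dpdk-input, which show trace then prints as a per-node path for each packet; show errors summarizes per-node error counters.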

Extensible and Modular Design

  • Pluggable, easy to understand & extend
  • Mature graph node architecture
  • Full control to reorganize the pipeline
  • Fast: plugins are equal citizens

Modular, Flexible, and Extensible

The FD.io VPP packet processing pipeline is decomposed into a ‘packet processing graph’. This modular approach means that anyone can ‘plug in’ new graph nodes. This makes VPP easily extensible, and it means that plugins can be customized for specific purposes. VPP is also configurable through its low-level API.

Figure: Extensible and modular graph node architecture.

At runtime, the FD.io VPP platform assembles a vector of packets from RX rings, typically up to 256 packets in a single vector. The packet processing graph is then applied, node by node (including plugins), to the entire packet vector: the network processing represented by each graph node is applied to every packet in the vector before the vector moves on to the next node. Graph nodes are small, modular and loosely coupled, which makes it easy to introduce new graph nodes and rewire existing ones.
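
The sketch below, based on VPP’s vlib API, shows the shape of such a node function: it walks the vector of buffer indices in the frame and applies its processing to each packet. Next-node selection and buffer enqueueing (vlib_get_next_frame/vlib_put_next_frame), error counters and loop unrolling are omitted for brevity, and the node and function names are illustrative.

    #include <vlib/vlib.h>

    /* Minimal sketch of a graph node dispatch function: it receives a
     * frame (vector) of buffer indices and processes every packet in
     * the vector before returning. */
    static uword
    sample_node_fn (vlib_main_t *vm, vlib_node_runtime_t *node,
                    vlib_frame_t *frame)
    {
      u32 *from = vlib_frame_vector_args (frame);
      u32 n_left = frame->n_vectors;

      while (n_left > 0)
        {
          vlib_buffer_t *b = vlib_get_buffer (vm, from[0]);
          /* ... apply this node's processing to buffer b ... */
          (void) b;
          from += 1;
          n_left -= 1;
        }
      return frame->n_vectors;
    }

    /* Registration wires the node into the packet processing graph. */
    VLIB_REGISTER_NODE (sample_node) = {
      .function = sample_node_fn,
      .name = "sample-node",
      .vector_size = sizeof (u32),
    };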

Plugins are shared libraries that VPP loads at runtime. VPP finds plugins by searching the plugin path for libraries, and then dynamically loads each one in turn on startup. A plugin can introduce new graph nodes or rearrange the packet processing graph. You can build a plugin completely independently of the FD.io VPP source tree, which means you can treat it as an independent component.
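
A plugin announces itself with a single registration structure, as in the sketch below, which follows the sample plugin shipped in the VPP source tree; the description string is illustrative. Building the shared library and placing it on the plugin path is enough for VPP to discover and load it at startup.

    #include <vnet/plugin/plugin.h>
    #include <vpp/app/version.h>

    /* VPP dlopen()s each library on the plugin path and looks for this
     * registration; graph nodes defined in the library are then added
     * to the packet processing graph. */
    VLIB_PLUGIN_REGISTER () = {
      .version = VPP_BUILD_VER,
      .description = "Sample plugin",
    };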

Features

  • Devices
  • SDN & Cloud Integrations
  • Traffic Management
  • Layer 2
  • Layer 3
  • Layer 4
  • Tunnels
  • Control Plane
  • Plugins

Devices

Operating System
  • AF_Packet
  • Tun/Tap
  • Netmap
Virtualization
  • SSVM
  • Vhost / VirtIO
Containers
  • Vhost-user
  • MemIF

SDN & Cloud Integrations

Traffic Management

IP Layer Input Checks
  • Source Reverse Path Forwarding
  • Time To Live expiration
  • IP header checksum
  • Layer 2 Length < IP Length
Classifiers
  • Multiple millions of classifier entries - Arbitrary N-tuple
Policers
  • Colour Aware & Token Bucket (a generic token-bucket sketch follows this list)
  • Rounding Closest/Up/Down
  • Limits in PPS/KBPS
  • Types:
    • Single Rate Two Colour
    • Single Rate Three Colour
    • Dual Rate Three Colour
  • Action Triggers
    • Conform
    • Exceed
    • Violate
  • Action Types
    • Drop
    • Transmit
    • Mark-and-transmit
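
The sketch below illustrates the token-bucket technique behind the simplest policer type above, Single Rate Two Colour. It is a generic illustration of the algorithm, not VPP’s actual implementation, and all names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* Single-rate two-colour token bucket: tokens accumulate at the
     * committed information rate up to the burst size; a packet that
     * finds enough tokens conforms, otherwise it exceeds. */
    typedef struct {
      uint64_t tokens;      /* current bucket depth, bytes */
      uint64_t burst_bytes; /* maximum bucket depth, bytes */
      uint64_t cir_Bps;     /* committed rate, bytes per second */
      uint64_t last_ns;     /* last update time, nanoseconds */
    } policer_1r2c_t;

    static bool
    policer_conform (policer_1r2c_t *p, uint32_t pkt_bytes, uint64_t now_ns)
    {
      /* Refill tokens for the elapsed time, capped at the burst size. */
      uint64_t refill = (now_ns - p->last_ns) * p->cir_Bps / 1000000000ull;
      p->last_ns = now_ns;
      p->tokens += refill;
      if (p->tokens > p->burst_bytes)
        p->tokens = p->burst_bytes;

      if (p->tokens >= pkt_bytes)
        {
          p->tokens -= pkt_bytes; /* conform: e.g. transmit */
          return true;
        }
      return false;               /* exceed: e.g. drop or mark */
    }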

Switched Port Analyzer (SPAN)
  • Mirror traffic to another switch port

ACLs
  • Stateful
  • Stateless
COP
MAC/IP Pairing (security feature)

Layer 2

MAC Layer
  • Ethernet
Discovery
  • Cisco Discovery Protocol
  • Link Layer Discovery Protocol (LLDP)
Virtual Private Networks
  • MPLS
    • MPLS-o-Ethernet – Deep label stacks supported
  • Virtual Private LAN Service (VPLS)
  • VLAN
  • Q-in-Q
  • Tag-rewrite (VTR) - push/pop/translate (1:1, 1:2, 2:1, 2:2)
  • Ethernet Flow Point Filtering
  • Layer 2 Cross Connect
Bridging
  • Bridge Domains
  • MAC Learning (50k addresses)
  • Split-horizon group support
  • Flooding
ARP
  • Proxy
  • Termination
Bidirectional Forwarding Detection (BFD)
Integrated Routing and Bridging (IRB)
  • Flexibility to both route and switch between groups of ports.
  • Bridged Virtual Interface (BVI) support, allowing switched traffic to be routed.

Layer 3

IP Layer
  • ICMP
  • IPv4
  • IPv6
  • IPSEC
  • Link Local Addressing
Multicast
  • Multicast FIB
  • IGMP
Virtual Routing and Forwarding (VRF)
  • VRF scaling, thousands of tables.
  • Controlled cross-VRF lookups
Multi-path
  • Equal Cost Multi Path (ECMP)
  • Unequal Cost Multi Path (UCMP)
IPv4
  • ARP
  • ARP Proxy
  • ARP Snooping
IPv6
  • Neighbour Discovery (ND)
  • ND Proxy
  • Router Advertisement
  • Segment Routing
  • Distributed Virtual Routing Resolution
Forwarding Information Base (FIB)
  • Hierarchical FIB
  • Memory efficient
  • Multi-million entry scalable
  • Lockless/concurrent updates
  • Recursive lookups (see the example after this list)
  • Next hop failure detection
  • Shared FIB adjacencies
  • Multicast support
  • MPLS support
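
As an example of recursive lookups, the debug CLI session below installs a route whose next hop is itself resolved through a second, attached route; the interface name is illustrative.

    vpp# ip route add 192.0.2.0/24 via 10.0.0.1
    vpp# ip route add 10.0.0.1/32 via GigabitEthernet0/8/0
    vpp# show ip fib 192.0.2.0/24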

Layer 4

Tunnels

Layer 2
  • L2TP
  • PPP
  • VLAN
Layer 3
  • Mapping of Address and Port with Encapsulation (MAP-E)
  • Lightweight IPv4 over IPv6
    • An Extension to the Dual-Stack Lite Architecture
  • GENEVE
  • VXLAN
Segment Routing
  • IPv6
  • MPLS

Generic Routing Encapsulation (GRE)
  • GRE over IPSEC
  • GRE over IP
  • MPLS
  • NSH

Control Plane

  • DHCP client/proxy
  • DHCPv6 Proxy

Plugins

  • iOAM

Performance

Overview

One of the benefits of FD.io VPP is high performance on relatively low-power computing. This performance is based on the following features:

  • A high-performance user-space network stack designed for commodity hardware.
    • L2, L3 and L4 features and encapsulations.
  • Optimized packet interfaces supporting a multitude of use cases.
    • An integrated vhost-user backend for high speed VM-to-VM connectivity.
    • An integrated memif container backend for high speed Container-to-Container connectivity.
    • An integrated vhost based interface to punt packets to the Linux Kernel.
  • The same optimized code paths run on the host, inside VMs and inside Linux containers.
  • Leverages best-of-breed open source driver technology: DPDK.
  • Tested at scale; linear core scaling, tested with millions of flows and MAC addresses.

These features have been designed to take full advantage of common micro-processor optimization techniques, such as:

  • Reducing cache and TLB misses by processing packets in vectors (illustrated by the sketch after this list).
  • Realizing IPC (instructions per cycle) gains with vector instructions such as SSE, AVX and NEON.
  • Eliminating mode switching, context switches and blocking, to always be doing useful work.
  • Cache-line aligned buffers for cache and memory efficiency.
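
The sketch below shows the prefetching dual-loop pattern commonly used inside graph nodes, assuming VPP’s vlib buffer API: while one pair of packets is processed, the buffers two packets ahead are prefetched, hiding memory latency across the vector. Next-node enqueue logic is omitted and the function name is illustrative.

    #include <vlib/vlib.h>

    static inline void
    process_vector (vlib_main_t *vm, u32 *from, u32 n_left)
    {
      while (n_left >= 4)
        {
          /* Prefetch buffer headers two packets ahead of processing. */
          vlib_prefetch_buffer_with_index (vm, from[2], LOAD);
          vlib_prefetch_buffer_with_index (vm, from[3], LOAD);

          vlib_buffer_t *b0 = vlib_get_buffer (vm, from[0]);
          vlib_buffer_t *b1 = vlib_get_buffer (vm, from[1]);
          /* ... process b0 and b1 ... */
          (void) b0; (void) b1;

          from += 2;
          n_left -= 2;
        }
      while (n_left > 0)
        {
          /* Drain the remaining packets without prefetch. */
          vlib_buffer_t *b0 = vlib_get_buffer (vm, from[0]);
          (void) b0;
          from += 1;
          n_left -= 1;
        }
    }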

Packet Throughput Graphs

These are some of the packet throughput graphs for FD.io VPP 18.04 from the CSIT 18.04 benchmarking report.

L2 Ethernet Switching Throughput Tests

VPP NDR 64B packet throughput in a 1 core, 1 thread setup is presented in the graph below.