DPDK framework with enhancements. Source: Ramia and Jain 2017, slide 17.

Switches, routers, firewalls and other networking devices typically need to process large volumes of packets in real time. Traditionally, efficient packet processing was done using specialized and expensive hardware. Data Plane Development Kit (DPDK) enables us to do this on low-cost commodity hardware. By using commodity hardware, we can also move networking functions to the cloud and run them within virtualized environments. DPDK enables innovation around multi-core CPUs, edge computing, real-time security, NFV/SDN, low-latency applications and more.

DPDK was originally started at Intel. Later it was open sourced and subsequently brought under the guidance of the Linux Foundation. It uses BSD-3-Clause licensing.

DPDK is not without its challenges. Not all NICs are supported. Support for Windows is limited. Being a low-level framework, it requires developers with relevant expertise. Debugging is harder.


  • How did vendors achieve efficient packet processing before DPDK?

    Before DPDK, specialized hardware did efficient packet processing. Such hardware might use custom ASICs, programmable FPGAs or Network Processing Units (NPUs). Sometimes low-level hardware-specific microcode or custom firmware could be present. Packet classification, flow control, TCP/IP processing, encryption/decryption, VLAN tagging, and checksum calculation are examples of tasks that such specialized hardware performed in an optimized manner.

    However, such hardware was expensive to buy and maintain. Upgrades and security patches were time-consuming to apply and needed a full-time network administrator. One solution was to move from specialized hardware to Commercial Off-the-Shelf (COTS) hardware. While this was more cost-effective and easier to maintain, performance suffered. Packets moved from Network Interface Cards (NICs) to the operating system (OS), where they were processed via the OS kernel stack.

    Even with fast NICs, the kernel stack proved a bottleneck. System calls, interrupts, context switches, packet copying and per-packet processing brought down performance.

    DPDK solves the performance problem on COTS hardware. We get efficient packet processing without requiring expensive customized hardware.

  • How does DPDK improve packet processing?
    Traditional packet processing versus DPDK processing. Source: Adapted from Haryachyy 2015.

    DPDK bypasses the kernel and enables fast packet processing in userspace. It's essentially a set of network drivers and libraries. The Environment Abstraction Layer (EAL) abstracts hardware-specific operations from applications. The figure shows how traditional processing with POSIX calls goes through the kernel space before packets reach the application. DPDK short-circuits this path and moves packets directly between the NIC and userspace applications.

    Traditional processing is interrupt driven where the NIC interrupts the kernel when a packet arrives. DPDK uses polling instead and avoids the overhead associated with interrupts. This is performed by a Poll Mode Driver (PMD).

    Another important optimization is zero copy. In traditional networking, packets are copied from socket buffers in kernel space to user space. DPDK avoids this.

    For developers, DPDK's userspace approach is attractive since kernel need not be modified. Any network stack based on DPDK can be optimized for specific applications.

  • What's the packet processing model adopted by DPDK?
    Run-to-completion versus pipeline models in DPDK. Source: Ramia and Jain 2017, slide 20.

    There are broadly two processing models:

    • Run-to-Completion: A CPU core handles receive, processing and transmit of a packet. Multiple cores can be used with each core associated with a dedicated port. However, with Receive Side Scaling (RSS), traffic arriving at a single port can be distributed to multiple cores.
    • Pipeline: Each core is dedicated to a specific workload. For example, one core might handle receive/transmit of packets while other cores handle application processing. Packets are passed between cores via memory rings.

    For single-core multi-CPU deployments, one CPU is assigned to the OS and the other to the DPDK-based application. A less performant variant arises when packets must cross the QPI interconnect between CPUs. For multi-core deployments, we can assign more than one core to each port, with or without hyperthreading.

    Deciding which model to use is not trivial. Considerations include the cycles needed to process each packet, the extent of data exchange across software modules, optimizations specific to certain cores, and code maintainability. Intel VTune Profiler can be used to analyze the efficiency of the pipeline model.

  • What other techniques does DPDK use to improve performance?

    Apart from kernel bypass, polling and zero copy, there are a few more techniques used by DPDK:

    • Processor Affinity: Ties specific processing to specific cores.
    • Huge Pages: Reduces TLB cache misses.
    • Lockless Sync: Queues are managed with the ring library. Enqueue and dequeue operations are lockless.
    • I/O Batch: Process a batch of packets rather than one at a time. This amortizes overheads in accessing the NIC.
    • NUMA Aware: Utilizes NUMA memory for better performance.
    • Cache Alignment: Align structures to 64-byte cache lines.

    These features are used by DPDK components such as EAL, memory manager, buffer manager, queue manager, flow classification, etc.

  • What are some metrics used to evaluate DPDK performance?
    DPDK versus POSIX performance: (a) latency; (b) throughput. Source: Lai et al. 2021, fig. 4.

    Throughput is the most common metric. Often this is quoted in Mbps, Gbps or Mega Packets Per Second (MPPS). When MPPS is used, packet size must be mentioned. At a small packet size of 64 bytes, the packet rate (MPPS) rather than the bit rate becomes the limiting factor. The figure above shows throughput at L2.

    In 2015, per core L3 performance on Linux was 1.1 MPPS. On Intel DPDK, this was 28.5 MPPS. In 2010, DPDK could achieve L3 performance of 55 MPPS with 64-byte packets on an Intel Xeon-based system. This was improved to 255 MPPS (2014) and 347 MPPS (2016).

    Latency is important for low-latency applications. The figure above shows that DPDK reduces latency by a factor of ten on 64-byte packets. Likewise, for real-time applications, jitter is important. One study showed that DPDK reduces jitter from 10μs to 2μs.

  • Does DPDK need a TCP/IP stack to work?

    DPDK doesn't include a TCP/IP stack. If an application requires a userspace networking stack, it can use F-Stack, mTCP, TLDK, Seastar or Accelerated Network Stack (ANS). These typically provide both blocking and non-blocking socket APIs. Some of these are based on the FreeBSD implementation.

    By omitting a networking stack, DPDK doesn't have the inefficiency of a generic implementation. Applications can include networking modules optimized for their use cases. There might also be some use cases where no higher layer (above L2) processing is needed.

  • Who in industry is using DPDK?
    Open source projects based on DPDK. Source: DPDK 2023e.

    Load balancing, flow classification, routing, access control (firewall), and traffic policing are typical uses of DPDK. There's a misconception that DPDK is only for the telecom industry. However, DPDK has been used in cloud environments and enterprises alike. Traffic generators (TRex) and storage applications (SPDK) use DPDK. The figure above lists open-source projects powered by DPDK.

    Network Function Virtualization (NFV) and Software Defined Networking (SDN) are two technologies that leverage DPDK. Open vSwitch (OVS) when ported to DPDK has shown 7x performance improvement.

    In IoT applications, packets are tiny. DPDK reduces latency and allows more such packets to be processed per second.

    In 5G, the User Plane Function (UPF) processes user data packets. Delay, jitter and bandwidth are key performance metrics that need to be met. Some researchers have proposed DPDK for 5G UPF implementation. For deploying UPF at edge networks, DPDK APIs can be used to interface the UPF application (UPF-C) and SmartNICs (UPF-U).

  • What are the challenges with DPDK?

    DPDK requires low-level expertise. Developers must learn DPDK's programming model. They should know how to manage memory, pass packets without copying, and work with multi-core architectures.

    For example, PID namespaces can cause problems with managing fbarray, as can processes that call mmap without specifying addresses. Threads must also be pinned correctly to CPU cores, called thread or core affinity, for consistent performance. DPDK libraries offer developers many implementation choices. Getting these choices wrong can impact performance.

    Since the kernel is bypassed, we lose all the protection, utilities (ifconfig, tcpdump) and protocols (ARP, IPSec) that the Linux kernel provides. Debugging and identifying root causes of networking problems become harder. However, DPDK's tracing library and LTTng may help. A poor implementation can cause other processes/programs to fail.

    Finally, since polling is used instead of interrupts, DPDK causes 100% CPU usage even when only a few packets are being processed.

  • What are some alternatives to DPDK?
    Modern NICs can bifurcate traffic to DPDK or kernel. Source: Bo 2023.

    Faster packet processing via kernel bypass is possible using Snabbswitch, Netmap or StackMap. Like DPDK, these process packets in the userspace. Packets completely bypass the kernel stack. Snabbswitch is written in Lua while DPDK is in C. PacketShader does kernel bypass for GPU-based hardware.

    An alternative approach is to modify the Linux kernel. Examples include eXpress Data Path (XDP) and network stacks based on Remote Direct Memory Access (RDMA). Other efficient tools include packet_mmap (though it doesn't bypass the kernel) and PF_RING (with ZC drivers).

    Cloud providers may not wish to dedicate an entire NIC to a single offloaded application. Solarflare's OpenOnload uses the proprietary EF_VI library to create a "hidden queue" at the NIC that userspace processes can access. This approach is sometimes called a bifurcated driver or queue splitting. A similar approach exists in the virtualization world to avoid the overhead of passing packets from host to VM: fake virtual interfaces allow packets to reach the VM directly. In general, modern NICs can bifurcate traffic to use or skip the kernel stack. SR-IOV and VFIO technologies enable this.



Intel creates DPDK and releases it under the permissive 3-Clause BSD License. This allows developers to modify the source code without requiring them to open source those modifications. Intel's Venky Venkatesan is often called the "Father of DPDK".


DPDK v1.2.3r0 is released. On DPDK Git archives, this is the earliest version available. Release notes are available since v1.8 (December 2014).


ETSI's white paper on NFV is published. This paper mentions DPDK as an enabling technology for NFV and cloud computing. This is attributed to DPDK's poll-mode Ethernet drivers that avoid interrupt-driven processing.


Website DPDK.org is launched and an open source community is formed.


The first DPDK Summit is organized in San Francisco. Similar summits are organized regularly over the following years. Meanwhile in 2014, DPDK comes with multi-vendor CPU and NIC support. This support grows rapidly through 2015.


Since DPDK doesn't offer a built-in TCP/IP stack, other open source projects attempt to provide this on top of DPDK. Examples of this include mTCP (v2 in Apr 2015), ANS (v16.08 in Aug 2016), and F-Stack (v1.11 in Nov 2017).

New horizontal logo of DPDK. Source: DPDK 2015.

DPDK releases new project logos, licensed under CC BY-ND 4.0.

Multi-architecture and multi-vendor support of DPDK. Source: Ramia and Jain 2017, slide 9.

DPDK becomes a project under the guidance of the Linux Foundation, which provides a network platform for the growth of DPDK. By now, many open source projects have already adopted DPDK including MoonGen, mTCP, Ostinato, Lagopus, Fast Data (FD.io), Open vSwitch, OPNFV, and OpenStack.


DPDK v19.05 is released with support for Windows OS. Only a "Hello World" sample application is supported. However, even four years later, the v23.07.0 documentation notes that DPDK for Windows is "a work in progress" and some source files may not compile.


DPDK v20.11 is released. It's claimed to be the biggest and most robust release. It's an LTS release that will be supported for two years. The two biggest contributors for this release are Intel and Nvidia, followed by Huawei and Broadcom.


Intel publishes a technology guide explaining its Data Streaming Accelerator (DSA). While DPDK aims to avoid data copying, sometimes this is unavoidable. Such copying operations can be offloaded to Direct Memory Access (DMA) accelerators such as DSA. The DPDK library that enables this is dmadev, available since DPDK v21.11 (November 2021).


Belkhiri et al. propose instrumenting DPDK libraries and collecting trace information. These traces can be used to debug performance issues and identify root causes. They claim that their approach surpasses what existing tools such as VTune Amplifier (closed source), FlowWatcher-DPDK and DPDKStat can do.


DPDK website shows that the current gold members include AMD, ARM, Ericsson, Intel, Marvell, Microsoft, Nvidia, Red Hat and ZTE. Silver members include 6Wind, Broadcom, Huawei, NXP, and more.


  1. ACL Digital. 2021. "Refuting the Top Misconceptions of DPDK." Blog, ACL Digital, August 6. Accessed 2023-09-09.
  2. ANS. 2019. "Releases." ANS, on GitHub, February 14. Accessed 2023-09-08.
  3. Barbette, T., C. Soldani, and L. Mathy. 2015. "Fast userspace packet processing." ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Oakland, CA, USA, pp. 5-16. doi: 10.1109/ANCS.2015.7110116. Accessed 2023-09-12.
  4. Belkhiri, A., M. Pepin, M. Bly, and M. Dagenais. 2023. "Performance analysis of DPDK-based applications through tracing." J. of Parallel and Dist Computing, vol. 173, pp. 1-19, March. Accessed 2023-09-09.
  5. Bo, C. 2023. "Coroutine Made DPDK Development Easy." Blog, Alibaba Cloud, May 12. Accessed 2023-09-08.
  6. Cascardo. 2015. "Getting the Best of Both Worlds with Queue Splitting (Bifurcated Driver)." Blog, RedHat, October 2. Accessed 2023-09-12.
  7. Chen, W.-E. and C. H. Liu. 2020. "High-performance user plane function (UPF) for the next generation core networks." Special Issue: Intelligent Computing: a Promising Network Computing Paradigm, November 3. Accessed 2023-09-08.
  8. Chen, R. and G. Sun. 2018. "A Survey of Kernel-Bypass Techniques in Network Stack." CSAI '18: Proceedings of the 2018 2nd International Conference on Computer Science and Artificial Intelligence, pp. 474–477, December. doi: 10.1145/3297156.3297242. Accessed 2023-09-08.
  9. Cochinwala, Naveen. 2021. "Difficulties of a DPDK Implementation." Blog, on LinkedIn, January 16. Accessed 2023-09-12.
  10. DPDK. 2014. "DPDK Summit, San Francisco." Events, DPDK, September 8. Accessed 2023-09-08.
  11. DPDK. 2015. "Horizontal log with tag." DPDK, October 1. Accessed 2023-09-08.
  12. DPDK. 2019a. "About DPDK." DPDK, November 15. Accessed 2023-09-08.
  13. DPDK. 2019b. "In Loving Memory: Venky Venkatesan, The Father of DPDK." DPDK, August 16. Accessed 2023-09-08.
  14. DPDK. 2019c. "DPDK Release 19.05." Documentation, DPDK, May. Accessed 2023-09-08.
  15. DPDK. 2020. "DPDK Issues 20.11, Most Robust DPDK Release Ever!" Blog, DPDK, November 30. Accessed 2023-09-08.
  16. DPDK. 2023a. "DPDK Stable: refs." DPDK Git. Accessed 2023-09-08.
  17. DPDK. 2023b. "Release Notes." v23.07.0, Documentation, DPDK, July. Accessed 2023-09-08.
  18. DPDK. 2023c. "Past Events." DPDK. Accessed 2023-09-08.
  19. DPDK. 2023d. "Getting Started Guide for Windows: Introduction." v23.07.0, Documentation, DPDK, July. Accessed 2023-09-08.
  20. DPDK. 2023e. "Ecosystem." DPDK, July 27. Accessed 2023-09-09.
  21. DPDK. 2023f. "Tracing Library." Sec. 6, v23.07.0, Documentation, DPDK, July. Accessed 2023-09-12.
  22. DPDK. 2023g. "Ring Library." Sec. 8, v23.07.0, Documentation, DPDK, July. Accessed 2023-09-12.
  23. DPDK. 2023h. "Poll Mode Driver." Sec. 8, v23.07.0, Documentation, DPDK, July. Accessed 2023-09-14.
  24. ETSI. 2012. "Network Functions Virtualisation." White paper, SDN and OpenFlow World Congress, Darmstadt, Germany, October 22-24. Accessed 2023-09-08.
  25. F-Stack. 2022. "Releases." F-Stack, on GitHub, September 2. Accessed 2023-09-08.
  26. Gaio, G. and G. Scalamera. 2019. "Development of Ethernet based real-time applications in Linux using DPDK." 17th Int. Conf. on Acc. and Large Exp. Physics Control Systems, NY, USA. doi: 10.18429/JACoW-ICALEPCS2019-MOPHA044. Accessed 2023-09-12.
  27. Haryachyy, D. 2015. "Understanding DPDK." SlideShare, February 12. Accessed 2023-09-12.
  28. Intel. 2015. "Introduction to DPDK." Slides, September. Accessed 2023-09-08.
  29. Intel. 2017. "Introduction to the DPDK Packet Framework." Technical paper, Intel. Accessed 2023-09-12.
  30. Intel. 2023. "DPDK Event Device Profiling." Documentation, Intel® VTune™ Profiler Performance Analysis Cookbook, March 10. Accessed 2023-09-12.
  31. Klaff, B. and B. Perlman. 2019. "Using DPDK APIs as the I/F between UPF-C and UPF-U." DPDK, on YouTube, November 22. Accessed 2023-09-08.
  32. Lai, L., G. Ara, T. Cucinotta, K. Kondepu, and L. Valcarenghi. 2021. "Ultra-low Latency NFV Services Using DPDK." IEEE Conference on Network Function Virtualization and Software Defined Networks (NFV-SDN), Heraklion, Greece, pp. 8-14. doi: 10.1109/NFV-SDN53031.2021.9665131. Accessed 2023-09-12.
  33. Majkowski, M. 2015. "Kernel bypass." Blog, Cloudflare, July 9. Accessed 2023-09-12.
  34. NXP. 2017. "DPDK Overview." QorIQ SDK v2.0-1703 Documentation, August 7. Accessed 2023-09-08.
  35. O'Driscoll, T. 2015. "[dpdk-dev] DPDK Logo Release." Email archives, DPDK, October 1. Accessed 2023-09-08.
  36. Ramia, K. B. and D. K. Jain. 2017. "DPDK Architecture and Roadmap Discussion." Slides, DPDK Summit India, April 25-26. Accessed 2023-09-12.
  37. Richardson, B. 2022. "Intel® Data Streaming Accelerator (DSA) - Packet Copy Offload in DPDK with Intel® DSA." Technology guide, v001, Intel, December. Accessed 2023-09-08.
  38. Shukla, M. 2018. "DPDK in 3 Minutes or Less…" Blog, Calsoft Inc., November 20. Accessed 2023-09-08.
  39. Srinivas, R. K. 2020. "Design of high performance Automotive Network simulators — DPDK approach." Medium, February 9. Accessed 2023-09-12.
  40. The Linux Foundation. 2017. "Networking Industry Leaders Join Forces to Expand New Open Source Community to Drive Development of the DPDK Project." Press release, The Linux Foundation, April 3. Accessed 2023-09-08.
  41. Yong, W. 2019. "DPDK & Containers : Challenges + Solutions." Slides, DPDK Summit North America, November 12. Accessed 2023-09-12.
  42. Zhang, H., Z. Chen, and Y. Yuan. 2021. "High-Performance UPF Design Based on DPDK." IEEE 21st International Conference on Communication Technology (ICCT), Tianjin, China, pp. 349-354. doi: 10.1109/ICCT52962.2021.9657903. Accessed 2023-09-08.
  43. mTCP. 2018. "Releases." mTCP, on GitHub, October 8. Accessed 2023-09-08.

Further Reading

  1. DPDK Website
  2. DPDK Open-Source Code
  3. DPDK Programmer's Guide
  4. DPDK Sample Applications

Cite As

Devopedia. 2023. "DPDK." Version 8, September 14. Accessed 2023-09-14. https://devopedia.org/dpdk