End-to-End Principle

When a function has to be supported in a networked system, the designer often asks: should it be implemented at the end systems, or within the communication subsystem that interconnects them? The end-to-end argument, or principle, states that the function is properly implemented in the end systems. The communication system itself may provide a partial implementation, but only as a performance enhancement.

The architecture and growth of the Internet were shaped by the end-to-end principle. It kept the Internet simple and allowed features to be added quickly at the end systems. The principle enabled innovation.

More recently, the principle has been criticized or violated to support features such as network caching, differentiated services, or deep packet inspection.


  • Could you explain the end-to-end principle?
    E2E principle and the Internet. Source: Challen 2016.

    Suppose we need to transfer a file from computer A to computer B across a network. Let's assume that the network guarantees correct delivery of the file by way of checksums, retransmissions, and deduplication of packets. Thus, our hypothetical network is full of features but also complex. The problem is that despite such a smart network, the file transfer can still go wrong. The file could get corrupted on B during transfer from buffer to the file system. This implies that end computers still have to do the final checks even if the network has already done them.

    This is the essence of the end-to-end (E2E) argument. A communication system may implement some functions for performance reasons, but it can't guarantee correctness. For efficiency, the communication system may implement such features at minimal cost, but it should avoid trying to achieve high levels of reliability. Reliability and correctness must be left to the end systems.

    In addition, applications may not need features implemented in the communication system. An "open" system would give more control to end systems.
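    The file transfer example above can be sketched in code. This is a minimal illustration, assuming an in-memory transfer; the names `sha256_of` and `receive_file` are hypothetical. The key point is that the receiver recomputes the checksum from the file as stored on disk, not from the network buffer, so corruption during the buffer-to-filesystem copy is also caught.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute SHA-256 over a file's on-disk contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def receive_file(dest: Path, data: bytes, sender_digest: str) -> bool:
    """Write received bytes to disk, then verify end to end.

    The digest is recomputed from the stored file, not the in-memory
    buffer -- this is the final check that the network, however
    reliable, cannot perform on the endpoint's behalf.
    """
    dest.write_bytes(data)
    return sha256_of(dest) == sender_digest

# Usage: the sender computes the digest over the original data;
# the receiver accepts the transfer only if the digests match.
payload = b"hello, end-to-end world"
digest = hashlib.sha256(payload).hexdigest()
dest = Path(tempfile.mkdtemp()) / "received.bin"
ok = receive_file(dest, payload, digest)
```

    Any in-network checksumming remains a useful performance enhancement (it catches errors earlier), but only this last check establishes correctness.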

  • How has the end-to-end principle benefited the Internet?
    Hourglass shape of Internet architecture. Source: Mehani 2010.

    Complexity impedes scaling due to higher OPEX and CAPEX. Thus, the end-to-end principle leads to the simplicity principle. The IP layer is simple, giving the Internet its hourglass-shaped architecture.

    At the network layer we have IP as the dominant protocol. At higher layers, we have many protocols for supporting diverse applications. At lower layers, we have many protocols suited to different physical networks. IP can be said to "hide the detailed differences among these various technologies, and present a uniform service interface to the applications above".

    IP itself is simple and general. It's supported by all routers within the network. Application-level functions are kept at the endpoints. Application developers could therefore innovate without any special support from the network, leading some to call it the generative Internet. The principle has been credited with making the Internet a success.

    Research has also shown that future architectures for the Internet are likely to evolve into the hourglass shape.

  • Does the end-to-end principle prohibit the network from maintaining state?

    End-to-end applications can survive partial network failures because the network maintains only coarse-grained state, while endpoints maintain the essential state. This state is destroyed only when the endpoint itself is destroyed, a property called fate sharing. The fate of endpoints doesn't depend on the network.

    Routing, QoS guarantees, and header compression are some examples where the network may maintain state. However, this state is self-healing: it can be reconstructed even if network topology or activity changes. State maintained within the network must be minimal, and its loss should at most result in a temporary loss of service.

    State maintained in the network may be called soft state, while state maintained at the endpoints, and required for their proper functioning, may be called hard state.
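    The soft-state idea can be illustrated with a small sketch. This is a hypothetical table (the class name `SoftStateTable` and the routing example are illustrative, not from any real router): entries expire unless periodically refreshed, so even if the table is wiped by a crash, it rebuilds itself from the next round of announcements.

```python
import time

class SoftStateTable:
    """Soft state: entries expire unless refreshed. Losing the whole
    table causes only a temporary outage -- endpoints re-announce and
    the state heals itself, as the end-to-end principle expects of
    in-network state."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (value, expiry time)

    def refresh(self, key, value, now=None):
        """Install or renew an entry; called on each announcement."""
        now = time.monotonic() if now is None else now
        self.entries[key] = (value, now + self.ttl)

    def lookup(self, key, now=None):
        """Return the value, or None if missing or expired."""
        now = time.monotonic() if now is None else now
        item = self.entries.get(key)
        if item is None or item[1] < now:
            self.entries.pop(key, None)  # expired entries self-delete
            return None
        return item[0]

# A route learned from a periodic announcement is usable while fresh...
table = SoftStateTable(ttl_seconds=30)
table.refresh("10.0.0.0/8", "next hop: router-a", now=0.0)
route = table.lookup("10.0.0.0/8", now=10.0)
# ...and vanishes on its own if the announcements stop.
stale = table.lookup("10.0.0.0/8", now=60.0)
```

    Hard state, by contrast, would have to be explicitly created and deleted, and its loss inside the network could strand the endpoints that depend on it.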

  • Besides the Internet, where else has the end-to-end principle been applied?

    On the Internet, the end-to-end principle has been applied to reliable delivery, deduplication, in-order delivery, reputation maintenance, security, and fault tolerance. For security, both authenticity and encryption of messages are best done at the endpoints. Doing these within the network would not only compromise security but also complicate key management.

    Its application to file transfer is well known: the checksum should be validated only after the file has been successfully stored to disk. Another example is the EtherType field in the Ethernet frame. Ethernet doesn't interpret this field; to do so would mean that all higher-layer protocols pay the price for a special few.

    In computer architecture, RISC favours simple instructions over the complex ones of CISC. A CISC designer may include complex instructions, but it's hard to anticipate client requirements, and clients may end up with their own specific implementations anyway.

    A case has been made to apply end-to-end principle for data commons, that is, sharing and organizing data for a discipline. Applications can decide how to import, export or analyze data. The core system would only define global identifiers, basic metadata, authenticated access and a configurable data model.

  • What are the essential aspects of end-to-end connectivity?

    RFC 2775 (published in 2000) mentions three aspects:

    • E2E Argument: This is as described by Saltzer et al. in 1981, and what is now called the end-to-end principle.
    • E2E Performance: This concerns both the network and the end systems. Research in this area has suggested some improvements to TCP plus optimized queuing and discard mechanisms in routers. However, this won't help other transport protocols that don't behave like TCP in response to congestion.
    • E2E Address Transparency: A single logical address space was deemed adequate for the early Internet of the 1970s. Packets could flow end to end unaltered and without change of source or destination addresses. RFC 2101 of 1997 analyzed this aspect and concluded that address transparency is no longer maintained in present day Internet. An example of this is Network Address Translation (NAT). Applications that assume address transparency are likely to fail unpredictably.
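    The loss of address transparency through NAT can be sketched as follows. This is a deliberately minimal, hypothetical model (class `Nat`, address `203.0.113.7` are illustrative): the NAT rewrites the source address and port of outbound packets, so the receiver never sees the sender's real address.

```python
class Nat:
    """Minimal NAT sketch: rewrites source address/port of outbound
    packets, breaking end-to-end address transparency."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 40000
        self.table = {}  # (private ip, private port) -> public port

    def outbound(self, packet: dict) -> dict:
        """Translate an outbound packet, reusing an existing mapping
        for the same private (ip, port) pair."""
        src = (packet["src_ip"], packet["src_port"])
        if src not in self.table:
            self.table[src] = self.next_port
            self.next_port += 1
        return {**packet,
                "src_ip": self.public_ip,
                "src_port": self.table[src]}

nat = Nat("203.0.113.7")
pkt = {"src_ip": "192.168.1.10", "src_port": 5000,
       "dst_ip": "198.51.100.1", "dst_port": 80}
out = nat.outbound(pkt)
# An application that embeds its own address in the payload would now
# be advertising an unreachable private 192.168.x.x address -- the
# unpredictable failure mode RFC 2775 warns about.
```

    The translation is invisible to both endpoints, which is exactly why applications that assume address transparency fail in surprising ways.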
  • Are there instances where the end-to-end principle has been violated?
    Violating the end-to-end principle could lead to problems. Source: Kaminski 2019.

    Consider a client-server architecture involving a database write operation at the server. A "smart" server can return an immediate confirmation to the client even though it hasn't completed the database write. If the write fails, the server has to do retries, effectively taking over the client's responsibilities. It gets worse if the server itself fails, since the client thinks the database write actually happened.
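    The end-to-end alternative can be sketched as a small simulation. Everything here is hypothetical (the `Server` class and the failure injection are illustrative): the server acknowledges only after the write is durably committed, and the client owns the retry loop, so a lost ack or a failed commit never leaves the client believing a write succeeded.

```python
import random

class Server:
    """Acks a write only after it is committed, so the client's
    end-to-end check (retrying until it sees the ack) actually means
    what the client thinks it means."""

    def __init__(self, failure_rate: float = 0.5, seed: int = 42):
        self.db = {}
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)  # simulated, repeatable faults

    def write(self, key, value) -> bool:
        if self.rng.random() < self.failure_rate:
            return False          # commit failed: no premature ack
        self.db[key] = value      # commit (durability simulated)
        return True               # ack only after the commit

def client_write(server: Server, key, value, max_retries: int = 10) -> bool:
    """End-to-end retry loop: the client, not a 'smart' server,
    is responsible for knowing whether the write happened."""
    for _ in range(max_retries):
        if server.write(key, value):
            return True
    return False

server = Server()
done = client_write(server, "order-17", "paid")
```

    Compare the "smart" variant in the text: acking before the commit moves the retry burden into the server and silently breaks the client's correctness check when the server dies.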

    Bufferbloat was an interesting problem the Internet faced in the early 2010s. Because memory had become cheap, routers were built with large buffers. They didn't drop packets until these buffers filled up, so endpoints didn't back off early enough. The result was a slower Internet.

    HTTP caching, link-level encryption, SOAP 2.0, Network Address Translation (NAT) and firewalls are other counter-examples.

    Examples where the network gets involved are traffic management, capacity reservation, packet segmentation/reassembly, and multicast routing. But these shouldn't be seen as violating the principle. Likewise, cloud computing doesn't violate the principle. Cloud infrastructure is not part of the communication system. It's actually an endpoint.

  • How is end-to-end principle relevant to the net neutrality debate?

    Net neutrality is about creating a level-playing field for everyone, big or small. It ensures that big companies can't pay for preferential treatment of their content. The network sees and treats all content alike. Without net neutrality, a few companies that own or control online platforms or communication infrastructure become all too powerful. Power therefore moves from the end consumers to the network controlled by a few. This goes against the end-to-end principle.

    Tim Wu coined the term "network neutrality" back in 2002, when he noticed that broadband providers were blocking certain types of services. Network providers might promise services such as blocking spam, viruses or even advertisements. Most users would rather do these on their end systems than lose control.

    Deep Packet Inspection (DPI) is used for QoS, security and even surveillance. It's at odds with net neutrality. With DPI, intermediate nodes look into packet headers and payload. Yet, the end-to-end principle doesn't actually prohibit this.

  • What are the common criticisms of the end-to-end principle?

    One criticism is that the original paper by Saltzer et al. never properly understood the true nature of packet switching, which is stochastic. The paper also confused moving packets through the network (statistical) with non-functional aspects such as confidentiality (computational). It would have been better to model timely arrival of packets to enable successful computation.

    The end-to-end principle never gave end users freedom. Network infrastructure has always been built and controlled for commercial reasons by those who had the means. Therefore, to protect user interests, discussions must involve everyone in the industry.

    Back in 2001, researchers noted new applications and scenarios that end-to-end principle didn't address very well: untrusted endpoints, video streaming, ISP service differentiation, third-parties, and difficulty in configuring home network devices. All of these could benefit with some intelligence in the network.

    In Service-Oriented Architecture (SOA), implementing functions end-to-end can be too costly, and a hop-by-hop approach may work better. Ultimately, the principle shouldn't be applied blindly. In the early days of the Internet, when bandwidth was scarce, HTTP caching made sense even though it violated the end-to-end principle and made HTTP a considerably more complex protocol.



In the 1950s, engineers attempt to design a reliable tape subsystem for reading and writing files to magnetic tape. They fail to create such a system, and ultimately applications take care of checks and recovery. An example of this from the 1970s is the Multics file system: although there's low-level error detection and correction, these don't replace high-level checks.


Frenchman Louis Pouzin designs and develops CYCLADES, a packet switching network. It becomes the first network in which hosts are responsible for reliable delivery of packets, while the network transports datagrams without delivery guarantees. Even the term datagram is coined by Pouzin. CYCLADES inspires the first version of TCP. Meanwhile, D.K. Branstad makes the end-to-end argument with reference to encryption.


What was originally TCP is split into two parts: TCP and IP. Thus, a layered architecture is applied and the functions of each layer become more well defined. On January 1, 1983, TCP/IP becomes the standard protocol for ARPANET, which by now connects 500 sites.


J.H. Saltzer, D.P. Reed and D.D. Clark at the MIT Laboratory for Computer Science present a conference paper titled End-to-end Arguments in System Design. Given a distributed system, the paper gives guidance on where to place protocol functions. For example, end systems should perform recovery, encryption and deduplication. Low-level parts of the network could support them only as performance enhancements. Phil Karn, a well-known Internet contributor, comments years later that this is "the most important network paper ever written".


The IETF publishes RFC 1958 titled Architectural Principles of the Internet, with reference to the end-to-end principle of Saltzer et al. In 2002, this RFC is updated by RFC 3439. Other IETF documents relevant to this discussion are RFC 2775 (2000), and RFC 3724 (2004).


David S. Isenberg, an employee of AT&T, writes an essay titled The Rise of the Stupid Network. He notes that telephone networks were built on the assumption of scarce bandwidth, circuit-switching, and voice-dominated calls. This has led to the creation of Intelligent Network (IN), where the network took on more features. Given the rise of the Internet, Isenberg argues that the time has come for telephone networks to become stupid and allow endpoints to do intelligent things. Tell networks, "Deliver the Bits, Stupid".


  1. Beck, Micah. 2019. "On The Hourglass Model." Communications of the ACM, vol. 62, no. 7, pp. 48-57, July. Accessed 2019-10-04.
  2. Blumenthal, Marjory S and David D. Clark. 2001. "Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World." ACM Transactions on Internet Technology, vol. 1, no. 1, pp. 70-109, August. Accessed 2019-10-04.
  3. Bush, R. and D. Meyer. 2002. "Some Internet Architectural Guidelines and Philosophy." RFC 3439, IETF, December. Accessed 2019-10-06.
  4. Carpenter, B. ed. 1996. "Architectural Principles of the Internet." RFC 1958, IETF, June. Accessed 2019-10-06.
  5. Carpenter, B. 2000. "Internet Transparency." RFC 2775, IETF, February. Accessed 2019-10-06.
  6. Challen, Geoffrey. 2016. "What is the end to end principle?" internet-class, on YouTube, September 01. Accessed 2019-10-04.
  7. Columbia Law School. 2017. "Tim Wu in the Center of the Net Neutrality Debate." News, Columbia Law School, November. Accessed 2019-10-04.
  8. Garfinkel, Simson. 2003. "The End of End-to-End?" MIT Technology Review, July 01. Accessed 2019-10-04.
  9. Geddes, Martin. 2014. "The future of the Internet - a review of the end-to-end argument." March 05. Updated 2014-11-23. Accessed 2019-10-04.
  10. Georgia Institute of Technology. 2011. "How the Internet architecture got its hourglass shape and what that means for the future." Phys.org, August 15. Accessed 2019-10-04.
  11. Goland, Yaron Y. 2005. "End-To-End Confusion – The Changing Meaning of End-To-End in Transport and Application Protocols." November 23. Accessed 2019-10-04.
  12. Grossman, Robert. 2018. "A Proposed End-To-End Principle for Data Commons." Medium, July 06. Accessed 2019-10-04.
  13. History Computer. 2019. "TCP/IP." Accessed 2019-10-06.
  14. IGF. 2019. "End to End Principle in Internet Architecture as a Core Internet Value." IGF Dynamic Coalition on Core Internet Values. Accessed 2019-10-04.
  15. Isenberg, David. 1997. "Rise of the Stupid Network." Entropy Gradient Reversal. Accessed 2019-10-06.
  16. Kaminski, Ted. 2019. "The end-to-end principle in distributed systems." February 27. Accessed 2019-10-04.
  17. Karn, Phil. 2004. "Can the End-to-End Principle Survive?" NANOG, February. Accessed 2019-10-06.
  18. Kempf, J. and R. Austein, eds. 2004. "The Rise of the Middle and the Future of End-to-End: Reflections on the Evolution of the Internet Architecture." RFC 3724, IETF, March. Accessed 2019-10-06.
  19. Manjoo, Farhad. 2017. "The Internet Is Dying. Repealing Net Neutrality Hastens That Death." The New York Times, November 29. Accessed 2019-10-06.
  20. Mehani, Olivier. 2010. "File:Internet-hourglass.svg." Wikimedia Commons, December 13. Accessed 2019-10-04.
  21. Reed, David P. 2010. "End-to-End Arguments: The Internet and Beyond." USENIX Security Symposium, Washington, DC, August 11-13. Accessed 2019-10-04.
  22. Rutkowski, Anthony. 2017. "Weaponizing the Internet Using the 'End-to-end Principle' Myth." CircleID, November 12. Accessed 2019-10-04.
  23. Saltzer, J.H., D.P. Reed, and D.D. Clark. 1981. "End-to-end Arguments in System Design." Proceeding of the 2nd International Conference on Distributed Computing Systems, Paris, France, pp. 509-512, IEEE Computer Society. Accessed 2019-10-04.
  24. Sinha, Amber. 2016. "Deep Packet Inspection: How it Works and its Impact on Privacy." CIS India, December 16. Accessed 2019-10-04.
  25. de Beeck, P. Op. 2002. "A Short History of TCP/IP and the Internet." De Nayer Instituut, September. Accessed 2019-10-06.
  26. isen.com. 2002. "Rise of the Stupid Network." isen.com, August 06. Accessed 2019-10-06.

Further Reading

  1. Saltzer, J.H., D.P. Reed, and D.D. Clark. 1981. "End-to-end Arguments in System Design." Proceeding of the 2nd International Conference on Distributed Computing Systems, Paris, France, pp. 509-512, IEEE Computer Society. Accessed 2019-10-04.
  2. Bärwolff, Matthias. 2010. "End-to-End Arguments in the Internet: Principles, Practices, and Theory." Technische Universität Berlin, October 22. Accessed 2019-10-04.
  3. Isenberg, David S. 1998. "The Dawn of the Stupid Network." ACM Networker, vol. 2, no. 1, pp. 24-31, February/March. Accessed 2019-10-06.
  4. Calvert, Kenneth L., W. Keith Edwards, and Rebecca E. Grinter. 2007. "Moving Toward the Middle: The Case Against the End-to-End Argument in Home Networking." Proceedings of the Sixth ACM Conference on Hot Topics in Networks (HotNets-VI), Atlanta, GA. November 14–15. Accessed 2019-10-04.
  5. Moors, Tim. 2002. "A critical review of “End-to-end arguments in system design”." IEEE International Conference on Communications, April 28 - May 2. Accessed 2019-10-06.
  6. Willinger, Walter and John Doyle. 2002. "Robustness and the Internet: Design and evolution." March 01. Accessed 2019-10-04.

Cite As

Devopedia. 2019. "End-to-End Principle." Version 3, October 7. Accessed 2023-11-12. https://devopedia.org/end-to-end-principle