It’s the End of the Internet As We Know It

In a few weeks, I will be leaving Renesys, a company I have been associated with for over five years. I moved from New Hampshire (where Renesys is headquartered) to Pittsburgh, PA, over the summer, and I’ve decided to work a bit closer to my new home.

Before I go, there is work yet to be done. The Renesys blog has become an important place for Internet engineers, managers, developers and salespeople to seek unbiased information about what is happening on the backbones. I have enjoyed contributing to it over the years, and I have enjoyed watching some of my colleagues (most actively Earl Zmijewski and Martin Brown) take the helm more recently. Before I ride off into the sunset, there are at least two things I’d like to contribute to this forum:

  1. A clear assessment of where we are with this whole Internet project
  2. A good guess about where we’re going

At the end of the next series of posts by me, you should either be very, very worried or convinced that I'm very, very wrong. The Internet is facing a confluence of engineering, financial and policy storms that have some small potential to completely derail it. These tempests have a high likelihood of marking a sharp departure from several characteristics once considered fundamental to the Internet.

If we get through the next five years, I’m sure everything will be fine. Today, I’ll tackle the technology and engineering issues. In my next post, I’ll address financial issues, followed by policy issues. At the end of this torrent of pessimism, I’ll try to point to some plausible ways out of the mess that we have gotten ourselves into.

So, technology and engineering: what’s wrong with the Internet as it is currently engineered and why hasn’t anyone fixed that yet?

The Internet is an amazing success story. Vint Cerf gave a fantastic keynote address at NANOG 44 in which he gave a fairly vivid recounting of how the Internet began and some of the transitions it has experienced since. The Internet grew from four computers, to a few dozen, to a few hundred, to a few thousand. And in each transition, it faced challenges.

Question: How do you keep track of all the machines on the Internet when their names won’t easily fit in a single file?
Answer: The hierarchical Domain Name System (DNS) was born and continues to serve our needs today.
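
To see why hierarchy wins, here's a toy sketch in Python (emphatically not real DNS): each zone keeps track of only its own children, so nobody has to maintain, or even see, the whole flat file.

```python
# Toy hierarchical namespace: each zone knows only its own delegations.
# Addresses are from the RFC 5737 documentation ranges.
ROOT = {
    "com": {"example": {"www": "192.0.2.10", "mail": "192.0.2.25"}},
    "org": {"ietf": {"www": "192.0.2.80"}},
}

def resolve(name: str) -> str:
    """Walk the tree right to left: www.example.com -> com -> example -> www."""
    node = ROOT
    for label in reversed(name.split(".")):
        node = node[label]   # descend one level of delegation per label
    return node

print(resolve("www.example.com"))   # 192.0.2.10
```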

Question: How can we find all the networks when the list keeps getting larger and larger?
Answer: A relatively scalable, global routing system evolved based on the Border Gateway Protocol (BGP4).

Question: We’re running out of IP addresses? What should we do?
Answer: Subdivide more carefully (Classless Interdomain Routing—CIDR), use private addresses for pure-client machines (Network Address Translation—NAT) and make up a new, better protocol that has more addresses in it.
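
Both stopgaps are easy to see in miniature. Here's a quick sketch using Python's standard ipaddress module: CIDR carves a block into prefixes of whatever size actually fits, and NAT parks pure-client machines on private (RFC 1918) space behind a shared public address.

```python
import ipaddress

# CIDR: subdivide a block into smaller prefixes instead of burning a
# whole classful network on a handful of hosts.
block = ipaddress.ip_network("198.51.100.0/24")
for subnet in block.subnets(prefixlen_diff=2):        # four /26s
    print(subnet, "->", subnet.num_addresses, "addresses")

# NAT: clients live on private space that never appears on the backbone.
print(ipaddress.ip_address("10.1.2.3").is_private)       # True
print(ipaddress.ip_address("198.51.100.1").is_private)   # False
```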

Enter IPv6

IPv6 was supposed to solve exactly one problem: the address space shortage. The solutions above (CIDR and NAT) were really meant as stopgap measures to increase the utilization of the four billion IPv4 addresses available while something else was done. Unfortunately for all of us, IPv6 took so long and was implemented so badly that it failed to plausibly solve the only problem it was designed to solve, and it also failed to solve any of the other serious problems that have cropped up in the meantime.
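
To be fair, v6 does bring addresses. The back-of-envelope, in Python:

```python
# The one problem IPv6 was actually built to solve, in two numbers.
ipv4_total = 2 ** 32    # about 4.3 billion addresses, minus reserved space
ipv6_total = 2 ** 128   # about 3.4e38 addresses

print(f"IPv4: {ipv4_total:,}")
print(f"IPv6: {ipv6_total:.2e}")
print(f"v6 addresses per v4 address: {ipv6_total // ipv4_total:.2e}")
```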

There were two huge problems inherent in the IPv6 design process from the start and another huge problem that became apparent during the many years since. The first problem is the kitchen-sink, or second-system, problem. Once the Internet Protocol suite was opened up for redesign, everyone and their mother wanted to shove their favorite feature into the stack. Security? check. Autoconfiguration? check. Multicast? check. Mobility? Sure, with Mobile IPv6! Unlimited options? Yep, got it, with extension headers. Flow label for header-based QoS? Sure, why not. All of this complexity greatly increased the difficulty of implementing the IPv6 protocol stack in operating systems and network hardware. But that wasn't even close to the worst problem.
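
To make the complexity concrete, here's a sketch of just the 40-byte IPv6 fixed header (RFC 8200 layout; the addresses are from the 2001:db8::/32 documentation prefix). Note the flow label baked into the very first word, and the Next Header field that can chain an open-ended series of extension headers that every stack has to know how to walk.

```python
import socket
import struct

def ipv6_header(src: str, dst: str, payload_len: int,
                next_header: int = 6,    # 6 = TCP; or the first of a chain
                flow_label: int = 0, traffic_class: int = 0,
                hop_limit: int = 64) -> bytes:
    """Pack the 40-byte IPv6 fixed header: version (4 bits),
    traffic class (8), flow label (20), then length/next-header/hop-limit,
    then two 128-bit addresses."""
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    return (struct.pack("!IHBB", first_word, payload_len, next_header, hop_limit)
            + socket.inet_pton(socket.AF_INET6, src)
            + socket.inet_pton(socket.AF_INET6, dst))

hdr = ipv6_header("2001:db8::1", "2001:db8::2", payload_len=20)
print(len(hdr), "bytes before a single byte of payload")   # 40
```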

The second problem was far more severe: lack of backwards compatibility. IPv6 is an Internet Protocol in the sense that it was inspired by the existing IPv4. But it is completely and totally non-interoperable with IPv4. From a practical perspective, it's just a different network protocol: like Appletalk or DECNet or IPX/SPX. Computers that speak only IPv6 have no practical way to talk to the rest of the Internet. When IPv6 was being designed, this wasn't that big of a problem. The Internet was still pretty small, and it was thought that everyone would just run both for a while until IPv4 eventually withered away and died. But they took sooooo long to design IPv6 and the Internet grew sooooo fast that by the time v6 was ready, the Internet was too big to care. NAT and CIDR saved the Internet by allowing it to survive the mid-1990s, and by doing so, they pretty much killed any chance of a smooth v6 adoption.
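
The standard workaround in host software is a dual-stack fallback: ask the resolver for everything and try each address in turn. A rough sketch with Python's socket module (real clients layer smarter tricks on top of this); the point is that a v6-only socket simply cannot reach a v4-only host, so hosts keep speaking both.

```python
import socket

def connect_dual_stack(host: str, port: int) -> socket.socket:
    """Try every address the resolver returns, v6 or v4, until one works."""
    last_error = None
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_error = err
    raise last_error or OSError("resolver returned no addresses")

# sock = connect_dual_stack("example.com", 80)
```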

Routing Scaling

And while all that was happening, another problem cropped up: the size and churn of the global routing table. The Internet routing directory is published in a single, flat namespace. Everyone on the Internet either has to receive and carry a full copy of this routing table or has to punt traffic to someone who does. When the Internet was only a few hundred thousand computers and a few thousand networks, it didn’t matter that everyone had to keep an up-to-date copy of that list. Routing protocols made it easy. But the number has now crept to almost 270k networks, which is expensive to store in the kind of fast memory that routers use to quickly forward packets.
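
To see what every default-free router is stuck doing, here's a toy longest-prefix match over a flat table, using Python's ipaddress module. Now scale the table to 270,000 entries and consult it at line rate for every single packet:

```python
import ipaddress

# A flat forwarding table: prefix -> next hop. Real routers hold one
# entry for essentially every prefix announced on the Internet.
FIB = {
    ipaddress.ip_network("198.51.100.0/24"): "peer-A",
    ipaddress.ip_network("198.51.0.0/16"):   "peer-B",
    ipaddress.ip_network("0.0.0.0/0"):       "upstream",   # default route
}

def lookup(dst: str) -> str:
    """Longest-prefix match: the most specific covering prefix wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FIB if addr in net]
    return FIB[max(matches, key=lambda net: net.prefixlen)]

print(lookup("198.51.100.7"))   # peer-A (the /24 beats the /16)
print(lookup("203.0.113.9"))    # upstream (only the default matches)
```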

But worse than the space was the rate of updates. Badly configured networks all over the world spew their pointless routing updates at ridiculous rates (Geoff Huston has some good data on all of this). And when routers can’t keep up, they do not converge—that is to say, they no longer have a correct view of where things are on the Internet.
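
One of the defensive mechanisms, route flap damping (RFC 2439), gives a feel for the churn problem. Here's a toy sketch with illustrative constants (real implementations add reuse-threshold hysteresis and vendor-specific defaults): every flap adds penalty, the penalty decays exponentially, and a route whose penalty is over the line simply gets ignored.

```python
# Toy route flap damping in the spirit of RFC 2439. Constants are
# illustrative, not any vendor's defaults.
PENALTY_PER_FLAP = 1000
SUPPRESS = 2000          # penalty above which the route is ignored
HALF_LIFE = 900.0        # seconds for the penalty to decay by half

def penalty_after(flap_times: list[float], now: float) -> float:
    """Sum of exponentially decayed penalties from every past flap."""
    return sum(PENALTY_PER_FLAP * 0.5 ** ((now - t) / HALF_LIFE)
               for t in flap_times if t <= now)

flaps = [0, 60, 120]     # three flaps in two minutes
for now in (121, 1800, 3600):
    p = penalty_after(flaps, now)
    print(f"t={now:4d}s penalty={p:6.1f} -> "
          f"{'SUPPRESSED' if p >= SUPPRESS else 'usable'}")
```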

Routing right now is analogous to hostnames pre-DNS. It's all centralized, with the only real difference being the automated protocols for updating the information. That one difference has papered over the architectural deficiency for a very long time, but that time is over. Someone is going to have to figure out a plausible way to separate locators from identifiers so that all local routing doesn't have to be global routing in order to be reliable.
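
In the spirit of the locator/identifier split proposals that have been floated (LISP is one), here's a toy sketch of the idea: endpoints keep stable identifiers, a mapping service resolves each identifier to its current locator, and only the locators need entries in the global routing table.

```python
# Toy locator/identifier separation. Identifiers are stable names;
# locators are topological addresses that can change freely.
MAPPING = {
    "host-a.example": "198.51.100.7",    # identifier -> current locator
    "host-b.example": "203.0.113.9",
}

def send(identifier: str, payload: bytes) -> tuple[str, bytes]:
    """Resolve the identifier, then forward toward the locator."""
    locator = MAPPING[identifier]   # a mapping lookup, not a global route
    return locator, payload

# host-b moves to a new network: only the mapping changes, and the
# global routing table never hears about it.
MAPPING["host-b.example"] = "192.0.2.33"
print(send("host-b.example", b"hello"))
```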

The only thing this has to do with IPv6 is that v6 makes this situation much, much worse. It brings nothing to the table to solve the problem, all the while demanding more valuable router memory and CPU resources to run in parallel (v6 routing table entries are twice as big as v4 entries due to the larger address size).

So What?

Blah blah blah technical mumbo jumbo blah blah blah. The Internet is (or will be) fine, right? Probably. But IPv4 addresses are running out (less than two years left, maybe much less) and there's no plausible path forward on the horizon. When we run out of unallocated v4 addresses, people will have a choice: stop acquiring new addresses, start using v6 (although it's not clear how), or buy addresses on the black market from someone else.
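
For a rough sense of that timeline, here's a back-of-envelope with illustrative placeholder figures, not measurements (the real free-pool and allocation-rate numbers move constantly; Geoff Huston's projections track them properly):

```python
# Illustrative runout arithmetic: roughly the shape of the numbers
# being discussed at the time, not actual registry data.
free_pool = 30 * 2 ** 24    # assume ~30 unallocated /8s remain
burn_rate = 200_000_000     # assume ~200M addresses allocated per year

print(f"~{free_pool / burn_rate:.1f} years left at a constant pace")
print("...and the allocation rate keeps accelerating")
```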

What happens in the next 18 months is crucial to the future of this amazing asset that we have created (The Internet, dude. I'm talking about the Internet here! Haven't you been paying attention?). Many of the paths forward enable us to continue doing home banking and watching videos on YouTube, but will mostly or completely destroy the creative infrastructure that allowed those kinds of services to be created in the first place. Next time, I'll talk about plausible solutions to the problems facing us and which ones are, shall we say, sub-optimal.

In particular, this post was almost entirely about the engineering problems facing us. The policy and business problems facing the Internet are at least as large, and they create the environment in which these engineering problems play out. Over the next couple of posts, I'll try to give the engineers in the audience more visibility into the policy stakes. Policy is being made with or without any engineering clue. That's the sad state of affairs right now, but it is not inevitable. I'll try to talk about ways that engineering can provide useful inputs to policy formation and practice. And eventually, of course, we'll have to talk about money.

