At the end of July, Internet Service Providers asked for an en banc review of an earlier judgment that upheld the US FCC’s network neutrality rules. It is frustrating to many to watch this saga continue, because “network neutrality” seems like such an obviously good thing that it’s hard to understand what apart from greed could inspire this fight. As usual, however, the situation on the Internet is more complicated than it might initially seem.
It ought not to be too surprising that the FCC is attempting to protect consumers through regulation. Most of the utilities that reach one’s residence on a pole — electricity, local phone service, cable, and so on — are subject to some sort of regulation. The reason for this is simple: poles and wires through neighbourhoods are natural monopolies. The people who own the poles and the wires are the only ones who have access to the customers. In large swathes of the United States, consumers of Internet service are in the same position as 20th century phone or electricity consumers: they have to use the one or two providers that have the local wires. Wireless is similar, because wireless spectrum is a scarce resource. A provider who gets a part of the radio spectrum automatically excludes everyone else from using that spectrum to serve customers.
But there is trouble here. An electrical network will fail if incompatible technologies are mixed within it without expensive interoperation mechanisms. The historic telephone network was circuit-switched, which meant that a phone call was literally a circuit between the two phones. This helps explain why the Bell system had so many rules about what you could attach to it — an incompatible device really could be bad for the network. But the Internet protocols are designed precisely not to have this problem. The Internet is a network of networks, so the protocols are designed with the assumption that inside your own network you can make your own rules.
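The "network of networks" idea can be made concrete with a toy sketch. Everything below is hypothetical and greatly simplified — the function names and framing formats are invented for illustration — but it shows the key property: each constituent network may use its own incompatible internal conventions, yet the datagram handed across the boundary comes out the other side unchanged.

```python
# Toy illustration (all names and formats hypothetical): two networks
# with incompatible internal framing both carry the same datagram.

def through_token_net(datagram: bytes) -> bytes:
    """One network wraps traffic in its own internal frame format..."""
    frame = b"TKN|" + datagram + b"|TKN"
    # ...and unwraps it at its far edge. Its internal rules are its own
    # business; the datagram is untouched.
    return frame[4:-4]

def through_ether_net(datagram: bytes) -> bytes:
    """A second network with a completely different internal format."""
    frame = b"ETH<" + datagram + b">ETH"
    return frame[4:-4]

# The same datagram traverses both networks despite their
# incompatible internals.
d = b"a datagram"
assert through_ether_net(through_token_net(d)) == d
```

This is, of course, a caricature of encapsulation, but it captures why incompatibility inside one network need not threaten the internetwork as a whole.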
The trouble with network neutrality rules, then, is that they represent some erosion of that principle. First, there’s no doubt that there are some negatives to this sort of regulation. If my ISP makes a deal with television streaming service BigHoop, such that I can get all BigHoop’s content free, that’s a benefit to me as long as I want to watch BigHoop. But second, making rules against that is also, arguably, an attack on how the Internet works, for it means that my ISP is not allowed to operate its network as it wants.
At the same time, the other basic part of the Internet design is that the network is supposed to be “dumb”. The early designers of the Internet identified the tradition of the “smart” network as a barrier to innovation. The phone was a fairly simple device, and the intelligence about how to route a phone call was all in the network. Of course, that meant that changing the way calls could be routed involved making hundreds of millions of dollars of equipment obsolete. By putting all the intelligence out in the end systems, the Internet offered greater opportunity for innovation. If you want to use the latest Internet technology, in principle the only thing that needs to be upgraded is your own computer. And you get to choose when that happens. (This style of thinking is sometimes described as the “end to end principle”. I’ve been a little careless with the details, however, for the purposes of this post.) This approach can sometimes seem a little extreme, because there are some kinds of things on the Internet (such as Internet Performance Management) that are all but impossible to do without putting certain kinds of intelligence into the network. But there is a difference between a network that is basically smart, where the edge consists of mere passive consumers, and a network that is basically dumb, where the edge is the real source of innovation.
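The end-to-end idea sketched above can also be shown in a few lines of toy code. Everything here is hypothetical — the functions stand in for a network and two generations of endpoint software — but it illustrates the point: a dumb network that merely forwards opaque payloads never needs an upgrade when the endpoints deploy a new protocol.

```python
# Toy sketch (all names hypothetical) of a "dumb" network: it forwards
# payloads it does not inspect or interpret, so endpoints can change
# their protocol without any change to the network.

def dumb_network(payload: bytes, destination: str) -> bytes:
    """Forward the payload untouched; a real router looks only at
    addressing information, never at the content."""
    return payload

# Endpoint protocol, version 1: interpret the payload as uppercase text.
def old_endpoint(payload: bytes) -> str:
    return payload.decode("ascii").upper()

# Endpoint protocol, version 2, deployed years later: reversed text.
# Only the endpoints changed; dumb_network is the same code as before.
def new_endpoint(payload: bytes) -> str:
    return payload.decode("ascii")[::-1]

msg = dumb_network(b"hello", destination="peer")
print(old_endpoint(msg))  # HELLO
print(new_endpoint(msg))  # olleh
```

The contrast with the phone network is exactly this: there, the equivalent of `new_endpoint` would have required replacing equipment inside the network itself.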
As a practical matter, the sorts of things that the FCC is trying to regulate are really an effort to push smarts back into the network. Network neutrality is really an insistence that, if you want to provide access to the Internet to your customers, you have to do it in an Internet-like style. Your network is not allowed to play favourites. In other words, the ISP’s network should not be making implicit choices for your consumer network (which might have only one node, like your phone) about what traffic you want to prioritize. And the reason that ISPs in particular are subject to this consideration and others are not is because most of the ISPs have control over a natural monopoly, just like the electrical utility.
That does not mean we should simply accept any regulation when it comes cloaked in the mantle of “network neutrality”. A surprising number of efforts that are called “network neutrality” are in fact attempts to regulate away network operators’ ability to do different things, usually in the mistaken belief that “the Internet” is a monolithic thing and that all network operators ought to act the same way. That kind of network neutrality really would be a threat to the strength of the Internet. But regulations that aim to discourage operators from making too many choices in their network, instead of at the edge, are plainly consistent with the design of the Internet. We already had a smart network. It was the phone system. It is gradually being dismantled in favour of cheaper and more reliable communications over the Internet. It would be a bitter irony if, just as the Internet has proved that the dumb network can supplant the smart one, we allowed ISPs (and their associated media companies) to reconstruct the smart network in order to extract more profit.
As a fellow at Dyn, Andrew Sullivan concentrates on the architecture of Internet systems, working on Internet standards, protocols and policy. In his more than 15 years in the business, he has focused on DNS, Internet standards and internationalization, systems architecture and databases, though not always in that order.