How We Got Our Tattered IoT Insecurity Blanket

This post previously appeared in Network World.

How did the poor security that undermines the cloud, the Internet of Things and the online economy come to pass?

In my last post—Your network, IoT, cloud computing and the future—I introduced a few trends that appear to be shaping the Internet we have today. This post is the first of two that detail my observations on the large-scale security picture on the Internet and what companies, network professionals and individuals need to take into consideration when addressing the new challenges presented by expanding trends such as the cloud and the Internet of Things (IoT).

Today’s installment outlines some fundamental architectural underpinnings of the security vulnerabilities we all face. The next installment will outline some near-term suggestions for things we each might do, as well as suggest some overall architectural moves that may make things safer for all users of the Internet.

Technologies have histories

It is sometimes tempting to imagine that a little more ingenuity, planning or effort could fix the Internet. The trouble is, like any large-scale, long-lived technology, the Internet did not appear out of nothing. Its history shapes its patterns of development.

Given what it has become, it is sometimes hard to remember that the global Internet was once (as Vint Cerf said) an experiment. Today’s Internet is made up of a bunch of internetworked technologies (hence the word “internet”) that were put together in order to see whether it was possible to make a network of networks.

That effort wasn’t supposed to be “the Internet” in the modern sense any more than any other prototype is supposed to be used as the shipping product. We can think of them as experiments in making an internet, not in making the Internet. But, as Cerf noted, the experiment escaped, and so the modern commercial global Internet has some of the properties of those early experiments. (Anyone in computing who has set up an internal prototype that some other department heard about may recognize this pattern.)

Cerf was talking about the decision to use IPv4 addresses, which have turned out to require a disruptive upgrade to IPv6 on an internet where nobody has central control. But he might just as easily have been talking about the security properties of the Internet.

It’s worth remembering that back in the experimental days, the participating internetworks were working under contract with someone (often the U.S. Department of Defense, at least indirectly). As late as the mid-1980s the Defense Communications Agency (which was behind the pre-Internet ARPANET) requested “that each individual with a directory on an ARPANET or MILNET host, who is capable of passing traffic across the DoD Internet, be registered in the NIC WHOIS Database.” We can think of such a network environment as one where everyone knows your name, like a really large episode of the old sitcom Cheers.
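
As a concrete aside, the WHOIS protocol behind that database is still almost comically simple, which itself says something about the trust assumptions of the era. Here is a minimal sketch in Python, per RFC 3912: the client opens a TCP connection to port 43, sends the query, and reads the reply. The server name and query are purely illustrative.

    import socket

    def whois(query, server="whois.iana.org", port=43):
        # WHOIS per RFC 3912: connect to TCP port 43, send the query
        # terminated by CRLF, then read until the server closes.
        with socket.create_connection((server, port), timeout=10) as s:
            s.sendall(query.encode("ascii") + b"\r\n")
            chunks = []
            while True:
                data = s.recv(4096)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(whois("example.com"))

No authentication, no encryption, no rate limiting in the protocol itself: a design that made perfect sense when everyone on the network was already in the database.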

Early experimental internet efforts had an obvious answer to security problems: If someone breaks the rules, throw them out. It is not clear this principle needed to be articulated because everyone’s names, addresses and phone numbers were in a central database related to one of the constituent interconnected networks. Since nearly all the early Internet users were also directly interested in its development, anyone who was thrown off would be faced with not only inconvenience, but possibly ostracism and terminal harm to their career or reputation.

Because those social pressures provided a certain kind of security, there was little incentive to build strong security into the network protocols and systems. Security mostly relied on good behavior, with the pressure to behave coming from others knowing who you were, even though the Internet itself was deliberately designed without much central control.

In 1988, Robert Morris tested the limits of the collegial and relaxed approach to network security by launching a worm. While there remain disputes about what Morris intended, his worm certainly illustrated how networked systems can fail together: common vulnerabilities in individual systems added up to a vulnerability of the internetwork as a whole.

This, in turn, led to widespread efforts to keep systems up to date with security patches. But we failed to tackle the most basic issue: the vulnerability of the hosts themselves. Since Internet connectivity is basically established host to host rather than service to service, a host-level vulnerability is automatically significant.
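
To make that concrete, here is a small sketch in Python of why host-to-host connectivity matters. A TCP endpoint is a (host, port) pair: reachability is granted to the host, and the port merely selects a listening service, so any one vulnerable service exposes the whole host. The localhost target and the port list are illustrative.

    import socket

    def open_ports(host, ports, timeout=0.5):
        # Attempt a TCP connection to each (host, port) pair.
        # Every port that accepts is a service reachable on that host,
        # and the weakest of them sets the host's effective security.
        found = []
        for port in ports:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    found.append(port)
            except OSError:
                pass
        return found

    # Probe only machines you control; scanning other people's hosts is abuse.
    print(open_ports("127.0.0.1", [22, 23, 80, 443, 8080]))

An attacker who can reach a host for one legitimate purpose can, absent a firewall, probe everything else it runs.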

So, the experiment that escaped had inadequate security when it was used for a public internet, never mind for the global Internet. And while the engineers who make the Internet protocols have attempted to address that gap, the fundamental pattern of adding security later has persisted.

The incentives are wrong

So much for how we got here. The question for this series is how we build better networks. To understand “better,” we need to understand where we are.

Because the global Internet is made up of other networks (or little internets) all connecting voluntarily, it is robust. A system that is fully decentralized has no center to fail under attack. But this property comes at a cost, because the security issues within the system are also fully distributed. The overall security of the Internet depends on the individual actions of all the underlying networks, and there is no central authority that can sanction misbehaving networks by cutting them off or levying fines.

Under these circumstances, good security practices rely on the right incentives. But unfortunately, in this case the incentives are not aligned correctly. Device makers that ship products that are secure by default face both increased development costs and increased support costs when consumers try to use the devices. Those costs get passed on to purchasers, who naturally buy the cheaper product without realizing they’re buying a more dangerous one.

And there is a collective action problem: whoever moves first on better security bears higher costs than everyone else, and worse, the real benefits appear only when everyone does it. So nobody moves first, and everyone just tries to defend their own systems.
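
A toy payoff calculation makes the trap visible. All of the numbers below are invented for illustration; the shape of the result is the point: whatever the others do, the first mover pays and may get nothing back.

    # Toy model of the collective action problem (all numbers invented).
    SECURITY_COST = 5    # extra cost one actor pays to build and run secure systems
    SHARED_BENEFIT = 8   # benefit each actor sees, but only if EVERYONE secures

    def payoff(i_secure, everyone_else_secures):
        benefit = SHARED_BENEFIT if (i_secure and everyone_else_secures) else 0
        cost = SECURITY_COST if i_secure else 0
        return benefit - cost

    for me in (True, False):
        for others in (True, False):
            print(f"I secure: {me!s:5}  others secure: {others!s:5}"
                  f"  -> my payoff: {payoff(me, others)}")

    # Securing pays (+3) only once everyone else already has; moving first
    # costs 5 for nothing, so "defend just my own systems" looks rational.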

The question is, what can we do about this? In my next post, I’ll look at some of the things we might do to build better networks.


Whois: Andrew Sullivan

As a fellow at Dyn, Andrew Sullivan concentrates on the architecture of Internet systems, working on Internet standards, protocols and policy. In his more than 15 years in the business, he has focused on DNS, Internet standards and internationalization, systems architecture and databases, though not always in that order.