
Dyn Research: Global Cloud Adoption in 2013

What is the state of global cloud adoption in 2013? Working with the folks at Endurance International Group, we decided to dig in and take a look, and presented our findings at the MassTLC Cloud Summit. The results will surprise you!

For our study, we looked at two key adoption vectors:

  1. Where is the cloud infrastructure being deployed in the world, and how are users connecting to it?
  2. What types of cloud infrastructure are being used, and how is adoption changing over time?

We’re dividing our research findings into two installments; in this first one, we’ll explore adoption by location.

Global Adoption by Location

Why is the location of your cloud infrastructure important? The further your users are from the infrastructure they are connecting to, the worse their experience. For example, let’s compare three scenarios, each featuring a user in a different part of the world with an average (for that location) broadband connection:

  1. An American user in New Hampshire connecting to infrastructure in Massachusetts (we’ll use the Boston Globe in our hypothetical example) with a 20 Mbps broadband connection.
  2. A Japanese user in Tokyo connecting to that same infrastructure, but this time with a faster connection of 40 Mbps.
  3. A South African user in Cape Town connecting to that same infrastructure, but this time with a 4 Mbps broadband connection.

Intuitively, you might expect the Boston Globe to load faster for the Japanese user than for the American user, since the connection speed is doubled, and somewhat slower for the South African user. Let’s see how this plays out!

Using the great utility Charles Proxy, we’re able to simulate both the bandwidth and latency characteristics of these interactions.

For each scenario (a user in Bedford, NH; in Tokyo, Japan; and in Cape Town, South Africa, each connecting to infrastructure in Boston, MA), we measured the full page load time. The results:

  1. Bedford, NH, to Boston, MA – 20 ms latency and 20 Mbps connection: 3.1 seconds
  2. Tokyo, Japan, to Boston, MA – 250 ms latency and 40 Mbps connection: 4.2 seconds (36% slower than local user to local infrastructure)
  3. Cape Town, South Africa, to Boston, MA – 400 ms latency and 4 Mbps connection: 7.6 seconds (146% slower than local user to local infrastructure)
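
To see why the distant, higher-bandwidth users still lose, a back-of-the-envelope model is enough. The sketch below is not how Charles Proxy measures anything, and the page size and round-trip count are placeholder assumptions; it simply shows how the round-trip term comes to dominate the transfer term as latency grows.

```python
# Simplified page-load model (illustrative only; page_bytes and round_trips are
# assumed placeholder values, not measurements of the Boston Globe's actual page).
def estimate_load_seconds(rtt_ms, bandwidth_mbps, page_bytes=2_000_000, round_trips=12):
    """Time spent waiting on round trips plus raw transfer time."""
    waiting = round_trips * rtt_ms / 1000.0                      # handshakes and requests
    transfer = (page_bytes * 8) / (bandwidth_mbps * 1_000_000)   # moving the bytes
    return waiting + transfer

for place, rtt, mbps in [("Bedford, NH", 20, 20), ("Tokyo", 250, 40), ("Cape Town", 400, 4)]:
    print(f"{place}: ~{estimate_load_seconds(rtt, mbps):.1f} s")
```

Even with double the bandwidth, the Tokyo user pays for every round trip at 250 ms apiece, and that is where the extra seconds come from.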

Examining The Speed Of Light

In addition to a user’s Internet connection speed, the latency, or time it takes for information to make the full network round trip from the user to the destination infrastructure and back again, has a significant impact on the overall page load speed. And while connection speeds and bandwidth are typically functions of economics, the latency is a function of something a little more fundamental: the speed of light.

The speed at which information can be exchanged (in this case, the bits and bytes exchanged for a user to view a website) is governed by the speed of the photons in the fiber-optic cables connecting the user to the infrastructure (an over-simplification, but you get the idea). While we might try to make the paths between locations more direct, we can’t exceed this limit. For example, let’s look at the effect the speed of light has on communication between the east and west coasts of the United States:

In a vacuum, the speed of light is 299,792.458 km/second; in fiber-optic cable it slows to roughly two-thirds of that, about 200,000 km/second. Over a coast-to-coast path of roughly 4,000 km as the crow flies, that works out to a theoretical round trip time of about 40 ms.

In reality, the paths traversed are not direct, and the equipment responsible for enabling this communication (e.g., switches, routers, and repeaters) adds its own latency, giving us a more realistic coast-to-coast round trip time of about 90 ms.
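
As a sanity check, here is the arithmetic behind those two numbers (the ~4,000 km coast-to-coast distance and the two-thirds-of-c fiber speed are round-number assumptions):

```python
# Geometric lower bound on coast-to-coast round trip time. The distance and fiber
# speed are round-number assumptions; real paths are longer and equipment adds delay.
C_VACUUM_KM_S = 299_792.458
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3          # light in glass travels at roughly 2/3 c
COAST_TO_COAST_KM = 4_000                     # approximate great-circle distance

rtt_ms = 2 * COAST_TO_COAST_KM / C_FIBER_KM_S * 1000
print(f"Theoretical fiber RTT floor: {rtt_ms:.0f} ms")   # ~40 ms; observed is closer to 90 ms
```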

Since we can’t yet break this law of physics, what can we do to improve the experience of our users? After all, latency has a direct impact on the bottom line: Amazon found every 100 ms of latency cost them 1% of sales, and Google found an extra 500 ms in search page generation time dropped traffic by 20%.

What we can do is deploy our infrastructure closer to where users are in the world. And this has never been easier or more economical, thanks to the proliferation of cloud infrastructure.

Historically, if we wanted infrastructure close to where users were in the world, we would have to deploy more physical sites, deal with local customs, local business practices, and logistics, and manage a sprawling infrastructure composed of many vendors. Now, we have many options:


  * Firehost: 4 global cloud locations
  * Rackspace: 6 global cloud locations
  * Softlayer: 17 global cloud locations
  * AWS: 31 global cloud locations

With any of these cloud providers, we can deploy global infrastructure at the click of a button, without the historical logistical hurdles of physical infrastructure. In addition to cloud infrastructure-as-a-service providers, many folks are able to leverage content delivery networks and application delivery networks to better battle the effects of network latency.

But the question remains: how many people are taking advantage of these capabilities and connecting to cloud infrastructure nearby?

To answer these questions, we joined forces with Jim Salem from Endurance International Group (EIG), one of the world’s largest web hosting companies, and took a deep dive through our anonymized authoritative DNS query logs. After all, with Dyn powering DNS for 12% of the Alexa 500, we have a great vantage point on how users connect to infrastructure.
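
The query logs themselves aren’t public, but the shape of the analysis is simple to sketch. The example below is a minimal illustration, not our actual pipeline: it assumes a hypothetical log of (resolver IP, answer IP) pairs and uses MaxMind’s geoip2 library with a GeoLite2 database for the continent lookups.

```python
from collections import Counter
import geoip2.database  # MaxMind GeoIP2 reader; assumes a GeoLite2 database is on disk

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

def continent_of(ip):
    """Map an IP address to a continent code such as 'NA', 'EU', or 'AS'."""
    return reader.country(ip).continent.code

def local_vs_remote(query_log):
    """query_log: iterable of (resolver_ip, answer_ip) pairs, one per DNS query
    (hypothetical format). Tallies, per user continent, how often the answering
    infrastructure sits on the same continent ('local') versus elsewhere ('remote')."""
    tallies = Counter()
    for resolver_ip, answer_ip in query_log:
        user, infra = continent_of(resolver_ip), continent_of(answer_ip)
        tallies[(user, "local" if user == infra else "remote")] += 1
    return tallies
```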

First, let’s take a look at the global distribution of our DNS traffic.

Clearly, both Dyn and EIG have a strong North American audience for their online properties with enough of a global audience to warrant an analysis. Between 45% and 50% of the traffic has an origin outside of North America.

For these users, how many are connecting to infrastructure hosted in the same general locale (i.e., how many are getting an experience close to the first scenario we showed above)?

More than 98% of the traffic that originated in North America also terminated in North America. But in the rest of the world, a different story starts to emerge. In Europe and Asia, a little more than 30% of the traffic terminates near its origin, meaning the majority of users are connecting to distant infrastructure and getting a sub-par experience. And for folks in Oceania, South America, and Africa, the situation is even more dire.

The next question is what portion of the observed traffic is single-homed to a continent (meaning all global users connect to infrastructure on a single continent) versus multi-homed across two, three, four, or even five continents. Below, we can see that the majority of traffic is single-homed, but 44% of the traffic is load balanced across two or more continents. This is great news, and the trend is toward more globally load balanced traffic over time.
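
Measuring homing is a small extension of the hypothetical sketch above: count how many distinct continents each property’s DNS answers land on.

```python
from collections import Counter, defaultdict

def homing_histogram(query_log):
    """query_log: iterable of (hostname, answer_ip) pairs (hypothetical format).
    Returns a histogram of how many properties are served from 1, 2, 3, ... continents,
    reusing continent_of() from the earlier sketch."""
    continents = defaultdict(set)
    for hostname, answer_ip in query_log:
        continents[hostname].add(continent_of(answer_ip))
    return Counter(len(c) for c in continents.values())
```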

How about traffic that is sent to a cloud infrastructure provider? In each part of the world, what percentage of traffic is sent to one of the major cloud providers, and how have those traffic patterns evolved between Q2 2013 and Q3 2013?

Now we start to see a strong growth story in the global reach of cloud infrastructure. In the three months from Q2 to Q3 2013, every geography showed substantial gains in the percentage of traffic sent to a major cloud provider. Which providers drove that growth in each location? Users in Africa, Asia, and South America saw significant growth in the percentage of their traffic routed to cloud infrastructure providers, and from a vendor perspective, Amazon Web Services and Rackspace saw the strongest growth in traffic share.
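
That per-provider breakdown can be approximated in the same way, assuming you can enumerate each provider’s address ranges (AWS, for example, publishes its prefixes). Again, this is a sketch of the approach, not our exact methodology:

```python
import ipaddress
from collections import Counter

def provider_of(answer_ip, cloud_ranges):
    """cloud_ranges: {provider_name: [ipaddress networks]} built from published ranges.
    Returns the matching provider name, or None for non-cloud infrastructure."""
    addr = ipaddress.ip_address(answer_ip)
    for provider, networks in cloud_ranges.items():
        if any(addr in net for net in networks):
            return provider
    return None

def cloud_share(query_log, cloud_ranges):
    """query_log: (user_continent, answer_ip, quarter) triples (hypothetical format).
    Returns the share of queries answered by a known cloud provider, per continent/quarter."""
    hits, totals = Counter(), Counter()
    for continent, answer_ip, quarter in query_log:
        totals[(continent, quarter)] += 1
        if provider_of(answer_ip, cloud_ranges) is not None:
            hits[(continent, quarter)] += 1
    return {key: hits[key] / totals[key] for key in totals}
```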

Overall, we’re seeing strong growth in international cloud adoption, which over time will help deliver a better and faster user experience for more people in the world. But we still have a long way to go, and we’ll keep publishing our findings as we continue to power this incredibly important part of the Internet.

In our next installment, we’ll take a look at the second half of our study on what types of cloud infrastructure are being used, and how adoption is changing over time.

