While we rely on the internet for many things – from infrastructure support to email and business applications to social interaction – its most popular use is as a video-delivery tool. The amount of video consumed daily around the world is astronomical; most estimates put video at nearly 85% of all internet traffic by 2018 (and by then it may be more). And by a large margin, the biggest single source of that video is Netflix, which contributes 37% of all internet traffic, according to Sandvine.
Until recently, the strategy behind Netflix’s content delivery efficiency was left to speculation (though, with Dyn Internet Intelligence tools, mapping global CDNs and cloud instances can be accomplished in a matter of clicks). Researchers at Queen Mary University of London mapped the geographic infrastructure that Netflix uses to handle such massive traffic loads. The results weren’t surprising for a SysAdmin or CIO: Netflix locates its 233 server deployments closest to where the most customers live.
While data centers, cloud locations and CDNs are very different, they operate under many of the same principles. The probability of a better user experience increases when assets are closer to the target end-users.
The problem for many companies, though, is knowing the right places to locate data centers, which cloud locations to leverage and which CDN providers to choose. Furthermore, organizations need to balance optimizing delivery across global infrastructure against finding the best value-to-performance ratio, a task that crosses the boundaries of network planning, network operations and DevOps.
At first glance, finding these answers may seem daunting. An entire research team at a university spent time setting up a test infrastructure, finding this information on Netflix’s network and then publishing the report as if it were the cure to some ancient disease. The reality is, the methodology used and work done by Queen Mary is very impressive, but it is absolutely not the most efficient method today. The growing volatility of the internet, combined with its influence on any digital business, means that companies need this information on demand and can’t rely on prolonged studies to make decisions. As a result, the Internet Performance Management space has emerged. Tools like Dyn’s Internet Intelligence can provide valuable insight for any business and would have eliminated the need for the entire setup used in the study.
Where does a company start when planning for network growth? What questions need to be asked and answered? How do I compare locations, providers and performance without actively engaging every vendor? Where do I place my data centers? What cloud providers do I choose? In which locations? Should I use a global CDN?
Let’s use this scenario: my company is delivering an application for the 2020 Olympics tailored to the local population in Japan. The app will leverage the public cloud (AWS, SoftLayer, Google, Azure, Rackspace, Digital Ocean). I must also have a private data center for privacy-sensitive data, and I want to use a CDN for image delivery.
Where are my users? Let’s target Tokyo.
My developers like using SoftLayer for application development. If we deploy the application in the Tokyo instance, what will global latency to my application be?
Transit latency in Japan is good, but there is a 100ms latency penalty everywhere else in the world for making this choice.
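That 100ms penalty is easy to quantify once you have latency measurements from each user market to each candidate region. A minimal sketch of the comparison, using hypothetical round-trip medians (the markets, regions and numbers below are illustrative, not Dyn measurement data):

```python
def latency_penalty(latency_ms, region):
    """For a chosen deployment region, compute the extra latency each
    user market pays versus its best-case region."""
    penalties = {}
    for market, per_region in latency_ms.items():
        best = min(per_region.values())  # latency if we deployed optimally
        penalties[market] = per_region[region] - best
    return penalties

# Hypothetical median round-trip times (ms) from user markets to regions.
samples = {
    "Tokyo":    {"tokyo": 8,   "us-east": 170, "eu-west": 240},
    "New York": {"tokyo": 180, "us-east": 12,  "eu-west": 85},
    "London":   {"tokyo": 245, "us-east": 80,  "eu-west": 10},
}

penalties = latency_penalty(samples, "tokyo")
# Tokyo users pay no penalty; every other market pays well over 100 ms.
```

The same table, fed with real per-region measurements, makes the "good in Japan, penalized everywhere else" tradeoff concrete before any deployment decision is made.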
I also have part of my application that I want to run in a data center. Which network service providers give me the best connections to my users in Tokyo? Check the rankings to understand who is best connected at a backbone, wholesale or retail level.
NTT is the best-connected wholesale provider for reaching my business partners, but I should connect via Softbank, KDDI, IIJNET and NTT to maximize coverage in country.
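Picking that multi-provider mix is essentially a set-cover decision: keep adding the transit provider that reaches the most still-uncovered eyeball networks. A greedy sketch, with hypothetical per-provider reach data (the network names and coverage sets below are invented for illustration):

```python
def pick_providers(coverage, target):
    """Greedy set cover: repeatedly choose the provider that adds the
    most uncovered networks until the target set is fully covered."""
    chosen, remaining = [], set(target)
    while remaining:
        best = max(coverage, key=lambda p: len(coverage[p] & remaining))
        gained = coverage[best] & remaining
        if not gained:
            break  # target cannot be fully covered by these providers
        chosen.append(best)
        remaining -= gained
    return chosen

# Hypothetical in-country eyeball networks reachable via each provider.
reach = {
    "NTT":      {"net-a", "net-b", "net-c"},
    "KDDI":     {"net-c", "net-d"},
    "Softbank": {"net-d", "net-e", "net-f"},
    "IIJNET":   {"net-b", "net-f", "net-g"},
}
target = {"net-a", "net-b", "net-c", "net-d", "net-e", "net-f", "net-g"}
chosen = pick_providers(reach, target)  # ["NTT", "Softbank", "IIJNET"]
```

Greedy set cover isn’t always optimal, but it is a fast, transparent way to turn provider rankings into a shortlist of connections to buy.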
What about a global CDN? Is there any provider that has trouble delivering in Japan?
The visual below shows a significant number of failures in Tokyo on August 23rd (note the highlighted dot and 50% failure rate), but good performance if you can reach the CDN (38 ms). Comparing CDN performance between providers over time, or simply using the past seven days as a lookback window, is a good starting point for understanding performance and availability.
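A seven-day lookback like this boils down to two numbers per provider: failure rate, and latency for the probes that did succeed. A minimal sketch over hypothetical probe records (the timestamps and values below are fabricated to loosely mirror the 50%/38ms example, not actual measurement data):

```python
from datetime import datetime, timedelta

def lookback_stats(probes, now, days=7):
    """Summarize CDN probes (timestamp, ok, latency_ms) over a lookback
    window: failure rate plus median latency of successful probes."""
    cutoff = now - timedelta(days=days)
    window = [p for p in probes if p[0] >= cutoff]
    failures = sum(1 for _, ok, _ in window if not ok)
    latencies = sorted(ms for _, ok, ms in window if ok)
    med = latencies[len(latencies) // 2] if latencies else None
    return failures / len(window), med

# Hypothetical hourly probes: every other probe fails, successes ~38-42 ms.
now = datetime(2016, 8, 23)
probes = [(now - timedelta(hours=h), h % 2 == 0, 38 + h % 5)
          for h in range(48)]

rate, med = lookback_stats(probes, now)  # 50% failures, ~40 ms when reachable
```

Running the same summary per provider over the same window is what makes an apples-to-apples CDN comparison possible.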
Understanding what a company’s internet footprint looks like today is very straightforward in our Internet Intelligence tool and doesn’t require a team of university researchers. We have already put together the analytics tools and visualizations that represent what is routed on the internet, what peering relationships exist, the locations of domains and performance across almost every available internet path. We complete 500M traces per day, documenting, on average, all of the publicly available internet pathways every 24 seconds. That gives an unprecedented view of what the internet currently looks like, and it also provides a baseline for planning what is most important in a company’s network expansion. The visualizations and scenario above are simple, just to show what is possible, but the complexity of the internet can be explored with the right tools. Preparing your infrastructure to handle that complexity will have a positive impact on your business.
Charlie Baker is VP, Product Management for Dyn, a cloud-based Internet Performance company that helps companies monitor, control, and optimize online infrastructure for an exceptional end-user experience.