Improving The CDN Model With DNS

CNBC | cnbc.com | HQ: Englewood Cliffs, NJ | Founded in 1989 | Industry: Media

Hundreds of millions of people around the world turn to CNBC for business news, breaking news, and up-to-the-minute updates. Like many other large websites, CNBC relies on a content delivery network (CDN), and, dissatisfied with the level of service we were getting, we started looking at whether we could improve on this model.

Our criteria were to:

  • improve response time
  • have better control over web traffic (real-time reporting, change management, and alerting)
  • better utilize our internal data centers and their infrastructure
  • shield users from any troubles at the origin infrastructure
  • be cost effective

After researching the market in 2012, I looked to Dyn, a company I had begun building a relationship with following a 2009 conference. At the time, they had started offering a geolocation load balancing solution powered by an anycast network. The distributed nature of that DNS presence was a key component of what we were trying to achieve: steering users to their geographically closest origin point.

The traffic balancing rules could be very flexible. For example:

  • send 70% of US east coast traffic to origin point A, 30% to a CDN
  • send 50% of US west coast traffic to origin point B, 25% to C, and the rest to a CDN
  • send everything in the European Union to origin point D
  • send everything in Asia to a separate CDN
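
Conceptually, rules like these amount to a per-region table of weighted endpoints. The sketch below is a toy illustration in Python, not Dyn's actual rule format; the region names, endpoint labels, and weights mirror the examples above but are otherwise invented.

```python
import random

# Hypothetical steering table: weighted endpoints per client region (illustrative only).
STEERING_RULES = {
    "us-east": [("origin-a", 70), ("cdn", 30)],
    "us-west": [("origin-b", 50), ("origin-c", 25), ("cdn", 25)],
    "eu":      [("origin-d", 100)],
    "asia":    [("cdn-asia", 100)],
}

def pick_endpoint(region: str) -> str:
    """Choose an endpoint for a request from `region` according to its weights."""
    names, weights = zip(*STEERING_RULES[region])
    return random.choices(names, weights=weights, k=1)[0]

# Roughly 70% of us-east requests land on origin-a, the rest on the CDN.
print(pick_endpoint("us-east"))
```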

Other features of Dyn's Managed DNS service that we liked were automatic monitoring, failover, alerting, and traffic reporting, all available through a flexible and easy-to-use web portal.

A Non-Scientific Explanation Of How Anycast Works

In principle, to direct a user to the geographically closest origin point, one has to have an idea of the user's location. The traditional way of doing that requires some form of database mapping IP addresses to locations. Such databases are widely available and are used in all sorts of products, including geo-targeted ads.
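
For illustration, a database-driven lookup of that kind might look like the sketch below, which assumes MaxMind's geoip2 Python library and a locally downloaded GeoLite2 city database; the file path and client IP are placeholders.

```python
import geoip2.database

# Assumes a GeoLite2-City database has been downloaded locally (path is a placeholder).
with geoip2.database.Reader("/path/to/GeoLite2-City.mmdb") as reader:
    # Look up a placeholder client address (a real public IP would be used in practice).
    response = reader.city("203.0.113.7")
    print(response.country.iso_code)                 # e.g. "US"
    print(response.subdivisions.most_specific.name)  # e.g. "New Jersey"
    print(response.location.latitude, response.location.longitude)
```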

A very different, albeit less granular, way to accomplish the same thing is to use Internet routing (the BGP protocol) to advertise routes to the same IP addresses from multiple points of presence.

For example, let’s imagine one has four DNS clusters, each containing four nodes with IPs of 1.1.1.1, .2, .3, and .4. Each cluster is positioned at a major peering point: one on the US east coast, one on the west coast, one in the EU, and one in Asia. From each location, one advertises (via BGP) the same subnet with the DNS servers on it. Mission accomplished!

Through the magic of routing, users in Asia will have their DNS requests come to one’s DNS servers in Asia, EU to EU and so on. It is easy to see how this implied knowledge of a requestor’s geolocation can now be used to direct their traffic in a certain, location-specific way.
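
A toy model of that behavior, with made-up sites and path costs that stand in for BGP route selection (none of this reflects Dyn's actual network), might look like the following:

```python
# Toy anycast model: the same service address is announced from several sites,
# and each client simply reaches the announcing site with the lowest routing cost.
SERVICE_IP = "1.1.1.1"  # identical address advertised from every site

# Hypothetical path costs (e.g. AS-path lengths) from client regions to each site.
PATH_COSTS = {
    "client-in-new-york":  {"us-east": 1, "us-west": 3, "eu": 3, "asia": 5},
    "client-in-frankfurt": {"us-east": 3, "us-west": 5, "eu": 1, "asia": 4},
    "client-in-tokyo":     {"us-east": 5, "us-west": 3, "eu": 6, "asia": 1},
}

def serving_site(client: str) -> str:
    """Return the site whose announcement of SERVICE_IP is 'closest' to this client."""
    costs = PATH_COSTS[client]
    return min(costs, key=costs.get)

for client in PATH_COSTS:
    print(f"{client} -> {SERVICE_IP} answered by {serving_site(client)}")
```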

We use a low DNS TTL, so that any DNS changes take effect within a reasonably short amount of time.
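
To see what TTL and answer a resolver is currently handing back for the site, one can query the record directly; here is a small sketch using the third-party dnspython library (the values returned will simply be whatever is published at the time):

```python
import dns.resolver  # third-party package: dnspython

# Query the A record for www.cnbc.com and print the TTL and addresses the resolver returns.
answer = dns.resolver.resolve("www.cnbc.com", "A")
print("TTL (seconds):", answer.rrset.ttl)
for record in answer:
    print("A record:", record.address)
```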

Put a different way: when a user wants to visit www.cnbc.com, their browser requests DNS resolution for www.cnbc.com. The request naturally flows to the closest Dyn data center, where the DNS servers have implied awareness of their own location. From that, a DNS server infers that the request is coming from a user in the same geographic area and, based on the set of rules we configured, directs that user to the proper origin point for www.cnbc.com.

For origin points, we’ve chosen our own data centers on the US east and west coasts, each with multiple gigabits of egress capacity.


The Results

  • We were able to shave about 1 second (about 30%!) off page load times, as reported by Keynote.
  • Our CDN traffic has dropped by about 80% as well, along with an 80% reduction in CDN fees.
  • The load on our CMS (Content Management System) infrastructure has dropped by more than 80%, resulting in a positive impact on the overall stability of our CMS environment.
  • We now have a complete, real-time view of traffic down to RPS (Requests Per Second), response time, number of connections, cached responses, and more.
  • We can now report, chart, and alert on traffic parameters, all in real-time.
  • We are better utilizing our own data centers’ capacity.
  • We now have the ability to instantaneously affect our caching rules or load distribution.
  • We still have a CDN configured as an overflow/safety valve, just in case.

The load balancing features of Dyn’s Managed DNS service, along with aiScaler’s proven caching software, have enabled a top-tier financial news website to shave 30% off response time and save money, while gaining better real-time monitoring, reporting, and alerting.

