Moving to Hybrid
As enterprises move to provide diversity in their DNS infrastructure, many have added cloud services, such as Dyn, alongside existing on-premises infrastructure. This mirrors the Hybrid Cloud model, which is increasingly popular in other layers of the stack. So why hybrid for GLB?
Just as we learned when we originally added multi-provider DNS to the zone, having multiple providers within the delegation allows for greater performance and resiliency. This is especially important in the event of a DDoS attack, as absorbing the full force of an attack directly can be devastating for anyone running DNS themselves. This post will get into some of the differences between the two types of GLB, how to get them to harmonize, and ultimately ways to execute a hybrid cloud architecture for GLB.
Pick the Right Tool for the Job
Whenever you move to a multi-DNS scenario, it is important that the zone data, and therefore the answers it produces, be identical across your entire NS set. This is because recursive resolvers will send traffic across the full delegation until they develop a preference based on performance. It is therefore key that a query be handled exactly the same way by both the cloud and hardware versions of the GSLB zone.
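For example, the apex delegation for a zone served by both an on-premises deployment and a cloud provider might look like the fragment below. The nameserver hostnames here are illustrative, not a prescription; the point is that every server in the set must serve identical zone data.

```dns
; Delegation for example.com at the parent, spanning both providers.
; Both sets of nameservers must return the same answers.
example.com.    86400  IN  NS  ns1.onprem.example.com.   ; self-hosted / DDI appliance
example.com.    86400  IN  NS  ns2.onprem.example.com.
example.com.    86400  IN  NS  ns1.p01.dynect.net.       ; cloud DNS provider
example.com.    86400  IN  NS  ns2.p01.dynect.net.
```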
But that’s where we hit a little conundrum. Hardware tends to have access to information that cloud does not, namely the real-time state of assets. At the same time, cloud can have a different view of system health, since it observes from outside the network. Lastly, whenever you run multiple providers, hardware or cloud, the usable functionality tends to conform to the least common denominator, even though each solution offers unique features. What to do?
Some activities make a lot of sense to run within the DNS response: basic health monitoring to determine whether something is up or down, Round Robin Load Balancing (RRLB), and geographic targeting. Other things, like closed-loop load balancing and session persistence, tend to be hard to do with DNS LB, or are not widely supported across systems. The common practice is to keep your DNS response behavior focused on connecting the user to the best local LB, meaning one that is responding to requests and is geographically close to the user, and then use the local load balancer to get the user to the “correct” server. In other words, use DNS for the rough cut, and use the HTTP connection for the fine finish. This lets each service do what it is good at, without serious competition for the same function. These functions are also generally available across hardware and cloud providers, so diversifying both won’t be as hard as with a bespoke solution.
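To make the “DNS for the rough cut” idea concrete, here is a minimal Python sketch of the selection logic described above: filter out members that fail health checks, sort the survivors by distance to the client, then pick among the closest with weighted round robin. The pool, its field names, and the coordinates are all hypothetical for illustration, not any vendor’s API.

```python
import math
import random

# Hypothetical server pool: health state, RRLB weight, and location per member.
POOL = [
    {"ip": "192.0.2.10", "healthy": True,  "weight": 2, "lat": 40.7, "lon": -74.0},   # US East
    {"ip": "192.0.2.20", "healthy": True,  "weight": 1, "lat": 51.5, "lon": -0.1},    # London
    {"ip": "192.0.2.30", "healthy": False, "weight": 1, "lat": 35.7, "lon": 139.7},   # Tokyo (down)
]

def geo_distance(lat1, lon1, lat2, lon2):
    """Rough great-circle distance in km (haversine)."""
    r = 6371
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def answer(client_lat, client_lon, n_closest=2):
    """Health filter -> geo sort -> weighted round robin among the closest members."""
    up = [m for m in POOL if m["healthy"]]  # drop members failing health checks
    up.sort(key=lambda m: geo_distance(client_lat, client_lon, m["lat"], m["lon"]))
    nearby = up[:n_closest]
    # Weighted round robin: choose proportionally to configured weight.
    return random.choices(nearby, weights=[m["weight"] for m in nearby], k=1)[0]["ip"]
```

A client in New York would always be steered to the US East member with `n_closest=1`, and never to the unhealthy Tokyo member; everything after that (session persistence, connection-level balancing) is left to the local load balancer.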
Taking Baby Steps
As enterprises look across the DNS portfolio to plan the implementation, things tend to fall into three buckets:
- Critical revenue-producing or brand-important domains (often under 100 zones) served via something like a Unix box running BIND or a DDI appliance.
- A large group of non-critical domains (often 3,000-10,000 zones for a large enterprise) for things like defensive registrations, microsites, leftovers from acquisitions, and SEO farm sites. These have probably accumulated naturally over the years, and have maybe even gone through a few phases of culling. They could be resolved all over the place, or in one central spot alongside the critical ones.
- Sub-delegations from throughout the domain portfolio to hardware load balancers (100-1000 hostnames) enabling features like health-aware failover, weighted round robin load balancing, or geo-targeting.
These groups tend to align with a logical 1-2-3 phase approach. Phases 1 and 2 have been covered in great detail before, but the thesis is that cloud-based DNS provides key resiliency and a safety net for your existing environment. It can be set up as a secondary zone configuration, allowing workflows to remain unchanged while updates are sent to the cloud DNS service. You enter the cloud-based DNS into your delegation and take advantage of the full depth of a global anycast network.
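As a sketch of that secondary-zone arrangement on a BIND-based primary, the configuration might look like the fragment below. The transfer-server IPs are placeholders; your cloud provider supplies the real ones.

```
// On the on-premises primary: let the cloud provider pull the zone
// via AXFR/IXFR, and notify it whenever the zone changes.
zone "example.com" {
    type master;
    file "zones/example.com.db";
    allow-transfer { 203.0.113.10; 203.0.113.11; };  // provider transfer servers
    also-notify    { 203.0.113.10; 203.0.113.11; };
    notify yes;
};
```

Because the cloud service is just another secondary, existing update workflows on the primary stay exactly as they were.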
A Hard Status Quo
The meat of today’s article is the final phase of coverage: introducing a cloud provider alongside the hardware solution for the delegated sub-domains. Every deployment will be a little different, but in general, enterprises seem to want to manage the majority of their domain portfolio on a DDI appliance. And this makes sense: it keeps things simple. Simple means easy. I like easy.
From there, DNS architects will go one of two primary routes: either a single management domain with CNAMEs, or an independent delegation for every single subdomain using GSLB services. The latter means that if you have 1000 hosts using GSLB, you may well have 1000 zones. It will look something like this:
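In zone-data terms, a rough sketch of the two routes (the GTM hostnames are hypothetical):

```dns
; Route A: one delegated management zone; each GSLB host CNAMEs into it
gslb.example.com.   IN  NS     gtm1.example.com.   ; hardware GSLB
gslb.example.com.   IN  NS     gtm2.example.com.
www.example.com.    IN  CNAME  www.example.com.gslb.example.com.

; Route B: an independent delegation for every GSLB hostname
www.example.com.    IN  NS     gtm1.example.com.
shop.example.com.   IN  NS     gtm1.example.com.
```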
Brother from another Mother
In either example, the traffic initially has cloud coverage, but an attack aimed directly at the hostname answered by the hardware load balancer (attacking “www.example.com.gslb.example.com” in example A, or “www.example.com” in example B) would fall entirely on the hardware itself, leaving the enterprise to mitigate it alone. This is obviously far from ideal, so the goal is to add the cloud service to the delegation. To do that, you need to replicate the DNS response behavior of the hardware GSLB. As long as you keep the DNS LB layer to geo-targeting and RRLB and push all the session handling downstream, that should be easy, and your vendors can help ensure the setup is appropriate.
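Concretely, once the behavior is replicated, widening the delegation might leave the management zone’s NS set looking like this (hostnames again illustrative):

```dns
; Delegation of the GSLB management zone after adding the cloud provider
gslb.example.com.   IN  NS  gtm1.example.com.      ; hardware GSLB (on-prem)
gslb.example.com.   IN  NS  gtm2.example.com.
gslb.example.com.   IN  NS  ns1.p01.dynect.net.    ; cloud GSLB, answering identically
gslb.example.com.   IN  NS  ns2.p01.dynect.net.
```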
Once the cloud service and the hardware LB are responding the same way, the cloud service may be added to the delegation of the specific management zone. Here is a visual depiction of what the environment would look like:
And there we go: you now have full coverage for your entire domain portfolio within the DNS. This solution borrows heavily from the techniques used to run DNS across multiple providers. The only difference is that in a hybrid DNS setup, the other provider is your own systems. In the new age of high-impact DDoS attacks on the DNS, there is no reason you have to do this alone.