DNS Security: Your Questions Answered

Last week, we hosted our third webinar on DNS security, with January’s session focusing on how to move your infrastructure to the cloud.

We had quite a few questions and thought they would be of interest to the general security-loving public. If you have more, please ask them anytime! Be sure to check out our additional DNS security webinars for even more on the subject.

And with that, let’s get to some DNS security Q&A!

Chris asks:

“Can you elaborate more on host based security for a business with a private cloud that also leverages the public cloud for scalability? Were you only referencing locking down SSH? IP tables can get tricky when you jump between an environment with NAT routing for WAN communication vs the public cloud like Rackspace which uses direct routing (excluding their load balancers).”

Great question! To be truly portable, you obviously need to be able to control more than just SSH access. For example, if I’m running a database with a web server in front of it, followed by multiple load balancers in front of that, I obviously need to secure not only administrative access, but access between each of the systems as well.

Have a look at CloudPassage’s Halo product. (As a disclaimer, I was involved with the product’s development.) With that said, it does some pretty cool things at the host-based firewall level. You can block access to administrative ports until a user passes two-factor authentication. The system is also smart enough to detect when the user and/or server is sitting behind a NAT device.

It also has some useful group management capabilities. For example, you can create a firewall rule that states “let the front end web servers talk to the database servers on the SQL port”. Once this rule is in place, configuring the firewall on a web server is as simple as adding it to the web server group. Halo takes care of setting all of the appropriate security policies, including the firewall rules. Another product you may want to consider is Dome9, although in my opinion, its firewall capability is not as extensive.
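To make the group idea concrete, here is a minimal, hypothetical Python sketch of group-based policy generation. This is not Halo’s actual API; the addresses, group membership, and SQL port are placeholders. The point is that the policy is written once, and extending it is just a matter of adding a host to the right group and regenerating the rules.

```python
# Hypothetical sketch of a group-based firewall policy, not Halo's actual API.
# The policy is expressed once ("web servers may reach database servers on the
# SQL port") and rules are regenerated whenever group membership changes.

WEB_SERVERS = ["10.0.1.10", "10.0.1.11"]   # placeholder addresses
DB_SERVERS = ["10.0.2.20"]
SQL_PORT = 3306                            # MySQL; adjust for your database


def rules_for_db_server(db_ip):
    """Build iptables-style INPUT rules for a single database host."""
    rules = [
        f"iptables -A INPUT -p tcp -s {web_ip} -d {db_ip} --dport {SQL_PORT} -j ACCEPT"
        for web_ip in WEB_SERVERS
    ]
    # Anything else hitting the SQL port is dropped.
    rules.append(f"iptables -A INPUT -p tcp -d {db_ip} --dport {SQL_PORT} -j DROP")
    return rules


if __name__ == "__main__":
    # Add a new address to WEB_SERVERS and re-run to extend the policy --
    # that regeneration step is the convenience the group model provides.
    for db in DB_SERVERS:
        print("\n".join(rules_for_db_server(db)))
```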

Bill asks:

“Some of our clients in certain verticals (pharma, finance, etc.) with specific security concerns tell us they are less likely to use a public cloud in order to protect their sensitive data.

Are you finding the same concerns with companies like those? And if so, do you see that changing over time?”

When we talk about data sensitivity, the two terms to keep in mind are “responsibility” and “liability.” If your company manages sensitive data, you are ultimately responsible for ensuring that it is handled appropriately. You can, however, “outsource” some of the liability associated with that responsibility by working with appropriate vendors. We’ve seen precedent for this in the past.

For example, let’s say you accept credit cards and therefore need to maintain a PCI compliant environment. Let’s further assume that you do not process your own credit cards, but outsource that function to a credit card processing company. Historically, so long as you have ensured that the credit card processor is up to date on its own PCI attestations, you are not exposed to liability if that processing company gets compromised. In other words, you’ve reduced your risk profile by outsourcing a portion of the process to an appropriate vendor.

The same argument can be made for a PCI compliant cloud provider. So while some folks are leery because public cloud is still a relatively new technology, I expect we’ll see those concerns settle out over time.

Will asks:

“DNS companies are very reliant upon being able to migrate and manage their IP space. Some of the larger IaaS providers don’t allow you to announce your own space.

Short of creating and managing your own cloud, how can Dyn and other DNS providers still utilize external services while being able to continue announcing IP space as necessary?”

This is obviously a problem that Dyn has run into firsthand. For me, it’s a great example of “one size does not fit all needs”. Along with the announcement issues, I would also add in the fact that a sustained DDoS attack (which we see quite frequently) has the potential to dramatically run up operating costs. These are two of the major reasons why Dyn is not moving its DNS infrastructure into public cloud space at this time. We feel far more confident with our current model of using dedicated data centers.

With that said, Dyn obviously does far more data processing than just name server queries. Some of these other processing functions are far better suited to operating in a public cloud. This provides an indirect benefit to the non-outsourced functions as well, since it frees up capacity within our data centers. It also simplifies budget planning, as you are only scheduling hardware purchases for the services that need them.

Emerson asks:

“From someone that is not a techie but oversees a large internal server group that is having a lot of growing pains, what would you recommend as the best way to start with moving some services to the cloud that are dependent on a very large & centralized database?”

What I recommend to a lot of companies is to start by moving their staging environment first. This gives you a model of your production environment to work with. You get the best of both worlds in that you can perform some serious testing while still being tolerant of latency or performance failures.

It also has minimal impact on workflow, as you would need to run through testing within your staging environment anyway. If this testing goes well, you can consider moving your production environment. If the testing shows your solution simply will not perform as required, you can choose to continue staging in public space (thus reclaiming old staging resources for production) or return to your original architecture model.

Client asks:

“How does Docker fit into this approach?”

Docker is an open source project designed to provide a portable runtime environment. The concept is that you can develop your application, package it with Docker, and have an environment capable of executing on everything from your laptop to a public cloud.

While VM environments within IaaS clouds have offered the promise of portability, the reality has been that the tools required to perform such migrations have been severely lacking. Docker looks to solve this problem by removing the need for such tools. This is still a relatively new project, and some may argue it is not yet ready for production, but it offers some interesting possibilities and is certainly worth watching.
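As a rough illustration of that workflow, here is a minimal Python sketch that drives the Docker CLI (assumed to be installed and on the PATH). The image tag and build directory are placeholders; the takeaway is that the same packaged image runs on a laptop or on a public cloud VM.

```python
# Minimal sketch of the package-once, run-anywhere Docker workflow.
# Assumes the Docker CLI is installed; image tag and directory are placeholders.
import subprocess

IMAGE = "myapp:latest"   # hypothetical image tag
BUILD_DIR = "."          # directory containing the application and its Dockerfile

# Package the application into an image...
subprocess.run(["docker", "build", "-t", IMAGE, BUILD_DIR], check=True)

# ...then run that exact image, whether locally or on a public cloud host.
subprocess.run(["docker", "run", "--rm", IMAGE], check=True)
```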

Randolph asks:

“This report from OpenSSL.org is troubling. Comments?”

I agree it is troubling. From what I understand, the hosting provider was running a virtualization environment that provided console access to hosted VMs. This was the conduit that was leveraged to gain access to the VM in question. During my talk, I spoke about the dangers of introspection in a public IaaS environment. This is similar, but in my opinion not quite the same, as in this case accessing the VM still left a fingerprint trail on the VM itself, thus facilitating forensics.

With that said, I would personally consider it an increase to my risk model if my public provider maintained console access, an agent on the VM that provided them access, or some form of backdoor account. Each of these situations provides a central point of security failure with the provider. Some tenants prefer these services, as it permits the provider to gain access to the VM if you’ve locked yourself out.

Personally, I would prefer a clean hypervisor with no provider access. I’ll happily deal with the risk of locking myself out of my own VM in order to mitigate this potential risk exposure.

Szladek asks:

“If introspection allows a service provider to wipe data off my VMs, would you say that it’s not actually such a good thing to pull off?”

I’m not as concerned about a provider wiping data off of my VM, as that changes the state of the VM in such a way that I know I can no longer trust its integrity. My concern is with eavesdropping: a situation where the provider, or some malicious party that has gained access to the hypervisor, quietly records information from disk and/or memory. This could lead to the compromise of password or key information in such a way that I would be unaware of the elevated risk.

Fernando asks:

“Is it better to have all of your data in the cloud?”

You really need to consider this on a case-by-case basis. As I mentioned earlier, sometimes it makes more sense to run things on a private infrastructure versus a public infrastructure. Of course you could ask the question, “Should that private infrastructure be cloud based?” Personally, I lean more often toward yes than no on this question. Even if it’s a situation where performance requirements are high and I’ll only be able to run a single VM on a given hardware platform, I still get the portability and configuration benefits I discussed during my talk.

Bryan asks:

“I’ve found that when we experimented with moving services to the cloud, outages at the cloud provider weren’t transparent (we couldn’t determine if the problem was ours or theirs) and their SLAs weren’t as good as ours, so the least common denominator was worse uptime.

When do you think we’ll be able to see deeper into public cloud infrastructure that will give us the same ability to troubleshoot outages as we have in our own data centers?”

I’ve seen two efforts spin up regarding this topic. One is called “CloudAudit” and the other is the “CloudTrust Protocol”. Both efforts are being facilitated via the Cloud Security Alliance. To date, it appears they have not received much traction. To be honest, I think this is the natural evolution of any emerging technology. As tenants, we are first going to focus on cost and scale. It is not until a good portion of the community is relying on these services that attention shifts to uptime. Personally, I think we are still a year or two off from this being a core issue. Once it is, I look forward to seeing the above two projects revitalized.

John asks:

“What would be the minimum bandwidth? I haven’t set up a cloud service yet. I would like to offer it to my clients; however, most of my clients have approx 10+ PCs, and ATT DSL is generally limited to 2Mbps download, .5Mbps or less upload… any thoughts on this?”

Bandwidth requirements are obviously going to vary with load. If we are talking about regular administrative duties, you would probably be fine on bandwidth. If you are talking about graphic editing, you would probably be better off using an internal infrastructure. Geek tip: try using “netstat” on the local servers to get an idea of bandwidth requirements. This will at least get you in the ballpark.
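If you want something a bit more direct than eyeballing netstat output, here is a rough Python sketch (assuming the third-party psutil package is installed) that samples interface byte counters over a short window to estimate current throughput:

```python
# Rough bandwidth estimate from interface byte counters.
# Requires the third-party psutil package (pip install psutil).
import time
import psutil

INTERVAL = 10  # seconds to sample; run during a representative workload

before = psutil.net_io_counters()
time.sleep(INTERVAL)
after = psutil.net_io_counters()

upload_mbps = (after.bytes_sent - before.bytes_sent) * 8 / INTERVAL / 1_000_000
download_mbps = (after.bytes_recv - before.bytes_recv) * 8 / INTERVAL / 1_000_000
print(f"upload: {upload_mbps:.2f} Mbps, download: {download_mbps:.2f} Mbps")
```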

Roidane asks:

“What are your thoughts on hosting your data in a cloud that is in another country?”

This is more of a legal question than a technical one. You need to consider the sensitivity of the data along with any legal restrictions. For example, have a look at the “US EU Safe Harbor” directive. It provides some great guidance on moving data between these two regions. You should also take a look at the cryptolaw.org website, as it can help identify restrictions on data privacy as you move data across country borders.

Irfan asks:

“Which is the best SaaS? Is cloud environment secure?”

“Best SaaS” is difficult to answer as SaaS solutions tend to be application-specific. For example, it would be difficult to compare and contrast Dyn and Salesforce. Even though both are SaaS companies, they provide completely different solutions. When selecting a SaaS company, first evaluate which ones fit your functional needs. You can then evaluate this subset to see which best meet your security needs.

Asking if cloud is secure is like asking if a Mustang is a fast car. “Secure” and “fast” are relative terms, so they are greatly dependent on what you are comparing them to and the yardstick you are using. A better question would be, “Is this specific cloud deployment capable of mitigating security risks to an acceptable level for my organization?” Obviously the answer is going to vary depending on the cloud solution, the sensitivity of the data, and the security requirements of the tenant.

John asks:

“What kind of Internet bandwidth is required for cloud services?”

This is going to vary depending on the services you are using. For example, if you are using Gmail and Google Drive, bandwidth requirements tend to be low but relatively consistent. If you are using box.net for file storage, there will be periods of no bandwidth usage with bursts of high utilization. Your best bet may be to run some tests on a single system and monitor network utilization. That will get you in the ballpark for your organization’s specific requirements.

Juan asks:

“Could Dyn provide cloud VM machines too? In that case, do they manage security or that would be my job as an administrator? Thanks!”

Dyn is a SaaS company, so we offer specific services rather than general purpose VMs for public consumption. Sorry if there was any confusion; I did speak a lot about IaaS since it’s quite popular. The purpose of the talk was to convey helpful security information, not to make a sales pitch, so much of it fell outside the scope of what Dyn offers as products.

Borislav asks:

“What size are the companies you have in mind when you are speaking about these technologies?”

Quite honestly, the cloud models I discussed can be applied to organizations of any size. Smaller companies tend to adopt new solutions earlier, as they can be more agile and have less of an investment in existing infrastructure. However, large organizations are seeing the benefits of a cloud infrastructure and are moving in this direction as well.

For example, FedRAMP allows the US government to standardize the consumption and deployment of cloud technology. The US government obviously maintains a rather large network. They offer more details about FedRAMP on the gsa.gov website.

Ross asks:

“In your slide showing the delineation between Tenant and Provider in each deployment mode (IaaS, SaaS, and PaaS), are any of the deployment modes inherently more secure?”

There is nothing in the architecture of each model that gives it a clear win over the others. It really depends on how it is deployed and the organizations that are involved. For example, both Amazon EC2 and FireHost are IaaS providers. Amazon leaves it to you to supply most of the security tools you need while FireHost bundles them in. So while both are public IaaS providers, you could easily argue that one provides a more secure environment than the other.

Now try and compare two vendors using different models (like IaaS and SaaS) and it becomes even more dependent on external factors. For example, many folks go with public SaaS as they decide that a company specializing in a specific service can do a better job of supporting it and securing it than they can do internally. Conversely, if you have a staff of 100+ security professionals, you may decide the opposite is true.

Dwayne asks:

“I must say, first of all, wonderful presentation; you provided some insightful information and advice. I have a question, however, about IaaS specifically that I hope you could answer:

What, based on your knowledge and experience, would be the top priority when addressing risks for a data center IaaS and co-location project, and why (security concerns)?”

Great question!

This would depend on whether you are talking about using a public IaaS provider, or simply looking to apply a private IaaS cloud to an existing data center infrastructure. I’ll assume the former as it’s the most difficult. 😉

The first step for me would be to assess the value and risk exposure of the data I plan to process. Company private computational data will have an entirely different risk level than, say, data that falls under PCI or HIPAA requirements.

Next, I would evaluate the IaaS provider. Are they SOC 2 compliant? PCI DSS? Do they retain backdoor access to my VMs? This can help guide me in identifying how much effort has been put into securing their infrastructure. The more they bring to the table, the less I may feel I need to do myself.

I would then look at how much risk exposure I’m willing to live with, and let that drive what I feel I need to mitigate. From there I start making architecture choices as to which security solutions I will leverage to appropriately secure the setup.

 

Thanks again for all of the questions! If you have any more, tweet at Dyn with the hashtag #DNSSecurity or post on our Facebook page.



Whois: Chris Brenton

Chris Brenton is the Senior Director of Information Systems at Dyn, a cloud-based Internet Performance company that helps companies monitor, control, and optimize online infrastructure for an exceptional end-user experience. Follow him on Twitter at @Chris_Brenton and @Dyn.