
The Blue Security Fiasco

I have been getting a lot of press attention for the recent fiasco regarding the denial of service attack suffered by anti-spam company Blue Security. Now Blue Security has issued a purported timeline to describe what happened from their point of view. (You can tell someone is hostile or annoyed when they use words like ‘purported’ :-).

The timeline from Blue Security (BS, as it’s such a great acronym in American English) is frustratingly vague. It uses phrases like ‘tampering with the Internet backbone using a technique called “Blackhole Filtering”.’ As Thomas Pogge, a philosophy professor of mine, used to say: that’s not even wrong yet. There is no “Internet backbone”, there is no technique known as “Blackhole Filtering”, and blackhole routing is not normally described as tampering. So the whole explanation is nonsense. It is literally non-sense: it cannot be made to refer to, or mean, anything. I don’t actually care whether BS knowingly redirected a DOS at the Six Apart sites or not (although I’m sure that BS and its lawyers do). What I care about is that millions of angry netizens are being miseducated about how the Internet works. In the following, I’ll try to correct some of that miseducation.

Let’s clear one thing up for the press and everyone else: this event just wasn’t that interesting. The attack against Blue Security was a run-of-the-mill denial of service attack. That’s actually one of the funny aspects of this story. As pointed out over at the LOOSE wire blog, Eran Reshef, CEO of BS, is a co-founder of Skybox Security, a company that focuses on helping companies simulate and survive Distributed Denial of Service (DDOS) attacks. Now Reshef shows up as someone who doesn’t really understand much about Internet routing or DDOS attacks. So neither of the two possibilities is good: either Skybox was founded by someone with little understanding of its core market dynamics, or BS’s CEO is currently dissimulating about the DDOS that they just suffered.

Interestingly enough, there was just a massive spam-run starting yesterday promoting Skybox Security. Either Skybox are now evil spammers themselves, or this is the most sophisticated reputational joe-job I’ve ever seen. The theory would be that spammers are aware of the association between Reshef and Skybox and sent out spam promoting Skybox just to make BS look bad. It doesn’t sound very plausible, but I’d like to hear someone from Skybox comment on this.

So let’s reconstruct the timeline, from a routing perspective, with evidence from non-BS sources. All times UTC/GMT (I don’t really care about the difference between GMT and UTC).

  1. 2006-May-02 02:00 – a DDOS starts against, the address for
  2. 2006-May-02 23:20 – BS changes the DNS A-record (Address record) for to point to
  3. 2006-May-03 00:00 – Six Apart sees a large DOS pointed at the servers serving
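The pivotal step in that timeline is step 2, the A-record change. It's worth seeing mechanically why repointing a DNS record can drag a hostname-keyed flood onto a third party. Here is a minimal sketch, with toy resolution and entirely made-up hostnames and documentation-range addresses (nothing here is the actual BS or Six Apart infrastructure):

```python
# Hypothetical sketch: why changing an A record can redirect a
# hostname-driven flood. The DNS table, hostname, and addresses
# below are illustrative placeholders only.

def resolve(dns_table, hostname):
    """Toy DNS lookup: return the current A record for a hostname."""
    return dns_table[hostname]

def bot_targets(dns_table, hostname, n_bots):
    """Each bot re-resolves the hostname and floods whatever IP it gets."""
    return [resolve(dns_table, hostname) for _ in range(n_bots)]

dns = {"www.antispam.example": "192.0.2.10"}   # original server

# Before the change: all hostname-keyed traffic hits the original IP.
before = bot_targets(dns, "www.antispam.example", 3)
print(before)

# The victim repoints the A record at someone else's server...
dns["www.antispam.example"] = "203.0.113.99"   # e.g. a hosted blog

# ...and any attack traffic that re-resolves the hostname follows it.
after = bot_targets(dns, "www.antispam.example", 3)
print(after)
```

The caveat, of course, is that attack traffic aimed directly at the old IP address does not move; only traffic whose tooling re-resolves the hostname follows the record.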

I’m purposefully ignoring everything but the DOS against the BS website at and the subsequent DOS against Six Apart. Let’s look at this one, simple series of events. I know item 1 from reliable unnamed sources who work for these mysterious “Internet backbone” companies that Reshef keeps referring to. They confirm that a fairly sizable DOS started at that address at around that time. The DOS was a syn-flood that peaked at around 1.3 million packets per second. This is not huge, but it is certainly big enough to be noticed, especially for an Israeli carrier who is paying big bucks to transport bits from New York and Europe to Israel. More on that shortly.
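To put 1.3 million packets per second in perspective, a little back-of-the-envelope arithmetic helps. A minimal SYN is about 40 bytes at layer 3 (20-byte IP header plus 20-byte TCP header, no options); on the wire, Ethernet's minimum frame plus preamble and inter-frame gap push each packet to roughly 84 bytes. These packet sizes are assumptions about a typical syn-flood, not measurements from this attack:

```python
# Back-of-the-envelope conversion of a SYN-flood packet rate into
# bandwidth. Packet sizes are typical-case assumptions, not data
# from the actual attack.

PPS = 1_300_000     # observed peak packet rate
L3_BYTES = 40       # minimal IP + TCP SYN, no options
WIRE_BYTES = 84     # 64-byte minimum Ethernet frame + preamble + IFG

l3_mbps = PPS * L3_BYTES * 8 / 1e6
wire_mbps = PPS * WIRE_BYTES * 8 / 1e6

print(f"layer-3 rate: {l3_mbps:.0f} Mb/s")    # ~416 Mb/s
print(f"wire rate:    {wire_mbps:.0f} Mb/s")  # ~874 Mb/s
```

Roughly half a gigabit of layer-3 traffic, approaching a gigabit on the wire: not record-setting, but very real money for a carrier hauling it across expensive intercontinental links.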

Now, let’s take BS at their word and assume that they saw no such DOS heading at their corporate webservers (even though it was known to exist in other places). Given that the DDOS existed and given that it was not reaching BS, it must have been stopped before it got to them. In order to understand where it may have been stopped we have to look at how that IP address was connected to the Internet at the time all of this took place – we need a sort of Internet time machine. Luckily, Renesys has one of those and it makes this kind of investigation trivial.

So let’s take a look at routing. is and was routed out of There is no more specific in the global tables. So we need to look at who carried that route and how it is routed. is originated by AS1680, Netvision, a company that appears to be based in Haifa, Israel. Given what we know of BS, this makes sense. Netvision has a number of upstreams:

  • Beyond The Network (BTN) AS 3491
  • UUnet/Verizon Business AS 701
  • UUnet/Verizon Europe EMEA AS 702
  • TeliaNet Global Network AS 1299
  • and maybe Global Crossing AS 3549
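The "no more specific in the global tables" point above matters because routers forward on the most specific matching prefix. Since only the aggregate was announced, every network's routing decision for the victim's address rode on that one route. A minimal longest-prefix-match sketch, using documentation-range prefixes rather than the actual Netvision block:

```python
import ipaddress

# Sketch of longest-prefix match. The prefixes are illustrative
# documentation addresses, not the real routed block.

table = {
    ipaddress.ip_network("192.0.2.0/24"): "aggregate via AS1680",
}

def lookup(addr: str):
    """Return the entry for the most specific prefix covering addr."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in table if ip in net]
    if not matches:
        return None
    return table[max(matches, key=lambda n: n.prefixlen)]

print(lookup("192.0.2.10"))   # falls under the aggregate

# A hypothetical more-specific announcement would win for its addresses:
table[ipaddress.ip_network("192.0.2.0/25")] = "more-specific announcement"
print(lookup("192.0.2.10"))
```

Since no such more-specific existed here, we only need to ask which upstreams carried the aggregate and who preferred which path.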

That means that theoretically, any of these providers could have provided the best path for the rest of the Internet to reach this network, this host. Many people hear the term ‘best’ path and think it means ‘global best path’ but this is not how the Internet works. Every network on the Internet determines its own best path – there is rarely only one. So, it is worthwhile looking at the distribution of paths selected by Renesys’s peers (think of them as probes in a sensor network). This will give us a fairly good indication of how the rest of the Internet selected paths.

  • 16% – UUnet North America AS 701
  • 19% – UUnet Europe AS 702
  • 61% – Beyond the Network (BTN) AS 3491
  • 2% – Global Crossing AS 3549

This means that BTN is the primary inbound provider for this netblock, followed by UUnet Europe and UUnet North America. Global Crossing is a distant last (and seen by few enough people to possibly not even be a valid transit path). Telia is never selected at all for this route.
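A distribution like the one above is simple to compute: each peer reports its best AS path toward the prefix, and the last transit AS before the origin (AS1680) identifies which upstream that peer's traffic would use. A sketch with fabricated paths (the real computation runs over hundreds of peer tables):

```python
from collections import Counter

# Tally which upstream each peer's best path uses to reach the
# origin AS. The AS paths below are fabricated for illustration.

peer_best_paths = [
    (7018, 3491, 1680),   # this peer routes via BTN (AS3491)
    (3356, 3491, 1680),
    (6453, 3491, 1680),
    (2914, 702, 1680),    # via UUnet Europe (AS702)
    (1239, 701, 1680),    # via UUnet North America (AS701)
]

# path[-1] is the origin; path[-2] is the upstream feeding it.
upstream_counts = Counter(path[-2] for path in peer_best_paths)

total = len(peer_best_paths)
for asn, n in upstream_counts.most_common():
    print(f"AS{asn}: {100 * n / total:.0f}%")
```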

Now that we have a sense of routing, we know where the blame might fall. If BS is telling the truth, then someone must have installed a null-route or blackhole route somewhere. A blackhole route is a simple device used by providers to mitigate denial of service attacks. In cases where the customer requests it, or the provider requires it to maintain service to other customers, a provider can choose to discard all traffic destined for a DOS victim when it first enters the provider’s network. This is a useful technique when the traffic would simply overwhelm the victim anyway. It avoids wasted network resources and causes no additional outage, since the site would have been unreachable anyway. Responsible providers who do this on their own initiative immediately notify their customers of the outage and discuss further remediation that may be possible. The key thing to note about blackhole routes is that they do not propagate from provider to provider: a blackhole route affects only the single provider that installs it.
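That non-propagation property is the crux of the argument that follows, so here is a toy model of it. Each provider keeps its own drop-list at its edge; installing a null-route in one network does nothing to traffic entering any other. Provider names and the victim address are placeholders:

```python
# Toy model of the non-propagation of blackhole routes: the drop
# decision lives only in the provider that installs it. Names and
# addresses are illustrative.

VICTIM = "192.0.2.10"

providers = {
    "ProviderA": {"blackholed": set()},
    "ProviderB": {"blackholed": set()},
}

def deliver(provider: str, dst: str) -> bool:
    """True if traffic entering this provider reaches dst."""
    return dst not in providers[provider]["blackholed"]

# ProviderB installs a null-route for the victim at its edge...
providers["ProviderB"]["blackholed"].add(VICTIM)

print(deliver("ProviderB", VICTIM))  # dropped inside ProviderB
print(deliver("ProviderA", VICTIM))  # ProviderA's path is untouched
```

So for the victim to go globally dark via blackholing, every transit provider carrying the route would have to install one independently, which is exactly the implausibility examined next.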

So what happened in this case? Clearly BTN, UUnet Europe, UUnet North America and Global Crossing did not all install a blackhole route at the same time. That’s just not plausible. In fact, I doubt that any of them installed one at all. It’s a dirty little secret that providers bill for bandwidth and utilization, and therefore have little incentive to stop traffic-generating events unless a customer complains or the traffic is affecting other customers.

BS claims they did nothing to mitigate a DOS, and I believe them for one simple reason: BS doesn’t have its own autonomous system number and does not operate the infrastructure that its web servers are served out of. In fact, that IP address looks suspiciously like a shared virtual server for all kinds of other customers of Netvision (AS1680 – remember them?). The fabulous passive dns replication database over at RUS-CERT shows that this exact IP address has been associated with a very large number of websites in the recent past including:


Moreover, it appears to be part of a block of addresses that answers on port 80. So this web server appears to have been in the middle of a bunch of virtual servers operated by Netvision. So I believe it’s likely that Netvision, BS’s provider, installed a null-route or used some other traffic-blocking device in order to protect their own infrastructure. I don’t know for certain this happened, but it is one of the only logical explanations. It would be nice for someone from Netvision to comment on this, but I sincerely doubt that they will.
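The reason one IP address can carry "a very large number of websites" is HTTP/1.1 name-based virtual hosting: the server selects the site from the request's Host header, so dozens of hostnames share one address and one port. A minimal sketch with placeholder hostnames, which also shows why null-routing that single IP takes every co-hosted site down with it:

```python
# Sketch of name-based virtual hosting: same IP, same port 80,
# different sites chosen purely by the Host header. Hostnames are
# placeholders.

def http_get(host: str, path: str = "/") -> bytes:
    """Build a minimal HTTP/1.1 GET for a name-based virtual host."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

# Two different sites behind one address, distinguished by Host alone.
req_a = http_get("site-a.example")
req_b = http_get("site-b.example")
print(req_a.split(b"\r\n")[1])
print(req_b.split(b"\r\n")[1])
```

Drop all packets to that one address and every hostname behind it disappears at once, which is precisely the collateral damage a hosting provider would act quickly to contain.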

So what? Why waste so much time on this incident? I was unimpressed with BS’s business plan from the beginning, but that’s not what is making me cranky. I believe that the PR engine from BS is in overdrive spinning this event as fast as they can. But the concrete facts being put out by them simply do not add up. In the process they seem to be doing two things: 1) trying to imply or state that someone at UUnet was bribed by a spammer. This is simply ridiculous. I know many of the people who work for UUnet and they are honest, hardworking and extraordinarily clever people. They would not be crooked enough, or stupid enough, to do such a thing, and if they were, they would have been trivially caught by change-management procedures. Moreover, such a change at UUnet (or BTN) wouldn’t have caused the event BS claims to have witnessed anyway. Additionally, 2) BS is trying to deflect attention from the damage that they caused at Six Apart. It would be much better if they could just claim ignorance of the DOS, apologize and move on. I recognize that that isn’t going to happen, but it sure would make this whole thing easier to handle.

I don’t blame BS for getting DOSed, although the old adage does sometimes hold true: Live by the DOS, Die by the DOS. I blame BS for not having a more DOS-resilient infrastructure. You don’t take on spammers with a virtually hosted website. I also blame them for not clearly explaining what happened. Phrases like “the Internet backbone” with no detail are meaningless. I blame BS for creating much more of a dust-up about this event than it warranted. For the moment, BS has gotten some sense and employed the services of Prolexic, DDOS mitigation specialists. This will cost big dollars, but probably not more than the loss to reputation that BS has already suffered.


Whois: Dyn Guest Blogs
