
AMS-IX hits 200 Gigabits per second

Today, for the first time, the Amsterdam Internet Exchange surpassed 200 Gigabits per second across its switch fabric. AMS-IX was already the biggest public Internet exchange on the planet, but this is impressive growth.


While AMS-IX now moves 200 Gb/s across a single Internet Exchange in a single city, Tier 1 Research noted a few weeks ago that it was a big deal that Equinix had recently reached an aggregate of 100 Gb/s across all of its exchanges combined, including Ashburn, San Jose, Chicago, Dallas, Singapore and so on. So why is AMS-IX so much bigger than everyone else?

There are a number of factors that explain why AMS-IX has grown so fast and so successfully over the past year. Many of them also point to substantial differences in the way Internet traffic is exchanged in Europe versus the US. I don’t have the time or energy to do a full history/overview of Internet traffic exchange for this posting (nor do I think there’s demand for such a thing). Suffice it to say that there are lots of buildings, campuses, and neighborhoods where multiple networks have infrastructure present, and those sites become the facilities at which Internet traffic is exchanged. (Note well: these are not the mythical “NAPs” that I complained about previously. There really aren’t any such places anymore.)

In some of these places, networks primarily exchange traffic by dragging physical cables between their routers, an arrangement called a Private Interconnect or PI. In others, they primarily interconnect via a set of Ethernet switches (or, formerly, ATM switches) shared among all participants, sending traffic across this Layer-2 fabric. In most they do both. The latter arrangement is usually referred to as an Internet Exchange or IX.
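To make the PI/IX distinction concrete, here is a hypothetical sketch in BIRD routing-daemon syntax (my illustration, not anything from AMS-IX); the ASNs and addresses are documentation-range placeholders, not real networks. A PI session runs over a dedicated cable to exactly one peer, while an IX session is just one more BGP neighbor reached over the shared exchange LAN.

```
# Hypothetical BIRD 2 config fragment for AS 64500 (all values illustrative).

# Private Interconnect (PI): a dedicated cable to one peer's router.
protocol bgp pi_peer {
    local 192.0.2.1 as 64500;
    neighbor 192.0.2.2 as 64501;   # the single network at the far end
    ipv4 { import all; export where source = RTS_BGP; };
}

# Internet Exchange (IX): one port on the shared Layer-2 fabric reaches
# every participant; each peering is just another session on that LAN.
protocol bgp ix_peer_a {
    local 203.0.113.10 as 64500;   # our address on the exchange LAN
    neighbor 203.0.113.20 as 64502;
    ipv4 { import all; export where source = RTS_BGP; };
}
```

The operational difference is that adding a new IX peer is purely a configuration change, whereas adding a new PI means provisioning another physical circuit.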

In the US, colocation, PIs (cross-connects) and IX operations are usually all run by the same company in a given location. Participating networks are therefore locked into a single service provider for all three types of service, those services cover a single facility run by a single corporation, and there is no particular incentive to exchange traffic on any one type of medium (PI versus an exchange switch). In practice, US facilities with a high concentration of network operators have tended to see significantly more traffic exchanged over PIs than across a switch, and so the switches have always been viewed as a platform for smaller customers and for testing, rather than as the big boys’ production environment.

In Europe, on the other hand, exchange operators have almost always been non-profit, collectively owned operations (similar to food and other cooperatives under US law). They are funded by their members (network operators) explicitly to operate a switch fabric supporting the exchange of traffic. They are independent of any particular facility, and the largest (LINX, AMS-IX and DECIX) each span many buildings within a single city. Because the IXes deploy at multiple facilities, using the IX as the primary means of exchanging traffic can be substantially more cost-effective than ordering metro-fiber loops between buildings.
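The economics here come down to simple combinatorics. A minimal Python sketch (my own illustration, not numbers from the post): a full mesh of bilateral Private Interconnects among n networks needs on the order of n² circuits, while a shared exchange fabric needs only one port per participant.

```python
def pi_links(n: int) -> int:
    """Bilateral circuits for a full mesh of Private Interconnects."""
    return n * (n - 1) // 2


def ix_ports(n: int) -> int:
    """Ports needed when everyone peers across one shared switch fabric."""
    return n


# Compare the two models as the number of participating networks grows.
for n in (10, 100, 250):
    print(f"{n} networks: {pi_links(n)} PI circuits vs {ix_ports(n)} IX ports")
```

With a few hundred participants, the mesh of private circuits grows into the tens of thousands, which is the quadratic pressure that makes a shared fabric attractive.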

Additionally, since the largest Internet operators in Europe have always been members (indeed, founding members) of the exchanges, they have always routed traffic over them as well. So, where Sprint and UUnet and Level (3) were always reluctant to connect to Equinix or PAIX exchanges in the US, Deutsche Telekom, BT, Telecom Italia, Telia and France Telecom were all founders or early members of the exchanges in their territories, and were only too happy to route their traffic over those exchanges. In fact, in a funny twist of fate, I believe that AMS-IX is the only public IX to which Sprint is connected.

As a result, IXes in Europe are going crazy. LINX, the grandmama of all IXes in Europe (and one of the first Internet Exchanges in the world), has doubled from 60 Gb/s to just under 120 Gb/s this year. DECIX is up from 30 Gb/s to 90 Gb/s this year, which is obviously phenomenal. In this context, AMS-IX’s growth from 100 Gb/s at the beginning of November last year to 200 Gb/s today is almost unremarkable: clearly part of a broader trend.

Really, though, it is important to give credit where credit is due: AMS-IX has had tremendous growth and has handled that growth with an uptime and a service level that are justifiably the envy of the large-scale switching world. In fact, the speed and quality of AMS-IX’s migration to 10 Gigabit Ethernet ports on its Foundry-based switch fabric is largely to thank for the scale of its growth. Well, that and the overall responsiveness and quality of the organization as a whole.

Two hundred gigabits per second is just a number, and most readers won’t be surprised that the Internet continues to grow. But to be honest, 200 is a number that surprised and impressed me for at least a moment, so I thought it worthwhile to mark that moment here. Here’s to AMS-IX.

Brief note on the KIX (an important IX in Korea): I know someone is going to say that AMS-IX is not the biggest IX in the world because the KIX is. People in the Internet engineering community routinely make wild and sweeping claims about the Terabits and Petabits of traffic travelling across the KIX. About the KIX I will say only this: 1) from the few public descriptions I have read, it sounds much more like a large collection of PIs than a Layer-2 IX, and 2) there are no public stats available for it. As soon as anyone clarifies (1) and offers (2), we can put the KIX in its rightful place of mastery over all.


Whois: Dyn Guest Blogs

Oracle Dyn is a pioneer in managed DNS and a leader in cloud-based infrastructure that connects users with digital content and experiences across a global internet.
