Network Collapse: Why the internet is flirting with disaster

It’s surprising the internet works at all, given the age of its core software. The question is, can we catch it before it falls over?

A panel of academic experts recently took part in a discussion on the future of the internet, and among other things highlighted its fragility, the ease with which it can be disrupted and its seeming resistance to change.

These weaknesses arise primarily because the internet still runs on Layer 3 protocols of the TCP/IP stack that were designed decades ago.

“There are a lot of challenges for the internet. We face daily problems,” said Timothy Roscoe, a professor at ETH Zurich, the Swiss science, technology and mathematics university.

“Most of what we do is at Layer 3, which is what makes the internet the internet.” However, new and incredibly popular services, such as YouTube, Netflix, Twitter and Facebook, have put pressure on these protocols.

New age, old protocols

Laurent Vanbever, an assistant professor at ETH, said: “There is a growing expectation by users that they can watch a 4K video on Netflix while someone else in the house is having a Skype call. They expect it to work but the protocols of the internet were designed in the 1970s and 1980s and we are now stretching the boundaries.”

The internet is often described as a network of networks. What makes these networks communicate with one another is BGP, the border gateway protocol. In essence, it is the routing protocol that internet service providers (ISPs) use to exchange routes between their networks. It makes the internet work.

Roscoe said: “BGP is controlled by 60,000 people, who need to cooperate but also compete.” These people, network engineers at major ISPs, email each other to keep the internet running.

Routing for trouble

“When you visit a website, you really don’t know where your internet traffic goes,” said Roscoe. One would assume the route network traffic takes from a user’s computer to the server is the shortest possible.

But often, according to Roscoe, this is not the case. “I have seen network packets taking remarkably bizarre paths across the internet,” he said. He added that Pakistan was once able to route all of YouTube’s traffic through its own servers, blocking the traffic and effectively taking YouTube offline.
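The mechanism behind that incident was BGP’s longest-prefix-match rule: routers prefer the most specific announced prefix, so in 2008 a more-specific /24 announced by Pakistan Telecom captured traffic covered by YouTube’s legitimate /22. A minimal sketch of that selection rule (the routing table here is a simplified stand-in, not a real BGP implementation):

```python
import ipaddress

# Two overlapping announcements for the same address space. The /22 was
# YouTube's legitimate prefix; the more-specific /24 was the hijacked one.
routing_table = {
    ipaddress.ip_network("208.65.152.0/22"): "legitimate origin (YouTube)",
    ipaddress.ip_network("208.65.153.0/24"): "hijacker's more-specific route",
}

def best_route(dest: str) -> str:
    """Pick the route whose prefix is longest (most specific) among matches."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routing_table if addr in net]
    return routing_table[max(matches, key=lambda n: n.prefixlen)]

# An address inside the /24 follows the hijacker's route, even though
# the legitimate /22 also covers it.
print(best_route("208.65.153.238"))  # hijacker's more-specific route
print(best_route("208.65.152.1"))   # legitimate origin (YouTube)
```

Because every BGP router applies the same preference for specificity, a single bogus announcement can redirect traffic worldwide within minutes.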

Due to the way BGP and other protocols work, he said, there is “very little control over where traffic goes”. The question is why there is so little control.

Mark Handley, a professor of network systems at University College London, said: “The internet is built out of a set of networks, where the operators have their own desires about what they want their network to do. Internet operators partially hide pricing and routing policy information, while needing to communicate with their neighbours.”

So there is a paradox, driven by competition to route traffic: the operators “are hiding who they will talk to, while trying to talk to each other”, said Handley.

More recently, Edward Snowden’s revelations propelled into the public domain the ease with which the internet’s traffic can be routed and moved, highlighting the mass collection of internet data by US government spooks.

No need for internal change

Adrian Perrig, a network security professor at ETH Zurich, said his group at the university has been working on a new protocol and trying to tackle the internet’s secure routing challenge, in a way that is also more efficient than existing methods.

He said: “The architecture was started as an academic exercise, but we realised it is not that hard to deploy, as we do not need to change the internals of networks. We only need to change the points where different ISPs touch each other.”

So far, three major ISPs have begun deploying the new protocol, along with a few banks that want greater transparency over where their network packets travel. Perrig and his team are attempting to develop a protocol that can easily be deployed.

Too complex to change

Matt Brown, site reliability engineering head at Google, said: “A lot of the core protocols of the internet we rely on are very old. There are many improvements that need to be made to give us the level of robustness and security needed for the role the internet has in society.”

But, he argued, it is still extremely hard to upgrade these protocols. “With a network you get network effects. You are effectively constrained by the lowest common denominator, like the last person who hasn’t upgraded who holds everybody back.”

For instance, he said the digital subscriber line (DSL) router that ISPs provide to people at home to allow an internet connection may be four years old, yet it contains implementations of these critical protocols.

“Getting new functionality to everyone in the world is a huge challenge,” he added. For instance, while the pool of available IPv4 addresses has effectively run out, Google recently found that only 10% of the world’s traffic had moved to the next version, IPv6.
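The scale of what is being left on the table is easy to quantify: IPv4 addresses are 32 bits wide, IPv6 addresses 128 bits. A quick illustration of the two address spaces:

```python
import ipaddress

# Total address space of each protocol version, derived from its bit width.
ipv4_total = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32
ipv6_total = ipaddress.ip_network("::/0").num_addresses       # 2**128

print(ipv4_total)  # 4294967296 - not enough for every connected device
print(ipv6_total)  # roughly 3.4e38
```

Some 4.3 billion IPv4 addresses once seemed ample, but the cost and inertia of upgrading every router, home gateway and server keep most traffic on the exhausted protocol.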

There is a cost for ISPs in making these changes. Moreover, as the slow rollout of IPv6 shows, many prefer to stick with old technology simply because it can be made to work.
