
AI may open dangerous new frontiers in geopolitics

True artificial intelligence has the potential to provoke an international geopolitical crisis, warns F-Secure’s Mikko Hypponen

The dawn of true artificial intelligence will provoke an international security crisis, according to F-Secure chief research officer and security industry heavyweight Mikko Hypponen.

Speaking to Computer Weekly in October 2019 during an event at the company’s Helsinki headquarters, Hypponen said that although true AI is a long way off – in cyber security it is largely restricted to machine learning for threat modelling to assist human analysts – the potential danger is real, and should be considered today.

“I believe the most likely way for superhuman intelligence to be generated will be through human brain simulators, which is really hard to do – it’s going to take 20 to 30 years to get there,” said Hypponen.

“But if something like that, or some other mechanism of generating superhuman levels of intelligence, becomes a reality, it will absolutely become a catalyst for an international crisis. It will increase the likelihood of conflict.”

Hypponen posited a scenario where a government, or even a corporation, announces it will debut a superhuman artificial intelligence within the next month.

“How are others going to react?” he said. “They will immediately see the endgame. If those guys get that, it’s going to be game over – they will win everything, they will win every competition, they will beat us in every technological development, they will win every war. We must, at any cost, steal that technology. Or if we can’t steal it, we must, at any cost, destroy that technology.”

The idea that AI could eventually inform the development of autonomous cyber weapons is not new, and has been previously voiced by other threat researchers, including Trend Micro’s Rik Ferguson, who earlier this year said CISOs should be thinking about how to prepare for autonomous, self-aware, adaptive attacks, even though they are not yet a reality.

During a speech in 2018, the now-Liberal Democrat leader Jo Swinson proposed a Geneva Convention for cyber warfare, saying cyber defence was the new civil defence. Policies to this effect have appeared in the Lib Dem General Election manifesto, which was published on 20 November.

Hypponen stressed that current cyber threats from AI are extremely limited in scope, and while companies such as F-Secure use machine learning for defence, attackers have not yet used it for offence.


“There are a lot of misconceptions,” he said. “People think it’s being used for offence because it could be. It’s easy to see, but it’s not being done. We haven’t seen a single example.”

Hypponen added: “What we have seen is that attackers who know that we are using machine learning for defence are trying to poison our data. They are attacking our machine learning systems to make them misbehave or malfunction or learn wrong – things like that. That is happening.

“But that’s a totally different problem than attackers deploying malware which rewrites its code to better infiltrate networks, learning what works and what doesn’t, or phishing attacks which would change based on how people fall for them.”
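The data-poisoning attack Hypponen describes can be illustrated with a minimal, purely hypothetical sketch: an attacker who can inject mislabelled samples into a defender’s training set can drag a simple classifier’s decision boundary until borderline-malicious inputs are scored as benign. The toy nearest-centroid classifier and all the feature values below are invented for illustration and do not represent F-Secure’s systems.

```python
# Illustrative only: a toy nearest-centroid classifier over 2-feature samples.
# All data points are invented; this is not any vendor's real model.

def centroid(points):
    """Mean of a list of 2-D feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(2))

def classify(x, benign_c, malicious_c):
    """Label x by whichever centroid is closer (squared distance)."""
    dist = lambda a, b: sum((a[i] - b[i]) ** 2 for i in range(2))
    return "malicious" if dist(x, malicious_c) < dist(x, benign_c) else "benign"

# Clean training data the defender collected.
benign    = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.15)]
malicious = [(0.9, 0.8), (0.8, 0.9), (0.85, 0.85)]

clean_b, clean_m = centroid(benign), centroid(malicious)

# Poisoning: the attacker slips malicious-looking samples, mislabelled as
# benign, into the training feed, dragging the benign centroid their way.
poisoned_benign = benign + [(0.9, 0.9), (0.95, 0.85), (0.9, 0.95), (0.85, 0.9)]
pois_b = centroid(poisoned_benign)

sample = (0.55, 0.55)  # a borderline sample the attacker wants misclassified
print(classify(sample, clean_b, clean_m))  # malicious  (clean training data)
print(classify(sample, pois_b, clean_m))   # benign     (after poisoning)
```

Real poisoning campaigns target far more complex models, but the mechanism is the same: the attacker never touches the deployed model, only the data it learns from, which is why it is a different problem from self-modifying malware.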

Hypponen suggested one big reason why this is not yet happening is down to the skills gap within cyber security and machine learning. With so few specialist researchers working in the field, and such high demand for them, there is, as yet, no incentive for AI developers to break the law. But as the technology becomes more advanced and the barriers to entry drop, this could change in short order, he said.

Our full interview with Mikko Hypponen, discussing nation state attacks, cyber weapons, and the need for international collaboration, will appear later this week on ComputerWeekly.com.
