
AI a threat to cyber security, warns report

Artificial intelligence is being incorporated into a range of cyber security products, but the technology may also introduce new threats, a report warns

Artificial intelligence (AI) poses a range of threats to cyber, physical and political security, according to a report by 26 UK and US experts and researchers.

The Malicious Use of Artificial Intelligence report examines the potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent and mitigate these threats.

“Because cyber security today is largely labour-constrained, it is ripe with opportunities for automation using AI. Increased use of AI for cyber defence, however, may introduce new risks,” the report warns.

As AI capabilities become more powerful and widespread, the report predicts an expansion of existing threats as cyber attacks become easier and cheaper to carry out, the introduction of new threats as attackers exploit vulnerabilities in the AI systems used by defenders, and greater effectiveness of existing attacks through automation, for example.

“The use of AI to automate tasks involved in carrying out cyber attacks will alleviate the existing trade-off between the scale and efficacy of attacks,” the report said. As a result, the researchers believe the threat from labour-intensive cyber attacks such as spear phishing will increase. They also expect new attacks that exploit human vulnerabilities, for example by using speech synthesis for impersonation.

Malicious actors have natural incentives to experiment with using AI to attack the typically insecure systems of others, the report said. Although the publicly disclosed use of AI for offensive purposes has so far been limited to experiments by “white hat” researchers, the pace of progress in AI suggests that cyber attacks using machine learning capabilities are likely to appear soon.

“Indeed, some popular accounts of AI and cyber security include claims based on circumstantial evidence that AI is already being used for offence by sophisticated and motivated adversaries. Expert opinion seems to agree that if this hasn’t happened yet, it will soon,” the report said.


According to the authors of the report, the world is “at a critical moment in the co-evolution of AI and cyber security, and should proactively prepare for the next wave of attacks,” which they predict may be greater in number and scale.

The report warns that while tools built on a combination of heuristic and machine learning algorithms to detect malware are fairly effective against typical human-authored malware, research has already shown that AI systems may be able to learn to evade them.
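To make the idea of machine learning-based detection concrete, the sketch below trains a toy classifier on a few hand-picked static features (file size, byte entropy, count of suspicious imports). Everything here, including the feature set and the synthetic data, is a hypothetical illustration rather than the method used by any product or study cited in the report.

```python
# Minimal sketch of ML-assisted malware detection using static file features.
# The features and data below are entirely synthetic and illustrative;
# real tools combine many more signals with heuristic rules.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical static features per sample: [file size (KB), byte entropy,
# number of suspicious imported API names]. Labels: 0 = benign, 1 = malicious.
benign = np.column_stack([
    rng.normal(400, 150, 500),   # benign files: varied, moderate size
    rng.normal(5.5, 0.8, 500),   # lower entropy (little packing/obfuscation)
    rng.poisson(1, 500),         # few suspicious imports
])
malicious = np.column_stack([
    rng.normal(250, 100, 500),
    rng.normal(7.2, 0.5, 500),   # packed/encrypted payloads raise entropy
    rng.poisson(6, 500),         # more suspicious imports
])
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a simple model and report accuracy on held-out samples.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The evasion risk the report flags follows directly from this kind of design: an adversary who can query or approximate such a model could learn which feature combinations fall on the benign side of the decision boundary and reshape malware to match them.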

The report makes four high-level recommendations:

  1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.
  2. Researchers and engineers in AI should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms.
  3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.
  4. The range of stakeholders and domain experts involved in discussions of these challenges should be actively expanded.

Attackers are expected to use the ability of AI to learn from experience to craft attacks that current technical systems and IT professionals are ill-prepared for, the report said.

Overall, the authors believe AI and cyber security will rapidly evolve in tandem in the coming years, and that a proactive effort is needed to stay ahead of motivated attackers.

The report also highlights the need to:

  • Explore and potentially implement red teaming, formal verification, responsible disclosure of AI vulnerabilities, security tools and secure hardware.
  • Re-imagine norms and institutions around the openness of research, starting with pre-publication risk assessment in technical areas of special concern, central access licensing models, sharing regimes that favour safety and security, and other lessons from other dual-use technologies.
  • Promote a culture of responsibility through standards and norms.
  • Develop technological and policy solutions that could help build a safer future with AI.

The report was a collaborative project between the University of Oxford’s Future of Humanity Institute, the University of Cambridge’s Centre for the Study of Existential Risk, the Center for a New American Security, the Electronic Frontier Foundation and OpenAI.
