Can AI be secure? Experts discuss emerging threats and AI safety

International cyber security experts call for global cooperation and proactive strategies to address the security challenges posed by artificial intelligence

The adoption of artificial intelligence (AI) is surging ahead, with more organisations harnessing its power to automate tasks, streamline workflows, and unlock new levels of productivity and efficiency.

While offering unprecedented opportunities, the technology also presents complex security risks. From hallucinations and bias to the potential for prompt injections, data poisoning and adversarial attacks, the use of AI demands a fresh approach to security, one that transcends traditional IT security practices and requires international cooperation.

These were the key takeaways of a panel discussion at this year’s Singapore International Cyber Week, bringing together experts from government and industry to grapple with the question, ‘Can AI be secure?’

Evan Miyazono, CEO of Atlas Computing, a non-profit organisation focused on AI safety, highlighted the rapid advances in AI capabilities driven by the accessibility of training data and transformer models. These models aren’t necessarily creating new risks, he said, as they are trained on information already available on the internet.

“But what you’d have to pay attention to is not how to create a super flu, but how to craft an e-mail to someone selling the requisite components to convince them that you are a researcher working on this.

“Current models are already reaching a point where they can start posing these questions to people who would otherwise be restraining access to dangerous materials or insider secrets,” he warned.

Chris Hockings, chief technology officer for IBM Security Asia-Pacific, noted that AI is susceptible to the same security threats faced by traditional IT systems, but underscored the urgency and scale of the problem.

“Attackers are interested in a few different parts of the equation,” Hockings said, citing data security, model security, and usage security as key concerns. “Integrity is another challenge we need to address, as the widespread use of AI can produce questionable or inaccurate information.”

Against this backdrop, Rod Latham, director for cyber security and digital identity at the UK’s Department for Science, Innovation and Technology, underscored the need for a proportionate response to protect users.

“There are things about AI that are different,” he said, citing the inherent uncertainty surrounding AI risks. Latham highlighted the UK’s voluntary code of practice for AI security, emphasising the importance of international collaboration in setting global standards.

“The nature of the subject means you have to have that level of international engagement to address challenges that no nation can address on its own,” he added.

Léonard Rolland, head of international cyber policy at the French Ministry of Foreign Affairs, brought a diplomatic perspective, calling for countries to embrace international cyber norms.

“Our work is to set up the rules of the game at the international level to make sure cyber space stays stable and secure,” Rolland said, noting that the rapid pace of AI development makes it difficult to precisely identify AI risks. He advocated for inclusive international dialogue, highlighting the upcoming AI Action Summit in Paris as a key opportunity for progress.

When discussing organisational responsibility for AI security, both Hockings and Latham agreed it cannot be siloed.

Hockings noted that while chief information security officers (CISOs) are often tasked with securing AI, they typically bring in the chief data officer and development teams once they see the work and data that go into building AI systems. Latham called for shared responsibility across the AI supply chain, stressing that “an entire organisation must take responsibility”.

In addressing the future of AI security threats, Rolland called for a collaborative approach modelled after the Intergovernmental Panel on Climate Change. He suggested creating a similar body for AI risks, tasked with informing governments and stakeholders on scientific findings.

Hockings, meanwhile, emphasised the need for organisations to adapt quickly. He urged businesses to modernise their data security programmes and build capabilities in areas like digital identity to combat future AI risks.

“All this identity work will give us the ability to identify content which could be a thing or a person that’s either real or not, and my trust in that thing or person,” Hockings said. “It’s a larger data security issue, and this is the chance to look at your data security programme and make sure it’s up to speed.”

Miyazono advocated for a “Swiss cheese model” that combines multiple layers of security. He pointed to promising developments such as mechanistic interpretability, which seeks to understand how specific capabilities or features arise inside large language models so the models can be made safer, as well as specification-based AI.

“Specification-based AI is a really compelling direction,” Miyazono said, adding that, just as food and medicine are not considered safe until proven otherwise, there could come a time when AI systems don’t just do what they are asked to, but also have to prove that they meet objective standards of safety.

Latham likened the ongoing AI security conversations to technological advancements during the Second World War. He cited a letter from the head of Bletchley Park, where Allied forces were engaged in codebreaking, that emphasised the need to adapt as the war progressed.

“When you have a new technology like AI, particularly one of this complexity, all we can do is be in a position to be attacking that problem in a joint fashion,” he said.
