
Security Think Tank: Artificial intelligence will be no silver bullet for security

AI and machine learning techniques are said to hold great promise in security, enabling organisations to adopt a predictive IT security stance and automate reactive measures when needed. Is this perception accurate, or is the importance of automation gravely overestimated?

Undoubtedly, artificial intelligence (AI) can help organisations tackle a widening threat landscape and a growing number of vulnerabilities as criminals become more sophisticated. However, AI is no silver bullet when it comes to protecting assets, and organisations should be thinking about cyber augmentation, rather than just the automation of cyber security alone.

Areas where AI can currently be deployed include training a system to recognise even the subtlest behaviours of ransomware and malware attacks before they take hold, and then isolating the affected files or processes from the rest of the system.
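
To make this concrete, below is a minimal sketch of behaviour-based detection using a supervised classifier. The features, labelling rule and data are hypothetical stand-ins invented for illustration; a real detector would be trained on rich endpoint telemetry rather than synthetic numbers.

```python
# A minimal sketch of behaviour-based malware detection, assuming
# hypothetical pre-extracted behavioural features. Not a production
# detector: real systems use far richer endpoint telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: 500 samples x 4 behavioural features
# (file writes/sec, registry edits/sec, payload entropy, new-process count).
X = rng.random((500, 4))
# Hypothetical labelling rule for the demo: high entropy combined with
# heavy file-write activity marks a sample as malicious (label 1).
y = ((X[:, 2] > 0.7) & (X[:, 0] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")

# In deployment, a sample the model flags as malicious would be
# quarantined before it could spread further through the system.
```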

Other examples include automated phishing and data theft detection, which are extremely helpful because they enable a real-time response. Context-aware behavioural analytics are also interesting, offering the possibility of immediately spotting a change in user behaviour that could signal an attack.
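
As an illustration of the behavioural analytics idea, the following sketch uses an unsupervised anomaly detector to learn a baseline of "normal" user activity and flag deviations. The per-login features and thresholds are assumptions made for this example, not a description of any particular product.

```python
# A minimal sketch of context-aware behavioural analytics, assuming
# hypothetical per-login features (hour of day, data transferred,
# failed attempts). The model learns "normal" and flags deviations.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Baseline behaviour: office-hours logins, modest transfers, few failures.
normal = np.column_stack([
    rng.normal(10, 2, 1000),   # login hour (around 10:00)
    rng.normal(50, 10, 1000),  # MB transferred per session
    rng.poisson(0.2, 1000),    # failed login attempts
])
model = IsolationForest(contamination=0.01, random_state=1).fit(normal)

# A 3am login moving 900 MB after five failed attempts should score
# as anomalous (-1) and trigger an alert for investigation.
suspicious = np.array([[3.0, 900.0, 5.0]])
print(model.predict(suspicious))  # -1 => anomaly, 1 => normal
```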

The above are all examples of where machine learning and AI can be useful. However, over-reliance and false assurance present a problem of their own: as AI gets better at safeguarding assets, it also gets better at attacking them. As cutting-edge technologies are applied to strengthen defences, cyber criminals are applying the same innovations to gain an edge over those defences.

Typical attacks can involve gathering information about a system, or sabotaging an AI system by flooding it with requests, as sketched below.
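
One way the "information gathering" attack can work is model extraction: an attacker floods a prediction endpoint with queries and uses the responses to train a surrogate copy of the defender's model. The sketch below illustrates the idea; the target model, feature space and data are all hypothetical stand-ins.

```python
# A minimal sketch of model extraction via query flooding. The target
# model here is a stand-in; a real attacker would only see an API.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# The defender's "secret" model (the attacker can only query it).
X_private = rng.random((1000, 3))
y_private = (X_private.sum(axis=1) > 1.5).astype(int)
target = LogisticRegression().fit(X_private, y_private)

# Attacker: flood the endpoint with random probes, record the labels.
probes = rng.random((5000, 3))
stolen_labels = target.predict(probes)

# Train a surrogate that mimics the target's decision boundary.
surrogate = DecisionTreeClassifier(random_state=2).fit(probes, stolen_labels)

# Measure how closely the stolen copy tracks the target on fresh inputs.
test = rng.random((1000, 3))
agreement = (surrogate.predict(test) == target.predict(test)).mean()
print(f"surrogate agreement with target: {agreement:.2%}")
```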

Elsewhere, so-called deepfakes represent a relatively new avenue of fraud that poses unprecedented challenges. We already know that cyber criminals can litter the web with fakes so convincing that it becomes almost impossible to distinguish real news from fabricated content.

The consequences are such that many legislators and regulators are contemplating rules and laws to govern the phenomenon. For organisations, this means deepfakes could enable much more sophisticated phishing in future, targeting employees by mimicking corporate writing styles, or even the writing style of an individual.

In a nutshell, AI can augment cyber security as long as organisations understand its limitations and have a clear strategy that addresses present threats while keeping constant watch on the evolving threat landscape.


Ivana Bartoletti is a cyber risk technical director at Deloitte and a founder of Women Leading in AI.
