Don’t believe the hype: AI is no silver bullet
We want to believe AI will revolutionise cyber security, and we’re not necessarily wrong, but it’s time for a reality check
You could be forgiven for thinking the entire world is now powered by artificial intelligence (AI) systems. McKinsey predicted a couple of years ago that the technology would add $13tn (€10.8tn/£9.7tn) to the global economy by 2030, and it’s currently easier to list the cyber security firms that aren’t shouting about their AI and machine learning capabilities than those that are.
Unfortunately, the reality of how it’s currently used doesn’t map to the marketing hyperbole. We want to believe in the narrative because, like flying cars and jetpacks, the technology is so appealing to us. The cold, hard truth is very different.
Chief information security officers (CISOs) looking for new security partners must therefore be pragmatic when assessing what’s out there. AI is helpful, in limited use cases, to take the strain off stretched security teams, but its algorithms still have great difficulty recognising unknown attacks. It’s time for a reality check.
A one-sided battle
We live in a world where cyber attackers seem to hold all the cards – or, if they don’t, they’re certainly on an impressive winning streak.
The Identity Theft Resource Center (ITRC) revealed a 17% increase in data breaches in 2019 versus 2018, with more than 164 million records exposed across virtually every vertical you can imagine. The vast cyber crime economy that supports these endeavours is estimated to be worth $1.5tn annually, almost as much as the GDP of Russia.
Covid-19 has only made things more challenging for CISOs. An explosion in unmanaged home-working endpoints, distracted employees, stretched IT support staff, overloaded virtual private networks (VPNs), and unpatched remote access infrastructure has ramped up cyber risk levels. Skilled security professionals remain worryingly hard to find – the global workforce shortfall now stands at more than four million people.
All of this sets the scene for AI to ride in and save the day. But while intelligent algorithms have been developed to beat the world’s best Go players, power the voice assistants in our homes, and unlock our smartphones via facial recognition, a breakthrough remains as elusive as ever when it comes to cyber security.
What can it do?
Let’s be clear: machine learning and deep learning are good at some things. Give a system plenty of data, train it to spot subtle patterns, and it can do so quite successfully. This can be useful for flagging known security threats and misconfigurations that human eyes might otherwise miss.
The technology works well in anti-fraud tooling, for example, because scammers usually riff on the same underlying ideas when trying to defraud banks and businesses. By spotting these needles in the haystack, AI can help in-demand security professionals do their jobs more efficiently and effectively.
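To make that concrete, here is a minimal sketch of the kind of supervised detection involved, using scikit-learn on entirely synthetic transactions. The features (amount, hour, recent activity, distance from the customer’s usual location) and the labelling rule are invented for illustration – this is not a production fraud model:

```python
# Illustrative sketch: supervised detection of *known* fraud patterns.
# All features and data here are synthetic stand-ins, not a real feed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Hypothetical features per transaction: amount, hour of day, transactions
# in the past 24 hours, and distance from the customer's usual location.
n = 5000
X = np.column_stack([
    rng.lognormal(3, 1, n),        # amount
    rng.integers(0, 24, n),        # hour of day
    rng.poisson(3, n),             # recent transaction count
    rng.exponential(10, n),        # distance from usual location
])
# Label a transaction fraudulent when it riffs on a known pattern:
# large amount, odd hours, far from home. Real labels would come from
# confirmed fraud cases.
y = ((X[:, 0] > 60) & (X[:, 1] < 5) & (X[:, 3] > 20)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# The model reliably flags variations on patterns it was trained on --
# the "needles in the haystack" -- but nothing genuinely novel.
print(classification_report(y_test, clf.predict(X_test)))
```

The key point is that the model only generalises across variations of patterns it has already been shown – exactly the needle-in-the-haystack use case, and nothing more.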
Yet in this respect, AI is similar to a Google search engine, filtering through large volumes of data that humans couldn’t possibly sort. What we haven’t achieved yet is the creation of independent learning machines that can draw new conclusions from patterns. The much-touted capability of baselining “normal” and then being able to spot abnormalities that could indicate suspicious patterns is actually much harder than it sounds.
Networks are incredibly complex – and the bigger they are, the harder they are to map. Add to this the fact that commercial networks are constantly changing and developing new behaviours and interactions, and you have even more complexity. That means AI systems end up flagging even the routine evolution of a healthy network as “suspicious”, resulting in an overwhelming number of false positives.
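A toy example makes the problem visible. The sketch below trains scikit-learn’s IsolationForest on a week of synthetic “baseline” telemetry, then scores traffic from the same healthy network after a perfectly legitimate change in behaviour. The numbers and features are invented for illustration:

```python
# Illustrative sketch: why baselining "normal" breaks on evolving networks.
# Synthetic traffic features stand in for real telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Week 1: baseline behaviour (e.g. bytes/sec and new connections/min).
baseline = rng.normal(loc=[100.0, 20.0], scale=[10.0, 3.0], size=(2000, 2))
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Week 2: the same healthy network after a legitimate change -- say, a
# new SaaS rollout shifts traffic volumes upwards. Nothing malicious.
evolved = rng.normal(loc=[130.0, 28.0], scale=[12.0, 4.0], size=(2000, 2))

for name, data in [("baseline week", baseline),
                   ("evolved week (still healthy)", evolved)]:
    flagged = (model.predict(data) == -1).mean()  # -1 marks an "anomaly"
    print(f"{name}: {flagged:.1%} of events flagged as anomalous")

# Roughly 1% of the baseline week is flagged, as configured -- but a
# large share of the evolved week is too: false positives from drift.
```

The detector is doing exactly what it was asked to do; it is the assumption that “normal” stays still which fails.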
Cyber criminals also have a few tricks up their sleeves. By making their behaviour appear as normal as possible, they can evade these “intelligent” systems. Meanwhile, well-documented adversarial techniques can fool AI into making the wrong decisions by creating the digital equivalent of optical illusions.
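For the adversarial case, the hedged sketch below shows an FGSM-style evasion against a simple linear detector. The synthetic features and the logistic regression stand in for whatever richer model a vendor might actually deploy:

```python
# Illustrative sketch of an evasion attack on a linear detector (an
# FGSM-style perturbation); real attacks target far richer models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Two synthetic feature clusters: benign (label 0) and malicious (label 1).
benign = rng.normal(0.0, 1.0, size=(500, 10))
malicious = rng.normal(1.5, 1.0, size=(500, 10))
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X, y)

# Take one correctly detected malicious sample and nudge every feature a
# small step against the model's weight vector -- the digital equivalent
# of an optical illusion.
sample = malicious[0]
epsilon = 1.0
adversarial = sample - epsilon * np.sign(clf.coef_[0])

print("original verdict:   ", clf.predict([sample])[0])       # 1 (malicious)
print("perturbed verdict:  ", clf.predict([adversarial])[0])  # typically 0
print("mean feature change:", np.abs(adversarial - sample).mean())
```

A modest, uniform nudge to each feature is usually enough to flip the verdict – the sample still looks malicious to a human, but not to the model.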
What happens next?
So where do we go from here? Can we design improvements into AI systems to make them more effective in cyber security? The single biggest challenge in this field is transparency: the ability of a system to explain why it arrived at a particular decision.
Unfortunately, the AI systems that can explain how they arrived at an answer tend to be less effective than the more inscrutable “black boxes”. Yet users don’t trust results from opaque systems, and find it hard to follow up leads that simply say “something unusual happened” without explaining what made the event worth flagging or why it matters to the business.
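One pragmatic mitigation is to bolt a rudimentary explanation layer onto an opaque detector. The sketch below – with hypothetical feature names and made-up baseline statistics – ranks which features pushed an event away from the learned baseline, so an alert says more than “something unusual happened”:

```python
# Illustrative sketch: turning a bare "something unusual happened" alert
# into a lead an analyst can follow, via simple per-feature z-scores.
# A rudimentary explanation layer, not a substitute for real explainability.
import numpy as np

FEATURES = ["bytes_per_sec", "new_conns_per_min", "dns_queries", "failed_logins"]

def explain(event, baseline_mean, baseline_std, top_k=2):
    """Rank features by how far this event sits from the learned baseline."""
    z = np.abs((event - baseline_mean) / baseline_std)
    order = np.argsort(z)[::-1][:top_k]
    return [f"{FEATURES[i]} is {z[i]:.1f} std devs from normal" for i in order]

# Hypothetical baseline statistics learned from historical telemetry.
baseline_mean = np.array([100.0, 20.0, 50.0, 2.0])
baseline_std = np.array([10.0, 3.0, 8.0, 1.0])

# A flagged event: the analyst sees *which* features drove the alert.
event = np.array([105.0, 22.0, 55.0, 14.0])  # failed_logins has spiked
for reason in explain(event, baseline_mean, baseline_std):
    print(reason)
```

Even a crude ranking like this gives an analyst somewhere to start, which is the difference between a lead and noise.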
The lesson here for CISOs is ‘buyer beware’. Absolutely invest in AI systems for spotting well-established patterns that can make your security team more productive. But don’t imagine the tech will be able to achieve sophisticated detection of new and unknown threats or replace human security analysts.
AI isn’t automating work humans were already doing – there was never any way a human analyst could search through such vast datasets in the first place. Your human experts will still need to take the lead, albeit with some machine help.
For what seems like the past 50 years, we have been a decade away from a breakthrough into artificial general intelligence (AGI). Despite the industry hype, this vision remains as elusive today as it ever was.
Read more about AI in security
- The importance of automation is not being overestimated, but the capacity of AI to achieve trust in it is. To succeed with AI for automated security, we need to let go of unrealistic goals.
- A predictive security stance may be some way off for many businesses and the belief that AI or ML will dissolve existing poor practice or protocols is as widespread as it is erroneous.
- The importance of AI in security is not necessarily overstated, but organisations will need to find a way of balancing the efficiencies of automation with the need for human oversight.