Does AI have a future in cyber security? Yes, but only if it works with humans
Do AI and ML hold the promise of helping cyber pros achieve the holy grail of operating quicker, cheaper and more efficiently? We shouldn’t hold our breath, says Nominet’s Paul Lewis
I believe part of making sense of artificial intelligence (AI) in the world of cyber comes down to definitions. AI is the idea that a machine can mimic human intelligence, while machine learning (ML) teaches a machine how to perform a task and identify patterns. A lot of cyber security vendors are jumping on the bandwagon, hyping up their products and slapping an AI sticker on them when they aren’t actually AI. It’s the same as with any fad, going all the way back to snake oil.
And for some, the end goal for AI is full automation, with no human intervention at all. But I’m a firm believer that the answers and processes AI generates shouldn’t be taken as gospel. We should always treat its outputs as a starting point for human decision-making rather than the end product. AI will always need a human perspective to make it ethical and its outputs relevant.
Meanwhile, the use cases for AI are, for now, quite narrow. Take GitHub Copilot, which turns natural language prompts into coding suggestions. It’s great, but what it’s great at is being deep on one particular thing.
Deep, in this context, is like training for one particular career, such as a neurosurgeon, whereas broad is the GP who is good at treating lots of different medical conditions. You could argue that Copilot is ML, not true AI. Midjourney’s image-generation capabilities, likewise, go deep but not broad. You need the AI to be both deep and broad to do a particular thing well. We are getting a bit closer thanks to ChatGPT, but it still feels a while away.
And specifically for security, we haven’t yet got our heads around how to use it effectively. It can provide a baseline of what a security team needs to consider – security controls and policy decisions, for example – with humans then taking it to the next level. The interesting question is how we take that output and turn it into practical solutions we can use down the line.
One technique that has been around for a while is rolling AI technology into security operations, especially to manage repetitive processes. The AI filters out the noise, identifying priority alerts and screening out the rest. It can also capture this data, look for anomalies and join the dots. Established vendors already provide capabilities like this.
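To make that concrete, here is a minimal sketch of that kind of alert triage. The alert fields ("signature", "severity") and the suppression list are illustrative assumptions, not any particular product’s schema:

```python
# A minimal sketch of automated alert triage: drop known noise, then
# surface the highest-priority alerts first. Fields are hypothetical.
from collections import Counter

NOISY_SIGNATURES = {"dns_timeout", "low_risk_port_scan"}  # assumed suppression list

def triage(alerts):
    """Filter out known noise and rank what remains for an analyst."""
    signal = [a for a in alerts if a["signature"] not in NOISY_SIGNATURES]
    # Rank by severity, then prefer rarer signatures, so one chatty
    # source does not drown out an unusual, serious event.
    freq = Counter(a["signature"] for a in signal)
    return sorted(signal, key=lambda a: (-a["severity"], freq[a["signature"]]))

alerts = [
    {"signature": "dns_timeout", "severity": 1},
    {"signature": "c2_beacon", "severity": 9},
    {"signature": "failed_login_burst", "severity": 6},
    {"signature": "failed_login_burst", "severity": 6},
]
for alert in triage(alerts):
    print(alert["signature"], alert["severity"])
```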
Here at Nominet, we have masses of data coming into our systems every day, and being able to look at correlations to identify malicious and anomalous behaviour is very valuable. But once again we find ourselves in the definition trap. Being alerted when rules are triggered is moving towards ML, not true AI. But if we could give the system the data and ask it to find us what looked truly anomalous, that would be AI.
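If we could, it might look something like the following sketch, which uses scikit-learn’s IsolationForest to flag statistical outliers without predefined rules. The traffic features and the synthetic data are assumptions made purely for illustration:

```python
# A sketch of asking the system to "find what looks truly anomalous":
# unsupervised outlier detection with scikit-learn's IsolationForest.
# Features (queries/min, unique domains, mean response bytes) are assumed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[60, 20, 512], scale=[10, 5, 64], size=(500, 3))
odd = np.array([[900.0, 400.0, 48.0]])  # e.g. a burst suggestive of DNS tunnelling
X = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
for sample in X[model.predict(X) == -1]:  # -1 marks outliers
    print("anomalous sample:", sample)
```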
Organisations might receive tens of thousands of security logs at any given time. First, how do you know whether these logs show malicious activity, and if so, what is the recommended course of action? AI chatbots and large language models (LLMs) can be used to summarise large datasets, flagging areas for further investigation or telling us what is important and meaningful. They can filter this information in a way that is easy for, say, security analysts to digest and act on quickly, which is a huge improvement.
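As a rough illustration, a summarisation step might look like this sketch. It assumes the openai Python package and an API key in the environment; the model name, prompt wording and log file are placeholders rather than recommendations:

```python
# A sketch of LLM-assisted log summarisation for a security analyst.
# Assumes an OPENAI_API_KEY in the environment; details are illustrative.
from openai import OpenAI

client = OpenAI()

def summarise_logs(log_lines, limit=200):
    sample = "\n".join(log_lines[:limit])  # stay within the context window
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "You are a SOC assistant. Summarise these logs, flag "
                        "likely malicious activity and suggest next steps."},
            {"role": "user", "content": sample},
        ],
    )
    return response.choices[0].message.content

with open("auth.log") as f:  # hypothetical log file
    print(summarise_logs(f.read().splitlines()))
```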
Chatbots and LLMs can also be used as a human-machine interface into different security products. For example, rather than writing a vast amount of code, you could tell the AI that you need a procedure that does a certain task. The AI would then create a set of rules or analytics, for instance, and present these to you.
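A hedged sketch of that interaction, using the same assumed openai setup as above, might ask the model for a Sigma-style rule. Sigma is chosen here purely as an illustration; a real product’s rule language would vary:

```python
# Turning a plain-English requirement into a draft detection rule.
# The target format (Sigma/YAML) and model name are assumptions.
from openai import OpenAI

client = OpenAI()

def draft_rule(requirement: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system",
             "content": "Translate the analyst's requirement into a Sigma "
                        "detection rule in YAML. Output only the rule."},
            {"role": "user", "content": requirement},
        ],
    )
    return response.choices[0].message.content

print(draft_rule("Alert when a single IP fails SSH login more than "
                 "20 times in five minutes"))
```

In keeping with the theme of this piece, the generated rule is a starting point for a human to review, not something to deploy unseen.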
Another promising application of AI is attack surface management. These technologies detect, monitor and manage all internet-connected devices and systems, both external and internal, for potential attack vectors. This is particularly important because the attack surface is constantly changing – not just the infrastructure, but all the information we put out there as employees and citizens. Attack surface management could be a solution – not a silver bullet, but another string in our bow. If we knew in near real time where our weaknesses were and could remediate them quickly via infrastructure as code, we would dramatically reduce our risk as organisations.
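As a toy illustration of the monitoring half of that idea, the sketch below checks a hypothetical asset inventory for unexpectedly open ports. Real attack surface management products add discovery, scheduling and automated remediation on top:

```python
# A toy sketch of external attack surface monitoring: compare what is
# actually reachable against an allowed-ports policy. Hosts and the
# policy are hypothetical examples.
import socket

ASSETS = {
    "web-01.example.com": {80, 443},  # hypothetical hosts and allowed ports
    "db-01.example.com": set(),
}

def open_ports(host, ports=(22, 80, 443, 3389), timeout=1.0):
    found = set()
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.add(port)
        except OSError:
            pass  # closed, filtered or unreachable
    return found

for host, allowed in ASSETS.items():
    unexpected = open_ports(host) - allowed
    if unexpected:
        print(f"{host}: unexpected open ports {sorted(unexpected)} – flag for remediation")
```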
Of course, there are no perfect solutions in cyber security, but AI and ML hold the promise of achieving the holy grail of operating quicker, cheaper and more efficiently. Still, we must not hold our breath. I believe the power of AI comes from pairing it with humans rather than using it in isolation, with each learning from the other, almost like a critical friend or colleague.