AI in cyber security: Distinguishing hype from reality
We know that malicious actors are starting to use artificial intelligence (AI) tools to facilitate attacks, but on the other hand, AI can also be a powerful tool in the hands of cyber security professionals
We know that malicious actors are using artificial intelligence (AI) tools to facilitate complex attacks, with adversarial AI/ML being used to fool machine perception, deceive humans using deepfakes, tamper with algorithms, and influence geopolitical events.
On the other hand, AI can also be a powerful tool in the hands of cyber security professionals, with many AI-powered security technologies being developed and applied across the industry to strengthen defences and safeguard against emerging threats. AI adoption is already widespread in the wider business world: studies show 35% of businesses are using AI in their operations and a further 42% are researching its use, and cyber security is by no means exempt.
Harnessing the potential: key use cases driving the AI revolution
The benefits of AI in cyber security are numerous, given how data-rich the field is. AI-driven solutions can improve threat detection accuracy and relieve the strain on human security experts, allowing them to focus on challenging tasks that require human judgement and creativity. Cyber security experts should therefore be interested in harnessing the capabilities of AI to improve their day-to-day work. Some of the top use cases we see for AI in cyber security, and where vendors are offering tools to professionals, include:
1. AI network monitoring streamlines the process by analysing data streams, establishing baselines, and detecting anomalies. It improves threat detection by recognising cyber security concerns in real time, enabling pre-emptive responses. Nokia’s Deepfield Defender utilises AI and machine learning techniques as part of its network analytics and security capabilities. The platform employs advanced algorithms to analyse large volumes of network traffic data in real time, extracting valuable insights and patterns. This provides businesses with simplified monitoring, automation capabilities, and security insights.
By leveraging AI algorithms, organisations can enhance their cyber security posture and optimise their network performance. Organisations should define a clear strategy for implementing AI-powered network monitoring: identify the systems and devices that need monitoring, establish integration protocols, and ensure the right training is provided for IT/security teams. This is essential as it enables early issue detection and facilitates predictive maintenance, improving operational efficiency and reducing downtime.
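At its simplest, the baseline-and-anomaly pattern these platforms build on can be sketched in a few lines of standard Python. This is an illustrative statistical sketch only, not how Deepfield Defender or any specific product works; the traffic figures and threshold are invented for the example:

```python
from statistics import mean, stdev

def detect_anomalies(samples, window=20, threshold=3.0):
    """Flag traffic samples that deviate sharply from a rolling baseline.

    samples: per-interval byte counts (or request rates) for one host.
    Returns indices of samples more than `threshold` standard deviations
    above the baseline established by the preceding `window` samples.
    """
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (samples[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Steady traffic around 1,000 bytes per interval, then a sudden spike
traffic = [1000 + (i % 7) * 10 for i in range(30)] + [9000]
print(detect_anomalies(traffic))  # only the spike at index 30 is flagged
```

Real AI-driven monitoring goes far beyond a fixed z-score, learning seasonal baselines per device and correlating across many signals, but the core idea of "learn normal, flag deviations" is the same.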
2. AI is rapidly being utilised in software testing to improve numerous elements of the testing process. It can build structured test cases, automate functional testing, and help with AI/ML testing, reducing the risk of cyber security flaws. It also improves performance testing by simulating various situations, and accelerates end-to-end testing by automating complex business procedures. Appvance is one of many AI automation testing platforms that use machine learning and AI algorithms to accelerate the testing process and improve test coverage.
These AI-powered testing applications can automate a variety of testing operations, including test case generation, execution, and result analysis. Businesses should implement AI-driven test automation frameworks that integrate with their existing testing tools and workflows, generating test cases automatically and executing them in parallel for faster and more complete testing.
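The underlying pattern of generating test cases from input classes and executing them in parallel can be sketched with standard Python. Here `check_login` and its pass/fail oracle are hypothetical stand-ins for a real system under test, and commercial tools use learned models rather than a simple cross product:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def check_login(username, password):
    """Toy system under test: accept any non-empty user with an 8+ char password."""
    return bool(username) and len(password) >= 8

# Derive test cases from the cross product of interesting input classes,
# instead of hand-writing every combination.
usernames = ["alice", "", "a" * 256]          # typical, empty, oversized
passwords = ["hunter2hunter2", "short", ""]   # valid, too short, empty
cases = list(product(usernames, passwords))

def run_case(case):
    username, password = case
    expected = username != "" and len(password) >= 8  # oracle for this toy rule
    return case, check_login(username, password) == expected

# Execute the generated cases in parallel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(run_case, cases))

failures = [case for case, passed in results if not passed]
print(f"{len(cases)} cases run, {len(failures)} failures")
```

Three input classes per field yield nine generated cases; an AI-driven framework scales this idea to thousands of cases inferred from application behaviour rather than hand-picked classes.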
3. Organisations can benefit from the continuous learning capabilities of AI models, which improve incident response, vulnerability management, and security screening systems. AI's ability to analyse risks, detect malware attacks, and respond to incidents enhances overall cyber security. CrowdStrike has unveiled Charlotte AI, a generative AI security analyst that uses high-quality security data to deliver insights to users of all skill levels, automate processes, enhance threat hunting, and assist in risk assessment.
Generative AI analyses data, recognises trends, and detects cyber risks, allowing organisations to respond more effectively. Organisations should identify their specific cyber security concerns and needs to determine where generative AI can add value to their current security measures.
These are just a few key examples; many more could be listed, such as vendor analysis and training content generation, which can extend the reach and capacity of cyber security teams.
The Security Think Tank on AI
- As with any emerging technology, AI’s growth in popularity establishes a new attack surface for malicious actors to exploit, thereby introducing new risks and vulnerabilities to an increasingly complex computing landscape.
- Following the launch of ChatGPT in November 2022, several reports have emerged that seek to determine the impact of generative AI in cyber security. Undeniably, generative AI in cyber security is a double-edged sword, but will the paradigm shift in favour of opportunity or risk?
- Some data now suggests that threat actors are indeed using ChatGPT to craft malicious phishing emails, but the industry is doing its best to get out in front of this trend, according to the threat intelligence team at Egress.
- One of the most talked about concerns regarding generative AI is that it could be used to create malicious code. But how real and present is this threat?
- Balancing the risk and reward of ChatGPT – as a large language model (LLM) and an example of generative AI – begins by performing a risk assessment of the potential of such a powerful tool to cause harm.
The spectrum of AI: narrow, broad, and the path ahead
Although we are still early in the application of AI, and current development is at the stage of 'narrow AI' with a clear focus on specific tasks, AI has broad and valuable applications within the cyber security profession. While the label “narrow AI” implies a limited scope, it does not undermine the capabilities or potential influence of these AI systems. AI is, however, a broad term with many possible interpretations, and tools may have been using a form of 'AI' for many years already, for example in machine learning and firewall monitoring.
There is a marketing benefit to the term AI, which incentivises vendors to use it. This raises the possibility of a mismatch between cyber security professionals' expectations and the reality of the technology when they see that a vendor incorporates AI in its products. In addition, it is important to consider the risks of using AI, such as compliance, data security, bias, and copyright infringement.
Cyber security professionals therefore need to consider the tasks they want the technology to help them with, the degree to which solutions satisfy those needs balanced against the potential risks, and the likelihood that many tools will incorporate AI assistance. There are many great applications for the technology to assist cyber security teams, and with appropriate consideration of the available tools there is huge potential for AI to support cyber security professionals.
Dhairya Mehta is a cyber security expert at PA Consulting. With special thanks to PA’s Elisabeth Mackay and Richard Watson-Bruhn for contributing their insights to this article.