
AI has a place in cyber, but needs effective evaluation

Organisations that don’t leverage AI-based security solutions will find themselves more vulnerable than those that do, but cyber pros still need to be able to effectively evaluate AI-enhanced tech to make sure it meets their use case

Artificial intelligence and its various sub-categories are playing an increasingly major role in cyber security. In a study conducted for Egress last year, 77% of cyber security leaders said they use a product that incorporates AI. When we dug in, however, we found that only 66% of those using AI-based security products said they ‘fully understand’ how AI increases their effectiveness, and only 52% of respondents felt that cyber security vendors were ‘very clear’ in how they market their AI capabilities.

These figures tell us that adoption of AI is outpacing cyber security professionals’ understanding of the technology, and is running far ahead of their confidence that vendors are being upfront about their offerings.

This isn’t surprising. I’m not expecting all cyber security professionals to be au fait with the technical intricacies of AI-based technologies; that’s not their job. Similarly, the cyber security market has expanded rapidly in recent years, with vendors attempting to outcompete each other on customer acquisition and retention, each claiming bigger and better offerings to do so.

However, cyber security professionals do need frameworks for determining which solution best fits their needs and will effectively enhance their defences. Helping cyber security professionals to ‘open up the AI black box’ is a personal passion of mine. The main message here is to ensure that technical salespeople can clearly explain how their solution answers specific use cases and, in particular, meets your needs. While their high-level marketing claims may sound good, you need to know what sits behind the promises offered on a vendor’s website.

There are also technical challenges associated with using AI and with securing it. Adversarial manipulation of a system, or poisoning of the data used to train its algorithms, can result in unpredictable and incorrect outcomes, as well as bias. Like any technology, it presents an attack surface, so you need to understand exactly what vendors are offering and the risks associated with it.
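To make that risk concrete, below is a minimal sketch of label-flipping data poisoning in Python using scikit-learn. It is an illustration under assumed conditions (a synthetic dataset and a simple classifier), not any vendor’s pipeline: an attacker who can flip a fraction of the training labels will typically degrade the accuracy of the resulting model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for security training data (e.g. benign vs malicious)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker flips 20% of the training labels
rng = np.random.default_rng(0)
idx = rng.choice(len(y_tr), size=len(y_tr) // 5, replace=False)
y_bad = y_tr.copy()
y_bad[idx] = 1 - y_bad[idx]
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_bad)

print(f"clean model accuracy:    {clean.score(X_te, y_te):.3f}")
print(f"poisoned model accuracy: {poisoned.score(X_te, y_te):.3f}")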


But this isn’t to say that AI has no place in cyber security – far from it, in fact! 

In a recent article, I showed how a generative chatbot, ChatGPT, can be used by cyber criminals to write tailored phishing emails and code malware. While there has been some progress in developing tools that can identify chatbot-generated content and prove it wasn’t written by a person, there is currently no way to do this accurately and reliably. Meanwhile, Verizon’s 2023 Data Breach Investigations Report revealed that pretexting almost doubled year on year, which correlates with a potential increase in the use of chatbots to create text-based attacks. It also correlates with cyber criminals favouring the types of attack they know will succeed, such as text that socially engineers the victim (regardless of whether it was written by a bot or a person).

The arrival of WormGPT and FraudGPT, built as alternatives to ChatGPT, shows malicious actors seizing the opportunity to facilitate cyber crime and scams without the ethical boundaries or limitations of other generative AI tools, making the AI arms race between attackers and defenders all the more important.

Both text-based attacks and those that carry new or emerging malware payloads can bypass traditional perimeter email security, which relies on signature-based and reputation-based detection to catch ‘known’ payloads and flag accounts that fail domain reputation checks. Platform data from Egress shows that half (51%) of phishing emails got through these detection techniques over a two-week period in June 2023.
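At its simplest, signature-based detection reduces to a hash lookup against a definitions library, which is exactly why novel payloads evade it. The Python sketch below is a hypothetical illustration (the hash set is a placeholder, not a real threat feed): any new or modified payload hashes to a value the library has never seen.

import hashlib

# Placeholder definitions library of known-bad SHA-256 hashes
KNOWN_BAD_SHA256 = {
    # SHA-256 of an empty payload, used here purely as a demo entry
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_payload(payload: bytes) -> bool:
    # Flag only payloads whose hash already exists in the library
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_SHA256

print(is_known_payload(b""))             # True: hash is in the library
print(is_known_payload(b"new variant"))  # False: unseen payload slips through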

This is where cyber security professionals need to be leveraging the power of AI to detect these attacks. Natural language processing (NLP) and natural language understanding (NLU) are two of the techniques used by integrated cloud email security (ICES) solutions to detect the social engineering (including pretexting) that forms the foundation of text-based attacks such as business email compromise (BEC) and impersonation attacks. Patterns in malware and phishing links can also be detected without their signatures being present in definitions libraries, using one-to-many detection methodologies supported by AI. Similarly, AI and machine learning models make it possible to aggregate and analyse large datasets from multiple sources as part of an adaptive security model that enables organisations to automate their defences.
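As a toy illustration of the NLP idea, the Python sketch below classifies emails on their wording alone, with no signatures or sender reputation involved. The handful of training examples are invented for demonstration; real ICES products train far richer language models on vastly larger datasets.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented examples: 1 = social engineering, 0 = benign
emails = [
    "Urgent: wire the payment today, the CEO needs this kept quiet",
    "Your invoice is overdue, click here to verify your account now",
    "Attached are the minutes from Tuesday's project meeting",
    "Lunch at 12 tomorrow to go over the quarterly roadmap?",
]
labels = [1, 1, 0, 0]

# Word and bigram features feeding a simple linear classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

test = "The CFO asked me to action an urgent transfer today"
print(model.predict([test]))  # should lean towards the pretexting class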

Ultimately, organisations that don’t leverage AI-based security solutions will inevitably find themselves more vulnerable than those that do. However, cyber security professionals need to be able to unlock the AI black box and effectively evaluate the technology, confirming that AI is genuinely needed for their use case, that they are not creating new attack surfaces in their organisation, and that the AI solution they have selected does the job as advertised.

Jack Chapman is vice president of threat intelligence at Egress.
