
Security Think Tank: AI in cyber needs complex cost/benefit analysis

AI and machine learning techniques are said to hold great promise in security, enabling organisations to operate a predictive IT security stance and automate reactive measures when needed. Is this perception accurate, or is the importance of automation being gravely overestimated?

The importance of automation is not being overestimated, but the capacity of machine learning (ML) and other artificial intelligence (AI) applications to achieve trustworthy automation is. To succeed with AI for automated cyber security, we need to let go of the unrealistic goal of trustworthiness. Use it, but don’t trust it.

The volume of data that could indicate an attack, or be lost as a result of one, requires a level of surveillance beyond what a team of human cyber security experts could achieve. The very definition of threat and anomaly detection (TAD) is a recipe for automation: finding outliers in a dataset, a repetitive task of identifying patterns.

There are clear advantages to using AI to automate certain cyber security tasks. Identification of these patterns can contribute to building predictive models to identify attacks before they occur or to provide decision support while an attack is underway.

These AI applications usually glean their insights from unsupervised learning or neural networks for model development, yielding very promising outcomes. However, they lack the ability to provide transparent explanations for individual decisions.
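As a rough sketch of what this kind of unsupervised outlier detection looks like in practice, the example below trains scikit-learn's IsolationForest on hypothetical network-flow features; the feature choices, values and threshold are illustrative assumptions, not drawn from the article. Note that the model returns only an anomaly score and a label, with no rationale an analyst could inspect.

```python
# Minimal sketch of unsupervised threat and anomaly detection (TAD):
# an Isolation Forest flags outliers in hypothetical network-flow features,
# but returns only an anomaly score, with no human-readable explanation.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical features per flow: bytes sent, bytes received, duration (seconds)
normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                            scale=[1_000, 5_000, 10],
                            size=(1_000, 3))
suspect_traffic = np.array([[250_000, 500, 2],      # large upload, tiny response
                            [4_800, 19_500, 28]])   # looks like ordinary traffic

# Unsupervised model: no labels and no explanations, just a score per sample
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

scores = model.decision_function(suspect_traffic)   # lower = more anomalous
labels = model.predict(suspect_traffic)             # -1 = outlier, 1 = inlier

for features, score, label in zip(suspect_traffic, scores, labels):
    verdict = "ANOMALY" if label == -1 else "normal"
    print(f"flow {features} -> score {score:+.3f} ({verdict})")
```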

AI in cyber security has two main drawbacks

Automation brings its own controversy to cyber security. The way we justify automation inherently requires trust. If the goal is to free up valuable human capacity by taking over repetitive, time-consuming tasks, we fundamentally want to remove humans from those tasks, which means we must trust the system to produce results as good as, or better than, those our team would deliver.

There is a baseline assumption that when we automate, it is because we trust the system to complete the task satisfactorily. Many of the benefits of automation in cyber security come from AI applications, which are chronically incapable of inspiring trust.

Using AI applications for a predictive and automated cyber security stance brings two major challenges to achieving trust, both within the company and among the stakeholders being safeguarded.

First, many of these predictive cyber security implementations rely on unsupervised learning techniques or neural networks, which are currently unable to produce human-readable, localised explanations.

Second, these applications also expand a company’s attack surface by introducing new vulnerabilities that hackers may exploit. Attacks on AI applications take a different form from traditional attacks: instead of stealing a payload, they attempt to change or influence the AI application’s behaviour to the hacker’s advantage. Despite efforts to develop retrospectively applied explanation models, with some success, it is currently impossible to establish a high level of trust when using AI for automated cyber security tactics.
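To make that second point concrete, here is a deliberately simplified, hypothetical illustration of one such influence technique, training-data poisoning by label flipping. The data, model choice and numbers are invented for illustration only and do not describe any real product or incident.

```python
# Toy, hypothetical illustration of training-data poisoning: if an attacker can
# get a handful of malicious training samples relabelled as benign, the
# detector's decision boundary can shift in the attacker's favour.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Synthetic 2-D features: benign traffic clusters low, malicious clusters high
benign = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
malicious = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)          # 0 = benign, 1 = malicious

clean_model = LogisticRegression().fit(X, y)

# Poisoning step: 20 malicious training samples are relabelled as benign
y_poisoned = y.copy()
y_poisoned[200:220] = 0
poisoned_model = LogisticRegression().fit(X, y_poisoned)

# Compare how each detector scores borderline attacker traffic
attack_sample = np.array([[1.6, 1.6]])
print("clean model    P(malicious):", round(clean_model.predict_proba(attack_sample)[0, 1], 3))
print("poisoned model P(malicious):", round(poisoned_model.predict_proba(attack_sample)[0, 1], 3))
```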

AI in cyber security comes at a cost

Taken together, these two aspects mean we cannot manage AI while trust remains in the equation: AI is a near-perfect fit for many cyber security tasks, yet we are currently unable to trust these unfamiliar technologies in an unfamiliar landscape. We need a different management strategy for AI, especially for cyber security. We need to monitor, benchmark, assess, and improve these systems constantly, not trust them.

Because we are not yet at a place where we can reliably trust our AI cyber security tools to provide explanations, or to resist backdoor attacks and data poisoning, we must remain suspicious of the framework used to train the model (including its input data), of the results it produces, and of our measurements of success.

This means that there must be other safeguards in place to monitor our newest AI applications in the security operations centre (SOC). One option out of many is to implement parallel dynamic monitoring of an AI system by launching a clone system in a controlled environment as a means of benchmarking the real system’s performance against one that is protected from model drift and some types of malicious attacks.
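A minimal sketch of how such parallel monitoring might be wired up is shown below, assuming scikit-learn-style detectors that expose a decision_function method; the threshold, function names and alerting logic are illustrative assumptions rather than a prescribed implementation.

```python
# Sketch of "parallel dynamic monitoring": score the same traffic with the
# live detector and a frozen clone kept in a controlled environment, and alert
# if their outputs diverge beyond a tolerance (a possible sign of model drift
# or tampering). Assumes scikit-learn-style models exposing decision_function.
import numpy as np

DIVERGENCE_THRESHOLD = 0.15  # illustrative tolerance; tune per deployment

def divergence(live_scores: np.ndarray, clone_scores: np.ndarray) -> float:
    """Mean absolute difference between the two models' anomaly scores."""
    return float(np.mean(np.abs(live_scores - clone_scores)))

def check_against_clone(live_model, clone_model, batch: np.ndarray) -> bool:
    """Return True if the live model still behaves like its protected clone."""
    gap = divergence(live_model.decision_function(batch),
                     clone_model.decision_function(batch))
    if gap > DIVERGENCE_THRESHOLD:
        print(f"ALERT: live model diverges from clone by {gap:.3f}; investigate before trusting its output")
        return False
    print(f"OK: divergence {gap:.3f} within tolerance")
    return True

# Example usage with hypothetical models and a batch of recent traffic features:
# check_against_clone(live_model, clone_model, latest_batch)
```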

The growing number of options for monitoring systems is promising. But just as the significant progress made in developing explainable models still does not close every gap needed for bulletproof trust, the monitoring strategy we employ to protect our AI applications will still have security and trust gaps. The real cost of using AI must also reflect the cost of monitoring and protecting it.

No AI system can be trusted

Our issue is not that some AI systems can be trusted and others not (in the way that some people accessing a system can be trusted and others not). It is that no AI system can be trusted, nor can any provide adequate means of verifying that it deserves trust, even if it produces satisfactory results.

This applies to relatively straightforward classification tasks because of their sheer volume: it would take far too many resources to verify each decision. It also applies to the harder problem of establishing trust in unsupervised and deep learning models because, as discussed, there are only limited options for providing an explanation.

So, in the end, we are left with an unsatisfying situation: huge potential for AI in cyber security, limited current means to cultivate trust in those AI applications, and potentially high costs to develop and implement the comprehensive monitoring processes needed to operate without trust. While there is undoubtedly potential in automated, AI-based applications in cyber security, the question we are left with is at what cost we are willing to tap that potential.

We recommend that if a company decides to incorporate intelligent automated security features, it must allocate time and resources for lifecycle governance of those features. This could be done in a variety of ways, such as setting up an interdisciplinary team, designating a team member to receive training and stay up to date on developments and challenges surrounding AI explainability, or participating in the public efforts to develop governance frameworks and, eventually, regulation.

These, of course, are only a few of many options to improve the measurement and assessment of AI applications, and should be chosen according to a firm’s commitment to using AI.

For example, if a company is developing an AI system in-house, there should be adequate expertise on its team to set up a sub-team that critically assesses how the system is monitored.

But if a firm is implementing a pre-built AI cyber security application, it may be more appropriate to give extended training to one or two employees. Building scepticism into the monitoring and governance process can allow companies to use artificial intelligence in cyber security without trusting it.

Anne Bailey is an analyst at KuppingerCole. She specialises in emergent technologies such as AI and blockchain, and helps interpret their implications in the wider world.
