The Security Interviews: Applying AI to Lego, and security

Ann Johnson, Microsoft corporate vice-president of cyber security, is on a mission to prove that artificial intelligence holds great promise for the security sector, and she has the analogies to back it up

Imagine, if you will, that your job was not in technology as we know it, but rather that you got paid to build Lego sets all day – one specific Lego set, in fact: Han Solo’s spaceship, the Millennium Falcon. Sure, it might be fun for a few days, but after completing your sixth, seventh or eighth Millennium Falcon, eventually you’re going to get bored out of your skull.

As we snatch 20 minutes to chat during a break at a recent summit on all things digital and cyber, hosted by the Irish government in Dublin, Microsoft’s vice-president of cyber security, Ann Johnson, draws just such an analogy.

“If you think about an SOC [security operations centre] admin – they have a brain that needs to be challenged,” she says. “If I keep handing them a Lego kit that always builds the Millennium Falcon, and that’s the only thing they ever get to do, they’re going to get bored eventually.”

So what does this have to do with cyber security, and artificial intelligence (AI) in particular? Don’t worry, this wasn’t just an excuse to shoehorn in a Star Wars reference. This is going somewhere.

Signal to noise

“We see so much signal, right?” says Johnson. “Too much. And so you need machine learning to rationalise all that signal that we see. But when AI becomes more than machine learning – something that reasons and thinks – it will help our analysts work better.

“We have now learned that analyst plus AI is a much better outcome. It will help them work better, smarter and faster. I talk about the automation of low-level tasks a lot – that’s because I want our humans working on complex stuff for two reasons: number one, they’re better at it; number two, they get bored.”

Let’s pivot back to Lego for a moment. If you give the same Millennium Falcon kits to the AI to build, the human master builders – or security analysts – can go away and challenge themselves with something new. Perhaps the Tower of Orthanc from Lord of the Rings, the Ghostbusters’ headquarters, or the Shield helicarrier from the Marvel Cinematic Universe (other Lego kits are available).

Or as Johnson puts it: “AI actually helps us discern what is a really complex task and what is a really simple task and then, using automated remediation on those simple tasks, we can let the humans work with the AI on the really complex tasks, and that’s the place we want to be, because then you start getting much better outcomes, you get faster outcomes.

“Time to detection is probably the most important thing in cyber security, so the faster we can get the machine learning engine to tell us this is something real, and the faster we can get AI to say, actually, it’s not just real, but it’s important – because that’s the lens AI puts on it – the more valuable that is.”

For example, machine learning can identify a known piece of malware with no problem, but AI adds the ability to take something never seen before – a new strain of malware, say, or a new internet of things (IoT) device connecting to the enterprise network for the first time – understand its behaviour, and classify it as safe or not.

“That’s the difference,” says Johnson. “Machine learning can help us get down to a certain subset, but we need AI to actually give us a little more intelligence.”
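
To make the distinction concrete, here is a minimal sketch of the kind of behavioural check Johnson describes – not Microsoft’s system, which is not public, but a generic anomaly detector built with scikit-learn’s IsolationForest standing in for it. Every feature name and number is invented for illustration: the model learns the normal network behaviour of known devices, then flags a never-before-seen IoT device whose behaviour falls outside that baseline.

```python
# Illustrative sketch only, in the spirit of Johnson's ML-vs-AI
# distinction. Features and thresholds are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Baseline behaviour of known-good devices on the network:
# [connections/min, bytes out/min, distinct destination ports]
rng = np.random.default_rng(seed=42)
baseline = np.column_stack([
    rng.normal(20, 5, 500),       # connections per minute
    rng.normal(4_000, 800, 500),  # bytes out per minute
    rng.integers(1, 5, 500),      # distinct destination ports
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# A new IoT device appears and starts beaconing aggressively.
new_device = np.array([[300, 90_000, 45]])
verdict = model.predict(new_device)  # 1 = looks normal, -1 = anomalous

print("anomalous" if verdict[0] == -1 else "looks normal")
```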

AI in the wild at Microsoft

Obviously, this sort of use case is still some way off, and most of Microsoft’s work in this area today centres on the machine learning element, but the organisation has deployed AI into production to detect previously unknown malware strains, and has seen some success.

It has also been running a pilot programme within its own SOC, using AI to “de-conflict” and prioritise its queues so that analysts don’t have to wade through a hundred run-of-the-mill tickets when they show up to work, but can instead get straight to the small number of incidents that the AI, through its own analysis, has modelled as viable threats.
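
What “de-conflicting” a queue might look like in miniature: the toy sketch below collapses duplicate alerts and surfaces only the incidents that score above a review threshold. The fields, weights and threshold are all invented for the example – Microsoft’s pilot presumably relies on learned models rather than fixed weights.

```python
# Toy sketch of queue "de-confliction": collapse duplicate alerts,
# then surface the few incidents worth an analyst's attention.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    rule: str          # detection rule that fired
    severity: int      # 1 (low) .. 5 (critical)
    confidence: float  # model's belief the alert is real, 0..1

def triage(alerts: list[Alert], threshold: float = 2.5) -> list[Alert]:
    # De-conflict: many alerts for the same host + rule become one.
    deduped = {}
    for a in alerts:
        key = (a.host, a.rule)
        best = deduped.get(key)
        if best is None or a.confidence > best.confidence:
            deduped[key] = a
    # Prioritise: severity-weighted score, highest first.
    scored = sorted(deduped.values(),
                    key=lambda a: a.severity * a.confidence,
                    reverse=True)
    return [a for a in scored if a.severity * a.confidence >= threshold]

queue = [
    Alert("db01", "brute-force", 4, 0.9),
    Alert("db01", "brute-force", 4, 0.7),  # duplicate, dropped
    Alert("ws17", "macro-exec", 5, 0.8),
    Alert("ws03", "port-scan", 2, 0.3),    # noise, filtered out
]
for incident in triage(queue):
    print(incident.host, incident.rule)
```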

Microsoft is also using AI in a security context throughout the wider software development process – it is, after all, a software company before it is a security one – to pore over newly written code and spot potential vulnerabilities before a product is released into customer environments. Could AI spell the end for Patch Tuesday? Maybe.
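
Microsoft has not published how its tooling works, but a crude, rule-based stand-in gives the flavour of scanning newly written code for risky constructs – here, flagging calls to Python’s eval() and exec() by walking the syntax tree. The rule list is invented for the example; real AI-assisted review would be learned and far broader in scope.

```python
# Rule-based stand-in for the kind of pre-release code review
# described above; real tooling would be ML-assisted and broader.
import ast

RISKY_CALLS = {"eval", "exec"}  # classic injection footguns in Python

def find_risky_calls(source: str) -> list[tuple[int, str]]:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

snippet = '''
user_input = input()
result = eval(user_input)  # vulnerability: arbitrary code execution
'''
for line, name in find_risky_calls(snippet):
    print(f"line {line}: call to {name}() flagged for review")
```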

AI and security jobs

Johnson proclaims herself fairly relaxed about the coming together of AI and the security analyst jobs under her purview – fair enough, when you think about it, because if you’re only augmenting the humans, you still need the humans.

“In my lifetime, you’re never going to get to a point where AI or machine learning is going to replace humans in an SOC,” she says. “We find AI plus human is a much better outcome.

“The global incident response team for customers is under my remit at Microsoft, so if a customer is breached, that’s my focus, right? There are things I can’t teach a computer or tool to do, such as the motive of the attacker, or how to follow an evidence trail.

“I can teach it to do computer forensics, but that’s only a piece of the puzzle when you’re trying to track somebody in the wild and figure out who broke into this environment and why.”

Johnson adds: “A lot of cyber is just core police work applied to computers. That’s why there is such a need for humans in cyber security.” She says she hires a lot of ex-law enforcement personnel for exactly that reason.

So, is the goal to get AI to understand what motivates a cyber criminal? Not necessarily. It’s certainly not one of Johnson’s primary goals, not least because we are still some distance from the day when AIs can truly think like people.

Inclusive AI

But would you want them to? At the back of everyone’s mind in any conversation about AI in a law enforcement context are scare stories about its abuse and misuse. In the real world, court battles over the use of automated facial recognition technology are being fought right now, while in fiction, the recent BBC miniseries The Capture shows an all-too-plausible scenario in which machine learning helps to generate deepfake videos that implicate an innocent man in a kidnapping and murder.

It’s something the tech industry needs to be acutely aware of, particularly in a cyber security context because, when dealing with any sort of crime, it is all too easy and all too human to succumb to bias.

If you’re giving an AI the same sort of investigative powers as a law enforcement agency, you had better make sure it’s not engaging in, for example, racial profiling. A quick search online throws up hundreds of news stories about racist AI – and Microsoft itself was implicated in one of them back in 2016.

For Johnson, this speaks to a need to focus on diverse hiring within the tech industry. “I’m just going to say this bluntly,” she says. “As a woman who’s been in tech for 30 years and the parent of a trans daughter, I have a lot of perspective on bias in the world, and here’s what I can tell you: we continue to hire people that all graduate with a PhD and they’re all white, heterosexual males, they all think alike.

“If we aren’t looking for people of colour who graduated from a state college and maybe don’t even have a computer science degree and we’re not willing to train them, we’re always going to have a bias problem.”

In a nutshell, to ensure AIs are not biased, it is crucial that they are developed by a diverse workforce, in teams where individuals are empowered to speak out if they feel something is not quite right.

“We just have to broaden who we want to bring in,” says Johnson. “And we need people with liberal arts backgrounds and social sciences backgrounds if we’re going to program AI, because we have to think about the broad scope of society.”

Over prosecco and canapés at dinner the previous evening, another journalist talked about the value social scientists bring to tech, saying they wanted to see philosophers wandering the corridors of Redmond. I put the idea to Johnson as we close out our time together. “Yeah, they should be,” she says.
