
Not all AI is equal, security channel warned

Vectra AI adds its voice to others that have warned partners to consider the merits of a product using artificial intelligence

Customers are getting used to the idea that artificial intelligence (AI) is being increasingly used in security tools, but there are warnings that not all AI offerings are equal and the channel needs to pick its vendors carefully.

Last month, Brooks Wallace, vice-president of Europe, Middle East and Africa (EMEA) sales at Deep Instinct, which uses deep learning to identify anomalous behaviour and threats, said questions needed to be asked about the technology that was being put in front of users.

“People say, ‘Well I’ve heard it all before, there’s artificial intelligence’, but they don’t understand that there’s artificial intelligence and machine learning, and some of that is deep learning, deeper than machine learning,” he said.

Wallace’s comments have been echoed by David Lomax, SE manager for UK/I/Nordics at Vectra AI, who said AI was important because, without it, analysing gigabytes of security data was extremely difficult.

“You need AI to be able to do that at speed, to detect these things, but not to just detect anomalies because there are lots of players out there that do anomaly detection,” said Lomax.

The problem with using AI as a catch-all was that it generated numerous alerts, leading to alert fatigue and putting users in a weaker position, he said.

“People miss things because there is so much hitting the SIEM that you want something that is using AI to protect against the attackers in real time,” he added.

Lomax said the channel was aware that it had to use AI because there were limits to how successfully threats could be detected without it, but there had to be an awareness that simply putting “AI” on the box did not always mean the same thing.

“Not all AI is created equal,” he said. “Models are complex and a lot of them feed off other bits and algorithms and there are lots of organisations that we have seen come up through the ranks where they profess to have AI, but in reality it is a signature-based product that they have put some machine learning on top of, which doesn’t really get you to where you want to be.

“A lot of organisations have used products that profess to do AI and have integrated them into some of the workflows and they haven’t really delivered on that.”

Lomax added that understanding was increasing and partners understood the value of using a robust AI offering: “I think definitely the channel is adopting.”

Gary Veale, regional director for UK&I at Vectra, said the movement by customers towards the cloud, which had accelerated during the pandemic, meant they had to use more AI security tools.

“The numbers and volumes of data are growing exponentially,” he said, “and that’s when people now look and say: Which AI really works? Which ones are based on the best principles? Which ones will scale?”
