
How ASEAN firms are using AI to combat cyber threats

Artificial intelligence tools are becoming a vital part of the security arsenal for organisations and cyber criminals alike


The vast network of United World College Southeast Asia (UWCSEA) is exposed to risk at every point. With more than 200 applications and thousands of devices in the hands of students, the independent international school in Singapore knew it would be virtually impossible to sift manually through thousands of logs and spot threatening anomalies.

And it is well aware that a breach of its sensitive student and parent data could cause significant reputational damage.

“We needed a tool that could learn and manage our complex network environment and provide visibility of thousands of user devices, in order to stay on top of this rapidly evolving cyber climate,” said Ben Morgan, director of IT at UWCSEA.

A combination of artificial intelligence (AI) algorithms allowed the school to learn the normal internal state of its network and then watch for deviations from that norm. On one occasion, the technology alerted the security team to a PC infected with malware, enabling them to take action to prevent the infection from spreading.
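To illustrate the general idea of learning a baseline and flagging deviations, here is a minimal sketch in Python. The traffic figures, feature choice and threshold are illustrative assumptions for this article, not UWCSEA's or Darktrace's actual model.

```python
# A minimal sketch of baseline-and-deviation detection: learn what "normal"
# looks like for a device, then flag observations that stray too far from it.
from statistics import mean, stdev

# Hypothetical per-device history: bytes sent per hour over the past week
baseline = [120_450, 98_300, 110_200, 105_750, 99_800, 101_100, 97_600]

def is_anomalous(observation: float, history: list, threshold: float = 3.0) -> bool:
    """Flag an observation that deviates from the learned norm by more than
    `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# A sudden spike in outbound traffic, such as malware beaconing or data
# exfiltration, stands out against the baseline and can be escalated for review.
print(is_anomalous(540_000, baseline))  # True: investigate this device
print(is_anomalous(103_000, baseline))  # False: within the normal range
```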

Although no organisation has yet found evidence of a full-blown AI-powered attack, attackers are already using sophisticated techniques that point in that direction, heralding a potential future of machines fighting machines on corporate networks.

An example of such an attack took place at a power and water company, where a malware-infected device took intelligent steps to disguise its activity as legitimate, so it could remain undetected by legacy tools.

Darktrace, a cyber security supplier that specialises in AI, identified the file that had been downloaded onto the device from the Amazon S3 cloud storage service to establish a backdoor to the victim’s network. Although establishing a backdoor entry is a common tactic, the malware also showed signs of blending into the environment so as to not raise any alarm.

In the future, hackers could use AI to carry out advanced cyber attacks with the click of a button or to speed up polymorphic malware, causing it to constantly change its code so that it cannot be identified. Or they could use AI to learn about the victim’s environment, determining how to hide within the normal noise of the network before deciding the next course of action.

Steve Ledzian, vice-president and chief technology officer at FireEye Asia-Pacific, said that besides circumventing security controls, attackers may also deploy AI to target individuals and to improve spear phishing, password cracking and vulnerability discovery.


The future of cyber crime almost certainly lies in AI-driven attacks, said Sean Pea, head of threat analysis at Darktrace Asia-Pacific.

“We already see such sophisticated characteristics in existing malware,” he said. “While we don’t know when AI attacks will come, we do know that it will be a war of machine against machine, algorithm against algorithm. Defensive cyber AI will be our best option to fight back in this new age of cyber warfare.”

Speed is of the essence

With AI increasingly used by attackers and defenders alike, speed is of the essence if businesses’ cyber defence technology is to detect increasingly complex and sophisticated threats.

“The longer a threat actor remains under the radar in a network, the greater the damage and cost to the organisation,” said Sherrel Roche, senior analyst for security services at IDC Asia-Pacific. “Cyber threats have become more frequent, harder to detect, and more complex.”

The use of AI in cyber security enables IT professionals to predict and react to emerging cyber threats more quickly and effectively. Automation enabled by AI can analyse large volumes of data, recognise complex patterns of malicious behaviour and drive rapid detection of incidents and automated response. 
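As a rough sketch of how such automation might look in practice, the example below uses an unsupervised model (scikit-learn’s IsolationForest, chosen here purely as an illustrative stand-in) to score connection records and route anomalous ones to an automated response step. The feature names, synthetic data and quarantine action are assumptions for the example, not any vendor’s product.

```python
# Score large volumes of connection records and trigger an automated response
# for the outliers, so analysts only review the events that matter.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_out, connections_per_min, distinct_destinations]
normal_traffic = np.random.default_rng(0).normal(
    loc=[50_000, 20, 5], scale=[5_000, 3, 1], size=(1_000, 3)
)

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_events = np.array([
    [52_000, 21, 5],      # looks like business as usual
    [900_000, 400, 120],  # heavy fan-out: possible scanning or exfiltration
])

for event, label in zip(new_events, model.predict(new_events)):
    if label == -1:  # IsolationForest marks outliers as -1
        print(f"ALERT: anomalous event {event.tolist()} - quarantine host, open incident")
    else:
        print(f"OK: {event.tolist()}")
```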

James Woo, CIO at Farrer Park Company, said the use of AI has alerted the Singapore-based medical and hospitality provider to security events that require further investigation.

“This allows us to focus our limited cyber security resources on handling the abnormal events,” he said. “Darktrace’s Enterprise Immune System gives us full visibility across our entire network, and enables us to detect subtle insider threats and emerging attacks on connected objects, including medical devices.”

Woo expects future cyber attacks to be more dynamic and less structured, so detection and behaviour learning are important to prevent an attack from happening.


Roche noted that for an AI algorithm to perform well, it needs to retrieve the right data, spot the right patterns, correlate the activity, classify the behaviour based on outcomes, and identify outliers or anomalies.

“If trained poorly, it will make inaccurate predictions,” she said. “Such models are only as good as the data that is fed in. AI needs human interaction, or ‘training’ in AI-speak, to continue to learn and improve, correcting for false positives and cyber criminal innovations.”
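A minimal sketch of that human feedback loop, under the assumption of a simple scikit-learn classifier and synthetic features, might look like this: analysts label the alerts the model raises, and the corrections are folded back into the training set so the next model version produces fewer false positives.

```python
# Retrain a detection model with analyst-labelled corrections folded in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Initial training data: [failed_logins_per_hour, off_hours_activity_score]
X_train = rng.normal(loc=[2, 0.1], scale=[1, 0.05], size=(500, 2))
y_train = np.zeros(500)  # benign activity
X_train = np.vstack([X_train, rng.normal(loc=[40, 0.9], scale=[5, 0.05], size=(20, 2))])
y_train = np.concatenate([y_train, np.ones(20)])  # known-malicious examples

model = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_train, y_train)

# Analyst feedback: alerts the model raised that humans marked as false positives
analyst_labelled_X = np.array([[12, 0.2], [15, 0.15]])  # noisy but benign admin accounts
analyst_labelled_y = np.array([0, 0])

# Fold the corrections back in and retrain, so the model keeps learning
X_train = np.vstack([X_train, analyst_labelled_X])
y_train = np.concatenate([y_train, analyst_labelled_y])
model = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_train, y_train)
```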

Today, most AI-related activity in the security market is around machine learning and deep learning.

“Implementing a particular machine learning or deep learning algorithm on a given dataset is not, in itself, difficult,” said Sid Deshpande, senior director analyst at Gartner. “What is difficult is for security vendors to use these techniques and apply them to real-world security problems.”

Some of the early areas where AI has been applied in security include user and entity behaviour analytics, threat detection, malware classification, endpoint security, vulnerability management, and speeding up security processes such as incident response.
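Malware classification is one of the more concrete of those use cases. The sketch below shows the shape of the technique with a random forest over static file features; the features (file size, byte entropy, imported-function count) and the synthetic samples are illustrative assumptions, not a production feature set.

```python
# Train a classifier to separate benign from malicious files using static features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Features: [file_size_kb, byte_entropy, num_imported_functions]
benign = rng.normal(loc=[300, 5.0, 120], scale=[80, 0.4, 30], size=(400, 3))
malicious = rng.normal(loc=[150, 7.4, 15], scale=[60, 0.3, 10], size=(400, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 400 + [1] * 400)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=7)
clf = RandomForestClassifier(n_estimators=100, random_state=7).fit(X_train, y_train)
print(f"hold-out accuracy: {clf.score(X_test, y_test):.2f}")
```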

Gartner expects machine learning to become a normal part of security strategies by 2025, especially for areas such as decision support.

Businesses beware

But businesses need to be wary of the hype surrounding artificial intelligence in security, said Deshpande. “There is a major disconnect between customer expectations, which are based on vendor messaging, and actual value delivered by security providers,” he said.

In fact, the use of AI-related techniques does not automatically mean the new approach is better than existing ones. For example, if a new machine learning-based approach generates more false positives than the older method, or takes a considerable amount of time for tuning, then it can be counterproductive.

Another example would be if a new machine learning algorithm successfully differentiates between all known malicious and benign files, but is unable to detect new malware.
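The pitfall is easy to see in a deliberately simplified sketch: a lookup of known-bad file hashes “classifies” every previously seen sample perfectly yet misses anything new, which is why detection has to generalise from features or behaviour rather than memorise past samples. The hashes below are made up.

```python
# A signature-style lookup: perfect on known samples, blind to new malware.
known_malicious_hashes = {"a3f1...", "9bd0...", "77c2..."}  # training-time signatures

def signature_detect(file_hash: str) -> bool:
    # Perfect recall on previously seen samples, no ability to flag novel variants
    return file_hash in known_malicious_hashes

print(signature_detect("a3f1..."))  # True: seen before, correctly flagged
print(signature_detect("e50b..."))  # False: brand-new variant slips through
```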

But across the Asia-Pacific region, where cyber defence capabilities continue to lag behind the rest of the world, AI is no panacea for cyber security.

“Organisations in the Asia-Pacific region take almost three times as long as the rest of the world to realise that an attacker has successfully broken into their network – about 204 days,” said FireEye’s Ledzian. “It is still the only part of the world that relies on external third parties to tell them they’ve had a breach more often than figuring it out for themselves.  

“While AI is proven to be a useful tool in combating cyber threats, today’s threat landscape is such that there is no single technological answer to cyber attacks. If you’re looking for one, your approach to cyber security is strategically flawed.”
