
How the biggest cyber security disasters could have been avoided

The headline-grabbing breaches that hit Accenture and Equifax in 2017 could have been averted had basic cyber hygiene been in place


Some 60% of Australian businesses experienced at least one security breach a month in 2016, compared with just 23.7% in 2015, according to a Telstra report.

Even the largest global enterprises are facing cyber security challenges. In the past two years alone, some of the best-known brands have been hit by data breaches and security failures.

So, what – if anything – could these companies have done to stop the disaster before it even happened? We asked global cyber security experts to tackle three headline-hitting incidents.

Sensitive data made public

In September 2017, it was revealed that Accenture had left four cloud-based storage servers unsecured and publicly downloadable.

The Amazon Web Services (AWS) S3 storage buckets were configured for public access rather than private access. This meant the content could be downloaded by anyone who entered the web address of the buckets in their browser. The servers contained highly sensitive data about Accenture’s cloud platform and its inner workings, client information and 40,000 plain text passwords.

“With more companies adopting public cloud, issues like this are now reported on a regular basis,” said Dmitry Kulshitsky, a security engineer. “Accenture is certainly not unique in this regard. Engineers are used to the ‘cosy’ datacentre model, where there are multiple layers of defence, usually managed by different teams. Public cloud changes all of that. In the software-defined world you are one click away from exposing your internal infrastructure to the rest of the world.”

Despite the rising number of reported incidents, there are several ways businesses can avoid similar data breaches. They include internal coordination, ensuring IT departments have the right skills to operate public cloud environments, and configuring S3 buckets correctly.
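
None of this requires exotic tooling. As a rough illustration, the short audit script below, a sketch using AWS’s boto3 library, flags any bucket whose access control list (ACL) grants access to the world. It assumes credentials are already configured and inspects only bucket ACLs, not bucket policies, so it is a starting point rather than a complete audit.

```python
# Sketch: flag S3 buckets whose ACLs grant access to everyone.
# Assumes AWS credentials are already configured. Checks ACL grants
# only, not bucket policies, so treat it as a starting point.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    grants = s3.get_bucket_acl(Bucket=name)["Grants"]
    public = [g["Permission"] for g in grants
              if g["Grantee"].get("URI") in PUBLIC_GROUPS]
    if public:
        print(f"PUBLIC: {name} grants {public} to the world")
```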

According to Yun Zhi Lin, vice-president of engineering in the Asia-Pacific region at global consultancy Contino, Accenture could have used more suitable services for storing sensitive data, such as AWS Parameter Store, AWS Key Management Service (KMS) or HashiCorp Vault; encrypted all sensitive data using server-side encryption; and ensured all S3 buckets were private and accessible only by virtual private cloud (VPC) resources via VPC endpoints.
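
As a hedged sketch of what those three measures look like in practice, the boto3 snippet below walks through them in turn. The parameter path, bucket name, KMS key alias and VPC endpoint ID are all placeholder assumptions, not details from the Accenture incident.

```python
# Sketch of the three measures above using boto3. Every name and ID
# here (parameter path, bucket, key alias, vpce-...) is a placeholder.
import json

import boto3

ssm = boto3.client("ssm")
s3 = boto3.client("s3")

# 1. Keep secrets in Parameter Store as KMS-encrypted SecureStrings,
#    rather than in plain text files on S3.
ssm.put_parameter(
    Name="/platform/db/password",
    Value="example-secret",
    Type="SecureString",
    KeyId="alias/platform-key",
    Overwrite=True,
)

# 2. Server-side encrypt anything that must live in S3.
s3.put_object(
    Bucket="example-internal-bucket",
    Key="config/app.yml",
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="alias/platform-key",
)

# 3. Deny all access to the bucket except via a known VPC endpoint.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideVpcEndpoint",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-internal-bucket",
            "arn:aws:s3:::example-internal-bucket/*",
        ],
        "Condition": {
            "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
        },
    }],
}
s3.put_bucket_policy(
    Bucket="example-internal-bucket",
    Policy=json.dumps(policy),
)
```

One caveat on the final step: a blanket Deny statement like this locks out administrators too, so real-world policies usually carve out exceptions for trusted roles.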

“Plus, if the team hardened a multi-tenant cloud platform and segregated each environment into its own AWS account, they could have limited the blast radius so that one environment breach would not affect others,” he added.
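
The account-per-environment pattern Lin describes can itself be automated. The sketch below uses the AWS Organizations API to stamp out one account per environment; the email addresses and account names are placeholders.

```python
# Sketch: one AWS account per environment via the Organizations API,
# so a breach of one environment cannot spread to the others. The
# email addresses and account names are placeholders.
import boto3

org = boto3.client("organizations")

for env in ("dev", "staging", "prod"):
    response = org.create_account(
        Email=f"aws-platform-{env}@example.com",
        AccountName=f"platform-{env}",
    )
    state = response["CreateAccountStatus"]["State"]
    print(f"{env}: account creation state is {state}")
```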

Lin noted that such preventative measures ought to be “assumed knowledge” of any entry-level architect or associate-level AWS certification holder, and that Accenture, being a Premier AWS partner, would have a number of such certified people.

“Ultimately, the vital lesson is that companies should form their own cloud strategy, with the help of trusted consultants if necessary,” he said, noting that this would enable organisations to build capabilities and take ownership of their data in the cloud as opposed to “blindly trusting third-party, non-transparent and insecure platforms built by under-qualified external teams”.

When patch management lapses

The recent Equifax security breach resulted in the leak of personal data of an estimated 143 million Americans, leaving them vulnerable to identity theft and other fraud. The stolen information included names, addresses, social security numbers and, in some instances, credit card details.

The incident took place because of a known flaw in an open source component, the Apache Struts web framework, that the organisation was aware of but had failed to patch, test and deploy a fix for. The magnitude of the disaster resulted in both the CIO and chief security officer “retiring” from the company.

Equifax had approximately two months to fix the vulnerability to prevent the data breach. While two months might sound sufficient, Kulshitsky said the reality is that many organisations struggle to identify and fix newly discovered vulnerabilities in a timely manner.

In this particular case, Kulshitsky said Equifax engineers would have had to scan all of their web applications to find those affected by the vulnerability, and then perform rigorous testing, which might take weeks, before software fixes could be released to production.
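
To give a flavour of that discovery step, the sketch below sweeps a list of internal applications for tell-tale version banners. The hosts, header name and threshold version are all hypothetical, and a real scanner would rely on vulnerability-specific probes, since well-configured servers suppress version headers.

```python
# Illustrative sketch only: sweep a list of internal web apps for
# version banners inside a known-vulnerable range. The hosts, header
# name and threshold version are hypothetical, and real scanners use
# vulnerability-specific probes because hardened servers hide banners.
import requests
from packaging.version import InvalidVersion, Version

HOSTS = ["https://app1.internal.example", "https://app2.internal.example"]
FIXED_IN = Version("2.3.32")  # hypothetical first non-vulnerable release

for host in HOSTS:
    try:
        resp = requests.get(host, timeout=5)
    except requests.RequestException as exc:
        print(f"{host}: unreachable ({exc})")
        continue
    banner = resp.headers.get("X-Framework-Version")  # hypothetical header
    if not banner:
        print(f"{host}: no version banner, needs a deeper probe")
        continue
    try:
        if Version(banner) < FIXED_IN:
            print(f"{host}: version {banner} is in the vulnerable range")
    except InvalidVersion:
        print(f"{host}: unparseable banner {banner!r}")
```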

“Updating critical applications might take a significant amount of time (measured in months), especially for risk-averse organisations. The change management process might delay this too, if the criticality of the security issue is assessed incorrectly, such as scheduling updates to the next patch cycle instead of initiating immediate patching. Many organisations just cannot afford the risk of applying fixes the day they are released by the vendors,” he said.

So, one reason this issue took so long to resolve may be down to the enterprise’s lack of agility. If deploying updates and new code is a lengthy process, it is vital that every change to the code, however slight, is checked and double-checked. Without this, it is difficult for a company to identify where a problem lies.
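
Automation is what makes that checking sustainable. As one possible approach, the sketch below shows a continuous integration gate for a Python codebase using the open source pip-audit tool (an assumption on our part; any software composition analysis scanner plays the same role), failing the build whenever a pinned dependency carries a published vulnerability.

```python
# Sketch of a CI gate: fail the build when any pinned dependency has
# a published vulnerability. Assumes a Python project with a
# requirements.txt and the open source pip-audit tool on the PATH;
# other ecosystems would swap in their own scanner.
import subprocess
import sys

result = subprocess.run(
    ["pip-audit", "--requirement", "requirements.txt"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# pip-audit exits non-zero when it finds known-vulnerable packages.
if result.returncode != 0:
    print("Known-vulnerable dependencies found; failing the build.")
    sys.exit(1)
```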

Emre Erkunt, senior DevSecOps consultant at Contino, said Equifax could have averted the data breach if DevOps practices had been in place.

“DevOps embedded itself in the market precisely because of these kinds of problems. Legacy enterprises, where ‘power’ equals ‘head count’, were only really strong when everything was done by human beings. Today, human beings build systems, validations, tests and more tests to automate as many human-based processes as possible. Strong entities of the past era must adopt both culture change and new technologies to keep pace,” he said.

Cameron Townshend, solution architect at DevOps tools supplier Sonatype, added that with 80% to 90% of every modern application made up of open source components, organisations must automatically and continuously govern the quality of those components and third-party libraries in their software supply chains.

“To ignore this problem any longer is simply negligent,” Townshend said, pointing out that top-performing development teams not only automate open source hygiene practices within the software development lifecycle, they also do so in production applications by fixing software defects at the time of disclosure.

“For far too long, businesses have relied on network-based cyber security tools to defend the perimeter of the organisation. Recent events at Equifax serve as a stark reminder that perimeter defences by themselves are insufficient to protect critical data when hackers are increasingly attacking vulnerabilities that exist in the application layer.”
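
Townshend’s point about continuously governing components can be made concrete. The sketch below inventories the open source packages inside a running Python service and flags any that match an advisory; the advisory list is hard-coded and hypothetical, whereas a real pipeline would pull from a live feed such as the OSV database.

```python
# Sketch: inventory the open source components inside a running Python
# service and flag any that match an advisory. The advisory list is
# hard-coded and hypothetical; a real pipeline would pull from a live
# feed such as the OSV database.
from importlib.metadata import distributions

from packaging.specifiers import SpecifierSet

# Hypothetical advisories: package name -> vulnerable version range.
ADVISORIES = {
    "examplelib": SpecifierSet("<1.4.2"),
    "otherlib": SpecifierSet(">=2.0,<2.0.7"),
}

for dist in distributions():
    name = (dist.metadata["Name"] or "").lower()
    vulnerable_range = ADVISORIES.get(name)
    if vulnerable_range and dist.version in vulnerable_range:
        print(f"{name} {dist.version} matches a published advisory")
```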

The IoT risk

In October 2016, a major distributed denial of service (DDoS) attack affected a number of online platforms and websites, including Twitter, Netflix, Reddit and CNN.

Those hit were customers of Dyn, a major provider of managed domain name system (DNS) infrastructure. The attack was carried out by a botnet of some 100,000 internet of things (IoT) devices infected with the Mirai malware, which bombarded Dyn’s servers with traffic until they crashed under the strain. The attack was significant because of its staggering size, measured at close to 1Tbps at one point, and it disrupted access to large swathes of the internet in the US.

“1Tbps is a huge amount of traffic and it’s unlikely that many companies on the internet would have that much available bandwidth,” said security engineer Kulshitsky. “The solution is a combination of the cloud-based scrubbing centre and on-premise or datacentre-based solutions to protect the infrastructure on multiple levels.”

The cloud scrubbing component is essential to keep huge volumes of malicious traffic as far away from datacentre pipes as possible. Scrubbing centres invest heavily in network connectivity, with bandwidth measured in terabits per second, which makes them better prepared to process and filter out attacks at that scale. Volumetric attacks can be mitigated at that level.

“Furthermore, companies that rely on third-party services such as Dyn should start using multiple DNS providers. I often see a DNS zone containing only two name servers from one DNS provider,” said Kulshitsky.

“But nothing is stopping these companies from adding more DNS providers and utilising their DNS servers as additional secondary DNS servers to increase the resilience of their DNS infrastructure.”
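
Checking where a zone stands takes only a few lines. The sketch below uses the dnspython library (an assumption; the domain is a placeholder) and treats each name server’s parent domain as a crude proxy for its provider.

```python
# Sketch: warn when a zone's NS records all point at one provider.
# Uses the dnspython library; the domain is a placeholder, and taking
# each name server's parent domain is only a rough proxy for provider.
import dns.resolver

DOMAIN = "example.com"

answers = dns.resolver.resolve(DOMAIN, "NS")
nameservers = [str(rr.target).rstrip(".") for rr in answers]

# ns1.p23.dynect.net -> dynect.net, a crude provider key.
providers = {".".join(ns.split(".")[-2:]) for ns in nameservers}

print(f"{DOMAIN} name servers: {nameservers}")
if len(providers) < 2:
    print("All name servers sit with a single provider; adding a "
          "secondary DNS provider would improve resilience.")
```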

Read more about cyber security in APAC

  • Amid growing cyber threats, Australia’s cyber security centre calls for businesses to be more open about cyber incidents and plug potential loopholes in their supply chains.
  • Telcos such as Telstra and industry associations in Australia are chipping in to help enterprises that are being targeted by cyber criminals with phishing and social engineering exploits.
  • Darktrace’s Asia-Pacific managing director, Sanjay Aurora, offers insights on what organisations can do to reverse the odds against them in combatting cyber threats.
  • Coordination is vital to ensure that ASEAN’s cyber security efforts are focused, effective and in synergy with one another.
