Datacentre lessons learnt from Heartbleed bug

The Heartbleed bug, an OpenSSL flaw affecting millions of websites, has some lessons for datacentre providers and operators


The Heartbleed bug, an OpenSSL cryptographic library flaw that allows attackers to steal sensitive information from remote servers and devices, affected nearly two-thirds of websites. 

Ever since the bug was made public, hardware, software and internet service providers have moved quickly to apply patches and advise customers to change passwords. 

But what datacentre lessons can be learnt from Heartbleed?

Heartbleed was introduced into the OpenSSL code in December 2011, but the bug was only made public on 8 April 2014, after researchers at Google and Finnish security firm Codenomicon discovered that a missing bounds check in OpenSSL’s TLS heartbeat extension let attackers repeatedly read up to 64KB of memory from systems running vulnerable versions of OpenSSL – memory that can contain unencrypted data such as passwords, session cookies and private keys.
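
The affected releases were OpenSSL 1.0.1 through 1.0.1f, with the fix arriving in 1.0.1g; the older 0.9.8 and 1.0.0 branches were not vulnerable. As a rough first-pass triage only – not proof either way, since some Linux distributions backported the fix without changing the version string – a check along the lines of the Python sketch below can flag version banners in the affected range.

# Minimal triage sketch: flag OpenSSL version banners in the range
# affected by Heartbleed (1.0.1 up to and including 1.0.1f; 1.0.1g is fixed).
# Caveat: some distributions backport the fix without bumping the version
# string, so a match means "investigate further", not "definitely exposed".
import re
import subprocess

# Matches 1.0.1 and 1.0.1a..1.0.1f (including suffixed builds such as
# 1.0.1e-fips), but not 1.0.1g or later.
VULNERABLE = re.compile(r"^OpenSSL 1\.0\.1(?:[a-f]|(?![a-z]))")

def openssl_banner() -> str:
    """Return the local `openssl version` output, e.g. 'OpenSSL 1.0.1f 6 Jan 2014'."""
    return subprocess.check_output(["openssl", "version"], text=True).strip()

def looks_vulnerable(banner: str) -> bool:
    """True if the banner falls in the affected 1.0.1 to 1.0.1f range."""
    return bool(VULNERABLE.match(banner))

if __name__ == "__main__":
    banner = openssl_banner()
    if looks_vulnerable(banner):
        print(f"{banner}: possibly vulnerable – patch, reissue keys and certificates, then advise password changes")
    else:
        print(f"{banner}: not in the affected range")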

The bad news with the Heartbleed bug is that there is no data on the server that can be used to determine whether or not you have been compromised, said Erik Heidt, Gartner research director. This means the response has to be fast, holistic and strategic.

“Organisations that just apply the patch and do not take other remedial actions will regret it later,” Heidt warned. “Applying patches and changing passwords does not mean victory. A patch is just like a Band-Aid – it does not cure the sore.”

Read more on Heartbleed

  • Heartbleed repairs threaten to cripple the internet
  • Mumsnet becomes first known UK victim of Heartbleed bug
  • Canada Revenue Agency reports Heartbleed data theft
  • Heartbleed denial reveals loophole for NSA spying
  • Cisco and Juniper warn of products hit by Heartbleed bug
  • The Heartbleed genie is out of the bottle – now what?
  • EFF calls for rapid mitigation of Heartbleed internet bug
  • OpenSSL vulnerability 'Heartbleed' may have exposed encrypted traffic
  • OpenSSL security flaw could affect millions of websites, warn researchers

Application automation, datacentre orchestration and access management

One important lesson datacentre professionals can take from the Heartbleed incident is the value of application automation in the datacentre.

Application automation allows a faster, more consistent response to security flaws across servers, the Gartner analyst said. A datacentre can be home to thousands of web servers, and updating them automatically is quicker and less error-prone than patching each one by hand.
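
As a purely illustrative sketch – not any particular supplier’s tooling – the Python example below shows the shape of that automation: fanning a version check out to a list of hosts over SSH so vulnerable servers can be identified in a single pass. The hostnames are hypothetical, and it assumes key-based SSH access with the openssl binary on each host’s path; in practice the same job, and the subsequent patching, would typically be driven by the orchestration or configuration management tools already in place.

# Illustrative only: survey a (hypothetical) list of hosts for their
# OpenSSL version over SSH, so vulnerable servers can be listed in one
# pass instead of being logged into by hand.
# Assumes non-interactive, key-based SSH access and `openssl` on each host's PATH.
import subprocess

HOSTS = ["web01.example.com", "web02.example.com", "web03.example.com"]  # hypothetical

def remote_openssl_version(host: str) -> str:
    """Run `openssl version` on a remote host via SSH and return its banner (or the error)."""
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", host, "openssl version"],
        capture_output=True, text=True, timeout=15,
    )
    return result.stdout.strip() or f"error: {result.stderr.strip()}"

if __name__ == "__main__":
    for host in HOSTS:
        print(f"{host}: {remote_openssl_version(host)}")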

“Having a good privilege access management strategy and datacentre orchestration are other ways datacentre professionals can respond better to such crises,” Heidt added.

Such an unprecedented security breach requires holistic action. IT professionals must have good relationships with technical experts inside and outside the company to solve the problem, he further advised.   

Companies that had provisioned for datacentre orchestration and centralised server management, and kept their management tools up to date, were able to respond quickly to the Heartbleed bug crisis.

Datacentre disaster recovery strategy

While Heartbleed offered few new lessons at a purely technical level, it did show how datacentre owners should react when such news breaks, some experts have said.

Another important lesson for datacentre managers is that open source software isn’t necessarily risk-free.

“Any datacentre operator should have been able to provide cool, calm advice to its customers, and should have had the tools in place to rapidly and effectively patch OpenSSL to get rid of the problem – and then advise customers to change their passwords,” said datacentre expert and Quocirca director Clive Longbottom.

"There was far too much FUD [fear, uncertainty and doubt] around this – too much ‘advice’ to change all passwords now –  which only makes the problem worse, as the changed password could be compromised,” he added.

Server virtualisation provider VMware, which has nearly 500,000 customers, started issuing Heartbleed patches this week. As many as 27 VMware products were affected by Heartbleed. 

“Throughout the week commencing 14 April, VMware will be releasing product updates that address the OpenSSL Heartbleed issue. VMware expects to have updated products and patches for all affected products by 19 April,” its security announcement email to users read. 

But some VMware users took to Twitter to complain that the provider’s security patches were slow to arrive.

Each operator should have been able to rapidly evaluate the scale of the issue and advise accordingly, experts said.

Such disaster recovery strategies and processes should already have been in place; datacentre professionals should only have needed to scale them up to respond to the Heartbleed incident, not modify them or devise a new strategy after the event, added Heidt.

Ethical hacking tests

A well-run professional datacentre should have consultancy services available to help its customers test their systems in advance, and it should implement training for staff to make them aware of information security threats, according to London-based datacentre provider City Lifeline.

An example is “penetration testing”, otherwise known as “ethical hacking”, where a benign expert attempts to evade the security precautions taken by the target company and gain access to confidential information. The expert then reports back to the company on how far the attempt succeeded, with recommendations for improvements, said Roger Keenan, City Lifeline’s managing director.

“Although on this occasion the process would not have identified Heartbleed, it gives datacentre users confidence that the provider has identified and mitigated many other, more common and better-known threats,” he said.
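
Alongside that kind of formal testing, providers can automate routine, benign checks of customer-facing endpoints. The Python sketch below is one illustration of the idea only: it opens a verified TLS connection to a hypothetical host and reports the negotiated protocol version and certificate expiry – the sort of check that is easy to schedule across an estate – and it deliberately stops short of probing for Heartbleed itself.

# A benign, routine endpoint check (not a Heartbleed detector): open a
# verified TLS connection and report the negotiated protocol version and
# the certificate's expiry date. The hostname below is hypothetical.
import socket
import ssl

def inspect_tls(host: str, port: int = 443) -> dict:
    """Return the negotiated TLS version and certificate expiry for a host."""
    context = ssl.create_default_context()  # verifies the certificate chain and hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
            return {
                "protocol": tls.version(),        # e.g. 'TLSv1.2'
                "expires": cert.get("notAfter"),  # e.g. 'Jun  1 12:00:00 2025 GMT'
            }

if __name__ == "__main__":
    print(inspect_tls("www.example.com"))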

Managing customer service expectations amid the crisis

For datacentre operators, the big issues are how they manage customer service and how appropriately they deal with the OpenSSL vulnerability.

“If an operator was affected and believed customers’ passwords had been put at risk, they have to clearly state that they will fix the problem and when users must change their passwords,” said Andrew Kellett, principal analyst, infrastructure and software, at Ovum. Such communication was not very clear this time around, he said.

“Some operators and big tech giants reassured customers that they were not at risk, but it was not clear whether there had been a breach that was then fixed, or whether their servers were not affected at all,” said Kellett.

“A holding page on their website could explain what it means to customers and what steps the operator is taking,” added Longbottom. If the operator is dealing with highly sensitive data, then it should suspend logins and deal with each customer separately, experts advised.

There will always be another Heartbleed, and it is likely that the Googles and Amazons will handle the problem very efficiently. It is the smaller and medium-sized datacentre providers that may take longer to respond, Kellett said.

His advice to CIOs: “Look at your service level agreements, see what they say about security, and check with your datacentre providers that, if they had the problem, they have dealt with it.”
