Backup in the age of cloud
The 3-2-1 backup strategy remains relevant in the age of cloud, giving rise to hybrid approaches that combine on-premise and cloud backups to minimise data loss, reduce downtime and improve redundancy
First mooted by US photographer Peter Krogh in a book about digital asset management for photographers, the 3-2-1 backup rule has been seminal in guiding organisations in their backup strategy.
This involves making three copies of data, keeping the copies on two different types of storage media, and storing at least one of the backups at an offsite location. The goal is to ensure data availability, protect against data loss and enable quick recovery in the event of data corruption, hardware failure or other unforeseen circumstances such as natural disasters.
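By way of illustration, the short sketch below checks a hypothetical backup inventory against the rule. The data structure and copy names are invented for this example and do not reflect any particular backup product.

```python
# Minimal sketch of a 3-2-1 compliance check over a hypothetical backup inventory.
# The BackupCopy fields and the inventory below are illustrative, not a real API.
from dataclasses import dataclass

@dataclass
class BackupCopy:
    name: str
    media_type: str   # e.g. "disk", "tape", "object-storage"
    offsite: bool

def meets_3_2_1(copies: list[BackupCopy]) -> bool:
    """True if there are 3+ copies, on 2+ media types, with 1+ copy held offsite."""
    return (
        len(copies) >= 3
        and len({c.media_type for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

inventory = [
    BackupCopy("primary datacentre backup", "disk", offsite=False),
    BackupCopy("secondary backup appliance", "tape", offsite=False),
    BackupCopy("cloud object-storage copy", "object-storage", offsite=True),
]
print(meets_3_2_1(inventory))  # True
```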
Beni Sia, Asia-Pacific and Japan leader at Veeam, notes that the 3-2-1 strategy has proven to be a valuable framework for assessing data risk exposure.
While it originated at a time when 30GB hard drives and CD backups were prevalent, it has adapted to the present era of 18TB drives and widespread cloud storage. The strategy’s simplicity and effectiveness in safeguarding valuable information, Sia says, have contributed to its popularity among data protection experts.
Many enterprises today have embraced the 3-2-1 concept, with primary backups stored in a datacentre for quick recovery, and a second copy kept on a different infrastructure to avoid a single point of failure, says Daniel Tan, head of solution engineering for ASEAN, Japan, Korea and Greater China at Commvault.
“In addition, the same data could be uploaded to an offsite cloud on a regular basis as the third online copy, which can be switched offline if required, to provide an air gap that effectively protects data from being destroyed, accessed, or manipulated in the event of a cyber security attack or system failure.”
Indeed, the cloud, with its geographical and zone redundancy, flexibility, ease of use, and scalability, is an increasingly important part of an organisation’s 3-2-1 backup strategy, which remains relevant today.
“Cloud provides organisations with so many options in their implementation of this strategy and enables the choice of other even more robust iterations easily,” says Matt Swinbourne, chief technology officer for cloud architecture at NetApp Asia-Pacific.
“For example, using two different clouds as well as a datacentre to increase the geographical separation will allow the separation of administrative control domains, as each cloud could use different access control planes. There can also be potentially multiple levels of administrative authorisation to further protect organisations against cyber attacks,” he adds.
Veeam’s Sia notes that when organisations adopt a cloud service, they entrust their data to the cloud provider's infrastructure. But while hyperscalers ensure high availability within their architecture, organisations have limited control over the underlying storage mechanisms.
“By implementing the 3-2-1 backup strategy, organisations can retain control over two independent copies of their data stored in different storage media, including on-premise or alternate cloud providers. This redundancy protects against the risks associated with relying solely on a single storage architecture,” he says.
Furthermore, although cloud providers offer service-level agreements (SLAs) that specify the level of availability, durability and integrity of the organisation’s data, they have limitations and cannot always prevent data loss or meet every specific requirement an organisation may have, says Commvault’s Tan. “By implementing the 3-2-1 strategy, companies can reduce dependence on a single SLA and gain an additional layer of control over their backups.”
Tan says the ever-evolving nature of ransomware threats also makes the 3-2-1 strategy more relevant than ever before, noting, though, that this strategy should be enhanced.
“In addition to maintaining independent backup copies stored in a different storage media, offsite backups must be air-gapped, which isolates and segments secondary or tertiary backup copies and makes them inaccessible from the public portions of the environment. This allows IT teams to create an extra layer of protection, reducing the impact of such attacks and enabling faster recovery,” he adds.
Despite the benefits of cloud backups – particularly those that use object storage which is potentially cheaper and does not incur capital expenditure – on-premise backups are the only viable option for many enterprises to meet their own backup and recovery SLAs, says Timothy Chien, senior director for product management at Oracle.
“In particular, for large data volumes, recovering from public cloud storage makes it more challenging to meet business recovery time objectives [RTOs]. Secondly, enterprises may have to meet data residency and regulatory requirements where production and backup data must stay on-premise in their geographic location.”
However, some organisations may find on-premise backups compare unfavourably with cloud backups, as in-house systems may be more susceptible to data loss from natural disasters or human error. With large up-front and maintenance costs, on-premise systems can also be more costly than cloud-based offerings, says Veeam’s Sia.
In finding a middle ground, Sia notes that more organisations are adopting hybrid backup approaches for the added security of their data. “As companies deal with increasingly complex IT environments and different modern workloads, a hybrid backup solution that includes both hardware and cloud storage can go a long way toward mitigating the issue of failed backups.”
Cloud backup capabilities such as point-in-time recovery and object versioning offer good protection against accidental deletion, human error, software bugs and other unexpected issues, making them highly complementary in backup strategies, says Commvault’s Tan.
“Point-in-time recovery allows users to restore data to a specific moment in time, as it captures incremental changes or snapshots of your data over time. This minimises data loss and reduces downtime, while providing more granularity in data recovery. This approach is significantly better compared to traditional backups, where periodic snapshots are taken at specific intervals (such as weekly), and the recovery process involves restoring the entire backup set.
“Versioning allows a similar granularity in the recovery of files from the cloud. Each time a change is made to an object, instead of overwriting the existing version, the system stores the new version while preserving older ones – ensuring a complete historical record of changes made to the data.
“While these features do not fundamentally change the way backups are done for organisations, they provide extensive benefits in terms of data protection against unforeseen errors or issues, and the flexibility to restore systems or objects to a specific state. And, compared to traditional backup systems, this process is less time-consuming and resource intensive.”
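As a concrete example of how object versioning supports this kind of granular recovery, the sketch below assumes an S3-compatible object store accessed through the boto3 library; the bucket and object names are hypothetical, and other cloud object stores expose similar controls through their own APIs.

```python
# Minimal sketch: object versioning and point-in-time restore on an S3-compatible
# object store, using boto3. Bucket and key names are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")
bucket, key = "example-backup-bucket", "db/backup.dump"

# Enable versioning so overwrites preserve prior versions instead of replacing them.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# List the historical versions of an object (returned newest first).
versions = s3.list_object_versions(Bucket=bucket, Prefix=key).get("Versions", [])
for v in versions:
    print(v["VersionId"], v["LastModified"], "latest" if v["IsLatest"] else "")

# "Restore" to an earlier point in time by copying a prior version (here the oldest
# retained one) back over the current object; the old version itself is never modified.
if len(versions) > 1:
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key, "VersionId": versions[-1]["VersionId"]},
    )
```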
Oracle’s Chien notes that some cloud providers have also developed capabilities such as using storage snapshots to further reduce backup and recovery windows, adding, though, that these approaches need to be carefully considered because for a backup to be viable, it must be intact and available for recovery in the event of any problems with the original data source or media. “So, at the end of the day, the data still needs to be copied to separate media or location and that will take time depending on the volume.”
Future of backup
With ransomware attacks increasingly targeting backups, industry experts have called for a shift in backup strategies, even as next-generation backup offerings are starting to incorporate ransomware detection and instant recovery capabilities.
For one thing, Veeam’s Sia expects immutable backup technologies to gain further popularity to mitigate ransomware threats against backup repositories.
“Immutable backup delivers tamper-proof, ever-evolving techniques that match conventional data protection needs. The data stored in an immutable backup solution cannot be modified, deleted, or overwritten. In immutable backup, data is stored in a read-only format, prohibiting write privileges to ensure data cannot be changed,” he says.
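One common way to achieve this kind of immutability is an object-level retention lock in cloud object storage. The sketch below assumes Amazon S3 Object Lock accessed through boto3, with a hypothetical bucket, archive file and 30-day retention window; other providers offer comparable write-once-read-many (WORM) features.

```python
# Minimal sketch: writing an immutable (WORM) backup object with S3 Object Lock
# via boto3. Bucket/key names, the local archive and the 30-day retention window
# are illustrative; region configuration is omitted for brevity.
from datetime import datetime, timedelta, timezone
import boto3

s3 = boto3.client("s3")
bucket, key = "example-immutable-backups", "weekly/backup-2024-w01.tar.gz"

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# COMPLIANCE mode means no user can delete or overwrite this version
# until the retention date passes.
s3.put_object(
    Bucket=bucket,
    Key=key,
    Body=open("backup-2024-w01.tar.gz", "rb"),  # local backup archive assumed to exist
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```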
Commvault’s Tan notes that edge computing will improve performance and reduce latency, which is important for backup and recovery applications. In fact, IDC predicts that by 2023, over 50% of new enterprise IT infrastructure will be deployed at the edge rather than in corporate datacentres, making the edge the next frontier for backup solutions.
However, while edge computing presents opportunities for backup technologies, Sia notes that it will not completely replace cloud-based backup solutions. “The future is likely to see a blend of both edge and cloud backup strategies to cater to different requirements and use cases,” he says.
Finally, with the growing use of artificial intelligence (AI) across industries and technology areas, new backup capabilities that leverage automation, AI and machine learning for more intelligent and proactive data protection will emerge.
“This can be in the form of employing advanced algorithms that differentiate between actual threats and false positives across the organisation and its backups, or intelligently quarantine and protect sensitive data, giving organisations the ability to discover, analyse and prevent cyber exposure and data exfiltration,” Tan says.
Factors to consider in your backup strategy
NetApp’s Swinbourne points out seven factors that organisations should consider in their backup strategies in the age of cloud:
- Cost – backups can represent a major expense, and costs can grow over time. Organisations may need to purchase software and hardware, pay for a maintenance contract, and train employees. As data volumes grow, organisations may also need to purchase additional equipment or, in a cloud model, pay for additional bandwidth and storage.
- Location – many organisations default their backups to the cloud. They should still consider storing a copy of their data in a different location, to safeguard the organisation from a cloud outage or misconfiguration. This could be a separate availability zone or region in the same cloud provider, another cloud provider, or on-premises.
- Method – organisations can select from various backup methods, such as full backup, differential backup and incremental backup (a simple illustration of the difference follows this list). Each approach demands a different volume of storage, impacting cost, and a different amount of time, affecting backup and recovery windows.
- Flexibility – organisations typically want to back up everything when creating backups, but this is not the case for recovery. Recovery should be flexible, allowing them to restore anything from a single file to an entire server.
- Schedule – backups must be automated and run on a schedule. Organisations should schedule backups around production workflow requirements. They should consider their recovery time and recovery point objectives – how long they can wait before a system recovers, and how much data they can afford to lose.
- Scale – organisations should expect their data and backup needs to grow. And their backup processes must manage expected volumes of new data. It’s important to have a process in place to ensure new applications, data stores, and servers are added to their backups.
- Security – access to backups should be carefully controlled, limiting domains of control to each functional role inside the organisation. In addition, it is critical to prevent ransomware and other malware from infecting backups, and to implement strategies not only to recover, but also to detect such attacks and stop them at the source. Prevention is always better than cure.
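To illustrate the difference between the backup methods mentioned above, the sketch below contrasts a full backup with a naive incremental backup over a local directory tree. The paths and state file are hypothetical, and production tools track changes far more robustly (block-level change tracking, backup catalogues and so on).

```python
# Minimal sketch contrasting a full backup with a naive timestamp-based incremental
# backup. Source/destination paths and the state file are hypothetical placeholders.
import shutil, time
from pathlib import Path

SOURCE = Path("/data/app")          # hypothetical production data
DEST = Path("/backups/app")         # hypothetical backup target
STATE = DEST / ".last_backup_time"  # records when the previous run finished

def backup(incremental: bool = False) -> None:
    DEST.mkdir(parents=True, exist_ok=True)
    # A full run copies everything; an incremental run copies only files
    # modified since the previous backup finished.
    last_run = float(STATE.read_text()) if incremental and STATE.exists() else 0.0
    for src in SOURCE.rglob("*"):
        if src.is_file() and src.stat().st_mtime > last_run:
            dst = DEST / src.relative_to(SOURCE)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
    STATE.write_text(str(time.time()))

backup(incremental=False)  # e.g. weekly full backup
backup(incremental=True)   # e.g. daily incremental: only files changed since the last run
```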