Backup technology explained: The fundamentals of enterprise backup
We look at backup and its role in enterprise data protection, including what to back up and how often, RPO and RTO, full and incremental backups, and whether backups can be replaced by snapshots
Backup is a fundamental protection required by any IT system. That’s all the more so in a world where ransomware attacks can take out company data in a stroke.
Effective backup has also become more complex as IT departments deal with a variety of systems – including virtual servers and desktops, and containerised applications – as well as multiple datacentres and cloud locations.
In this article, we look at what backup is and what it isn’t (and how it differs from snapshots and replication). We also look at what data to back up, how often, and with which backup scheme (such as full and incremental).
Also dealt with are the key roles of recovery point objective and recovery time objective (RPO and RTO), the 3-2-1 rule in backup, and questions about backup for virtualised environments – not least in the wake of the Broadcom-VMware acquisition – and for Kubernetes and containerised environments.
What is backup?
Backup involves copying data in an IT system to another location so it can be recovered if the original data is lost. That loss can be due to hardware failure, a cyber attack such as ransomware, a natural disaster, or human error such as accidental deletion.
Data backup is a key component of an organisation’s data protection strategy and business continuity and disaster recovery (BCDR) planning.
Arguably, the rise in importance of analytics and AI in recent years makes backups more important than ever. That’s because the data they hold forms a comprehensive repository of corporate knowledge upon which AI-based analytics can run.
What data should be backed up and how often?
Backup jobs can – in fact, should – be run against any definable source in the IT stack, including servers (physical or virtual), databases, storage arrays, virtual machines (VMs) and desktops, public cloud storage and software-as-a-service (SaaS) applications, containerised environments, and endpoints such as laptops and mobile devices.
Backups also have a target, which is where data is copied to during the process. Backup targets include tape libraries, dedicated disk arrays, and potentially also cloud storage.
The idea is that applications and data can be recovered if needed, at levels of granularity that range from single files to entire datacentres.
The frequency of backup depends on variables such as the criticality of the data and how frequently it changes.
Full backups might be scheduled daily, weekly or at some other interval, with backups traditionally taking place outside working hours at night or during weekends to reduce the performance impact on systems. Partial backups such as incrementals might occur between these times because of their lower impact on resources.
Backup service-level agreements (SLAs) govern the frequency of backups and how quickly data must be restored.
Here, the main criteria are the recovery time objective (RTO), which defines how quickly data and systems must be recovered after an event, and the recovery point objective (RPO), which defines how recent the recovered data must be – in other words, how much data loss is tolerable.
What are RPO and RTO?
RTO is defined by the global ICT standard for disaster recovery, ISO/IEC 27031:2011, as: “The period of time within which minimum levels of services and/or products and the supporting systems, applications or functions must be recovered after a disruption has occurred.”
RPO, meanwhile, is defined as: “The point in time to which data must be recovered after a disruption has occurred.”
In plain English, RTO is the amount of time you can afford systems and data to be unavailable. It is measured in time and is the period within which you require systems to be restored after an outage.
RPO is the amount of data you can afford to lose, also measured in time. It is governed by how long ago the last backup or snapshot was taken, or by how recent the data is at a site to which you fail over.
So, for example, an organisation may determine that it can work with an RTO of one hour and an RPO of two hours’ worth of data.
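To make the distinction concrete, here is a minimal Python sketch of how an operations team might check events against those targets. The one-hour RTO and two-hour RPO mirror the example above; the timestamps are invented for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical targets from the example above: restore service within
# one hour, and lose no more than two hours' worth of data
RTO = timedelta(hours=1)
RPO = timedelta(hours=2)

def rpo_met(last_backup_at: datetime, now: datetime) -> bool:
    # The RPO holds if the newest recovery point is recent enough that
    # restoring from it loses no more data than the business accepts
    return (now - last_backup_at) <= RPO

def rto_met(outage_start: datetime, service_restored: datetime) -> bool:
    # The RTO holds if service came back within the agreed window
    return (service_restored - outage_start) <= RTO

now = datetime.now(timezone.utc)
print(rpo_met(now - timedelta(minutes=90), now))  # True: 90 minutes is inside two hours
print(rto_met(now - timedelta(hours=2), now))     # False: a two-hour outage misses a one-hour RTO
```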
What are full, differential and incremental?
There are several types of backup, including full, differential, incremental, and hybrids of these, such as synthetic and incremental-forever.
A full backup is where all data in a specified dataset is copied. It is usually done when a backup service is deployed and at regular intervals after that. Because it encompasses an entire dataset, it is the most time-consuming and takes up the most storage capacity.
With a full backup already completed at deployment and then repeated on a regular basis, incremental backups copy only data changed since the last backup. That makes incrementals the least time- and storage-consuming method of backup. To restore, however, you must rebuild the dataset from the last full backup plus all subsequent incrementals.
Differential backups copy all data that has changed since the last full backup, so restores need only the last full backup and the latest differential. That makes restores potentially less complex than with a full-plus-incremental regime, but each differential takes longer to create than an incremental.
A synthetic backup combines the last full backup with subsequent incrementals to provide a full backup that is always up-to-date. Synthetic full backups are easy to restore from, but also do not overly tax the network during the backup itself because only changes are transmitted.
Finally, incremental-forever schemes retain the full backup and all subsequent incrementals, so restores can be made to chosen points in time, while reverse incremental backups keep a synthetic full as the default restore point but retain incrementals to allow roll-back to a specified point.
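The restore logic behind these schemes can be sketched in a few lines of Python. The file names and versions below are invented for illustration, but the contrast holds: an incremental restore replays a chain of changes, while a differential restore needs only two pieces.

```python
# Each backup is modelled as a dict of {filename: contents}. A full backup
# captures everything, incrementals capture changes since the last backup
# of any kind, and a differential captures everything since the last full.

full = {"a.txt": "v1", "b.txt": "v1"}
incrementals = [
    {"a.txt": "v2"},                 # Monday: only a.txt changed
    {"b.txt": "v2", "c.txt": "v1"},  # Tuesday: b.txt changed, c.txt created
]
differential = {"a.txt": "v2", "b.txt": "v2", "c.txt": "v1"}  # all changes since the full

def restore_from_incrementals(full, incrementals):
    # Replay the full backup, then every incremental in order
    state = dict(full)
    for inc in incrementals:
        state.update(inc)
    return state

def restore_from_differential(full, differential):
    # Only two pieces needed: the last full and the latest differential
    state = dict(full)
    state.update(differential)
    return state

# Both routes reach the same point-in-time state
assert restore_from_incrementals(full, incrementals) == restore_from_differential(full, differential)
```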
What is the 3-2-1 rule in backup?
The term 3-2-1 was coined by US photographer Peter Krogh while writing a book about digital asset management in the early noughties.
The rule says there should be three copies of data: the original or production copy, plus two backup copies.
The two backup copies should be kept on different media, so that if one backup is corrupted or destroyed, the other remains available.
One of the two backup copies should be kept off-site, so that an event that affects the production site is unlikely to affect it too.
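As a rough illustration, a 3-2-1 check might look like the following Python sketch. The inventory of copies and its fields are hypothetical; real backup software tracks this for you.

```python
# A toy compliance check of the 3-2-1 rule against an inventory of
# data copies. The records and their fields are invented.

copies = [
    {"role": "production", "media": "disk", "offsite": False},
    {"role": "backup",     "media": "disk", "offsite": False},
    {"role": "backup",     "media": "tape", "offsite": True},
]

def meets_3_2_1(copies) -> bool:
    backups = [c for c in copies if c["role"] == "backup"]
    three_copies = len(copies) >= 3                      # production plus two backups
    two_media = len({c["media"] for c in backups}) >= 2  # backups on different media
    one_offsite = any(c["offsite"] for c in backups)     # at least one backup off-site
    return three_copies and two_media and one_offsite

print(meets_3_2_1(copies))  # True
```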
Is backup the same as snapshots or replication?
Snapshots are not copies in the same way backups are. A snapshot comprises numerous pointers that record the state of data at a point in time, but those pointers may reference information assembled over a long period as parts of files and directories changed. Backups are therefore still required.
Replication produces a replica of a defined set of stored data. It can be a replica of a drive, volume or logical unit number (LUN), for example. What you get with replication is an exact copy. Replication types differ in the mechanism by which the replica is created and in whether it is updated almost immediately (synchronous) or after a delay (asynchronous).
A snapshot is different to a replica because, for a snapshot to become a usable copy, some form of rebuilding process has to occur.
Replication cannot replace backup – the two things are quite different, but they can both be used as part of a data protection strategy.
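The difference can be sketched in Python. The toy copy-on-write scheme below is a simplification of how storage systems typically implement snapshots, with invented block contents, but it shows why a snapshot needs rebuilding while a replica is usable as-is.

```python
# Toy contrast between a snapshot (pointers into live data) and a
# replica (an independent copy). Block contents are invented.

volume = {"block0": "AAAA", "block1": "BBBB"}

# A replica is a standalone copy: usable immediately, costs full capacity
replica = dict(volume)

# A copy-on-write snapshot starts as pointers to the live blocks; only
# blocks that change afterwards get preserved separately
snapshot_pointers = {name: name for name in volume}  # points at live blocks
preserved = {}                                       # old versions of changed blocks

def write_block(name, data):
    # Before overwriting, preserve the original block for the snapshot
    if name not in preserved:
        preserved[name] = volume[name]
    volume[name] = data

write_block("block0", "CCCC")

# Rebuilding the point-in-time view needs both live and preserved blocks
snapshot_view = {n: preserved.get(n, volume[n]) for n in snapshot_pointers}
print(snapshot_view)  # {'block0': 'AAAA', 'block1': 'BBBB'}
print(replica)        # unaffected: an independent, directly usable copy
```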
What is the role of backups in protection against ransomware?
The best ransomware protection starts with not letting malware into IT systems in the first place, but that’s not always possible.
Key to recovery from a ransomware attack is to regularly make effective backups. That’s because if you are hit by ransomware, you need a clean copy of your data to roll back to.
Bear in mind that backups are likely to be the most reliable backstop because they usually date back the furthest of all data protection copies and are therefore more likely to provide a clean copy from before ransomware infiltrated systems.
Snapshots are another popular method of data protection, but are more likely to be compromised by being taken during ransomware dwell periods, as they generally don’t date back as far as backups.
Putting an air gap between backups and production systems is another key method of ensuring ransomware cannot affect backup copies.
Backups are only good to restore from as long as they are clean – that is, uninfected by ransomware, including malware that has remained dormant and undetected.
Ransomware gangs often target an organisation’s backup files to make it difficult or impossible to restore to a clean point in time.
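One widely used form of logical air gap is object storage immutability. As a sketch, assuming a bucket created with S3 Object Lock enabled, a backup object could be written so it cannot be deleted or altered until a retention date passes. The bucket, key and file names here are hypothetical.

```python
# Sketch: write a backup object with S3 Object Lock so it cannot be
# deleted or overwritten until the retention date, making the backup
# copy resistant to ransomware that targets backup files.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-backup-vault",    # hypothetical bucket, created with Object Lock enabled
    Key="db/2024-06-01-full.bak",     # hypothetical backup object
    Body=open("2024-06-01-full.bak", "rb"),  # hypothetical local backup file
    ObjectLockMode="COMPLIANCE",      # in compliance mode, retention cannot be shortened
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```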
What backup is available for virtualised environments?
Moving workloads from one hypervisor to another is difficult, but customers also need to ensure workloads and data are backed up.
This became a prominent issue in the wake of Broadcom’s 2023 acquisition of VMware and changes in the virtualisation provider’s licence model that left some enterprises facing higher costs.
The larger backup and disaster recovery suppliers already have support for a range of virtualisation platforms. Hyper-V is well supported for businesses that also run on Microsoft infrastructure. At the same time, suppliers such as Veeam, Rubrik and Nakivo have strengthened their support for open-source platforms, especially Proxmox.
This means organisations will often be able to continue with their current backup and recovery supplier, even if they move to a mixed approach to virtualisation.
What about backup for containers?
Containers – and container orchestration, most commonly via Kubernetes – are changing the way enterprises develop and run applications.
But as enterprises use containerised applications more widely, they are also using them to handle more critical data – and this data needs to be backed up.
This is leading to two main approaches for Kubernetes backup – dedicated products, and broader-based backup and recovery tools that support container environments. These are some of the products in the market.
Veeam acquired Kasten as a purpose-built, Kubernetes data management service. The application runs in its own namespace on a Kubernetes cluster, and supports all the main cloud platforms, as well as on-premise architecture.
Pure Storage’s Portworx provides backup for Kubernetes environments via its PX-Backup software. The tool supports block, file and object storage, as well as cloud storage. It has storage discovery and provisioning tools, and backup, disaster recovery, security and migration features.
Velero is an open source backup, restore, recovery and migration tool for Kubernetes. It can back up entire clusters, or parts of a cluster using namespaces and label selectors.
Red Hat OpenShift Container Storage adds the supplier’s data protection tools to container environments, without any additional technology or infrastructure. Features include snapshots via the container storage interface, and clones of existing data volumes.
NetApp’s Astra is positioned around simplifying storage across containers and VMs, and making it more efficient, allowing firms to use the same storage pool and backup tools across both architectures.
Rancher provides its own backup and restore operator, installed in the local Kubernetes cluster, which backs up the Rancher app. In addition, the Rancher UI allows etcd and cluster backups, including snapshots that can be saved locally or to an S3-compatible cloud target.
Trilio positions its TrilioVault tool as cloud-native data protection for Kubernetes. Trilio claims to be application-centric, and has a wide range of Kubernetes platform and cloud support.
Cohesity positions its Helios backup tool as a cloud-native service for containers. The supplier works with the three hyperscaler platforms, and backs up applications’ persistent states, persistent volumes and operational metadata.
Veritas’s NetBackup supports a range of backup and recovery, and business continuity options for Kubernetes. As well as standard backups, Veritas supports ransomware protection, via immutable backups on AWS S3, and Kubernetes data management with integrated disaster recovery.
Catalogic’s Cloudcasa operates as backup as a service (BaaS). It provides cluster-level recovery and free snapshots, retained for 30 days, along with a range of paid-for options including Kubernetes Persistent Volume backups.
What is cloud-to-cloud backup?
Cloud providers do not automatically provide backup and recovery for their customers’ data, nor do they protect against risks such as accidental file deletion or ransomware attack. The same is usually the case with SaaS application providers.
Customers therefore often turn to cloud-to-cloud backup. Several suppliers specialise in backups specifically for SaaS applications, or more generally for cloud-based workloads.
Most backup providers support multiple cloud environments, so customers can keep redundant copies of data with more than one provider.
Some cloud-based backup tools can also back up on-premise applications, and there are even options to back up cloud data to on-premise hardware.
Cloud-to-cloud backup is not the same as BaaS. Cloud-to-cloud services focus on protecting data that is already in the cloud, including SaaS data. BaaS is more focused on backup for on-premise systems, although there is an overlap and some tools perform both tasks.
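A bare-bones version of cloud-to-cloud copying can be sketched with Python and boto3, assuming both providers expose S3-compatible APIs. The endpoint, bucket names and credentials are hypothetical; real cloud-to-cloud backup products add cataloguing, scheduling, deduplication and restore tooling on top.

```python
# Sketch: stream objects from one S3-compatible provider to another,
# the core movement of data that cloud-to-cloud backup automates.
import boto3

source = boto3.client("s3")  # e.g. default AWS credentials
target = boto3.client(
    "s3",
    endpoint_url="https://objects.example-cloud.net",  # hypothetical second provider
    aws_access_key_id="...",                           # placeholder credentials
    aws_secret_access_key="...",
)

paginator = source.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="prod-data"):    # hypothetical source bucket
    for obj in page.get("Contents", []):
        body = source.get_object(Bucket="prod-data", Key=obj["Key"])["Body"]
        # Stream each object straight into the second cloud
        target.upload_fileobj(body, "prod-data-backup", obj["Key"])
```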