One of the features coming in Veeam Backup & Replication v9 is per-VM backup chains. This great addition was tucked into a general announcement about other backup storage improvements; in case you missed it, here is a dedicated post, because this feature is going to be great!
Small is a better fit
Veeam stores virtual machine data in regular files. The reason for this choice is that files are self-contained objects: all the information needed to perform a restore is in the backup file itself. There is no need for an external database to make sense of a bunch of small files named with complex UUIDs, with the risk of ending up with a "meaningless" pile of those files because the database has been lost. Or worse, just a bunch of blocks in an object storage, which are completely useless without the metadata. With Veeam, even if you lose the entire backup server but still keep the backup files, you just need to import them into a new Veeam installation.
One of the best practices for Veeam backup jobs has always been to group similar virtual machines into the same job. There were different reasons for this, one certainly being the ability to leverage dynamic containers (datastores, resource pools, folders, tags) and thus automatically protect newly created VMs added to those containers without having to reconfigure the job itself.
However, the side effect of grouping multiple virtual machines into the same backup file is that the backup file can become very large. Depending on the file system used for the backup repository, this can be a problem if you hit the maximum file size limit (EXT4, or NTFS before Windows 2012, has a maximum file size of 16 TB), but even on file systems with huge limits, do you really want to deal with such a large file? It's not just a problem of storing a large file, it's about manageability: try to migrate a 200 TB backup file to a new repository when the old one runs out of free space, and for the entire duration of the move you will have to stop any additional activity involving that file. Or think about fragmentation: it happens on any file system, and it decreases performance as the file grows and gets fragmented. A large backup file will probably be heavily fragmented as it grows over time (as in forever forward incremental or reversed incremental mode, where new blocks are injected into the full file daily), because it is not laid out in the file system as a single contiguous space from the beginning, like an eager-zeroed thick disk in VMware.
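To make the size concern concrete, here is a quick back-of-the-envelope sketch in Python; the VM sizes and the 2:1 data reduction ratio are made-up numbers, purely for illustration:

```python
# A back-of-the-envelope sketch: projected size of a single job-level
# backup file vs. per-VM files. All numbers below are hypothetical.

FS_MAX_FILE_SIZE_TB = 16           # e.g. EXT4, or NTFS before Windows 2012

vm_sizes_tb = [6.0, 8.0, 12.0, 5.0, 9.0]  # VMs grouped in one job
reduction = 0.5                           # assumed compression/dedupe ratio

single_file_tb = sum(vm_sizes_tb) * reduction
largest_per_vm_tb = max(vm_sizes_tb) * reduction

print(f"Single job-level full: {single_file_tb:.1f} TB "
      f"(limit exceeded: {single_file_tb > FS_MAX_FILE_SIZE_TB})")
print(f"Largest per-VM full:   {largest_per_vm_tb:.1f} TB")
```

With these sample sizes, the job-level full blows past the 16 TB limit, while the largest per-VM file stays a comfortable 6 TB.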
More performance, less management burden
To deal with this issue before v9, some customers created one job per VM for their largest VMs, and with just a few dozen VMs this could be a nightmare. Even worse, this prevents the use of dynamic containers like resource pools or tags in backup jobs. I've seen users creating complex scripts to manage hundreds of single-VM jobs, but the effort to manage such a solution is still insane. So why suggest per-VM backup files? Because management will not be a problem in Veeam v9: when per-VM backup chains are enabled, a job will still be able to hold multiple VMs; per-VM backup chains are going to be an option of the repository, not of the job. Once the new option is enabled on the repository, it will start writing separate chains, without the job even knowing that this is happening. In this way, the job will still be able to use resource pools, folders, tags and so on, all with multiple VMs in them. In a way, grouping VMs in a job will be even easier: regardless of how many VMs are placed in a job, you will never suffer from backup files growing too large.
It's not a coincidence that per-VM backup chains are coming out together with another great feature of v9, the new Scale-out Backup Repository. When you have multiple storage locations mapped as repositories, and each of them has some free space, it's way easier to consume that space fully if you have many small files to fill those gaps, rather than a few huge files. See this example from one of our customers:
There are many repositories available, and each of them has a certain amount of free space. Scale-out Backup Repository (SOBR) will automatically pick, each time, the extent with the largest amount of free space (among other criteria in the selection algorithm, but this is a topic for another post); however, if there is no extent big enough to store a large backup file containing multiple VMs, the job will fail. By splitting a job into multiple backup files and chains, all these small chunks of free space can be leveraged, thus optimising storage consumption.
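A minimal Python sketch of the idea, where free space is the only selection criterion modelled (the real SOBR algorithm uses more) and every repository name and size is hypothetical, shows why many small files pack into scattered free space where one big file cannot:

```python
# Simplified model of "pick the extent with the most free space".
# Repository names and free-space figures are hypothetical.

extents = {"repo1": 3.0, "repo2": 5.0, "repo3": 2.5}  # free space in TB

def place(backup_tb, extents):
    """Place a backup file on the extent with the most free space."""
    name, free = max(extents.items(), key=lambda kv: kv[1])
    if backup_tb > free:
        raise RuntimeError(f"No extent can hold a {backup_tb} TB file")
    extents[name] -= backup_tb
    return name

# An 8 TB job-level file fails: no single extent is big enough.
# place(8.0, extents)  -> RuntimeError

# The same 8 TB of data split into per-VM files fits across the extents:
for vm_file_tb in [2.0, 1.5, 2.5, 2.0]:
    print(vm_file_tb, "TB ->", place(vm_file_tb, extents))
```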
Last but not least, probably the major reason for wishing for per-VM backup chains: backup job performance will improve dramatically thanks to multi-threading. A single backup file is written in a single thread to the underlying storage. In fact, in my paper about backup repository performance, my tests used an I/O queue depth of 1 to simulate this. Modern storage devices, however, are able to handle a large number of queues/threads concurrently, so a good design choice has always been to create multiple repositories, even on the same physical storage, by creating multiple folders and mapping each of them as a different repository. In this way, each concurrent job would write to a different repository, and multiple running jobs would create a decent number of concurrent write threads to leverage the storage capabilities. This, on the other hand, led again to increased design complexity: multiple repositories on the same physical machine, multiple jobs, and the need to plan in advance which job has to go into which repository.
With per-VM backup chains, even a single job that was once a single-threaded activity will become a multi-threaded task. With 20 VMs in a job, for example, supposing you have enough proxies and the environmental conditions to process all of them concurrently, this single job will create 20 threads on the underlying storage (with or without Scale-out Backup Repository). This will mean backup times faster by orders of magnitude, and the possibility to push the backup repository to its limits without the need for complex job and/or repository configurations.
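To picture the multi-threading gain, here is a minimal Python sketch; the zero-block writer and file names are hypothetical stand-ins for the real data movers writing VBK files:

```python
# A minimal sketch of the multi-threading idea: 20 per-VM files written
# by 20 parallel threads, instead of one thread writing one big file.
import concurrent.futures
import os
import time

def write_backup(path, size_mb):
    """Simulate one backup stream by writing 1 MB zero blocks to a file."""
    block = b"\0" * (1024 * 1024)
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(block)
    return path

vm_files = [f"vm{i:02d}.vbk" for i in range(20)]  # 20 VMs in one job

start = time.time()
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    # one write thread per VM chain, instead of one thread per job
    list(pool.map(lambda p: write_backup(p, 32), vm_files))
print(f"20 concurrent streams finished in {time.time() - start:.1f}s")

for p in vm_files:
    os.remove(p)  # clean up the demo files
```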
So, what about built-in deduplication?
While the new per-VM backup chain option coming in Veeam Backup & Replication v9 will bring many benefits, an immediate doubt could arise: if similar virtual machines no longer go into the same file, won't my backup footprint increase dramatically due to the loss of Veeam's built-in deduplication?
The short answer is NO, you will not see a significant difference. If you are hands-on with deduplication, you know that those beautiful deduplication ratios are usually seen when a deduplicating storage appliance processes multiple full backups. However, Veeam has addressed this issue differently, starting all the way back at v1, by providing forever-incremental backup modes which keep a single full backup file on disk, with the rest being incremental (or reverse incremental) backup files containing only unique, changed data.
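As an illustration of why forever-incremental chains already avoid the redundancy that appliance-level deduplication targets, here is a tiny sketch; the file names and sizes are made up (.vbk and .vib are Veeam's full and incremental extensions):

```python
# A forever forward incremental chain: one full on disk, plus daily
# incrementals holding only changed blocks. Names and sizes are made up.
chain = [
    ("vm01.vbk", 500),       # the single full backup kept on disk
    ("vm01_day1.vib", 20),   # incrementals: only unique, changed data
    ("vm01_day2.vib", 18),
    ("vm01_day3.vib", 25),
]
stored_gb = sum(size for _, size in chain)
daily_fulls_gb = 500 * len(chain)  # what four daily fulls would cost
print(f"Chain stores {stored_gb} GB instead of {daily_fulls_gb} GB")
```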
Sure, no doubt a small degree of deduplication is going to be lost. Why? Think about it: what do a mail server, a file server, and a database server have in common, when it comes to data? Yep, a few GB of the operating system, a small percentage of the entire virtual machine. On a Microsoft Exchange server, the OS (the part in common with other virtual machines) may be a few GB, but then you have maybe multiple TB of mail. So, deduplication typically will indeed win you a small percentage of backup size, but you will have to pay a hefty price in CPU, RAM and, more importantly, your backup window to achieve this win, because your backup server still has to go through ALL the TBs of data to find those common OS data blocks.
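A back-of-the-envelope sketch (all sizes hypothetical) makes the point about how little the shared OS blocks weigh against the application data:

```python
# Hypothetical sizes: three dissimilar VMs sharing only OS blocks.
os_common_gb = 10          # OS blocks shared across similar VMs
vms = {"exchange": 4000, "fileserver": 6000, "sql": 2000}  # app data in GB

total = sum(vms.values()) + os_common_gb * len(vms)
saved = os_common_gb * (len(vms) - 1)   # duplicates removed by dedupe
print(f"Cross-VM dedupe saves {saved} GB of {total} GB "
      f"({100 * saved / total:.2f}%)")
```

With these numbers, cross-VM deduplication recovers about 20 GB out of roughly 12 TB: a fraction of a percent.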
Still, if you really care about those few percent in an age of huge, cheap hard drives, a smarter way to regain those savings is to leverage deduplicating storage appliances (again, only if your main goal is maximum possible storage savings rather than backup and restore performance). In fact, as discussed earlier, per-VM backup chains dramatically improve backup performance to deduplicating storage appliances, thanks to multi-threading.
Final notes
Per-VM backup chains will bring many advantages that can hardly be ignored. And the nice part is that they will not require any re-design activity: it's just a flag to be selected on an existing repository, without even touching the existing jobs. Starting from the next full backup run, the same backup (or backup copy) job will automatically be split into multiple chains.
I've already tested the beta version of Veeam Backup & Replication v9 and the new per-VM backup chains in my lab. And I can tell you, I will never go back to single chains except for a handful of special scenarios, such as backups of Exchange DAG VMs.