Since their introduction, Veeam reverse incremental backups have been one of the most appreciated features among users. At the price of increased I/O on the backup storage compared with a traditional incremental backup, they allow for a single full backup file and a long chain of increments, thus offering huge savings on disk space.
However, many users are unaware that even Reverse Incremental backups are not “set and forget”. Even if a reversed chain can be years long, this kind of usage is not recommended, as it can lead to several problems.
Let’s suppose we are saving 10 VMs in a single job. Their total size is 1 TB, and thanks to deduplication and compression Veeam creates a 500 GB VBK file. After some time, 2 of these VMs are deleted, and the total disk usage drops to 700 GB. The expected size of the new backup would be around 350 GB, but instead we see that the VBK file is still 500 GB. This happens because there is no way to shrink the VBK file while running a job, unless we run an Active Full backup.
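You can verify this behavior on your own repository by checking the on-disk size of the backup files. A minimal sketch using only standard PowerShell cmdlets (the repository path is a hypothetical example, adjust it to your environment):

```powershell
# Hypothetical repository path - change this to your own backup repository folder
$repo = "D:\Backups\MyJob"

# Sum the size of the full backup files (VBK) and the reverse increments (VRB),
# grouped by extension, reported in GB
Get-ChildItem -Path $repo -Recurse -Include *.vbk,*.vrb |
    Group-Object Extension |
    Select-Object Name,
        @{Name = "SizeGB"; Expression = {
            [math]::Round(($_.Group | Measure-Object Length -Sum).Sum / 1GB, 2) }}
```

If the .vbk total stays at its original size after VMs were removed from the job, you are seeing exactly the behavior described above.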
Another problem is fragmentation. Because of the way a Reverse Incremental works, its internal blocks are continuously swapped and replaced by new blocks. Over time this activity leads to excessive fragmentation of the VBK file, and it is useless to defragment the underlying filesystem (NTFS, for example) since the fragmentation is “inside” the VBK itself. Again, the solution is an Active Full backup.
The main concern, however, is a different one. During a restore we could face an error like this:
What happened? The VBK file we are restoring from is corrupted, and the restore fails.
The error may remain unseen for months, usually because we restore only single files, read from non-corrupted blocks inside the backup file. If the corrupted blocks are inside the VBK rather than the reverse increment files, it may be that those blocks are never touched during daily increments, because for example they host the operating system of the Guest OS and are never modified after the initial installation. So you could realize those blocks are corrupted only after several months, right when you need to read them to do a restore…
To solve this problem, you first need to accept that even Reverse Incremental backups must be managed and maintained, specifically by performing regular Active Full backups. This kind of backup creates a new VBK file without reading any of the previous files in the chain, thus starting a new chain. You can run an Active Full manually:
or, even better, by configuring the job to run it on a schedule. In the next screen you can see an Active Full configured to run on the first Saturday of the month, every two months.
There is no universal best practice for how frequently you need to run an Active Full; I usually run one every two months. You need to figure out the best settings for your environment, balancing an excessive frequency (which would negate the advantages of reverse incremental) against a too-long chain (which would increase the risk of corrupted blocks and produce bigger VBK files).
Moreover, you need to take into account two essential facts when introducing Active Fulls into your schedules:
– backup execution times will obviously increase, so it is better to schedule Active Fulls during non-production times like weekends
– until the retention period of the backup job is reached, you will have 2 full VBK files in your backup storage, so you should estimate space usage accordingly
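As a rough back-of-the-envelope estimate of that temporary space usage (the full size is the hypothetical 500 GB value from the example above; the increment size and retention are made-up figures, replace them with your own):

```powershell
# Hypothetical values - substitute the real numbers from your environment
$fullSizeGB      = 500   # size of one full VBK file
$dailyIncrementGB = 10   # average size of one daily VRB file
$retentionDays   = 14    # restore points kept by the job

# While the old chain is still within retention, both full VBK files coexist
# on the repository, plus the reverse increments of the old chain
$peakUsageGB = (2 * $fullSizeGB) + ($retentionDays * $dailyIncrementGB)
"Peak storage usage: $peakUsageGB GB"
```

With these sample numbers the repository needs to absorb a peak of 1140 GB, more than double the steady-state footprint, which is why sizing the repository only for a single chain is a common mistake.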
Finally, how can we know when the last full backup was made? You can read all the job logs inside Veeam, or you can use this quick PowerShell script I made:
# Check how old is the last successful active full backup in Veeam Backup & Replication
#
# Author: Luca Dell'Oca - ldelloca@gmail.com
#
# You need to run this script directly on the Veeam Backup & Replication server.
#
# Version 1.0 - 26 March 2013

# Load the Veeam PowerShell snap-in
Add-PSSnapin -Name VeeamPSSnapIn -ErrorAction SilentlyContinue

# For each backup job, get the successful full backup sessions
# and list only the most recent one
foreach ($job in (Get-VBRJob | Where-Object {$_.JobType -eq "Backup"})) {
    Get-VBRBackupSession |
        Where-Object {$_.JobName -eq $job.Name -and
                      $_.JobType -eq "Backup" -and
                      $_.IsFullMode -eq $True -and
                      $_.IsCompleted -eq $True -and
                      $_.Result -ne "Failed"} |
        Sort-Object EndTime -Descending |
        Select-Object -First 1 |
        Select-Object JobName, EndTime
}
Its output will be something like this (for non-European readers, dates are dd/mm/yyyy, and the actual date was 27 March at the time of writing this article):
You can see two jobs whose last full backup was made almost 5 months before.
A lesson learned: there are NO “set and forget” solutions, NEVER.