In yesterday’s article, we installed and configured Veeam Backup for Google Cloud Platform. In this article, we will run our first backup and restore operations.
Adding components
My GCP lab is brand new, and thus it has no workloads. So, even before configuring the components Veeam needs, we have to create at least one virtual machine to protect.
To make things easier, I’ve created a quick script that we can run directly in the web-based shell after it has been uploaded. This script creates two tiny virtual machines in our GCP project.
# Create two tiny VMs for my Veeam Backup for GCP lab
resources:
- name: tinyvm-1
  type: compute.v1.instance
  properties:
    zone: europe-west1-b
    machineType: https://www.googleapis.com/compute/v1/projects/yourprojectnamehere/zones/europe-west1-b/machineTypes/f1-micro
    disks:
    - deviceName: bootdisk
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: https://www.googleapis.com/compute/v1/projects/yourprojectnamehere/global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
- name: tinyvm-2
  type: compute.v1.instance
  properties:
    zone: europe-west1-b
    machineType: https://www.googleapis.com/compute/v1/projects/yourprojectnamehere/zones/europe-west1-b/machineTypes/f1-micro
    disks:
    - deviceName: bootdisk
      type: PERSISTENT
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/family/debian-9
    networkInterfaces:
    - network: https://www.googleapis.com/compute/v1/projects/yourprojectnamehere/global/networks/default
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
NOTE: replace “yourprojectnamehere” with the ID of the GCP project where you want to deploy the VMs.
We save this script as two-gcp-vms.yaml and then run it in the Cloud Shell:
gcloud deployment-manager deployments create tiny-vms --config two-gcp-vms.yaml
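If the deployment completes without errors, a quick way to double-check the result is to describe the deployment and list the new instances (assuming gcloud is already authenticated against the same project, and using the tiny-vms deployment name from above):

```shell
# Show the deployment and the resources it created.
gcloud deployment-manager deployments describe tiny-vms

# List only the two lab VMs, matching their name prefix.
gcloud compute instances list --filter="name~'tinyvm-.*'"
```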
The two new virtual machines are created quickly, in the same subnet where our Veeam machine was already deployed.
Reconfigure networking
For Veeam Backup for GCP to operate correctly, the networks where the workers are deployed must have Private Google Access enabled. To do this, we run this command:
gcloud compute networks subnets update default --region=europe-west1 --enable-private-ip-google-access
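To confirm the flag actually took effect, we can describe the subnet and print just the privateIpGoogleAccess field, which should now read True:

```shell
# Prints "True" once Private Google Access is enabled on the subnet.
gcloud compute networks subnets describe default \
  --region=europe-west1 \
  --format="value(privateIpGoogleAccess)"
```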
We get the “updated” message in return, and we can move to the next steps.
Create a storage bucket
Veeam Backup for GCP can do both native VM snapshots and regular backups. The latter are stored in a GCP storage bucket, so we need at least one of them as the target of our backups.
We leverage again the cloud shell to quickly create one:
gsutil mb -c standard -l europe-west1 gs://veeamgcp-backup
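A quick sanity check that the bucket was created with the expected location and storage class:

```shell
# Prints location, storage class and other metadata for the bucket.
gsutil ls -L -b gs://veeamgcp-backup
```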
Configure Veeam Backup
We can now go back to the Veeam console and configure the missing components. We start by deploying at least one worker in the same region as our VMs. To do so, we go to the configuration section and add a worker, with parameters like these:
We can see in the summary that there’s a nice verification step, so we are sure that we did all the configurations correctly before applying them.
Note: the firewall rule can be anything that allows HTTPS traffic over port 443.
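If no suitable rule exists yet in the project, a minimal one could look like this (the rule name allow-https-workers is an arbitrary choice for this example):

```shell
# Allow inbound HTTPS on the default network. In a production
# project, restrict --source-ranges to something tighter than
# the default 0.0.0.0/0.
gcloud compute firewall-rules create allow-https-workers \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:443
```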
We now need a repository to store our backups, and we are going to use the bucket we created earlier. In the wizard, we select our project again, create the credentials to access the bucket (copying them somewhere safe), choose whether to enable encryption, and complete the wizard:
Backup policy
It’s finally time to create our first backup policy.
To do so, we add our first policy. As usual, we give it a name and then select the sources, that is, what we want to protect. Our only project is already selected; we then choose the region(s) we want to protect, and the specific VMs:
We can also choose “all resources” so that every new VM that is added will also be protected automatically.
In the target, we select the repository we created before, and we move on to scheduling. Here we can configure any combination we want.
A very nice feature is the cost estimator:
The estimator tells us the expected costs of the policy we are creating, BEFORE applying it. This way, we can plan our budget without any bad surprises at the end of the month.
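As a rough illustration of the arithmetic behind such an estimate, here is a back-of-the-envelope sketch. The per-GB rates and retention numbers are assumptions invented for the example; they are not current GCP pricing and not what the Veeam estimator actually computes:

```shell
# Hypothetical monthly cost estimate for a policy protecting our lab.
# All rates and retention values below are illustrative assumptions.
VM_COUNT=2
DISK_GB=10            # boot disk size of each tiny VM
SNAPSHOTS_RETAINED=7  # daily snapshots kept per VM
SNAPSHOT_RATE=0.026   # assumed $/GB/month for PD snapshots
BACKUP_RATE=0.020     # assumed $/GB/month for Standard storage

ESTIMATE=$(awk -v vms="$VM_COUNT" -v gb="$DISK_GB" \
    -v snaps="$SNAPSHOTS_RETAINED" -v sr="$SNAPSHOT_RATE" -v br="$BACKUP_RATE" \
    'BEGIN {
        snap_cost   = vms * gb * snaps * sr  # worst case: no shared snapshot blocks
        backup_cost = vms * gb * br          # one full copy per VM in the bucket
        printf "snapshots: $%.2f backups: $%.2f total: $%.2f",
               snap_cost, backup_cost, snap_cost + backup_cost
    }')
echo "$ESTIMATE"
```

With these assumed numbers, the sketch prints a total of a few dollars per month; the real estimator, of course, uses actual GCP pricing for the selected regions and storage classes.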
First execution and fine tuning
After the policy has been saved, we immediately run it manually to test it. After a few seconds, the policy fails, and when checking it, we see the error is in the backup portion:
By clicking on the “Failed” link, we see the details of the execution, and once again we can appreciate how accurate and self-explanatory the error messages from GCP are. The Cloud Pub/Sub API has never been used in our project, so we need to enable it. GCP, again, gives us the direct link to do it.
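For reference, the same API can also be enabled straight from the Cloud Shell instead of following the link:

```shell
# Enable the Cloud Pub/Sub API for the currently selected project.
gcloud services enable pubsub.googleapis.com
```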
Once we have executed the policy again, we can finally see our protected data:
The two restore points for each VM are the native snapshot and the backup:
Our VMs are fully protected now!