I’m working with a customer to refresh his VMware infrastructure. At the moment, the platform is based on ESX 3.5 and vCenter 2.5, so it’s pretty old. The upgrade phase is not going to be a problem, but in the past the customer followed some old recommendations too closely and made heavy use of RDM disks, because he was worried about the performance of his virtual machines.
I found several VMs with the primary disk in VMDK format, followed by a second disk (dedicated to applications, mainly databases, app servers or mail) in physical RDM format.
Since part of the project is to have a data protection solution completely based on the VMware VADP libraries, we suggested that the customer convert all the RDM disks into VMDK disks. With the latest vSphere releases there are really no performance differences to justify RDM disks, except for Microsoft clusters or other situations requiring disks shared between VMs; on the other hand, it’s not possible to take snapshots of physical RDM disks, which prevents backing them up and forces the use of backup agents inside the Guest OS.
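If you need to find which VMs still carry physical RDM disks, you can inventory them through the vSphere API. Here is a minimal sketch using pyVmomi (the vCenter address and credentials are placeholders, and this assumes Python with pyVmomi installed):

```python
# Minimal sketch: list VMs that still have physical-mode RDM disks.
# The vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.local', user='administrator',
                  pwd='password', sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

RDM = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo
for vm in view.view:
    if vm.config is None:           # skip templates / inaccessible VMs
        continue
    for dev in vm.config.hardware.device:
        if (isinstance(dev, vim.vm.device.VirtualDisk)
                and isinstance(dev.backing, RDM)
                and dev.backing.compatibilityMode == 'physicalMode'):
            print(vm.name, dev.deviceInfo.label, dev.backing.deviceName)

view.Destroy()
Disconnect(si)
```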
To validate the procedure, and to reassure the customer, I ran a quick test to show the conversion process. This task can only be done once the infrastructure is upgraded at least to vSphere 4, since (as per KB1005241) a Storage vMotion of a virtual RDM on ESX 3.5 does not convert it to VMDK, while this is possible with vSphere 4.0 or newer versions.
In my lab, I created a simple Windows 2003 VM, with a primary 20 GB VMDK disk and a secondary 5 GB physical RDM disk:
To complete the conversion, the VM must be shut down at least once, so you need to schedule the activity with the customer.
Once the VM is stopped, you need to edit its settings and remove the RDM disk. Write down the Virtual Device Node (0:1 in my example), because later you will have to reconnect the disk with the same value.
Select the option “Deletes files from datastore”. Don’t be scared: since it’s a physical RDM, the only thing that will be deleted is the pointer to the RDM disk, not the disk itself.
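If you have many VMs to convert, the same removal step can be scripted. A quick sketch, again with pyVmomi (the vm object comes from a lookup like the one above, and “Hard disk 2” is just the label in my example):

```python
# Sketch: detach the RDM and destroy its mapping file. Since this is a
# physical RDM, only the pointer file is deleted, not the LUN contents.
from pyVmomi import vim

def remove_rdm(vm, label='Hard disk 2'):
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk)
                and d.deviceInfo.label == label)
    node = (disk.controllerKey, disk.unitNumber)  # the 0:1 device node

    spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.destroy,
        device=disk)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
    return node
```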
Then, add a new RDM disk to the VM using the same LUN you removed before:
This time, choose “virtual” compatibility mode, and select the same device node value as before. This lets the Guest OS think the disk is the same as before.
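And here is how the re-add step could look in pyVmomi; the LUN path is a placeholder you would read from the host’s storage devices, and node is the controller/unit pair saved during the removal:

```python
# Sketch: re-attach the same LUN as a *virtual* mode RDM on the saved
# device node. The naa.* path is a placeholder for your LUN.
from pyVmomi import vim

def add_virtual_rdm(vm, node, lun_path='/vmfs/devices/disks/naa.600...'):
    controller_key, unit_number = node

    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
        deviceName=lun_path,
        compatibilityMode='virtualMode',  # the key difference vs. before
        diskMode='persistent',
        fileName='')                      # vSphere creates the mapping file

    disk = vim.vm.device.VirtualDisk(
        key=-100,                         # negative key = assign a new one
        backing=backing,
        controllerKey=controller_key,
        unitNumber=unit_number)

    spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)
    vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
```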
Now you can start the VM again. The downtime can be reduced to a couple of minutes if you do all the operations quickly. Then initiate a Storage vMotion: it will automatically convert the virtual RDM disk into a VMDK disk without further downtime (but be prepared for it if you do not have a Storage vMotion license and thus need to do a cold migration):
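For completeness, the Storage vMotion itself can also be triggered through the API; a minimal sketch, assuming the target datastore object has been looked up beforehand:

```python
# Sketch: relocate the running VM to another datastore. On vSphere 4+
# this converts the virtual RDM into a plain VMDK during the move.
from pyVmomi import vim

def svmotion(vm, target_datastore):
    spec = vim.vm.RelocateSpec(datastore=target_datastore)
    return vm.RelocateVM_Task(spec=spec)
```

Once the task completes, the disk’s backing in the VM settings shows a regular VMDK file instead of a mapped raw LUN, and the VM can be protected with any VADP-based backup solution.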