NetApp ONTAP Simulator is freely available and allows anyone to test NetApp’s storage platform without owning a physical array. In the past I used the NetApp Edge VSA, but it has not been available for some months now, so the Simulator is the only way to go. In this article, I’ll show you how to install and configure the Simulator in its latest version, 8.3 RC1, and connect it to a vSphere cluster.
Deployment
First, you need to download the ONTAP Simulator. This part is pretty easy: go to this web page, log in or register on the NetApp Support site, and download the Simulator itself. You need the ESX version, plus the license codes.
Once you’ve downloaded the compressed archive, it’s time to deploy it: expand the archive and upload its contents to an ESXi datastore using the datastore browser.
By default, the Simulator comes with two virtual shelves of 14 x 1GB disks, so 28 disks in total. With 3 disks taken for the dedicated Clustered ONTAP root aggregate, this leaves 25 disks for a data aggregate, of which two will be parity disks, giving a maximum usable space of around 23GB. It’s almost impossible to run any serious VM on it, but it’s possible to get close to 400GB of usable space by editing the configuration. Some commands you may find on the Internet relate to previous versions and have changed from 8.2 to 8.3; here I will show you the updated commands for 8.3.
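To give an idea of where that 400GB figure comes from, here is the rough math, assuming the 4 x 14 x 9GB disk layout we will build below, RAID-DP and the default WAFL reserve of roughly 10%:
56 disks x 9GB = 504GB raw
504GB - 3 disks for the root aggregate (27GB) = 477GB
477GB - 4 RAID-DP parity disks (36GB, the data aggregate ends up split into two RAID groups) = 441GB
441GB - ~10% WAFL reserve ≈ 400GB usable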
Before the first boot, you will need to change the 4th virtual disk of the Simulator. It is created as “sparse”, which is not supported in ESXi. I’ve read about the workaround of loading the multiextent module in the kernel (via the command vmkload_mod multiextent), but honestly it’s a really ugly solution: it’s unsupported by VMware and it does not survive reboots unless you store the command in a start-up script. It’s better to first convert the disk to a proper thin format. Also, a sparse disk cannot be extended in ESXi, and we will need to expand it at some point.
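For completeness, this is what the unsupported workaround looks like; it has to be repeated after every reboot of the host, or placed in a local start-up script such as /etc/rc.local.d/local.sh:
vmkload_mod multiextent (loads the module until the next reboot of the host)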
So, first remove the disk from the powered-off virtual machine without deleting it:
In the command line, go into the folder where the disk is located, and run these commands:
vmkfstools -i DataONTAP-sim.vmdk thin.vmdk -d thin (clone the disk into a thin disk)
vmkfstools -U DataONTAP-sim.vmdk (deletes the old sparse disk)
vmkfstools -E thin.vmdk DataONTAP-sim.vmdk (renames the cloned disk to the original name).
After these operations, reconnect the new thin disk to the virtual machine on the same IDE channel. Also, before powering on the virtual machine, edit the four original network cards and configure them for your network. Then power it on and follow these steps:
1. Press Ctrl-C for Boot Menu when prompted
2. Enter selection 4 ‘Clean configuration and initialize all disks’ and answer ‘y’ to the two prompts. Wait for the procedure to complete; it will reboot the VSA automatically
3. At the setup screen, type exit to go to the prompt, and log in as admin with no password
4. Set the password for the admin user with: security login password
5. security login unlock -username diag
6. security login password -username diag (enter the new password twice)
7. set -privilege diagnostic (and press y; the old privilege level was “advanced”, but systemshell is no longer available under advanced in 8.3)
8. systemshell local (and log in with the diag user that was unlocked at step 5)
9. setenv PATH “${PATH}:/usr/sbin”
10. echo $PATH
11. cd /sim/dev/,disks
12. ls (see all the disks listed)
13. sudo rm v0*
14. sudo rm v1*
15. sudo rm ,reservations
16. cd /sim/dev
17. vsim_makedisks -h (lists the available disk types; we will use type 36)
18. sudo vsim_makedisks -n 14 -t 36 -a 0
19. sudo vsim_makedisks -n 14 -t 36 -a 1
20. sudo vsim_makedisks -n 14 -t 36 -a 2
21. sudo vsim_makedisks -n 14 -t 36 -a 3
22. ls ,disks/ (we now have 4 shelves with 14 disks each, all at 9 GB in size)
23. exit
24. system node halt local
At this point, to accommodate the new disks, we need to expand the containing vmdk disk. An additional problem: the disk is attached as IDE, and IDE disks cannot be expanded from the vSphere Client. Honestly, adding this issue to the original sparse format, I’m not sure why this is listed as the “ESX” version of the Simulator. Anyway, after powering down the appliance:
1. remove the disk again from the VM without deleting it
2. edit the vmdk descriptor file in the command line and change ddb.adapterType from “ide” to “lsilogic” (the exact line is shown after this list)
3. add the disk to the VM again. Now it will be listed as SCSI and it can be expanded to 550GB (to accommodate the additional shelves and disks we created before)
4. remove the vmdk from the VM once more
5. edit the vmdk descriptor file again and change ddb.adapterType back from “lsilogic” to “ide”
6. add the IDE vmdk back to the VM for the final time
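If you are not sure which line to touch, this is the relevant entry in the vmdk descriptor file (you can edit it with vi directly from the ESXi shell):
ddb.adapterType = "ide" (the original value)
ddb.adapterType = "lsilogic" (the temporary value, used only while expanding the disk)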
Now that the disk is finally expanded, it’s time for another batch of commands:
1. power on the SIM
2. Invoke menu with CTRL+C when offered the option
3. select option 5
4. disk show (check that disks 0.16, 0.17 and 0.18 are assigned to the system aggregate; all the other 53 disks will be assigned later)
5. halt
6. power cycle the simulator
7. press Ctrl-C for the Boot Menu when prompted
8. select option 4 again and wait for the process to complete; this time, with many more and bigger disks, it will take quite a bit longer
9. configure the node management network as proposed
10. log in with the admin user and run cluster setup
11. my choice is to create a new cluster and keep it a single-node cluster for simplicity
12. the base license is in the text file downloaded together with the Simulator
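Additional feature licenses from the same text file can also be installed later from the cluster shell; the value below is just a placeholder for one of the keys in that file:
system license add -license-code <license-key-from-the-text-file> (repeat for every license you want to install)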
Once the cluster node is up and running, we still need to assign all the created disks to the node and configure a second aggregate to hold your data. In the command line of the simulator:
1. system show (to retrieve the node name, in my case dataontap-01)
2. storage disk assign -all true -node dataontap-01
3. system node run -node dataontap-01 options disk.maint_center.spares_check off
4. storage aggregate create -aggregate dataontap01_01 -diskcount 53 -nodes dataontap-01 -maxraidsize 28
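To verify the result from the same command line (the aggregate name is the one created above):
storage aggregate show (the new dataontap01_01 aggregate should appear next to the root aggregate, with roughly 400GB of usable space)
storage aggregate show-status -aggregate dataontap01_01 (lists the member disks and the RAID groups of the aggregate)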
Configuration
The Simulator has an integrated web interface for management, the OnCommand System Manager. You can reach it by connecting over https to the IP assigned to the cluster. First, we verify the new aggregate has all the 53 new disks:
Then, it’s time to create an SVM. SVM stands for Storage Virtual Machine, and it is an intermediate object between the clients and the entire cluster. Without going into further details, you can read this great post by Cormac Hogan to learn more about Clustered Data ONTAP and SVMs. For us, what’s important to know is that we need to create and configure at least one SVM in the Simulator. Everything here happens in Clustered Mode.
Before configuring the storage resources, you have to configure at least one subnet, since it will be requested in other wizards. Go into Cluster, select the cluster, open Configuration – Network and create a subnet representing the storage network you are using:
Here you define the subnet the storage will be connected to, and at least two IP addresses that will then be used by the SVM. Once you have the subnet, open the “Storage Virtual Machines” section of the System Manager. Here you see the cluster with no SVM yet. You need to hit “Create” to start the configuration process:
Here you give a name to the new SVM, choose the protocols you want it to support, and select dataontap01_01 as the root aggregate in which to store the SVM volumes. This aggregate is the one we created before. In the following step, you create a target alias for iSCSI (if you enabled it like I did), and select the subnet created before. You can also immediately create a new LUN for your vSphere cluster if you want. I’m going to do it in a second step, as I want to better configure the host initiators that will be authorized to access the LUNs:
Finally, you configure the username and password of the SVM (remember, SVMs are all about multi-tenancy, so in a production environment you can give a tenant access only to a specific SVM instead of the entire cluster), and assign an additional interface and IP for management. One IP of the subnet has been assigned to the data interface (the one vSphere will point to for iSCSI connections), the other to the dedicated management interface of the SVM. The SVM itself is now ready to be used:
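For reference, the subnet and the SVM built through these wizards map to roughly the following cluster-shell commands; the names are just placeholders, and the addresses follow the lab network shown later in this article, so adapt them to yours:
network subnet create -subnet-name StorageNet -broadcast-domain Default -subnet 10.2.70.0/24 -ip-ranges "10.2.70.147-10.2.70.148" (the subnet with its small pool of addresses)
vserver create -vserver svm1 -aggregate dataontap01_01 -rootvolume svm1_root -rootvolume-security-style unix (the SVM with its root volume on the data aggregate)
vserver iscsi create -vserver svm1 (enables the iSCSI target on the SVM)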
Create your iSCSI datastore
Once the SVM is up and running, it’s time to export an iSCSI volume to be used as a new datastore by vSphere. In the SVM, go under Storage -> LUNs, select the tab Initiator Groups and create a new one. It’s good to group all the ESXi hosts of a cluster into one single group, so whenever there’s a new LUN it’s easy to authorize the entire group instead of manually adding each single host. Also, if you add or remove an ESXi host from the group, permissions are automatically updated:
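The same initiator group can also be created from the SVM command line; the group name and the IQN below are just examples, the real IQNs can be read from the iSCSI adapter properties of each host:
lun igroup create -vserver svm1 -igroup esx_cluster -protocol iscsi -ostype vmware (creates an empty group for the vSphere cluster)
lun igroup add -vserver svm1 -igroup esx_cluster -initiator iqn.1998-01.com.vmware:esxi01-4f8a12bc (repeat for every ESXi host of the cluster)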
Now, go into the tab “LUN Management” and click Create. The LUN wizard is started. The first activity is to assign name, size and type to the LUN:
In the next steps, I chose to create a new flexible volume to hold the LUN, based on the aggregate I created at the beginning:
Finally, I assign permissions to access this LUN to the Initiator Group I created before:
I skip the quality of service part and complete the wizard. The LUN is ready to be used by my vSphere cluster. To confirm which IP address you need to connect to, you can quickly check in Configuration -> Protocols -> iSCSI:
10.2.70.147 is the IP address that you need to use as the target in ESXi configuration. Once you’ve added this new IP address into the iSCSI targets of the ESXi hosts and run a rescan of the storage, you can see the new 100GB LUN:
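If you prefer the ESXi command line to the Web Client for this part, the target can be added and the rescan run on each host like this (vmhba33 is just an example name for the software iSCSI adapter; check the real one with esxcli iscsi adapter list):
esxcli iscsi adapter discovery sendtarget add -A vmhba33 -a 10.2.70.147 (adds the SVM data LIF as a dynamic target)
esxcli storage core adapter rescan -A vmhba33 (rescans the adapter to detect the new LUN)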
With the usual datastore creation wizard you can now select the LUN and format it as a new VMFS volume. After configuring the new IP in the target section of each ESXi host, a quick rescan at the cluster level lets every host see the new datastore.
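A quick way to confirm it from the shell of any host:
esxcli storage filesystem list (the new VMFS datastore should be listed on every host of the cluster)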