I recently received a 640 GB IODrive card from Fusion-IO to use in my Home Lab.
Fusion-IO designs, builds and sells a family of cards loaded with NAND flash chips that install in servers via PCI-E slots. The supported operating systems (Windows Server, VMware ESX and some Linux distributions) see them as a local disk.
Unlike other solutions such as SSDs, these cards require specialized drivers in order to be seen by the operating system. On the other hand, their performance is far beyond that of SSDs connected via standard SAS or SATA buses. In fact, Fusion-IO does not only use the PCI-E bus: its drivers also intercept the operating system's calls to the filesystem and redirect them to the card, where a proprietary file system presents itself to the operating system as a sort of “logical view”.
All this technology translates into stunning numbers: the card I have at home is declared to reach 145,000 IOPS (sequential write IOPS at 512 bytes) and, above all, a latency of 30 microseconds, roughly a thousand times lower than disk systems, whose latency is usually measured in milliseconds. In terms of speed and latency this technology comes really close to RAM, with the advantage that data persists as on a hard disk.
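To put those declared figures in perspective, here is a quick back-of-the-envelope calculation. The IOPS, block size and 30 microsecond latency are the vendor numbers quoted above; the rotating-disk latencies used for comparison are assumptions of mine, not Fusion-IO figures.

```python
# Rough arithmetic behind the quoted figures (illustrative only).
iops = 145_000            # declared sequential write IOPS at 512-byte blocks
block_size = 512          # bytes per I/O
card_latency_us = 30      # declared latency, microseconds

throughput_mb_s = iops * block_size / 1e6
print(f"512 B sequential writes: ~{throughput_mb_s:.0f} MB/s")

# Assumed rotating-disk latencies, from a fast idle disk to one under load
for disk_latency_ms in (5, 10, 30):
    ratio = disk_latency_ms * 1000 / card_latency_us
    print(f"vs a {disk_latency_ms} ms disk: ~{ratio:.0f}x lower latency")
```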
How can we use this type of card? HPC and databases will certainly benefit from it, but our VMware environments can take advantage of it too: the card can be used as super-fast local storage, as a host cache (think of VDI environments where you can load many, many VMs on a single host), or even as shared storage, either by running a VSA on top of it or by using their fascinating technology called IOTurbine.
In fact, VMware-based solutions like Nutanix are using Fusion-IO cards as their first storage tier.
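Whatever the use case, it can be worth sanity-checking the raw write latency before layering VMs or a VSA on top of the card. Below is a minimal Python sketch of such a check; the mount point /mnt/iodrive and the test parameters are assumptions of mine, and O_DIRECT is Linux-specific.

```python
import mmap
import os
import time

PATH = "/mnt/iodrive/latency-test.bin"   # hypothetical file on the card
BLOCK = 512                              # bytes per write, matching the 512 B figure
COUNT = 10_000                           # number of synchronous writes to time

# O_DIRECT bypasses the page cache but needs an aligned buffer;
# an anonymous mmap is page-aligned, which more than satisfies 512 B alignment.
buf = mmap.mmap(-1, BLOCK)

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_DIRECT | os.O_SYNC, 0o644)
start = time.perf_counter()
for _ in range(COUNT):
    os.lseek(fd, 0, os.SEEK_SET)         # rewrite the same block every time
    os.write(fd, buf)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"average write latency: {elapsed / COUNT * 1e6:.1f} microseconds")
print(f"synchronous write IOPS: {COUNT / elapsed:.0f}")
```

A single-threaded synchronous loop like this measures per-write latency rather than peak throughput; approaching the declared 145,000 IOPS would require many outstanding I/Os, for example with a tool like fio and a deep queue.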