On the face of it, the VMware vSphere Storage Appliance (vSA) is a really good idea.
Many virtualisation installations have a bunch of servers but no separate networked storage on which the VMs can sit. This means the VMs are tied to the host they are running on. Amongst other disadvantages, it means that if the host fails, the VMs go with it. It's a bit like the old physical days: lose the server, lose the service.
In a decently configured SAN setup, HA will restart any guest servers on other hosts, subject to certain conditions - but in principle, provided you have a) the capacity and b) configured it correctly, your servers will be back quite quickly.
If you factor in Fault Tolerance (or guest-level resilience like Exchange DAGs) then users might not even notice an outage. Perfect.
The vSA gives the owner of servers without a SAN the same benefits. Internal storage on the host servers is consolidated into a single pool available to all hosts. In the event of a host failure, the other hosts still hold copies of the guest VMs and can bring them back quickly.
But the conditions and requirements attached to this are somewhat, ahem, interesting:
1. You must have a RAID10 configuration for the internal storage.
2. Each server must have four gigabit Ethernet ports to provide triangulated connections to the other two servers (the vSA is aligned with the SMB editions and only runs on three servers).
3. Best practice is that vCenter should not run on the vSA. VMware staff at VMworld suggested it run on a separate box outside the cluster - how 2008!!
The consequences of this:
1. To provide (say) 3TB of usable storage, the installation will need 12TB of raw disk space - RAID10 halves the raw capacity on each host, and the vSA then mirrors each datastore onto another host, halving it again (see the sketch after this list).
2. You need to re-use an old box (hardware support contract anyone? RAID support anyone? Driver support anyone?) to run the vCenter server. And don't forget, this "old" box has to be 64-bit!!
3. You need to invest in 6 dual-port NICs (you could get quad-port cards, but it's better to spread the physical risk across two cards per server).
4. You should have a separate gigabit switch to link up the vSA, so that LAN traffic doesn't impact performance and your SAN traffic stays secure.
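To see where that 4:1 ratio in point 1 comes from, here's a back-of-envelope sketch in Python. The mechanism as I understand it (treat it as my assumption rather than gospel) is local RAID10 plus vSA mirroring across hosts:

# Back-of-envelope: usable vSA capacity from raw disk.
# Assumption (mine): RAID10 keeps 50% of each host's raw capacity, and the
# vSA then mirrors every datastore onto a second host, keeping 50% of that.
hosts = 3
raw_per_host_tb = 4                      # e.g. 4 x 1TB drives in each host
raw_total_tb = hosts * raw_per_host_tb   # 12TB raw across the cluster
after_raid10_tb = raw_total_tb * 0.5     # 6TB once RAID10 has taken its half
usable_tb = after_raid10_tb * 0.5        # 3TB once the vSA mirrors it
print(f"{raw_total_tb}TB raw -> about {usable_tb:.0f}TB usable")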
You then get an under-the-covers SAN running across all the hosts, provisioning storage for your VM guests.
Let's say £100 for each dual-port NIC card and £200 for each of the 12 1TB drives. That's £600 plus £2,400, so £3,000 in total.
The alternative: say, a NetGear ReadyNAS 3200 (other SANs are available!) with 6TB of raw disk giving about 3.5TB usable in a RAID6-style configuration, which can be had for around £3,000. I'd put a second dual-port NIC in the SAN to give resilience on the SAN connections, plus another two resilient ports for a management interface; say £175 (it's special, it's for a SAN). You'd still need the switch, and I'd certainly have two NIC cards in each server for physical resilience, so let's say we still buy the 6 dual-port NICs for £600 in total. You might also want a pair of disks in each server to provide a RAID1 mirrored boot drive, but as you can boot ESXi from USB I'm going to say no (we are on an economy drive, after all).
This means the SAN option is going to set you back about £775 (roughly 25%) more than the vSA hardware.
Oh, but wait, I forgot something. The vSA licence costs money: a shade under $8,000, or say (and I'm being generous) about £5,000. But hold on - if you're a new customer buying VMware for the project, they'll give you a whacking 40% discount. So let's call it £3,000.
Your 25% saving by not buying the SAN has just turned into a premium: on these figures the vSA route comes to about £6,000 against roughly £3,775 for the SAN, so the vSA works out around 60% dearer (the sums are sketched below).
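For anyone who wants to check those sums, here's the same back-of-envelope comparison in Python, using the rough prices above (recollections, not quotes - swap in your own figures):

# Rough cost comparison using the figures quoted above.
vsa_hardware = 6 * 100 + 12 * 200        # 6 dual-port NICs + 12 x 1TB drives = £3,000
vsa_licence = 3000                       # ~£5,000 list, call it £3,000 after the 40% discount
vsa_total = vsa_hardware + vsa_licence   # £6,000

san_total = 3000 + 175 + 6 * 100         # ReadyNAS + its extra NIC + the same server NICs = £3,775

premium = vsa_total - san_total          # £2,225
print(f"vSA £{vsa_total} vs SAN £{san_total}: "
      f"the vSA route costs £{premium} more ({premium / san_total:.0%})")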
What the %^]{ were they smoking when they came up with that idea???
Not only are you paying more, but:
1. Your ESX servers are spending valuable computing resources managing a virtual SAN across themselves.
2. Your ESX servers are also spending valuable computing resources serving data from the virtual SAN.
3. The setup is so intertwined (the vSA is managed by vCenter, as are the ESX hosts themselves) that VMware recommend you host vCenter off the cluster - so the vCenter server is more exposed to risk, and is an additional cost and burden (which I've not costed).
4. By recommending a physical vCenter server, VMware are exposing you to all the problems of a physical server - which they would normally rubbish.
5. If you hosted vCenter on the cluster instead, then if everything were shut down you might not be able to start your servers up again. No risk there then :-)
I am appalled.
If the licence were a factor of 10 cheaper, it might be worth considering. But for any business looking at new kit for a virtualisation project, steer well clear.
If (as VMware said in targeting the product) you are worried about managing another box, then a) you have to anyway in this model - the vCenter server - and b) get some training or decent support for the SAN. If you truly think managing the SAN is going to be a problem, then managing the ESX farm will be too. So get someone in to do it for you.
VMware - I expressed concerns directly to you this week about your perception and targeting of SMBs. This proves it to me.
Peter
PS: all numbers in the article are top-of-the-head recollections, not the latest figures from an Internet search. But they serve to prove the point.