Buying the right NAS device for a vSphere home lab is not an easy task. This blog post documents the decision process I think you should go through.
First, decide which data you are going to put on it. Lots of people buy a NAS for secondary data only (e.g. backups), but in a home lab, there is probably primary data too. How important is that data, and do you need a backup of it?
Then, think about the volume of data you need. Is it 1TB, more like 5TB, or rather 10TB?
Number three, protection level. No one wants to lose data, but how badly? Surviving one disk failure is the minimum, but a RAID5 set enters its "danger zone" the moment that happens: one additional failure will lose all the data on the set. The danger zone only ends after you have replaced the failed disk and its contents have been rebuilt. A RAID6 set enters its danger zone only when a second disk fails before the first has been rebuilt. Know your danger zone!
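To make the trade-off above concrete, here is a small sketch (my own illustration, not from any vendor documentation) of usable capacity versus fault tolerance for the common RAID levels, assuming identical disks:

```python
def raid_summary(num_disks: int, disk_tb: float, level: str) -> dict:
    """Return usable capacity (TB) and how many disk failures the set survives."""
    if level == "raid5":
        usable = (num_disks - 1) * disk_tb   # one disk's worth of parity
        tolerates = 1                        # danger zone starts after 1 failure
    elif level == "raid6":
        usable = (num_disks - 2) * disk_tb   # two disks' worth of parity
        tolerates = 2                        # danger zone starts after 2 failures
    elif level == "raid10":
        usable = (num_disks // 2) * disk_tb  # mirrored pairs
        tolerates = 1                        # guaranteed minimum: one per mirror pair
    else:
        raise ValueError(f"unknown level: {level}")
    return {"usable_tb": usable, "tolerates_failures": tolerates}

# Example: an 8-bay NAS with 1 TB disks in RAID6
print(raid_summary(8, 1.0, "raid6"))
```

With 8 bays, RAID6 costs you two disks of capacity, but you stay out of the danger zone after a single failure — which is exactly the argument for many-slot enclosures.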
A fourth decision is speed. Bandwidth is a concern to some, but on a Gbit switch, a device with 4 or more disks can often saturate a single link, and multiple Gbit links can help if more bandwidth is needed. The most important performance indicator, however, is IOPS. Knowing how many IOPS you need is extremely difficult, but once you arrive at a figure, getting those IOPS is a matter of spreading your data over enough individual disks: one WD Caviar Red drive manages about 112 write IOPS or 45 read IOPS at a 4 KB block size. Caching can greatly improve host-facing IOPS as well. This article gives a great view on the world of disk bandwidth, IOPS and latency.
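The "spread your data over enough disks" step can be sketched as a back-of-the-envelope calculation. The write penalty values are the textbook figures for parity RAID (each host write costs several backend I/Os), and caching is deliberately ignored, so treat this as a worst-case estimate:

```python
import math

def spindles_needed(target_iops: int, per_disk_iops: int,
                    raid_write_penalty: int = 4,
                    write_fraction: float = 0.3) -> int:
    """Estimate how many data disks are needed to reach a target IOPS figure.

    raid_write_penalty: backend I/Os per host write -- classically 4 for
    RAID5, 6 for RAID6, 2 for RAID10. write_fraction is an assumed
    read/write mix, not something measured from a real workload.
    """
    backend_iops = (target_iops * (1 - write_fraction)
                    + target_iops * write_fraction * raid_write_penalty)
    return math.ceil(backend_iops / per_disk_iops)

# Example: 1000 host IOPS on disks doing ~100 IOPS each, RAID5, 30% writes
print(spindles_needed(1000, 100))
```

Even a modest 1000-IOPS target quickly demands far more spindles than a 2- or 4-bay box can hold, which is the math behind the conclusion below.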
You should also know which protocols your NAS will need to speak, but since most devices do CIFS, NFS and iSCSI anyway, most use cases are covered. If you need specialty features like replication, filter on those too. Also, is your device officially supported? Actual support might not matter for a home lab, but it is the strongest statement you can get that the device will work.
Conclusion: in most environments, this is going to lead to a NAS configuration with a high number of slots (forget the 2- to 4-bay models) and relatively small disks in those slots. And that is ... a lot more expensive than just adding 3TB drives until you reach the volume you need. As always, there is no such thing as a free lunch: you get what you pay for.