
Posts

Buying the right NAS device for your home lab.

Buying the right NAS device for a vSphere home lab is not an easy task. This blog post documents the decision process you should go through, IMHO. First, decide which data you are going to put on it. Lots of people buy a NAS for secondary data only (i.e. backups), but in a home lab, there's probably primary data too. How important is the data, and do you require a backup of this primary data? Then, think about the volume of data you need. Is it 1TB, more like 5TB, or rather 10TB? Number three, protection level. No one wants to lose data, but how badly? Surviving one disk failure is a minimum, but a RAID5 set enters its "danger zone" when that happens: an additional failure will make you lose all the data on the set. The danger zone ends after you've replaced the failed disk and its contents have been rebuilt. RAID6 only enters the danger zone after losing a second disk before the first one's rebuild has finished. Know your danger zone! A fourth decision is spe...
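The protection levels above boil down to a quick capacity calculation. A minimal sketch of my own (the function name and the assumption of equal-sized disks are mine, not from the post):

```shell
# Usable capacity in TB for common RAID levels, given disk count and
# per-disk size in TB. Assumes all disks are the same size.
raid_usable_tb() {
  level=$1 disks=$2 size_tb=$3
  case $level in
    5)  echo $(( (disks - 1) * size_tb )) ;;  # one disk's worth of parity, survives 1 failure
    6)  echo $(( (disks - 2) * size_tb )) ;;  # two disks' worth of parity, survives 2 failures
    10) echo $(( disks / 2 * size_tb )) ;;    # mirrored pairs, survives 1 failure per pair
  esac
}
raid_usable_tb 5 4 2   # 4x 2TB in RAID5 -> 6
raid_usable_tb 6 6 2   # 6x 2TB in RAID6 -> 8
```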

Boot device priority in a vSphere VM

While playing around with the bios.bootDeviceClasses parameter (as shown in this example), we found out that a device not specified in allow: would still be used if all "allow:"ed devices are unusable (no CD connected, no PXE server found, etc.), and that a device specified in deny: would still be used if all other devices are unusable. So contrary to what the documentation suggests, "allow:" just moves certain devices to the front of the boot device list, and "deny:" moves those devices to the end of the list. Hope this helps other people trying to make sense of setting the boot order in a VM to achieve a specific behavior. In our case: getting a VM to reliably boot from CD for automated deployment using the SDK.
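For reference, the setting lives in the VM's .vmx file. A typical line to pull the CD drive to the front of the boot order looks like this (hedged: per the behavior described above, this reorders the boot list rather than strictly restricting it):

```
bios.bootDeviceClasses = "allow:cd"
```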

Too much redundancy will kill you

A customer asked me to verify their vSphere implementation. Everything looked perfectly redundant, in the traditional elegant way: crossed over between layers to avoid single points of failure. I had to break the bad news: too much redundancy can mean NO redundancy. In this case: the host has 4 network interfaces (2x dual-port card). VMs connect to a vSwitch, which has redundancy over vmnic0 and vmnic2 (using 1 port of each card). Another vSwitch for the storage traffic has the same level of redundancy, using vmnic1 and vmnic3. Looking good. Then the physical level: 4 host interfaces, 2 interconnected network switches. The traditional |X| design connects the two interfaces of every card to different switches. Looking good. But looking at both configurations together, you'll see that every vSwitch gets connected to just one physical switch. The sum of two crossed redundancy configurations equals no redundancy at all. Enabling CDP or LLDP can help you identify this problem, as you can identi...
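That sanity check can be mechanized. A minimal sketch of my own (the function name and the vmnic:switch input format are mine; in real life you'd feed it the CDP/LLDP neighbor seen on each vmnic):

```shell
# Given a vSwitch name plus vmnic:physical-switch pairs for its uplinks,
# flag the vSwitch when all uplinks land on the same physical switch.
check_vswitch() {
  name=$1; shift
  n=$(for pair in "$@"; do echo "${pair#*:}"; done | sort -u | wc -l | tr -d ' ')
  if [ "$n" -eq 1 ]; then
    echo "$name: all uplinks on one physical switch - NO redundancy"
  else
    echo "$name: OK ($n physical switches)"
  fi
}
# The customer's setup: each vSwitch crossed over the NIC cards, each NIC
# card crossed over the switches... and the two crossings cancel out.
check_vswitch vSwitch0 vmnic0:switchA vmnic2:switchA
check_vswitch vSwitch1 vmnic1:switchB vmnic3:switchB
```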

vCenter Appliance and underscores in hostnames

Found out the hard way: don't use underscores in hostnames. They're not allowed in DNS hostnames, and they break things. In this case: joining the vCenter Server Appliance (VCSA) to an Active Directory domain doesn't work if the hostname of the appliance contains an underscore (_). It also doesn't work if the hostname is "localhost". If your appliance uses DHCP, the appliance gets its hostname through reverse DNS. So in that case, it _is_ a freaking DNS problem.
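A quick way to catch this before deploying: validate the hostname against the classic RFC 952/1123 rules. A small sketch of my own (the function name is mine):

```shell
# Accept only letters, digits and hyphens per label; labels must not start
# or end with a hyphen. Underscores are rejected.
valid_hostname() {
  echo "$1" | grep -Eq '^([A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?\.)*[A-Za-z0-9]([A-Za-z0-9-]*[A-Za-z0-9])?$'
}
valid_hostname vcsa01.lab.local && echo "ok"        # passes
valid_hostname vcsa_01.lab.local || echo "invalid"  # underscore -> rejected
```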

vSphere5 nested virtualization as seen in /proc/cpuinfo

I won't blog about the whole vhv.allow="true" procedure here; that's been covered elsewhere. But what does nested virtualization change in a VM? Well, the CPU features that are exposed change. A regular 64-bit Linux VM sees:

# grep flags /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc up arch_perfmon pebs bts rep_good xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx hypervisor lahf_lm ida arat

A 64-bit VM with nested virtualization enabled sees:

# grep flags /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss syscall nx rdtscp lm constant_tsc up arch_perfmon pebs bts rep_good xtopology tsc_reliable nonstop_tsc aperfmperf unfair_spinlock pni pclmulqdq vmx ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx hy...
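To script the check, it's enough to look for the vmx (Intel VT-x) or svm (AMD-V) flag. A small wrapper of my own so the logic is testable outside a VM:

```shell
# Does a cpuinfo "flags" line advertise hardware virtualization support?
# vmx = Intel VT-x, svm = AMD-V.
has_hv_flags() {
  echo "$1" | grep -Eq '(^| )(vmx|svm)( |$)'
}
# Inside the VM itself you'd simply run:  grep -E 'vmx|svm' /proc/cpuinfo
has_hv_flags "fpu pae vmx ssse3 hypervisor" && echo "nested HV exposed"
```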

SSH cipher speed

When setting up backups over SSH (e.g. rsnapshot with rsync over SSH), it's important to know that the default SSH cipher isn't necessarily the fastest one. In this case, the CPU-based encryption is the performance bottleneck, and making it faster means getting faster backups. A test (copying a 440 MB file between a fast Xeon CPU (fast = no bottleneck there) and an Atom-based NAS) shows that the arcfour family of ciphers is clearly the fastest in this setup:

cipher         real time    user time    bandwidth
arcfour        0m9.639s     0m7.423s     45.7 MB/s
arcfour128     0m9.751s     0m7.483s     45.1 MB/s
arcfour256     0m9.856s     0m7.764s     44.7 MB/s
blowfish-cbc   0m13.093s    0m10.909s    33.6 MB/s
aes128-cbc     0m22.565s    0m20.129s    19.5 MB/s
aes128-ctr     0m25.400s    0m22.951s    17.3 MB/s
aes192-ctr     0m28.047s    0m25.771s    15.7 MB/s
3des-cbc       0m51.067s    0m48.018s    8.6 MB/s

The default configuration of OpenSSH uses aes128-ctr, so changing the cipher to arcfour gets me a 2.5-fold increase in bandwidth here! Use the "Ciph...
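The bandwidth column is just file size over wall-clock time. A tiny helper of my own reproduces the numbers, and the commented-out loop shows how such a test can be driven (host and file names are placeholders):

```shell
# Benchmark loop sketch -- needs a real SSH server, so it stays commented out:
# for c in arcfour blowfish-cbc aes128-ctr; do
#   echo "== $c"; time ssh -c "$c" nas 'cat > /dev/null' < testfile
# done

# MB/s from file size in MB and elapsed seconds:
mbps() {
  awk -v sz="$1" -v t="$2" 'BEGIN { printf "%.1f\n", sz / t }'
}
mbps 440 25.400   # aes128-ctr -> 17.3
mbps 440 9.639    # arcfour    -> 45.6 (the table's 45.7 suggests the file was a hair over 440 MB)
```

Note that arcfour (RC4) is cryptographically broken and has since been disabled and removed in modern OpenSSH releases, so this particular tuning only applies to older installations.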

Dell's R210-II as vSphere home lab server

My VI3 and vSphere4 home lab consisted of whitebox PCs: MSI-based nonames for VI3, Shuttle SX58J3s for vSphere4. For the new vSphere5 generation, I wanted some real server hardware. Because of shallow depth requirements, the choice of rackmount servers was limited. I picked the Dell PowerEdge R210 II over the SX58J3 because:
- it's on the vSphere HCL (the SX58J3s won't boot vSphere5 RC!)
- low-TDP Sandy Bridge CPUs are available (I got the E3-1270)
- the onboard dual BCM5716 NICs support iSCSI offload (aka "dependent HW iSCSI")
- IPMI is built in (not tested yet)
- it's dense: 1U (the SX58J3 is about 4 units high, but 2 fit side by side in 19")
- one free PCIe slot (the SX58J3 has 2 slots, but needs a VGA card)
- not incredibly expensive (up to 16GB RAM)

Downsides:
- only one free PCIe slot (maxing out on GbE NICs needs an expensive quad-port card)
- incredibly expensive (with 32GB RAM it's 3x the price of a 16GB config)
- can't buy it without at least one disk. I'll be ru...