A lab environment is a very effective tool for keeping up with technology without having to worry about messing something up in production. Several of the places I have worked had development/test/demo environments set up for exactly that purpose: development, test, and demo work. I really couldn’t imagine testing new code, application or OS updates, or “what if” scenarios in a production environment. But those environments weren’t really “lab” environments.
In my experience, lab environments are often made up of whatever equipment can be found. They often end up under a cubicle desk, in a staging area, or in some closet. In a previous position, my ESX hosts were simply HP/Compaq D510s with Pentium 4 (without Hyper-Threading) processors, 2GB of RAM, and 40GB local disks. Those are terribly slow by today’s standards, and to be honest, they weren’t stellar then either.
As a virtualization tech guy who is looking to build a home vSphere lab, there are a few things I need to take into account:
- Will vSphere (current version) run on it without extensive modifications? (Like the work Dave’s doing at vm-help.com)
- Will the next version of vSphere (whatever/whenever that is) run on it without extensive modifications? – Simply speculation
- Will alternate hypervisors run on it? (XenServer/Hyper-V/KVM/etc) – Who knows, maybe some comparisons
- Will it cost an arm and a leg? – What can I get, and at what price?
- What are the dimensions of the hosts?
- Will it be viable for a decent amount of time? – What is my RTO going to be?
- The long-debated question: Intel or AMD?
These are some important questions to ponder.
1. Will vSphere run on it without extensive modifications?
Any VMware rep/tech/enthusiast will simply ask, “Is it on the Hardware Compatibility List (HCL)?” That is the standard response. I’m not going to go into the whole HCL debate, but suffice it to say that I personally would prefer as much compatibility as possible. I have upgraded a lab before, only to have the hosts fail on reboot after an updated release was installed. That can be a major pain.
2. Will the next version of vSphere run on it without extensive modifications?
There is no way to truly know this until some new release is generally available.
3. Will alternate hypervisors run on it?
The only way to know this is to see whether their minimum requirements are also met by the gear you choose. Hyper-V, enabled as a role in Windows Server 2008 R2, is pretty easy to figure out, provided the processor/motherboard has the appropriate virtualization instruction sets and supported hardware drivers. XenServer and other Linux-based hypervisors take a little more research.
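For a quick sanity check before buying (or before trusting a spec sheet), something like the Python sketch below can be run from a Linux live CD booted on the candidate box. It just looks for the vmx (Intel VT-x) or svm (AMD-V) CPU flags; the script and its wording are my own rough example, not an official compatibility test, and certainly not a substitute for the HCL.

```python
# Minimal check for hardware virtualization support (Intel VT-x / AMD-V).
# Run from a Linux live CD booted on the candidate hardware; it only reads
# /proc/cpuinfo, so treat it as a first pass, not an HCL replacement.

def cpu_flags(cpuinfo_path="/proc/cpuinfo"):
    """Collect the CPU feature flags reported by the kernel."""
    flags = set()
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return flags

if __name__ == "__main__":
    flags = cpu_flags()
    if "vmx" in flags or "svm" in flags:   # vmx = Intel VT-x, svm = AMD-V
        print("CPU advertises VT-x/AMD-V: 64-bit guests and Hyper-V are at least possible")
    else:
        print("No hardware virtualization flags found: check the BIOS or pick different gear")
```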
4. Will it cost an arm and a leg?
This is a pretty tough question. There are a few things to take into account here.
A. What type of processor will be used? How many cores are needed? How many cores will be sufficient?
From what I have seen in my previous labs, CPU is seldom a constraint, since lab workloads aren’t typical “production” workloads.
B. What type of motherboard will be used? Is the chipset supported? Is the onboard NIC supported?
To get a motherboard with an onboard NIC that vSphere can use out of the box, a server-class motherboard typically has to be used. That can add anywhere from $100 to $300 to the cost of the motherboard. An alternative to a server-class motherboard is to add supported NICs to a desktop board.
C. How much RAM can be installed in the motherboard? What type of RAM does the motherboard require?
Depending on the socket and board type, anywhere from 2 to 6 memory slots can be found on a desktop board, with server boards having up to 12 slots. That can be quite a bit of RAM. With memory stick sizes ranging from 1GB to 8GB, and prices all over the map depending on size and memory type (DDR/DDR2/DDR3/ECC), it can be a difficult task to get a good price per GB (see the quick comparison sketched after this list). *Don’t forget that server-class motherboards often say they will take either non-ECC or ECC RAM, but you may have issues anyway. Despite the cost difference, if you get a server-class motherboard with an ECC requirement, go ahead and be prepared to give up some more of your hard-earned money.
D. Will local storage be used? Or will remote storage be used?
Remote storage isn’t strictly required with vSphere, and there are some VSAs (virtual storage appliances) available that can share local storage across hosts and present it as a single datastore. Remote storage can be pricey, but there are some very good offerings out there.
E. What about environmental concerns like power and cooling?
What size power supply does the whole setup require, and how hot is it going to run? Do I really need a 1-kilowatt power supply? How much more will my utility bill be every month if I leave these hosts running 24×7 (electricity/cooling)? A rough estimate is sketched below. There are quite a few low-power options, but will they be fast/efficient enough to perform appropriately?
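To keep the price-per-GB question from point C honest, a quick comparison like the Python sketch below helps. The configurations and prices in it are placeholder numbers I made up for illustration, not quotes; plug in whatever the vendors are actually charging when you shop.

```python
# Quick price-per-GB comparison for a few candidate memory configurations.
# The configurations and prices below are made-up placeholders -- swap in
# real vendor pricing before making any decisions.

configs = [
    # (description, number of sticks, GB per stick, price per stick in USD)
    ("4 x 2GB DDR2 non-ECC", 4, 2, 35.00),
    ("6 x 4GB DDR3 non-ECC", 6, 4, 55.00),
    ("6 x 4GB DDR3 ECC",     6, 4, 75.00),
]

for name, sticks, gb_each, price_each in configs:
    total_gb = sticks * gb_each
    total_cost = sticks * price_each
    print(f"{name}: {total_gb}GB for ${total_cost:.2f} (${total_cost / total_gb:.2f}/GB)")
```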
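And for the utility-bill question in point E, a back-of-the-envelope estimate is easy enough. The host count, average draw, and rate in this Python sketch are assumptions I picked for illustration, not measurements; something like a Kill A Watt style meter will tell you the real draw.

```python
# Back-of-the-envelope monthly electricity cost for running lab hosts 24x7.
# The host count, average draw, and utility rate are assumptions -- plug in
# your own numbers (a plug-in power meter gives the actual draw).

hosts = 2                  # number of lab hosts left running
avg_watts_per_host = 150   # assumed average draw, not the PSU's nameplate rating
rate_per_kwh = 0.10        # assumed utility rate in $/kWh
hours_per_month = 24 * 30

kwh_per_month = hosts * avg_watts_per_host * hours_per_month / 1000.0
print(f"~{kwh_per_month:.0f} kWh/month, roughly ${kwh_per_month * rate_per_kwh:.2f}/month")
```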
5. What are the dimensions of the hosts? Where are they going to go? I live in Louisiana; no basements here.
There are many options for micro-ATX-sized systems, and coupled with a slim micro-ATX case, they can get pretty small.
6. Will it be viable for a decent amount of time?
I had a couple of Dell PowerEdge 2650s several years ago. They were great for VI3, but when vSphere went GA, I found out they weren’t usable anymore, since they only had 32-bit processors. A prime example of good equipment that I had to put out to pasture because it couldn’t keep up with the technology.
7. The long-debated question: Intel or AMD?
I was burned once by a Cyrix configuration, and almost by an AMD configuration. My gut tells me to stick with Intel.
So those are the questions I’m currently pondering. What are you doing in your home lab? Feel free to take the poll on the right side of my blog, and tell me whether you are using a desktop- or server-class configuration. Maybe I’ll get enough responses to push me in one direction or another.
Thanks,
Jase
Here’s what I’m doing:
http://professionalvmware.com/2011/04/home-lab-gear/
It runs vSphere.current well enough… also does Citrix Xen like a boss.
Overall, the only “upgrade” I’m considering is more spindles… or SSDs, etc. Maybe another IX4 with SSDs and some ghetto tiering.
-Cody
http://professionalvmware.com
So, I know you will call me crazy, but I am using Win2K3 as my NFS server. Follow me here – I can run Win2K3 Std on older hardware, team my NICs, and feel comfortable with the target file system, since I’ve been dealing with it for quite a while :-). The extras I have loaded are Omni NFS from http://www.xlink.com/ and Drivesnapshot from http://www.drivesnapshot.de/en/index.htm.
Omni NFS is a rock-solid, simply implemented NFS service for Win2K3 – easy to configure, with PHENOMENAL support – they coded fixes for me within a day when we realized the ESXi NFS client was crashing it. And, like a Mac, “…it just works.”
Drivesnapshot is a little gem of a package from Germany that gives me the ability not just to snap an NTFS volume to an external media device of my choosing, but also to mount the snap as a network drive to recover an individual VMDK, or to use something like Winmount to dig inside the file.
As for hardware, I am straddling the line on whether to build or buy. Cody’s price for the T110s is phenomenal, but I just found a Supermicro ATX board on Newegg that REALLY fits the bill. Maybe I’ll just sponge off Jase’s lab instead :-).