Building the 2012 AnandTech SMB / SOHO NAS Testbed
by Ganesh T S on September 5, 2012 6:00 PM EST
Posted in: IT Computing, Storage, NAS
Motherboard
A number of vendors compete in the dual-processor workstation motherboard market. At the time of the build, LGA 2011 Xeons had already been introduced, and we decided to focus on boards supporting those processors. Since we wanted to devote one physical disk and one network interface to each VM, it was essential that the board have enough PCIe slots for multiple quad-port server NICs as well as enough native SATA ports. For our build, we chose the Asus Z9PE-D8 WS motherboard in the SSI EEB form factor.
Based on the C602 chipset, this dual LGA 2011 motherboard supports 8 DIMMs and has 7 PCIe 3.0 slots. The lanes can be organized as (2 x16 + 1 x16 + 1 x8) or (4 x8 + 1 x16 + 1 x8), and all the slots are physically 16 lanes wide. The Intel C602 chipset provides two SATA 6 Gbps ports and eight SATA 3 Gbps ports, and a Marvell 9230 PCIe SATA controller provides four extra 6 Gbps ports, for a total of 14 SATA ports. This allows us to devote two ports to the host OS of the workstation and one port to each of the twelve planned VMs. The Z9PE-D8 WS motherboard also has two GbE ports based on the Intel 82574L. Two Gigabit LAN controllers are not going to be sufficient for all our VMs; we will address this issue further down in the build.
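As a quick sanity check on the resource budget, here is a minimal sketch (in Python, with the VM count and the quad-port NIC assumption taken from the plan above) of how the 14 SATA ports divide up and how many quad-port cards the GbE shortfall implies:

```python
# Resource budget for the testbed: 12 VMs, each getting a dedicated
# physical disk and a dedicated network interface (per the plan above).
NUM_VMS = 12

# SATA: 2 + 8 ports from the C602 chipset, 4 from the Marvell 9230.
sata_ports = 2 + 8 + 4              # 14 in total
host_ports = 2                      # reserved for the host OS
assert sata_ports - host_ports == NUM_VMS   # exactly one port per VM

# Network: two onboard Intel 82574L GbE ports cannot cover 12 VMs.
onboard_nics = 2
shortfall = NUM_VMS - onboard_nics          # 10 more ports needed
quad_port_cards = -(-shortfall // 4)        # ceiling division -> 3 cards
print(f"Need {quad_port_cards} quad-port NICs for the remaining {shortfall} ports")
```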
The motherboard also has four USB 3.0 ports, courtesy of an ASMedia USB 3.0 controller. The Marvell SATA-to-PCIe bridge and the ASMedia USB 3.0 controller are connected to the eight PCIe lanes in the C602, while all the PCIe 3.0 lanes come from the processors. Asus also provides SSD caching support on the motherboard (any installed SSD can be used as a cache for frequently accessed data, without any size limitations), and the Z9PE-D8 WS includes a Realtek ALC898 HD audio codec, but neither of these features is relevant to our build.
CPUs
One of the main goals of the build was to ensure low power consumption. At the same time, we wanted to run twelve VMs simultaneously. To ensure smooth operation, each VM needs at least one vCPU allocated exclusively to it. The Xeon E5-2600 family (Sandy Bridge-EP) has CPUs with core counts ranging from 2 to 8 and TDPs from 60 W to 150 W, with two threads per core. Keeping in mind the number of VMs we wanted to run, we specifically looked at the 6- and 8-core variants, as two of those processors would give us 12 or 16 cores. Within these, we restricted ourselves to the low-power variants: the hexa-core E5-2630L (60 W TDP) and the octa-core E5-2648L / E5-2650L (70 W TDP).
CPU decisions for machines meant to run VMs usually have to be made after taking the requirements of the workload into consideration. In our case, the workload for each VM involved IOMeter and Intel NASPT (more on these in the software infrastructure section). Both of these programs tend to be I/O-bound rather than CPU-bound, and can run reliably even on Pentium 4 processors. Therefore, the per-core performance of the three processors was not a factor we were worried about.
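To make the I/O-bound claim concrete, here is a minimal sketch of the kind of measurement such tools perform: a sequential-read throughput probe against a network share. This is illustrative only; the share path and block size are placeholder assumptions, and IOMeter/NASPT use far richer access patterns (random/sequential mixes, varying block sizes, multiple outstanding I/Os).

```python
import time

# Minimal sequential-read throughput probe against a mounted NAS share.
# SHARE_FILE is a placeholder path, not from the actual testbed config.
SHARE_FILE = r"\\nas-under-test\share\testfile.bin"
BLOCK_SIZE = 1024 * 1024  # 1 MB reads

def sequential_read_throughput(path: str) -> float:
    """Return read throughput in MB/s. The loop spends nearly all its
    time waiting on the share, so a slow CPU barely changes the result."""
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(BLOCK_SIZE)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return (total_bytes / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print(f"{sequential_read_throughput(SHARE_FILE):.1f} MB/s")
```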
Out of the three processors, we decided to go ahead with the hexa-core Xeon E5-2630L. The cores run at 2 GHz, but can Turbo up to 2.5 GHz when just one core is active. Each core has a 256 KB L2 cache, backed by a shared 15 MB L3. With a TDP of just 60 W, it enabled us to focus on energy efficiency, and two Xeon E5-2630Ls (a combined 120 W TDP) gave us the cores needed to run 12 VMs concurrently.
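The core arithmetic behind that decision, as a small sketch (assuming the one-exclusive-vCPU-per-VM policy stated earlier):

```python
# vCPU budget for two hexa-core, Hyper-Threaded Xeon E5-2630Ls.
SOCKETS, CORES_PER_SOCKET, THREADS_PER_CORE = 2, 6, 2
NUM_VMS = 12

physical_cores = SOCKETS * CORES_PER_SOCKET      # 12 cores
hw_threads = physical_cores * THREADS_PER_CORE   # 24 hardware threads

# One exclusively allocated vCPU per VM maps each VM onto its own
# physical core; the remaining threads absorb host/hypervisor overhead.
assert physical_cores >= NUM_VMS
print(f"{hw_threads - NUM_VMS} hardware threads left for the host OS")

# Power budget: two 60 W parts together stay below a single 150 W SKU.
print(f"Combined CPU TDP: {2 * 60} W")
```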
Coolers
The choice of coolers for the processors is dictated by the chassis used for the build. At the start of the build, we decided to go with a tower desktop configuration. Asus recommended the Dynatron R17 for use with the Z9PE-D8 WS, and we went ahead with their suggestion.
The R17 coolers are meant for LGA 2011 sockets in 3U-and-above rackmount form factors as well as tower desktop and workstation builds. They are made of aluminium fins with four copper heat pipes, and a thermal compound is pre-printed at the base. Installation of the R17s was quite straightforward, but care had to be taken to ensure that the side meant to mount the cooler's fan didn't face the DIMM slots on the Z9PE-D8 WS.
The fans on the R17 operate between 1000 and 2500 RPM, consuming between 0.96 W and 3 W across that range. Noise levels are respectable, ranging from 17 dBA to 32 dBA. The R17 can cool CPUs of up to 160 W TDP, so the 60 W E5-2630Ls were well within its envelope: the Dynatron R17s kept them between 45°C and 55°C even under our full workloads.
Comments
xTRICKYxx - Wednesday, September 5, 2012
May I ask why you guys need such high requirements? And why 12 VMs? I just think this is overkill. But it doesn't matter anyways... If I had a budget like this, I would totally build an awesome NAS like you guys have and follow this guide. Great job!

xTRICKYxx - Wednesday, September 5, 2012
I should clarify that I am looking at this NAS as a household commodity, not something where 10+ computers will be heavily accessing it.

mfed3 - Wednesday, September 5, 2012
still didn't read... this is hopeless..

extide - Thursday, September 6, 2012
Dude, they are NOT BUILDING A NAS!!! They are building a system to TEST other NAS's.
thomas-hrb - Thursday, September 6, 2012
It would also be nice to test against some of the other features, like iSCSI. Also, since the Thecus N4800 supports iSCSI, I would like to see that test redone with a slightly different build/deployment: create a single LUN on iSCSI, then mount that LUN in a VM host like ESXi, create some VMs (20GB per server should be enough for Server 2K8R2), and test it that way.

I don't know who would use NAS over SAN in an enterprise shop, but some of the small guys who can't afford an enterprise storage solution (fewer than 25 clients) might want to know how effectively a small NAS can handle VMs with advanced features like vMotion and fault tolerance. In fact, if you try some of those HP ML110 G7s (3 of them with a VMware Essentials Plus kit), you can get 12 CPU cores with 48GB RAM, with licensing, for about 10K. This setup will give you a decent amount of reliability, and if the NAS can support data replication, you could get a small setup with enterprise features (even if not enterprise performance) for less than the cost of 1 tray of FC-SAN storage.
Wixman666 - Wednesday, September 5, 2012
It's because they want to be able to really hammer the storage system.

The0ne - Wednesday, September 5, 2012
"The guest OS on each of the VMs is Windows 7 Ultimate x64. The intention of the build is to determine how the performance of the NAS under test degrades when multiple clients begin to access it. This degradation might be in terms of increased response time or decrease in available bandwidth."

12 is a good size, if not too small for a medium size company.
MGSsancho - Wednesday, September 5, 2012
12 is also a good size for a large workgroup. Alternatively, this is a good benchmark for students in dorms. Sure, there might be 4-5 people, but when you factor in computers using torrents and game consoles streaming Netflix along with TVs, it could be interesting. Granted, all of this is streaming except for the torrents and their random I/O. However, most torrent clients cache as much of the writes as they can. With the current AnandTech bench setup with VMs, this can be replicated.

DanNeely - Wednesday, September 5, 2012
The same reason they need 8-threaded benchmark apps to fully test a quad-core HT CPU. They're testing NASes designed to have more than 2 or 3 clients attached at once; simulating a dozen of them pushes the load on the NASes up, although judging by the results shown by the Thecus N4800, they probably fell short of maxing it out.

theprodigalrebel - Wednesday, September 5, 2012
Well, this IS Anandtech and the article is filed under IT Computing... ;)