ZFS - Building, Testing, and Benchmarking
by Matt Breitbach on October 5, 2010 4:33 PM EST
Posted in: IT Computing, Linux, NAS, Nexenta, ZFS
Test Blade Configuration
Our bladecenters are currently filled with high-performance blades that run our virtualized hosting environment. Since those blades are in production, we couldn't very well use them to test the performance of our ZFS system, so we had to build another blade. We wanted it to be similar in spec to the blades we were already running, while also taking advantage of newer technology released since many of those blades went into production. Our current environment is a mix of blades running dual Xeon 5420 processors with 32GB of RAM.
Following that tradition, we decided to use the SuperMicro SBI-7126T-S6 as our base blade. We populated it with dual Xeon 5620 processors (Intel Nehalem/Westmere-based 32nm quad cores) and 48GB of registered memory.
Front panel of the SBI-7126T-S6 Blade Module
Intel X25-V
Dual Xeon 5620 processors and 48GB of registered memory
Our tests will be run using Windows Server 2008 R2 and Iometer. We will be testing iSCSI connections over gigabit Ethernet, as this is what most budget-minded storage deployments are likely to use.
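For readers who want to set up a similar test client, the following is a minimal sketch of attaching the Windows Server 2008 R2 iSCSI initiator to a target from the command line. The portal address and target IQN shown here are placeholders for whatever your storage system actually advertises.

    rem Point the Microsoft iSCSI initiator at the storage system's portal
    rem (192.168.1.50 is a placeholder address), list what it advertises,
    rem and log in to the desired target.
    iscsicli QAddTargetPortal 192.168.1.50
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.2010-09.org.example:target0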
Price of OpenSolaris box
The OpenSolaris box as tested was quite inexpensive for the amount of hardware in it. The overall cost of the OpenSolaris system was $6,765. The breakdown is below:
| Part | Number | Cost | Total |
| --- | --- | --- | --- |
|  | 1 | $1,199.00 | $1,199.00 |
|  | 2 | $166.00 | $332.00 |
|  | 1 | $379.00 | $379.00 |
|  | 1 | $253.00 | $253.00 |
|  | 2 | $378.00 | $756.00 |
|  | 2 | $414.00 | $828.00 |
|  | 2 | $109.00 | $218.00 |
|  | 20 | $140.00 | $2,800.00 |
| Total |  |  | $6,765.00 |
Price of Nexenta
While OpenSolaris is completely free, Nexenta is a bit different, as there are software costs to consider when building a Nexenta system. If you decide to use Nexenta instead of OpenSolaris, there are three versions to choose from. The first is Nexenta Core Platform, which allows unlimited storage but has no GUI. The second is Nexenta Community Edition, which supports up to 12TB of storage and a subset of the features. The third is their high-end solution, Nexenta Enterprise, a paid product with a broad feature set and commercial support, accompanied by a price tag.
The hardware costs for the Nexenta system are identical to the OpenSolaris system. We opted for the 45-day Enterprise trial license (unlimited storage) for testing, as we have 18TB of billable storage; Nexenta charges based on the number of terabytes in your storage array. As configured, the Nexenta license for our system would cost $3,090, bringing the total cost of a Nexenta Enterprise-licensed system to $9,855.
Price of Promise box
Costs for the Promise M610i are relatively simple to calculate: there is the cost of the chassis and the cost of the drives. The breakdown of those costs is below.
| Part | Number | Cost | Total |
| --- | --- | --- | --- |
| Chassis | 1 | $4,170.00 | $4,170.00 |
| Drives | 16 | $140.00 | $2,240.00 |
| Total |  |  | $6,410.00 |
How we tested with Iometer
Our tests are all run from Iometer using a custom configuration. The .icf configuration file can be found here. We ran the following tests, starting at a queue depth of 9 and ending at a queue depth of 33, stepping by 3. This lets us start below a queue depth of 1 per drive and finish at roughly 2 per drive (depending on the storage system being tested).
The tests were run in the order listed below, for 3 minutes at each queue depth; a sketch of scripting an unattended run follows the list.
4k Sequential Read
4k Random Write
4k Random 67% write 33% read
4k Random Read
8k Random Read
8k Sequential Read
8k Random Write
8k Random 67% Write 33% Read
16k Random 67% Write 33% Read
16k Random Write
16k Sequential Read
16k Random Read
32k Random 67% Write 33% Read
32k Random Read
32k Sequential Read
32k Random Write
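Iometer can also be driven unattended from the command line, which is convenient for a long sweep like this one. The following is a minimal sketch, assuming the access specifications and queue-depth steps above have already been saved into the .icf file; the file names here are placeholders.

    rem Run the full sweep described by the saved configuration file and
    rem write per-test results to a CSV file for later analysis.
    Iometer.exe /c zfs_sweep.icf /r zfs_sweep_results.csv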
The tests were not ordered in any way intended to bias the results; we created the profile once and ran it against each system. Before testing, a 300GB iSCSI target was created on each system, formatted with NTFS defaults, and then Iometer was started. Iometer created a 25GB working set and then began running the tests.
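On the OpenSolaris and Nexenta systems, an iSCSI target of that size can be carved out of the pool as a ZFS volume and exported through COMSTAR. The following is a minimal sketch assuming a pool named tank; the volume name and the GUID are placeholders (Nexenta's web GUI can accomplish the same steps).

    # Create a 300GB ZFS volume (zvol) to back the iSCSI LUN.
    zfs create -V 300g tank/iometer-test

    # Register the zvol with COMSTAR as a logical unit, then expose it to all
    # initiators with a default view. The GUID below is a placeholder for the
    # one sbdadm reports when the LU is created.
    sbdadm create-lu /dev/zvol/rdsk/tank/iometer-test
    stmfadm add-view 600144f0c73a00000000000000000001

    # Make sure the iSCSI target service is running and create a target.
    svcadm enable -r svc:/network/iscsi/target:default
    itadm create-target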
While running these tests, bear in mind that the longer the tests run, the better the performance should be on the OpenSolaris and Nexenta systems. This is due to L2ARC caching: the L2ARC populates slowly to reduce the amount of wear on the MLC flash it resides on, so cache hit rates, and therefore performance, improve the longer a workload runs.
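For reference, adding an SSD to a pool as an L2ARC device and watching it fill is straightforward. The sketch below assumes a pool named tank and a device name of c2t1d0, both placeholders for your own configuration.

    # Add an SSD as a cache (L2ARC) device to the pool.
    zpool add tank cache c2t1d0

    # Watch per-device activity every 5 seconds; the cache device's allocated
    # space grows slowly as the L2ARC warms up.
    zpool iostat -v tank 5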
Comments
diamondsw2 - Tuesday, October 5, 2010
You're not doing your readers any favors by conflating the terms NAS and SAN. NAS devices (such as what you've described here) are Network Attached Storage, accessed over Ethernet, and usually via fileshares (NFS, CIFS, even AFP) with file-level access. SAN is Storage Area Network, nearly always implemented with Fibre Channel, and offers block-level access. About the only gray area is that iSCSI allows block-level access to a NAS, but that doesn't magically turn it into a SAN with a storage fabric.
Honestly, given the problems I've seen with NAS devices and the burden a well-designed one will put on a switch backplane, I just don't see the point for anything outside the smallest installations where the storage is tied to a handful of servers. By the time you have a NAS set up *well* you're inevitably going to start taxing your switches, which leads to setting up dedicated storage switches, which means... you might as well have set up a real SAN with 8Gbps Fibre Channel and been done with it.
NAS is great for home use - no special hardware and cabling, and options as cheap as you want to go - but it's a pretty poor way to handle centralized storage in the datacenter.
cdillon - Tuesday, October 5, 2010
The terms NAS and SAN have become rightfully mixed, because modern storage appliances can do the jobs of both. Add some FC HBAs to the above ZFS storage system and create some FC targets using COMSTAR in OpenSolaris or Nexenta and guess what? You've got a "SAN" box. Nexenta can even do active/active failover and everything else that makes it worthy of being called a true "Enterprise SAN" solution.
I like our FC SAN here, but holy cow is it expensive, and it's not getting any cheaper as time goes on. I foresee iSCSI via plain 10G Ethernet and also FCoE (which is 10G Ethernet + FC sharing the same physical HBA and data link) completely taking over the Fibre Channel market within the next decade, which will only serve to completely erase the line between "NAS" and "SAN".
mbreitba - Tuesday, October 5, 2010
The systems as configured in this article are block-level storage devices accessed over a gigabit network using iSCSI. I would strongly consider that a SAN device over a NAS device. Also, the storage network is already segregated onto a separate network, isolated from the primary network.
We also backed this device with 20Gbps InfiniBand, but had issues getting the IB network stable, so we did not include it in the article.
Maveric007 - Tuesday, October 5, 2010
I find iSCSI is closer to a NAS than a SAN, to be honest. The performance gap between iSCSI and a SAN is much wider than the gap between iSCSI and a NAS.
Mattbreitbach - Tuesday, October 5, 2010
iSCSI is block-based storage; NAS is file-based. The transport used is irrelevant. We could use iSCSI over 10GbE, or over InfiniBand, which would increase performance significantly and probably exceed what is available from the most expensive 8Gb FC.
mino - Tuesday, October 5, 2010
You are confusing NAS vs. SAN terminology with interconnect terminology, and vice versa.
SAN, NAS, DAS ... are abstract methods by which a data client accesses the stored data.
--Network Attached Storage (NAS), per definition, is an file/entity-based data storage solution.
- - - It is _usually_but_not_necessarily_ connected to a general-purpose data network
--Storage Area Network(SAN), per definition, is a block-access-based data storage solution.
- - - It is _usually_but_not_necessarily_THE_ dedicated data network.
Ethernet, FC, Infiniband, ... are physical data conduits, they are the ones who define in which PERFORMANCE class a solution belongs
iSCSI, SAS, FC, NFS, CIFS ... are logical conduits, they are the ones who define in which FEATURE CLASS a solution belongs
Today, most storage appliances allow for multiple ways to access the data, many of them simultaneously.
Therefore, presently:
Calling a storage appliance, of whatever type, a "SAN" is pure jargon.
- It has nothing to do with the device "being" a SAN per se
Calling an appliance, of whatever type, a "NAS" means it is/will be used in the NAS role.
- It has nothing to do with the device "being" a NAS per se.
mkruer - Tuesday, October 5, 2010
I think there needs to be a new term called SANNAS, or snaz, short for snazzy.
mmrezaie - Wednesday, October 6, 2010
Thanks, I learned a lot.
signal-lost - Friday, October 8, 2010
Depends on the hardware, sir.
My iSCSI DataCore SAN pushes 20k IOPS for the same reason that their ZFS does it (RAM caching).
Fibre Channel SANs will always outperform iSCSI run over crappy switching.
Currently Fibre Channel maxes out at 8Gbps in most arrays. Even with MPIO, you're better off with an iSCSI system and 10/40Gbps Ethernet if you do it right. Much cheaper, and you don't have to learn an entirely new networking model (Fibre Channel or InfiniBand).
MGSsancho - Tuesday, October 5, 2010
While technically a SAN, you can easily make it a NAS with a simple zfs set sharesmb=on, as I am sure you are aware.
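For completeness, a minimal sketch of what that looks like; the dataset name tank/shares is a placeholder, and sharenfs works the same way for NFS exports.

    # Share an existing dataset over SMB/CIFS (and, if desired, NFS) so the
    # same box serves file-level clients alongside its iSCSI targets.
    zfs set sharesmb=on tank/shares
    zfs set sharenfs=on tank/shares

    # Confirm the share properties.
    zfs get sharesmb,sharenfs tank/shares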