Synology DS414j: An Ideal Backup NAS
by Ganesh T S on July 10, 2014 9:00 AM EST

Single Client Performance - CIFS & NFS on Linux
A CentOS 6.2 virtual machine was used to evaluate NFS and CIFS performance of the NAS when accessed from a Linux client, with IOZone as the benchmark. In order to standardize the testing across multiple NAS units, we mount the CIFS and NFS shares at startup with the following /etc/fstab entries:
//<NAS_IP>/PATH_TO_SMB_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER cifs rw,username=guest,password= 0 0
<NAS_IP>:/PATH_TO_NFS_SHARE /PATH_TO_LOCAL_MOUNT_FOLDER nfs rw,relatime,vers=3,rsize=32768,wsize=32768,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=<NAS_IP>,mountvers=3,mountproto=udp,local_lock=none,addr=<NAS_IP> 0 0
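For readers replicating the setup, a minimal sketch of activating these entries might look as follows (the mount folder path is a placeholder standing in for the values above, and root access on the client is assumed):

# Create the local mount folder referenced in /etc/fstab
mkdir -p /PATH_TO_LOCAL_MOUNT_FOLDER
# Mount everything listed in /etc/fstab that is not already mounted
mount -a
# Verify that the CIFS and NFS shares came up with the expected options
mount | grep -E 'cifs|nfs'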
The following IOZone command was used to benchmark the CIFS share:
iozone -aczR -g 2097152 -U /PATH_TO_LOCAL_CIFS_MOUNT -f /PATH_TO_LOCAL_CIFS_MOUNT/testfile -b <NAS_NAME>_CIFS_EXCEL_BIN.xls > <NAS_NAME>_CIFS_CSV.csv
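For reference, here is what the individual flags do, per the standard IOZone documentation (the output file names are just placeholders):

# -a : run the automatic mode, sweeping a range of file and record sizes
# -c : include close() in the timing measurements
# -z : used with -a, also test small record sizes against large files
# -R : generate an Excel-compatible report
# -g 2097152 : cap the maximum file size at 2097152 KB (2 GB)
# -U <mount> : unmount and remount the mount point between tests
# -f <file> : the temporary test file to use on the share
# -b <file> : write binary Excel-format results to the given file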
IOZone provides benchmark numbers for a multitude of access scenarios with varying file sizes and record lengths. Some of these are very susceptible to caching effects on the client side. This is evident in some of the graphs in the gallery below.
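The -U flag already forces an unmount/remount of the share between tests, but numbers for file sizes that fit comfortably in the client VM's RAM can still be inflated by the Linux page cache. One way to double-check a suspicious result (a hedged suggestion, not part of the original methodology) is to flush the client's caches before a manual re-run:

# Write back dirty pages, then drop the page cache, dentries and inodes
# (requires root on the client)
sync
echo 3 > /proc/sys/vm/drop_caches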
Readers interested in the hard numbers can refer to the CSV program output here. These numbers will gain relevance as we benchmark more NAS units in similar configurations.
The NFS share was also benchmarked in a similar manner with the following command:
iozone -aczR -g 2097152 -U /nfs_test_mount/ -f /nfs_test_mount/testfile -b <NAS_NAME>_NFS_EXCEL_BIN.xls > <NAS_NAME>_NFS_CSV.csv
The IOZone CSV output can be found here for those interested in the exact numbers.
A summary of the bandwidth numbers for the various tests, averaged across all file and record sizes, is provided in the table below. As noted previously, some of these numbers are skewed by caching effects. A look at the actual CSV outputs linked above makes the affected entries obvious.
Synology DS414j - Linux Client Performance (MBps)

IOZone Test | CIFS | NFS
Init Write | 57 | 34
Re-Write | 56 | 36
Read | 20 | 91
Re-Read | 20 | 91
Random Read | 11 | 34
Random Write | 47 | 35
Backward Read | 11 | 28
Record Re-Write | 33 | 885*
Stride Read | 19 | 68
File Write | 59 | 38
File Re-Write | 56 | 37
File Read | 14 | 64
File Re-Read | 14 | 65

*: Number skewed due to caching effect
41 Comments
edzieba - Thursday, July 10, 2014
Why RAID10 rather than RAID6 (or RAIDZ2)? Surely the superior robustness is worth the minimal performance reduction?

JimmaDaRustla - Thursday, July 10, 2014
RAID 6 has no space-efficiency advantage over RAID 1+0 in a four-drive setup (both leave two drives' worth of usable capacity). It also has no write performance gains, especially considering that it needs to calculate the parity blocks. Read speed is theoretically slower as well, since RAID 1+0 has two copies of the data to work from. And lastly, if a drive dies, RAID 1+0 suffers no performance decrease, while RAID 6 would take a hit because it would need to reconstruct blocks using the parity.

JimmaDaRustla - Thursday, July 10, 2014
Edit: I'm an amateur though, not sure if there is more to RAID 6, but in a 4-drive setup I would go with RAID 1+0.

bernstein - Thursday, July 10, 2014
RAID6 in a 4-bay home NAS is just asking for unnecessary trouble. However, RAID6 can survive the death of any two drives, whereas with RAID 1+0 your data is toast if the wrong two drives fail (with four drives, two of the six possible two-drive failure combinations take out both halves of a mirror). But if you value your data enough to invest in a NAS with RAID1/5/6, you actually want RAIDZ2.

piroroadkill - Thursday, July 10, 2014
4-bay NAS is such a pain in the ass! For years I've seen 4-bays across the board, but that's never been enough.

DanNeely - Thursday, July 10, 2014
Because 2/4 bay units are enough for the vast majority of home users.

Gunbuster - Thursday, July 10, 2014
Also no ARM chip is going to keep up with the overhead of more drive bays. You get a real server or SAN for that.

Samus - Thursday, July 10, 2014
Not necessarily CPU limited. I have an Areca RAID controller with 8 SAS channels (and up to three arrays) on an XScale 800MHz CPU with good overall performance. It's running two arrays in a server (one array is three S3500 SSDs, the other is five 900GB SAS drives). I simulated a drive failure by pulling power from one of the 900GB SAS drives, wiping it, and reattaching it under Server 2012, and it rebuilt the array (3.5TB, 2TB of data) in ~10 hours while maintaining high availability. A tax system running in Hyper-V and the 50GB Exchange store resided on that array while it was being rebuilt.
M/2 - Thursday, July 10, 2014
I agree. I've never understood why I would want to go through the trouble of configuring a NAS and then live with slow throughput, especially when I can connect a USB3 RAID to any cheap server (I've got a $600 Mac Mini) and get better performance. I get 104 MB/s on my external drive over the network vs. 240 MB/s locally. That's over gigabit Ethernet; I get about 23-30 Mb/s on 5 GHz Wifi-N. But I'm just a home user, what do I know? Maybe when you have many users there's an advantage, but look at how slow it is!

DanNeely - Thursday, July 10, 2014
Feeding multiple computers is one of the primary reasons to use a NAS instead of just connecting more drives locally.