Synology DS2015xs Review: An ARM-based 10G NAS
by Ganesh T S on February 27, 2015 8:20 AM EST
Introduction and Testbed Setup
Synology is one of the most popular COTS (commercial off-the-shelf) NAS vendors in the SMB / SOHO market segment. The models the company introduced in 2014 were mostly based on Intel Rangeley, the Atom-based SoC family targeting the storage and communications market. In December, however, Synology sprang a surprise by launching the DS2015xs, an ARM-based model with dual 10GbE ports. We covered the launch of the DS2015xs at the time and provided some details about the Annapurna Labs AL514 SoC inside it.
ARM-based SoCs for SMB / SOHO NAS units typically support up to four bays and come with dual GbE links, while Intel's offerings have enjoyed a virtual monopoly in the other tiers of the market. Synology's DS2015xs, with its native 10G capabilities, brings another contender into that space.
The DS2015xs is an 8-bay NAS unit presented as a step-up from the DS1815+. While the DS1815+ can expand up to a total of 18 bays with two DX513 expansion chassis, the DS2015xs is compatible with the 12-bay DX1215 expander (for a total of 20 bays). The main addition over the DS1815+ is the pair of built-in 10G SFP+ links (supporting direct-attach copper cables).
The specifications of the Synology DS2015xs are provided in the table below.
| Synology DS2015xs Specifications | |
| --- | --- |
| Processor | Annapurna Labs AL514 SoC (quad-core Cortex-A15 @ 1.7 GHz) |
| RAM | 4 GB |
| Drive Bays | 8x 3.5"/2.5" SATA II/III HDD/SSD (hot-swappable) |
| Network Links | 2x 1 GbE RJ-45 + 2x 10 GbE SFP+ |
| External I/O Peripherals | 2x USB 3.0, 1x Infiniband (for expansion unit) |
| Expansion Slots | N/A |
| VGA / Display Out | N/A |
| Full Specifications Link | Synology DS2015xs Specifications |
| Price | USD 1400 |
The Synology DS2015xs runs the latest DiskStation Manager OS, which, subjectively speaking, is one of the best COTS NAS operating systems in the market. Geared towards both novice and power users, it also provides SSH access. Some additional aspects can be gleaned through SSH. For example, the unit runs on Linux kernel version 3.2.40. The AL514 SoC has hardware acceleration for cryptography and two in-built USB 3.0 ports. There are also four network links (we know from external inspection that two are 10GbE, while the others are 1GbE) with unified drivers for both types of interfaces.
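Readers with SSH access can confirm these details themselves; the sketch below uses standard Linux commands (the availability of individual tools on DSM is an assumption):

```
# Kernel version -- our review unit reports 3.2.40
uname -a

# Hardware crypto engines registered with the kernel crypto API
cat /proc/crypto | head -30

# Enumerate the network links and query the driver behind one of them
ls /sys/class/net
ethtool -i eth0
```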
In the rest of the review, we will first take a look at the performance of the unit as a direct-attached storage device. This is followed by benchmark numbers for both single and multi-client scenarios across a number of different client platforms as well as access protocols. We have a separate section devoted to the performance of the NAS with encrypted shared folders. Prior to all that, we will take a look at our testbed setup and testing methodology.
Testbed Setup and Testing Methodology
The Synology DS2015xs can take up to eight drives, and users can opt for different RAID types depending on their requirements. We expect typical usage to involve multiple volumes in a RAID-5 or RAID-6 disk group. However, to keep things consistent across different NAS units, we benchmarked a SHR volume with single-disk redundancy (RAID-5 equivalent). Tower / desktop form factor NAS units are usually tested with Western Digital RE drives (WD4000FYYZ), but the presence of 10GbE on the DS2015xs meant that SSDs had to be used to bring out the maximum possible performance. Therefore, the unit was evaluated with a RAID-5 volume built out of eight OCZ Vector 120 GB SSDs.
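DSM builds its volumes on the standard Linux md layer, so, purely as an illustrative sketch (device names are hypothetical), the equivalent single-redundancy array on a generic Linux box would be:

```
# Eight drives with one drive's worth of parity (hypothetical device names)
mdadm --create /dev/md0 --level=5 --raid-devices=8 /dev/sd[a-h]

# Usable space: (8 - 1) x 120 GB, i.e. roughly 840 GB before filesystem overhead
mkfs.ext4 /dev/md0
```

Our testbed configuration is outlined below.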
| AnandTech NAS Testbed Configuration | |
| --- | --- |
| Motherboard | Asus Z9PE-D8 WS (dual LGA 2011, SSI-EEB) |
| CPU | 2x Intel Xeon E5-2630L |
| Coolers | 2x Dynatron R17 |
| Memory | G.Skill RipjawsZ F3-12800CL10Q2-64GBZL (8x 8GB) CAS 10-10-10-30 |
| OS Drive | OCZ Technology Vertex 4 128GB |
| Secondary Drive | OCZ Technology Vertex 4 128GB |
| Tertiary Drive | OCZ Z-Drive R4 CM88 (1.6TB PCIe SSD) |
| Other Drives | 12x OCZ Technology Vertex 4 64GB (offline in the host OS) |
| Network Cards | 6x Intel ESA I-340 quad-GbE port network adapter |
| Chassis | SilverStoneTek Raven RV03 |
| PSU | SilverStoneTek Strider Plus Gold Evolution 850W |
| OS | Windows Server 2008 R2 |
| Network Switch | Netgear ProSafe GSM7352S-200 |
The above testbed runs 25 Windows 7 VMs simultaneously, each with a dedicated 1 Gbps network interface. This simulates a real-life workload of up to 25 clients for the NAS being evaluated. All the VMs connect to the network switch to which the NAS is also connected (with link aggregation, as applicable). The VMs generate the NAS traffic for performance evaluation.
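As a rough illustration of one client's workload (the share address and the fio job below are illustrative stand-ins, not our actual benchmark scripts):

```
# Mount the NAS share (hypothetical address) and generate a sequential stream
mount -t cifs //192.168.1.100/testshare /mnt/nas -o guest

# 1 MB-block sequential read, approximating one client's worth of traffic
fio --name=seqread --directory=/mnt/nas --rw=read --bs=1M --size=2g --direct=1
```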
Thank You!
We thank the following companies for helping us out with our NAS testbed:
- Thanks to Intel for the Xeon E5-2630L CPUs and the ESA I-340 quad port network adapters
- Thanks to Asus for the Z9PE-D8 WS dual LGA 2011 workstation motherboard
- Thanks to Dynatron for the R17 coolers
- Thanks to G.Skill for the RipjawsZ 64GB DDR3 DRAM kit
- Thanks to OCZ Technology for the two 128GB Vertex 4 SSDs, twelve 64GB Vertex 4 SSDs and the OCZ Z-Drive R4 CM88
- Thanks to SilverStone for the Raven RV03 chassis and the 850W Strider Gold Evolution PSU
- Thanks to Netgear for the ProSafe GSM7352S-200 L3 48-port Gigabit Switch with 10 GbE capabilities.
DAS Evaluation Setup and Methodology
In addition to our standard NAS evaluation suite, the Synology DS2015xs also warrants investigation under ideal network conditions as a direct-attached storage unit. The presence of 10G network links on the unit prompted us to hook it up directly to a testbed machine sporting an Emulex 10GbE PCIe NIC and evaluate it as a DAS device.
The Emulex PCIe NIC doesn't support teaming under Windows 8.1. Therefore, we had to install Windows Server 2012 R2 on the additional SSD, making our DAS testbed dual-boot for evaluating NAS units. The DHCP Server feature was also activated on the teamed port to which the NAS's 10G ports were connected. On the NAS side, the ports were set up for teaming too and configured to receive an IP address from a DHCP server. The MTU for the interface was configured to be 9000 bytes.
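For reference, the equivalent teaming and jumbo-frame setup on a generic Linux client would look roughly like the sketch below (interface names are assumptions; our actual DAS host handles this through the Emulex driver under Windows):

```
# Aggregate the two 10GbE ports into an LACP bond and enable jumbo frames
ip link add bond0 type bond mode 802.3ad
ip link set eth2 down; ip link set eth2 master bond0
ip link set eth3 down; ip link set eth3 master bond0
ip link set bond0 mtu 9000
ip link set bond0 up

# Pick up an address from the DHCP server on the teamed link
dhclient bond0
```

The details of the tests that were run in this mode will be presented along with the performance numbers in the next section.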
Comments
chrysrobyn - Friday, February 27, 2015
Is there one of these COTS boxes that runs any flavor of ZFS?

SirGCal - Friday, February 27, 2015
They run Syn's own format... But I still don't understand why one would use RAID 5 on an 8-drive setup. To me the point is all about data protection on site (the most secure option being off-site), but that still screams for RAID 6 or RAIDZ2 at least for 8-drive configurations. And using SSDs for performance is fine, but if that was the requirement, there are M.2 drives out now doing 2M/sec transfers... These units fall into the storage category, where I want performance with 4, 6, 8 TB drives in double-parity protection formats.
Kevin G - Friday, February 27, 2015
I think you mean 2 GB/s transfers. Though the M.2 cards capable of doing so are currently OEM only, with retail availability set for around May.

Though I'll second your ideas about RAID6 or RAIDZ2: rebuild times can take days, and that is a significant amount of time to be running without any redundancy with so many drives.
SirGCal - Friday, February 27, 2015
Yes, I did mean 2G, thanks for the corrections. It was early.

JKJK - Monday, March 2, 2015
My Areca 1882ix-16 RAID controller takes ~12 hours to rebuild a 15x4TB RAID with WD RE4 drives. I'm quite disappointed with the performance of most "pro-user" NAS boxes. Even enterprise QNAPs can't compete with a decent Areca controller. It's time someone built some real NAS boxes, not this crap we're seeing today.
JKJK - Monday, March 2, 2015
Forgot to mention it's a RAID 6.

vol7ron - Friday, February 27, 2015
From what I've read (not what I've seen), I can confirm that RAID-6 is the best option for large drives these days.

If I recall correctly, during a rebuild after a drive failure (new drive added), there have been reports of bad reads from another "good" drive. This means that the parity is not deep enough to recover the lost data. Adding more redundancy will permit you to have more failures and still recover when an unexpected one appears.

I think the finding was also that as drives increase in size (more terabytes), the chance of errors and bad sectors on "good" drives increases significantly. So even if a drive hasn't failed, its data is no longer captured and the benefit of the redundancy is lost.
Lesson learned: increase the parity depth and replace drives when experiencing bad sectors/reads, not just when drives "fail".
Romulous - Sunday, March 1, 2015 - link
Another benefit of RAID 6, besides being able to lose two drives, is the prevention of bit rot. In RAID 5, if I have a corrupt block and one block of parity data, it won't know which one is correct. However, since RAID 6 has two parity blocks for the same data block, it's got a better chance of figuring it out.

802.11at - Friday, February 27, 2015
RAID5 is evil. RAID10 is where it's at. ;-)

seanleeforever - Friday, February 27, 2015
802.11at: cannot tell whether you are serious or not, but RAID 10 is only guaranteed to survive a single disk failure, while RAID 6 can survive the failure of any two member disks. Personally, I would NEVER use RAID 10, because your chance of losing data is much greater than with any RAID level that doesn't involve 0 (RAID 0 was an afterthought; it was never intended, thus the name 0).

RAID 6 or RAID-DP are the only ones used in datacenters by EMC or NetApp.