Crucial MX100 (256GB & 512GB) Review
by Kristian Vättö on June 2, 2014 3:00 PM EST

Performance Consistency
Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal defragmentation. The reason we don't have consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag or cleanup routines directly impacts the user experience, as inconsistent performance results in application slowdowns.
To test IO consistency, we fill a secure-erased SSD with sequential data to ensure that all user-accessible LBAs have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
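As a rough illustration of that workload, here is a minimal Python sketch. It is not our actual test harness: it issues writes synchronously from a single thread rather than at QD32, and the device path is a placeholder you would need to change.

```python
# Minimal sketch of the consistency workload: 4KB random writes across all
# LBAs of a raw block device, logging IOPS once per second. Assumes Linux
# and root privileges. Single-threaded, so it does NOT reproduce the QD32
# of the real test, and writing to the target device is destructive.
import mmap
import os
import random
import time

DEVICE = "/dev/sdX"   # hypothetical target drive; replace carefully
BLOCK = 4096          # 4KB writes
DURATION = 2000       # seconds, roughly "just over half an hour"

fd = os.open(DEVICE, os.O_WRONLY | os.O_DIRECT)
size = os.lseek(fd, 0, os.SEEK_END)       # device capacity in bytes
buf = mmap.mmap(-1, BLOCK)                # page-aligned buffer, required by O_DIRECT
buf.write(os.urandom(BLOCK))              # incompressible data

start = last = time.time()
ios = 0
while time.time() - start < DURATION:
    lba = random.randrange(size // BLOCK)  # uniform across all user LBAs
    os.pwrite(fd, buf, lba * BLOCK)
    ios += 1
    now = time.time()
    if now - last >= 1.0:                  # record instantaneous IOPS every second
        print(f"{int(now - start)},{ios}")
        ios, last = 0, now

os.close(fd)
```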
We are also testing drives with added over-provisioning by limiting the LBA range. This gives us a look into the drive’s behavior with varying levels of empty space, which is frankly a more realistic approach for client workloads.
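The restricted-range runs boil down to simple arithmetic; a sketch, assuming a typical 512-byte-sector count for a 256GB drive (the figure below is an illustrative assumption, not a measured value):

```python
# Sketch of how the restricted LBA range is derived for the 25% spare-area
# runs: writes are confined to the first 75% of user LBAs, so the untouched
# remainder acts as extra over-provisioning for garbage collection.
TOTAL_LBAS = 500_118_192   # typical 512-byte-sector count for a 256GB drive
SPARE = 0.25               # fraction of the LBA space left untouched

usable = int(TOTAL_LBAS * (1 - SPARE))
print(f"random writes span LBA 0 .. {usable - 1} "
      f"({usable * 512 / 1e9:.0f} GB of {TOTAL_LBAS * 512 / 1e9:.0f} GB)")
```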
Each of the three graphs has its own purpose. The first one shows the full duration of the test on a log scale. The second and third zoom into the beginning of steady-state operation (t=1400s) but on different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the buttons below each graph to switch the source data.
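For readers post-processing their own logs, a hedged sketch of the three views using matplotlib, assuming a two-column time,IOPS CSV like the one the workload sketch above prints:

```python
# Reproduces the three views described above from a per-second IOPS log:
# full run on a log scale, then the steady-state region (t >= 1400s) on
# log and linear scales. The file name and format are assumptions.
import csv

import matplotlib.pyplot as plt

t, iops = [], []
with open("iops.csv") as f:
    for sec, count in csv.reader(f):
        t.append(int(sec))
        iops.append(int(count))

fig, axes = plt.subplots(3, 1, figsize=(8, 10))

axes[0].plot(t, iops)
axes[0].set_yscale("log")
axes[0].set_title("Full test, log scale")

steady_t = [x for x in t if x >= 1400]               # steady state begins ~t=1400s
steady_iops = [y for x, y in zip(t, iops) if x >= 1400]

axes[1].plot(steady_t, steady_iops)
axes[1].set_yscale("log")
axes[1].set_title("Steady state (t >= 1400s), log scale")

axes[2].plot(steady_t, steady_iops)                  # linear scale
axes[2].set_title("Steady state (t >= 1400s), linear scale")

for ax in axes:
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("IOPS")

fig.tight_layout()
fig.savefig("consistency.png")
```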
For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.
[Interactive graph: IO consistency over the full test (log scale). Drives: Crucial MX100, Crucial M550, Crucial M500, SanDisk Extreme II, Samsung SSD 840 EVO mSATA. Views: Default / 25% Spare Area]
The IO consistency is a match with the M550. There is effectively no change at all, even though the 256GB M550 uses 64Gbit NAND and the MX100 uses 128Gbit NAND, which from a raw NAND performance standpoint should result in some difference due to reduced parallelism. This must come down to the firmware design, because as we saw in the JMicron JMF667H review, the NAND can have a major impact on IO consistency due to differences in program and erase times. On the other hand, it's great to see that Crucial is able to keep performance the same despite the smaller (and probably slightly slower) NAND lithography.
With added over-provisioning there appears to be some change in consistency, though. While the 256GB M550 has odd up-and-down behavior, the MX100 has a thick line at 25K IOPS with drops to as low as 5K IOPS. The 512GB MX100 exhibits behavior similar to the 256GB M550, so it looks like the garbage collection algorithms could be optimized individually for each capacity depending on the number of dies (the 256GB M550 and 512GB MX100 have the same number of dies, but each die in the MX100 has twice the capacity).
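For concreteness, the die math behind that parenthetical (die capacities follow from the stated 64Gbit and 128Gbit parts):

```python
# Die-count arithmetic behind the comparison above: a 64Gbit die is 8GB
# and a 128Gbit die is 16GB, so both drives carry 32 dies -- the MX100's
# dies are just twice the capacity each.
m550_256_dies = 256 // 8      # 256GB M550 with 64Gbit (8GB) dies
mx100_512_dies = 512 // 16    # 512GB MX100 with 128Gbit (16GB) dies
assert m550_256_dies == mx100_512_dies == 32
```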
[Interactive graph: IO consistency at steady state, t=1400s onward (log scale). Drives: Crucial MX100, Crucial M550, Crucial M500, SanDisk Extreme II, Samsung SSD 840 EVO mSATA. Views: Default / 25% Spare Area]
[Interactive graph: IO consistency at steady state, t=1400s onward (linear scale). Drives: Crucial MX100, Crucial M550, Crucial M500, SanDisk Extreme II, Samsung SSD 840 EVO mSATA. Views: Default / 25% Spare Area]
50 Comments
extide - Monday, June 2, 2014
Wow, 256Gbit dies! That would mean up to 2TB in a standard 2.5" SSD -- Crazy!

hojnikb - Monday, June 2, 2014
Actually one could fit 4TB into a standard 2.5" (or even 8GB when using 32 packages) but the problem is, as far as I can tell, no single controller can address so much space.

hojnikb - Monday, June 2, 2014
*TB obviously :)

extide - Monday, June 2, 2014
Yeah, but it's a chicken-and-egg thing, I think. There seems to be a max price cap of about $600 for these SSDs, so for 64Gbit NAND that was ~512GB and for 128Gbit NAND it is about 1TB. When they design a controller to exist during the lifetime of 256Gbit NAND, there is a good chance that someone is actually going to make a 2TB drive, because that much NAND would then fit inside that 'max price', so they will design the controller for that max amount. And in the same vein, a controller for the 128Gbit era would be 'OK' with a 1TB max... if that makes sense, heh.

hojnikb - Monday, June 2, 2014
Also, there are already 2TB drives out there on the old 64Gbit flash :)

danwat1234 - Monday, January 26, 2015
Intel S3500 2TB exists, not sure if it works in laptops though

fruitcrash - Wednesday, June 4, 2014
It's not that you can't address it (for ONFI NAND you can use the Volume Select command), but that you can't have more than about 8 chips on a channel because of capacitive loading.

extide - Monday, June 2, 2014
NOTE: I am talking about the future NAND, NOT what is used in this drive.

hojnikb - Monday, June 2, 2014
Still, 256Gbit dies can't help you much if the controller can't address that much space. As I've said above, one could fit 4-8TB of flash; it just isn't possible yet.

hojnikb - Monday, June 2, 2014
Any details on the 128GB version? I've read somewhere that it will be using the old 20nm flash...