The Samsung 983 ZET (Z-NAND) SSD Review: How Fast Can Flash Memory Get?
by Billy Tallis on February 19, 2019 8:00 AM EST

What Is Z-NAND?
When Samsung first announced Z-NAND in 2016, it was a year after 3D XPoint memory was announced and before any Optane products had shipped. Samsung was willing to preview some information about the Z-NAND based drives that were on the way, but for a year and a half they kept almost all information about Z-NAND itself under wraps. Initially, the company would only state that Z-NAND was a high-performance derivative of their V-NAND 3D NAND flash memory. At Flash Memory Summit 2017, they confirmed that Z-NAND is an SLC (one bit per cell) memory, and announced that a second generation of Z-NAND would introduce an MLC (two bit per cell) version. (For reference, mainstream NAND flash is now almost always 3-bit-per-cell TLC.)
If simply operating existing NAND as SLC were all there was to Z-NAND, then we would expect Toshiba, Western Digital, and SK Hynix to have delivered their competitors by now. But further tweaks are required to challenge 3D XPoint. A year ago at IEEE's International Solid-State Circuits Conference (ISSCC), Samsung pulled back the veil a bit and shared more information about Z-NAND. The full presentation was not made public, but PC Watch's coverage captured the important details. Samsung's first-generation Z-NAND is a 48-layer part with a capacity of 64Gb. Samsung's mainstream capacity-optimized NAND is currently transitioning from 64 layers to what's officially "9x" layers, most likely 96. There are probably several reasons why Z-NAND lags almost two generations behind in manufacturing tech, but one important element is that adding layers can be detrimental to performance.
Samsung 3D NAND Comparison

| Generation | 48L SLC Z-NAND | 48L TLC | 64L TLC | 9xL TLC |
|---|---|---|---|---|
| Nominal Die Capacity | 64Gb (8GB) | 256Gb (32GB) | 512Gb (64GB) | 256Gb (32GB) |
| Read Latency (tR) | 3 µs | 45 µs | 60 µs | 50 µs |
| Program Latency (tPROG) | 100 µs | 660 µs | 700 µs | 500 µs |
| Page Size | 2kB, 4kB | 16kB | 16kB | 16kB? |
Compared to their past few generations of TLC NAND, Samsung's SLC Z-NAND improves read latency by a factor of 15-20x, but program latency is only improved by a factor of 5-7x. Note, however, that the read and program times shown above measure how long it takes to move data between the flash memory array and the on-chip buffers: the 3µs read time doesn't include transferring the data to the SSD controller, let alone shipping it over the PCIe link to the CPU.
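To put those figures in context, here is a back-of-envelope model of a 4kB random read, written as a minimal Python sketch. The tR values come from the table above, but the NAND bus speed and the fixed controller/PCIe overhead are purely illustrative assumptions, not Samsung-published numbers.

```python
# Back-of-envelope model of a 4kB random read. tR values are from the
# table above; bus speed and controller/PCIe overhead are assumptions
# chosen only to illustrate the relative contributions.

def total_read_latency_us(tR_us, page_bytes=4096,
                          bus_GBps=1.2, overhead_us=5.0):
    """tR (array to on-chip buffer) + NAND bus transfer + fixed controller/PCIe cost."""
    transfer_us = page_bytes / (bus_GBps * 1e3)  # bytes / (bytes per microsecond)
    return tR_us + transfer_us + overhead_us

for name, tR_us in [("48L SLC Z-NAND", 3), ("64L TLC", 60)]:
    print(f"{name}: ~{total_read_latency_us(tR_us):.1f} µs end-to-end")
# 48L SLC Z-NAND: ~11.4 µs end-to-end
# 64L TLC: ~68.4 µs end-to-end
```

Under these assumptions, tR is the overwhelming majority of a TLC read but well under half of a Z-NAND read, which is why the rest of the drive (controller, firmware, interface) matters so much more for a low-latency part.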
With Samsung using 16kB pages for their TLC NAND, the 4kB page size for SLC Z-NAND seems a reasonable choice, since it is only a slight shrink in the total number of memory cells per page. But the capability to instead operate with a 2kB page size indicates that small page sizes are an important part of the performance enhancements Z-NAND is supposed to offer.
Missing from this data set is information about the erase block size and erase time. Erasing flash memory is a much slower process than the program operation and it requires activating large and power-hungry charge pumps to generate the high voltages necessary. For this reason, all NAND flash memory groups many pages together to form each erase block, which nowadays tends to be at least several megabytes.
Samsung's Z-NAND may be able to offer far better read and program times than mainstream NAND, but they may not have been able to improve erase times as much. And shrinking erase blocks would significantly inflate the die space required for peripheral circuitry, further harming memory density that is already at a steep disadvantage for 48L SLC compared to mainstream 64L+ TLC.
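A quick calculation shows how these two constraints trade off against each other. The pages-per-block counts below are hypothetical, since vendors don't publish them for these specific parts; the arithmetic is only meant to illustrate the relationship.

```python
# Hypothetical pages-per-block counts: vendors don't publish these figures
# for the specific parts discussed above.

def erase_block_MB(pages_per_block, page_kB):
    return pages_per_block * page_kB / 1024

# A mainstream-style TLC block: 768 pages of 16kB -> 12 MB per erase block
print(erase_block_MB(768, 16))  # 12.0

# Keeping a 12 MB block with 4kB Z-NAND pages means 4x as many pages per block...
print(12 * 1024 // 4)           # 3072 pages

# ...while keeping 768 pages per block shrinks the block to 3 MB, meaning
# 4x as many blocks (and more peripheral circuitry) for the same capacity.
print(erase_block_MB(768, 4))   # 3.0
```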
Test System
Intel provided our enterprise SSD test system, one of their 2U servers based on the Xeon Scalable platform (codenamed Purley). The system includes two Xeon Gold 6154 18-core Skylake-SP processors, and 16GB DDR4-2666 DIMMs on all twelve memory channels for a total of 192GB of DRAM. Each of the two processors provides 48 PCI Express lanes plus a four-lane DMI link. The allocation of these lanes is complicated. Most of the PCIe lanes from CPU1 are dedicated to specific purposes: the x4 DMI plus another x16 link go to the C624 chipset, and there's an x8 link to a connector for an optional SAS controller. This leaves CPU2 providing the PCIe lanes for most of the expansion slots, including most of the U.2 ports.
| Enterprise SSD Test System | |
|---|---|
| System Model | Intel Server R2208WFTZS |
| CPU | 2x Intel Xeon Gold 6154 (18C, 3.0GHz) |
| Motherboard | Intel S2600WFT |
| Chipset | Intel C624 |
| Memory | 192GB total, Micron DDR4-2666 16GB modules |
| Software | Linux kernel 4.19.8, fio version 3.12 |

Thanks to StarTech for providing a RK2236BKF 22U rack cabinet.
The enterprise SSD test system and most of our consumer SSD test equipment are housed in a StarTech RK2236BKF 22U fully-enclosed rack cabinet. During testing for this review, the front door on this rack was generally left open to allow better airflow, since the rack doesn't include exhaust fans of its own. The rack is currently installed in an unheated attic and it's the middle of winter, so this setup provided a reasonable approximation of a well-cooled datacenter.
The test system is running a Linux kernel from the most recent long-term support branch. This brings in the latest Meltdown/Spectre mitigations, though strategies for dealing with Spectre-style attacks are still evolving. The benchmarks in this review are all synthetic benchmarks, with most of the IO workloads generated using FIO. Server workloads are too widely varied for it to be practical to implement a comprehensive suite of application-level benchmarks, so we instead try to analyze performance on a broad variety of IO patterns.
Enterprise SSDs are specified for steady-state performance and don't include features like SLC caching, so the duration of a benchmark run has little effect on the score as long as the drive has been thoroughly preconditioned. Except where otherwise specified, for our tests that include random writes, the drives were prepared with at least two full drive writes of 4kB random writes. For all other tests, the drives were prepared with at least two full sequential write passes.
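As a concrete illustration, the random-write preconditioning can be expressed as a small script along the following lines. This is a minimal sketch rather than our exact test harness: the device path is a placeholder and the queue depth is an arbitrary choice.

```python
# Minimal sketch of random-write preconditioning with fio. The device path
# is a placeholder and the queue depth is arbitrary; this is not the exact
# harness used for the review.
import subprocess

DEVICE = "/dev/nvme0n1"  # placeholder: the drive under test

subprocess.run([
    "fio",
    "--name=precondition",
    f"--filename={DEVICE}",
    "--direct=1",          # bypass the page cache and hit the drive directly
    "--ioengine=libaio",
    "--iodepth=32",
    "--rw=randwrite",      # 4kB random writes across the whole device
    "--bs=4k",
    "--loops=2",           # two full drive writes
], check=True)
```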
Our drive power measurements are conducted with a Quarch XLC Programmable Power Module. This device supplies power to drives and logs both current and voltage simultaneously. With a 250kHz sample rate and precision down to a few mV and mA, it provides a very high resolution view into drive power consumption. For most of our automated benchmarks, we are only interested in averages over time spans on the order of at least a minute, so we configure the power module to average together its measurements and only provide about eight samples per second, but internally it is still measuring at 4µs intervals so it doesn't miss out on short-term power spikes.
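Conceptually, the on-module averaging reduces 31,250 raw 4µs samples to each reported value. The following sketch reproduces that reduction on synthetic data; it illustrates the arithmetic only and does not talk to the Quarch hardware.

```python
# Reproduces the power module's downsampling arithmetic on synthetic data;
# this does not talk to the Quarch hardware.
import numpy as np

RAW_RATE = 250_000              # raw samples per second (4 µs intervals)
OUT_RATE = 8                    # averaged samples reported per second
WINDOW = RAW_RATE // OUT_RATE   # 31250 raw samples per reported value

rng = np.random.default_rng(0)
raw_mw = rng.normal(5000, 200, RAW_RATE)      # one second of fake power data

usable = len(raw_mw) // WINDOW * WINDOW
reported = raw_mw[:usable].reshape(-1, WINDOW).mean(axis=1)
print(reported.round(1))        # 8 values; a spike inside a window still
                                # raises its average instead of being missed
```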
Comments
jabber - Tuesday, February 19, 2019
I just looked at the price in the specs and stopped reading right there.

Dragonstongue - Tuesday, February 19, 2019
Amen to that LOL

FunBunny2 - Tuesday, February 19, 2019
well... if one were to run a truly normalized RDBMS, i.e. 5NF and thus substantially smaller footprint compared to the common NoSQL flatfile alternative, this could be quite competitive. but that would require today's developers/coders to stop making apps just like their COBOL granddaddies did.

FreckledTrout - Tuesday, February 19, 2019
I have no idea why you are talking coding and database design principles as it does not apply here at all. To carry your tangent along, if you want to make max use of a SSD's you denormalize the hell out of the database and spread the load over a ton of servers, ie NoSQL.

FunBunny2 - Tuesday, February 19, 2019
well... that does keep coders employed forever. writing linguine code all day long.

FreckledTrout - Tuesday, February 19, 2019
Well it still is pointless in this conversation about fast SSD's. What spaghetti code has to do with that I have no idea. Sure they can move cloud native way of designing applications using micro services et al but what the hell that has to do with fast SSD's baffles me.

FunBunny2 - Tuesday, February 19, 2019
" What spaghetti code has to do with that I have no idea. "well... you can write neat code against a 5NF datastore, or mountains of linguine to keep all that mound of redundant bytes from biting you. again, let's get smarter than our granddaddies. or not.
GreenReaper - Wednesday, February 20, 2019
They have at least encouraged old-school databases to up their game. With parallel queries on the back-end, PostgreSQL can fly now, as long as you give it the right indexes to play with. Like any complex tool, you still have to get familiar with it to use it properly, but it's worth the investment.

FunBunny2 - Wednesday, February 20, 2019
"They have at least encouraged old-school databases to up their game. "well... if you actually look at how these 'alternatives' (NoSql and such) to RDBMS work, you'll see that they're just re-hashes (he he) of simple flat files and IMS. anything xml-ish is just another hierarchical datastore, i.e. IMS. which predates RDBMS (Oracle was the first commercial implementation) by more than a decade. hierarchy and flatfile are the very, very old-school datastores.
PG, while loved because it's Open Source, is buggy as hell. been there, endured that.
anyway. the point of my comments was simply aimed at naming a use-case for these sorts of devices, nothing more, since so many comments questioned why it should exist. which is not to say it's the 'best' implementation for the use-case. but the use-case exists, whether most coders prefer to do transactions in the client, or not. back in your granddaddies' day, terminals (3270 and VT-100) were mostly dumb, and all code existed on the server/mainframe. 'client code' existed in close proximity to the 'database' (VSAM files, mostly), sometimes in the same address space, sometimes just on the same machine, and sometimes on a tethered machine. the point being: with today's innterTubes speed, there's really no advantage to 'doing transactions in the client' other than allowing client-centric coders to avoid learning how to support data integrity declaratively in the datastore. the result, of course, is that data integrity is duplicated both places (client and server) by different folks. there's no way the database folks, DBA and designer, are going to assume that all data coming in over the wire from the client really, really is clean. because it almost never is.
GruffaloOnVacation - Thursday, March 18, 2021
FunBunny2 you sound bitter, and this is the sentiment I see among the old school "database people". May I suggest, with the best of intentions for us all, that instead of sneering at the situation, you attempt to educate those who are willing to learn? I've been working on some SQL in my project at work recently, and so have read a number of articles, and parts of some database books. There was a lot of resentment and sneering at the stoopid programmers there, but no positive programme of action proposed. I'm interested in this subject. Where should I go for resources? Which talks should I watch? What books to read? Let's build something that is as cool as something you described. If it really is THAT good, once it is built - they will come, and they will change!