OCZ's Indilinx Everest 2 Launching in June 2012, Features Vastly Improved Random Performance
by Anand Lal Shimpi on January 9, 2012 2:42 AM EST

OCZ has a lot to show off this year at CES/Storage Visions, but the most exciting product from a client standpoint is its new Everest 2 controller. While the original Everest launched just two months ago, Everest 2 is expected within the next six months. OCZ already has silicon back, and it is fast.
Whereas Everest 1 was a competitive performer, Everest 2 is designed to really go after the performance crown. OCZ has focused more on small-block performance as well as reducing write amplification, both issues I've brought up regarding the current Everest in our reviews of OCZ's Octane. Everest 2 is an updated controller (IDX400M00-BC vs. IDX300M00-BC) with a new firmware architecture designed to address these shortcomings. It and the original Everest controller will co-exist in the market for a period of time, but Everest 2 will clearly be aimed at the high end. Target performance is around 550MB/s sequential reads, 500MB/s sequential writes and 90K 4KB random write IOPS (the Iometer screenshots in the gallery below show early 4KB random write and 128KB sequential write performance, using our own Iometer tests, in that order). If OCZ ends up really confident in the Everest 2, it might just end up being the heart of the Vertex 4...
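As a quick sanity check, the random write target translates into raw small-block throughput with simple arithmetic; a minimal Python sketch (the figures are OCZ's stated targets, and the decimal-MB/s convention is my assumption):

    # Back-of-the-envelope check on OCZ's stated targets.
    iops = 90_000                    # target 4KB random write IOPS
    io_size = 4 * 1024               # bytes per I/O
    mb_per_s = iops * io_size / 1e6  # decimal MB/s (assumed convention)
    print(f"90K 4KB random-write IOPS = {mb_per_s:.0f} MB/s of small-block writes")
    # Roughly 369 MB/s, within striking distance of the 500MB/s sequential target.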
11 Comments
Wardrop - Monday, January 9, 2012
I'm wondering what characteristics factor into making an SSD fast. Are SSDs typically bottlenecked by the brute performance of the controller, the speed of the NAND, or is it more down to a good storage strategy (firmware/logic design)?

r3loaded - Monday, January 9, 2012
All of the above! :P

DanNeely - Monday, January 9, 2012
NAND bottlenecks are mostly a factor on smaller-capacity drives. Most designs stop seeing gains from capacity before hitting their largest available size, at which point the controller hardware/firmware dominates.

SleepyFE - Monday, January 9, 2012
Smaller-capacity drives are slower because of larger-capacity NAND chips. If a single NAND chip is 16GB, it takes eight of them to make a 128GB drive, and each uses one channel. Since controllers have 8 channels, you can reach maximum speed that way. But there are also 32GB chips, and when you make a 128GB drive out of those you only use four channels, and that is why it is slower.
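SleepyFE's point reduces to simple arithmetic. A minimal Python sketch, assuming his one-die-per-channel mapping and an 8-channel controller (a simplification, not any specific controller's design):

    # How many controller channels a given drive capacity can populate,
    # assuming one NAND die per channel (SleepyFE's simplification).
    CHANNELS = 8

    def channels_used(drive_gb: int, die_gb: int) -> int:
        dies = drive_gb // die_gb    # how many NAND dies the capacity needs
        return min(dies, CHANNELS)   # can't use more channels than exist

    for die_gb in (16, 32):
        n = channels_used(128, die_gb)
        print(f"128GB drive from {die_gb}GB dies: {n}/{CHANNELS} channels active")
    # 16GB dies light up all 8 channels; 32GB dies leave half of them idle.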
Shadowmaster625 - Monday, January 9, 2012

It's all firmware. Look at your typical NTFS file: 40% of files are less than 4KB, meaning they fit into a single NTFS cluster. My computer has 52,000+ files that are less than 4KB. That is 40% of my files. Many thousands of these files are only a few bytes, or a few hundred bytes. When you go to write such a file to NAND, you need only write those bytes. A smart controller has to understand that.

Then there's compression. It is stupid not to compress easily compressible data when the compression can be done faster than the writes can take place, even when done by a $2 microcontroller.
And there is also the fact that most writes are partial in nature, i.e. you read a file, change a few bytes, and write it back. A dumb controller will read out a 2MB file and write the whole thing back even though only 2 bytes have changed. Windows does this sort of crap all the time. A smart controller will be aware that this file was recently read, and it will look in its RAM to see if the file is there, do some comparisons, do some data integrity checks, then write only those two new bytes to flash. Huge performance increase potential there.
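Shadowmaster625's compression argument amounts to a simple policy: compress a page only when it actually shrinks and the compressor can keep pace with the flash. A hypothetical Python sketch (zlib stands in for the dedicated hardware a real controller would use; the threshold is invented):

    import os
    import zlib

    def maybe_compress(page: bytes, min_saving: float = 0.1) -> bytes:
        # Compress only if it shrinks the page meaningfully; otherwise store raw.
        # A real controller does this in hardware; zlib here is just a stand-in.
        packed = zlib.compress(page, level=1)  # fastest level: latency matters most
        return packed if len(packed) <= len(page) * (1 - min_saving) else page

    text_page = b"config=true;" * 340   # repetitive data, compresses well
    noise_page = os.urandom(4096)       # incompressible data, stored raw
    print(len(maybe_compress(text_page)), len(maybe_compress(noise_page)))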
FunBunny2 - Monday, January 9, 2012
-- "A smart controller will be aware that this file was recently read, and it will look in its RAM to see if the file is there, do some comparisons, do some data integrity checks, then write only those two new bytes to flash. Huge performance increase potential there."

Only under rare and wonderful circumstances can current NAND be written like that; an SSD doesn't write like an HDD. A controller could implement logic to identify such circumstances, but running through that logic on every write request, for the .001% of requests that could actually be written this way? I wouldn't bet on it.
FaaR - Friday, January 13, 2012
You can't write just two bytes to flash, you gotta write an entire page (and if there are no free pages you must first erase a block of pages). And due to wear leveling and such, you wouldn't want to keep re-writing the same two bytes over and over if that's all that's changing in a particular file, even if that were possible...
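FaaR's constraint is why SSDs do read-modify-write at page granularity. A toy Python model of it (the page and block sizes are typical assumed values, not anything OCZ has published):

    PAGE_SIZE = 4096         # NAND programs whole pages at a time (assumed size)
    PAGES_PER_BLOCK = 128    # but erases only whole blocks of pages

    class ToyBlock:
        def __init__(self):
            self.pages = [None] * PAGES_PER_BLOCK   # None = erased and writable

        def program(self, page_no: int, data: bytes) -> None:
            assert len(data) == PAGE_SIZE, "no such thing as a 2-byte flash write"
            assert self.pages[page_no] is None, "must erase the whole block first"
            self.pages[page_no] = data

        def erase(self) -> None:
            self.pages = [None] * PAGES_PER_BLOCK   # wipes all 128 pages at once

    # Changing 2 bytes really means: read the old page, patch it in RAM,
    # program a fresh page, and leave the old one stale for garbage collection.
    blk = ToyBlock()
    blk.program(0, bytes(PAGE_SIZE))
    patched = bytearray(blk.pages[0])
    patched[10:12] = b"hi"             # the 2-byte change
    blk.program(1, bytes(patched))     # goes to a new page; page 0 is now stale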
Re: "Windows does this sort of crap all the time" There is no free lunch with large file I/O.Yes; It would be much less work to just write the 2 bytes. But the minimum sector size is 512 bytes (or 4K). Writing complete sectors helps to let the electronics in the drive stabilise. (assuming a non-SSD drive) & compensates for minute fluctuations in drive speed. Any data changes also cause a need to write a checksum, so need to update the page header. Often the 2 bytes become 3 or 1 & you need to move everything. Is it really worth having special case code to handle the rare condition of changing the same number of bits. (this is not a database)
Of course, if the file were big & split into a binary tree / linked list of blocks, you could update only the block you need.
But others prefer robustness. They feel that the changes should be written to another part of the disk, & only when that succeeds would you remove the original data.
You could have a completely different algorithm for SSDs vs. magnetic drives, but then you get overhead from adding another level of abstraction.
Others have already mentioned SSD page-erase requirements.
In short, it is not as trivial as you might first think.
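The write-elsewhere-first, retire-the-original-later approach davele describes is plain copy-on-write; at the filesystem level the same idea is the familiar atomic-replace pattern. A generic Python sketch of that pattern, not SSD firmware:

    import os
    import tempfile

    def atomic_update(path: str, new_data: bytes) -> None:
        # Write the new copy beside the original, force it to stable storage,
        # then swap it in atomically. The old data survives untouched until
        # the replacement is fully durable.
        dir_name = os.path.dirname(os.path.abspath(path))
        fd, tmp_path = tempfile.mkstemp(dir=dir_name)
        try:
            with os.fdopen(fd, "wb") as tmp:
                tmp.write(new_data)
                tmp.flush()
                os.fsync(tmp.fileno())
            os.replace(tmp_path, path)   # readers see old or new, never half
        except BaseException:
            os.unlink(tmp_path)
            raise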
SleepyFE - Monday, January 9, 2012
NAND is fast enough for the moment (since it is the smallest possible array of transistors); firmware is what makes the controller tick, so it is mostly up to the controller.

Movieman420 - Monday, January 9, 2012
SF-like performance with both compressible and incompressible data...that's a total win-win situation!