The OCZ Vertex 3 Review (120GB)
by Anand Lal Shimpi on April 6, 2011 6:32 PM EST

The Vertex 3 120GB
Whenever we review a new SSD, many of you comment asking for performance numbers of the lower capacity drives. While we typically publish the specs for every drive in the lineup, we're usually sampled only a single capacity at launch. It's usually not the largest but the second largest, and definitely an indicator of the best performance you can expect to see from the family.
Just look at the reviews we've published this year alone:
Intel SSD 510 (240GB)
Intel SSD 320 (300GB)
Crucial m4 (256GB)
While we always request multiple capacities, it normally takes a little while for us to get those drives in.
When OCZ started manufacturing Vertex 3s for sale, the first drives off the line were 120GB, and thus the first shipping Vertex 3 we got our hands on was a more popular capacity. Sweet.
Let's first look at the expected performance differences between the 120GB Vertex 3 and the 240GB drive we previewed earlier this year:
OCZ Vertex 3 Lineup

| Specs (6Gbps) | 120GB | 240GB | 480GB |
|---|---|---|---|
| Max Read | Up to 550MB/s | Up to 550MB/s | Up to 530MB/s |
| Max Write | Up to 500MB/s | Up to 520MB/s | Up to 450MB/s |
| 4KB Random Read | 20K IOPS | 40K IOPS | 50K IOPS |
| 4KB Random Write | 60K IOPS | 60K IOPS | 40K IOPS |
| MSRP | $249.99 | $499.99 | $1799.99 |
There's a slight drop in peak sequential performance and a big drop in random read speed. Remember our discussion of ratings from earlier? The Vertex 3 was of course rated before my recent conversations with OCZ, so we may not be getting the full picture here.
Inside the 120GB Vertex 3 are 16 Intel 25nm 64Gbit (8GB) NAND devices. Each device has a single 25nm 64Gbit die inside it, with the capacity of a single die reserved for RAISE in addition to the typical ~7% spare area.
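As a sanity check, the capacity budget works out roughly like this. This is a back-of-the-envelope sketch; the GiB/GB accounting is our assumption about where the ~7% spare area comes from, not OCZ's published breakdown:

```python
# Rough capacity budget for the 120GB Vertex 3:
# 16 packages x one 64Gbit (8GiB) die each, minus one die for RAISE,
# with the spare area falling out of the GB-vs-GiB difference.
GIB = 2**30

n_devices = 16
die_gib = 8                       # one 64Gbit die = 8GiB per package
raw_gib = n_devices * die_gib     # 128 GiB of raw NAND

raise_gib = die_gib               # one die's worth reserved for RAISE
advertised_gb = 120e9             # decimal gigabytes on the label
user_gib = advertised_gb / GIB    # ~111.8 GiB visible to the OS

spare_gib = raw_gib - raise_gib - user_gib
spare_pct = spare_gib / user_gib * 100
print(f"raw {raw_gib} GiB, RAISE {raise_gib} GiB, "
      f"user {user_gib:.1f} GiB, spare {spare_gib:.1f} GiB (~{spare_pct:.0f}%)")
```

The numbers line up: after setting aside one die for RAISE, the leftover beyond the advertised 120GB is about 8GiB, or roughly 7% of user capacity.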
The 240GB pre-production drive we previewed, by comparison, had twice as many 25nm die per package (2 x 64Gbit per NAND device vs. 1 x 64Gbit). If you read our SF-2000 launch article, one of the major advantages the SF-2000 controller has over its predecessor is the ability to activate twice as many NAND die at the same time. What does all of this mean for performance? We're about to find out.
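To see why die count matters, here's a toy interleaving model. The per-die bandwidth and controller ceiling below are hypothetical round numbers chosen for illustration, not SandForce specs: throughput scales with the number of dies the controller can keep busy, until the controller itself becomes the bottleneck.

```python
def est_seq_write(n_dies, per_die_mbps=30, controller_cap_mbps=520):
    """Crude model: writes stripe across all dies in parallel, so
    aggregate bandwidth grows with die count until it hits the
    controller's own ceiling. All figures are illustrative."""
    return min(n_dies * per_die_mbps, controller_cap_mbps)

# 120GB drive: 16 single-die packages; 240GB drive: 32 dies total
for n in (16, 32):
    print(n, "dies ->", est_seq_write(n), "MB/s")
```

With these made-up numbers the 16-die configuration lands below the controller's cap while the 32-die configuration saturates it, which is the general shape of the gap between the two capacities' write ratings.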
RC or MP Firmware?
When the first SF-1500/1200 drives shipped last year they actually shipped with SandForce's release candidate (RC) firmware. Those who read initial coverage of the Corsair Force F100 drives learned that the hard way. Mass production (MP) firmware followed with bug fixes and threatened to change performance on some drives (the latter was resolved without anyone losing any performance thankfully).
Before we get to the Vertex 3 we have to talk a bit about how validation works with SandForce and its partners. Keep in mind that SandForce is still a pretty small company, so while it does a lot of testing and validation internally, the company leans heavily on its partners to also shoulder the burden of validation. As a result, drive/firmware validation is split between SandForce and its partners. This approach allows SF drives to be validated more heavily than if only one of the two sides did all of the testing. While SandForce provides the original firmware, it's the partner's decision whether or not to ship drives based on how comfortable they feel with their validation. SandForce's validation suite includes both client and enterprise tests, which lengthens the validation time.
The shipping Vertex 3s are using RC firmware from SandForce; the MP label can't be assigned to anything that hasn't completely gone through SandForce's validation suite. However, SF assured me that there are no known issues that would preclude the Vertex 3 from being released today. From OCZ's perspective, the Vertex 3 is fully validated for client use (not enterprise). Some features (such as 0% over provisioning) aren't fully validated and thus are disabled in this release of the firmware. OCZ and SandForce both assure me that the SF-2200 has been through a much more strenuous validation process than anything before it.
Apparently the reason OCZ missed the March launch timeframe for the Vertex 3 was a firmware bug, discovered in validation, that impacted 2011 MacBook Pro owners. Admittedly this has probably been the smoothest testing experience I've encountered with any newly launched SandForce drive, but there's still a lot of work to be done. Regardless of the performance results, if you want to be safe you'll want to wait before pulling the trigger on the Vertex 3. SandForce tells me that the only difference between RC and MP firmware this round is purely the amount of time spent in testing; there are no known issues for client drives. Even knowing that, these are still unproven drives. Approach with caution.
The Test
| Component | Configuration |
|---|---|
| CPU | Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled); Intel Core i7 2600K running at 3.4GHz (Turbo & EIST Disabled) for AT SB 2011, AS SSD & ATTO |
| Motherboard | Intel DX58SO (Intel X58); Intel H67 Motherboard |
| Chipset | Intel X58 + Marvell SATA 6Gbps PCIe; Intel H67 |
| Chipset Drivers | Intel 9.1.1.1015 + Intel IMSM 8.9; Intel 9.1.1.1015 + Intel RST 10.2 |
| Memory | Qimonda DDR3-1333 4 x 1GB (7-7-7-20) |
| Video Card | eVGA GeForce GTX 285 |
| Video Drivers | NVIDIA ForceWare 190.38 64-bit |
| Desktop Resolution | 1920 x 1200 |
| OS | Windows 7 x64 |
153 Comments
Xcellere - Wednesday, April 6, 2011 - link
It's too bad the lower capacity drives aren't performing as well as the 240 GB version. I don't have a need for a single high capacity drive so the expenditure in added space is unnecessary for me. Oh well, that's what you get for wanting bleeding-edge tech all the time.

Kepe - Wednesday, April 6, 2011 - link
If I've understood correctly, they're using half of the NAND devices to cut drive capacity from 240 GB to 120 GB.

My question is: why don't they use the same number of NAND devices with half the capacity each? Again, if I have understood correctly, that way the performance would be identical to the higher capacity model.
Is NAND produced in only one package capacity, or is there some other reason not to use NAND devices of differing capacities?
dagamer34 - Wednesday, April 6, 2011 - link
Because price scaling makes it more cost-effective to use fewer, denser chips than more, less dense ones: the more of a given chip that gets made, the cheaper it eventually becomes.

Like Anand said, this is why you can't just ask for a 90nm CPU today; it's too old and not worth making anymore. This is also why older memory gets more expensive once it's no longer mass-produced.
Kepe - Wednesday, April 6, 2011 - link
But couldn't they just make smaller dies? Just like there are different sized CPU/GPU dies for different amounts of performance. Cut the die size in half, fit 2x the dies per wafer, sell for 50% less per die than the large dies (i.e. get the same amount of money per wafer).

A5 - Wednesday, April 6, 2011 - link
No reason for IMFT to make smaller dies - they sell all of the large dies coming out of the fab (whether to themselves or 3rd parties), so why bother making a smaller one?vol7ron - Wednesday, April 6, 2011 - link
You're missing the point on economies of scale.

Having one size means you don't have leftover parts, or have to pay for a completely different process (which includes quality control).
These things are already expensive; adding logistical complexity would only drive prices up, especially since there are noticeable differences in the manufacturing process.
I guess they could take the poorer performing silicon and re-market it, like how Anand mentioned they take poorer performing GPUs and just sell them at a lower clockrate/memory capacity. But it could be that NAND production is more refined and doesn't have that large of a difference.
Regardless, I think you mentioned the big point: inner RAIDs improve performance. Why 8 chips, why not more? Perhaps heat has something to do with it, and (of course) power would be the other reason, but it would be nice to see higher performing, more power-hungry SSDs. There may also be a performance benefit in larger chips too, though, sort of like DRAM where 1x2GB may perform better than 2x1GB (not interlaced).
I'm still waiting for the manufacturers to get fancy, perhaps with multiple controllers and speedier DRAM. Where's the Vertex 3 Colossus?
marraco - Tuesday, April 12, 2011 - link
Smaller dies would improve yields, and since they could enable full speed, it would be more competitive.

A bigger chip with a flaw may invalidate the whole die, but if it were divided into two smaller chips, part of it could be recovered.
On the other hand, yields are probably not as big a problem, since bad sectors can be replaced with good ones by the controller.
Kepe - Wednesday, April 6, 2011 - link
Anand, I'd like to thank you on behalf of pretty much every single person on the planet. You're doing an amazing job of making companies actually care about their customers and do what is right.

Thank you so much, and keep up the amazing work.
- Kepe
dustofnations - Wednesday, April 6, 2011 - link
Thank God for a consumer advocate with enough clout for someone important to listen to them.

All too often, valid and important complaints fall at the first hurdle due to dumb PR/CS people who filter out useful information. Maybe this is because they assume their customers are idiots, or that it's too much hassle, or perhaps they don't have the requisite technical knowledge to act sensibly upon complex complaints.
Kepe - Wednesday, April 6, 2011 - link
I'd say the reason is usually that once a company has sold you its product, they suddenly lose all interest in you until they come up with a new product to sell. Apple used to be a very good example with its battery policy: "So, your battery died? We don't sell new batteries or replace dead ones, but you can always buy the new, better iPod."

It's this kind of indifference towards consumers that is absolutely appalling, and Anand is doing a great job of fighting for consumers' rights. He should get some sort of an award for all he has done.