Intel Core i7 3960X (Sandy Bridge E) Review: Keeping the High End Alive
by Anand Lal Shimpi on November 14, 2011 3:01 AM EST
No Integrated Graphics, No Quick Sync
All of this growth in die area comes at the expense of one of Sandy Bridge's greatest assets: its integrated graphics core. SNB-E features no on-die GPU, and as a result it does not feature Quick Sync either. Remember that Quick Sync leverages the GPU's shader array to accelerate part of the transcode pipeline; without the GPU on SNB-E, there's no Quick Sync.
Given the target market for SNB-E's die donor (Xeon servers), further increasing the die area by including an on-die GPU doesn't seem to make sense. Unfortunately desktop users suffer, as you lose a very efficient way to transcode videos. Intel argues that you do have more cores to chew through frames with, but the fact remains that Quick Sync frees up your cores to do other things while SNB-E requires that they're all tied up in (quickly) transcoding video. If you don't run any Quick Sync enabled transcoding applications, you won't miss the feature on SNB-E. If you do, however, this is a tradeoff you'll have to come to terms with.
Tons of PCIe and Memory Bandwidth
Occupying the die area where the GPU would normally be is SNB-E's new memory controller. While its predecessor featured a fairly standard dual-channel DDR3 memory controller, SNB-E features four 64-bit DDR3 memory channels. With a single DDR3 DIMM per channel Intel officially supports speeds of up to DDR3-1600; with two DIMMs per channel the max official speed drops to DDR3-1333.
With a quad-channel memory controller you'll have to install DIMMs four at a time to take full advantage of the bandwidth. In response, memory vendors are selling 4 and 8 DIMM kits specifically for SNB-E systems. Most high-end X79 motherboards feature 8 DIMM slots (2 per channel). Just as with previous architectures, installing fewer DIMMs is possible; it simply reduces the peak available memory bandwidth.
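As a back-of-the-envelope check, peak theoretical DDR3 bandwidth scales linearly with channel count. The sketch below is my own arithmetic rather than an Intel figure, comparing SNB-E's quad-channel DDR3-1600 against mainstream Sandy Bridge's dual-channel DDR3-1333:

```python
# Peak theoretical DDR3 bandwidth: channels x bus width (bytes) x transfer rate.
# Illustrative arithmetic only; sustained real-world bandwidth is lower.

def ddr3_peak_bandwidth_gbps(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Peak GB/s for a DDR3 controller with 64-bit (8-byte) channels."""
    return channels * bus_bytes * mt_per_s / 1000  # bytes x MT/s -> MB/s -> GB/s

snb_e = ddr3_peak_bandwidth_gbps(channels=4, mt_per_s=1600)  # 51.2 GB/s
snb   = ddr3_peak_bandwidth_gbps(channels=2, mt_per_s=1333)  # ~21.3 GB/s
```

Populating fewer channels simply drops the channel multiplier, which is why peak bandwidth falls when you install fewer than four DIMMs.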
Intel increased bandwidth on the other side of the chip as well. A single SNB-E CPU features 40 PCIe lanes that are compliant with rev 3.0 of the PCI Express Base Specification (aka PCIe 3.0). With no PCIe 3.0 GPUs available (yet) to test and validate the interface, Intel lists PCIe 3.0 support in the chip's datasheet but publicly guarantees only PCIe 2.0 speeds. Intel does add that some PCIe devices may be able to operate at Gen 3 speeds, but we'll have to wait and see once those devices hit the market.
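For a sense of what's at stake, per-lane rates from the PCIe spec (Gen 2: 5 GT/s with 8b/10b encoding; Gen 3: 8 GT/s with 128b/130b encoding) can be turned into aggregate numbers for SNB-E's 40 lanes. This is illustrative arithmetic on spec line rates, not a measured result:

```python
# Usable per-direction PCIe bandwidth per lane: line rate x encoding efficiency,
# converted from gigatransfers (bits) to bytes. Illustrative only.

def pcie_lane_mbps(gt_per_s: float, enc_payload: int, enc_total: int) -> float:
    """Usable MB/s per lane, per direction."""
    return gt_per_s * 1000 * enc_payload / enc_total / 8  # Gb/s -> MB/s

gen2_lane = pcie_lane_mbps(5, 8, 10)     # 500 MB/s per lane (8b/10b)
gen3_lane = pcie_lane_mbps(8, 128, 130)  # ~984.6 MB/s per lane (128b/130b)

total_gen2 = 40 * gen2_lane / 1000  # 20 GB/s aggregate per direction
total_gen3 = 40 * gen3_lane / 1000  # ~39.4 GB/s aggregate per direction
```

In other words, if Gen 3 operation pans out, the same 40 lanes carry nearly twice the bandwidth.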
The PCIe lanes off the CPU are quite configurable as you can see from the diagram above. Users running dual-GPU setups can enjoy the fact that both GPUs will have a full x16 interface to SNB-E (vs x8 in SNB). If you're looking for this to deliver a tangible performance increase, you'll be disappointed:
Multi GPU Scaling - Radeon HD 5870 CF (Max Quality, 4X AA/16X AF)

| | Metro 2033 (19x12) | Crysis: Warhead (19x12) | Crysis: Warhead (25x16) |
|---|---|---|---|
| Intel Core i7 3960X (2 x16) | 1.87x | 1.80x | 1.90x |
| Intel Core i7 2600K (2 x8) | 1.94x | 1.80x | 1.88x |
Modern GPUs don't lose much performance in games, even at high quality settings, when going from a x16 to a x8 slot.
I tested PCIe performance with an OCZ Z-Drive R4 PCIe SSD to ensure nothing was lost in the move to the new architecture. Compared to X58, I saw no real deltas in transfers to/from the Z-Drive R4:
PCI Express Performance - OCZ Z-Drive R4, Large Block Sequential Speed - ATTO

| | Intel X58 | Intel X79 |
|---|---|---|
| Read | 2.62 GB/s | 2.66 GB/s |
| Write | 2.49 GB/s | 2.50 GB/s |
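Worked out as percentages (my own arithmetic on the ATTO numbers above), the X58-to-X79 differences are well inside run-to-run noise:

```python
# Relative X58 -> X79 change for the ATTO sequential results above.
def pct_delta(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

read_delta  = pct_delta(2.62, 2.66)  # ~1.5% faster reads on X79
write_delta = pct_delta(2.49, 2.50)  # ~0.4% faster writes on X79
```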
The Letdown: No SAS, No Native USB 3.0
Intel's current RST (Rapid Storage Technology) drivers don't support X79, however Intel's RSTe (for enterprise) 3.0 will support the platform once available. We got our hands on an engineering build of the software, which identifies the X79's SATA controller as an Intel C600:
Intel's enterprise chipsets use the Cxxx nomenclature, so this label makes sense. A quick look at Intel's RSTe readme tells us a little more about Intel's C600 controller:
SCU Controllers:
- Intel(R) C600 series chipset SAS RAID (SATA mode) Controller
- Intel C600 series chipset SAS RAID Controller

SATA RAID Controllers:
- Intel(R) C600 series chipset SATA RAID Controller

SATA AHCI Controllers:
- Intel(R) C600 series chipset SATA AHCI Controller
As was originally rumored, X79 was supposed to support both SATA and SAS. Issues with the implementation of the latter forced Intel to kill SAS support and go with the same four 3Gbps + two 6Gbps SATA configuration that 6-series chipset users get. I would've at least liked to have had more 6Gbps SATA ports. It's quite disappointing to see Intel's flagship chipset lacking feature parity with AMD's year-old 8-series chipsets.
I ran a sanity test on Intel's X79 against some of our mainstream 6-series (Z68) data for SATA performance with a Crucial m4 SSD. It looks like 6Gbps SATA performance is identical to the mainstream Sandy Bridge platform:
6Gbps SATA Performance - Crucial m4 256GB (FW0009)

| | 4KB Random Write (8GB LBA, QD32) | 4KB Random Read (100% LBA, QD3) | 128KB Sequential Write | 128KB Sequential Read |
|---|---|---|---|---|
| Intel X79 | 231.4 MB/s | 57.6 MB/s | 273.3 MB/s | 381.7 MB/s |
| Intel Z68 | 234.0 MB/s | 59.0 MB/s | 269.7 MB/s | 372.1 MB/s |
Intel still hasn't delivered an integrated USB 3.0 controller in X79. Motherboard manufacturers will continue to use 3rd party solutions to enable USB 3.0 support.
163 Comments
SonicIce - Monday, November 14, 2011 - link
cool good review.

wharris1 - Monday, November 14, 2011 - link
It would be interesting to test the OC'd SBE vs an OC'd SB; I suspect that the 2x advantage of the SBE would fall back in line to around the ~30-40% speed advantage seen in non-OC'd testing (in heavily threaded workloads). I have the feeling that between being defective Xeon CPU parts and lacking more SATA 6Gbps as well as USB 3.0 functionality on the motherboard side, this release is a bit hamstrung. I bet that with the release of Ivy Bridge E parts/motherboards, this combo will be more impressive. Part of the problem is that the regular SB parts are so compelling from a price/performance perspective. As always, nice review.

Johnmcl7 - Monday, November 14, 2011 - link
I thought that odd as well, as it almost implies the regular Sandybridge processors are poor overclockers when there are results for the new processor overclocked and Bulldozer overclocked. I guess though it's more something that would be interesting to see rather than actually change anything. I currently have an i7 960 and was hoping for an affordable six core processor, but it's looking like I'll wait until Ivybridge now.
although i can understand the expectation of all 6 ports being sata 3, maybe the reasoning is implementing it would probably be pointless for 99.9% of users - i can't even begin to imagine any non-enterprise usage for 6 SSDs running at max speed!
While I personally don't disagree with most people not needing more than two SATA 6Gbps ports, you have to keep in mind that 99.9% of all users have no need for the SB-E /platform/ in its entirety.

Since it's squarely aimed at workstation power users and extreme-end enthusiasts, those last 0.1% of users if you will, offering more SATA 6.0Gbps ports makes sense.
Zoomer - Monday, November 14, 2011 - link
I can't imagine the area difference being an issue. Like, are sata3 controllers really that different once it was already done and validated? Having two types of sata controllers on chip seems redundant to me. It's like PCIe 1.0 vs 2.0; once you have the 2.0 implementation done, there's no reason to have 1.0 only lanes since it is backwards compatible.
The reason for keeping SATA 3Gbps and PCIe 1.0 is not a die area issue or lack of reasoning. SATA 6Gbps takes considerably more power than 3Gbps, and PCIe 2.0 likewise consumes more power than 1.0. It's simply the physical reality of higher transfer rates. SB-E is already at 130 W, so there simply isn't room in the power envelope to make every interface the highest speed available.MossySF - Tuesday, November 15, 2011 - link
We ran into this problem. Our data processing database has 1 slow SSD for a boot drive and 5 x Sandforce SATA3 SSDs in a RAID0 array ... and we can't do even half the speed the SSDs can run at.

You might say why would a non-enterprise user be using this many SSDs? Uh, why would a non-enterprise user be running this obscenely fast computer? You need this much speed to play Facebook Farmville?
ltcommanderdata - Monday, November 14, 2011 - link
Given Ivy Bridge is coming in a few months, perhaps you could comment whether SB-E is worth it even for power users at this time? Have there been indications that high-end Ivy Bridge will likewise launch much later than mainstream parts? Is LGA 2011 going to be around a while, or will it need to be replaced if high-end Ivy Bridge decides to integrate an IGP for QuickSync support and as an OpenCL co-processor?
I don't think Intel's spoken publicly about IB-E yet.

That said, Intel hasn't done socket changes for any of the other recent die shrinks, so I doubt we'll see one for ivy. Incremental gains in clock speed, and possibly pushing more cores down to lower price points ($300 6 core, or $1000 8 core), are the most likely results.
OTOH if its launch is as delayed as SB-E's was, Haswell will be right around the corner and there will again be the risk of the new quad core wiping the floor with the old hex for most workloads.