Crucial Demonstrates DDR4-2133 Modules
by Jarred Walton on January 10, 2013 4:56 PM EST

We're not likely to be running DDR4 any time soon on desktops, and even most laptops are probably more than a year away from the upgrade, but now is the time to prepare for the shift. To that end, Micron (Crucial) has a DDR4 demonstration running at CES 2013. The system is an Intel test platform with undisclosed internal hardware, affectionately (or perhaps not) referred to as the Frankenstein Box. My guess is that there's an Ivy Bridge, Sandy Bridge-E, or current-generation Xeon inside with a converter board of some sort that lets the DDR4 talk to the DDR3 on-die controller (similar to the RDRAM-to-SDRAM converters we saw back in the Pentium 4 Rambus days). How they're running it right now isn't particularly important, though; what matters is that they're able to boot Windows. It's just as possible that the box contains an unannounced next-generation Xeon with a native DDR4 controller. But I digress….
The system wasn't open for inspection, but Crucial had an oscilloscope showing the signals on the test unit and a bootable Windows 7 platform running a bouncing balls simulation, which is what we usually see with early/prototype/Frankenstein systems. The memory was running at 2133MHz, apparently with similar timings to what we currently see with DDR3-2133, so at launch the base DDR4 speeds will be roughly double DDR3's launch speeds. Unfortunately, the test platform is limited to a maximum of 2133MHz; Crucial said they're seeing 2400-2800MHz without any trouble, and the JEDEC DDR4 specification currently goes up to DDR4-3200. Worth noting is that DDR4 makes some changes to help reduce power use, including dropping the standard voltage to 1.2V (down from 1.5V with DDR3); we'll likely see low voltage and ultra low voltage DDR4 in the 1.0-1.1V range as well.
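As a quick back-of-the-envelope check on that "twice as fast" point, here's a minimal sketch (assuming the usual 64-bit channel width and ignoring command/refresh overhead) of theoretical peak bandwidth per channel at these data rates:

```python
# Theoretical peak bandwidth of a single 64-bit memory channel at the
# data rates discussed above (transfer rate in MT/s x 8 bytes/transfer).
def peak_bandwidth_gbs(data_rate_mts, bus_width_bits=64):
    """Peak bandwidth in GB/s for a given transfer rate (MT/s)."""
    return data_rate_mts * (bus_width_bits // 8) / 1000

for label, rate in [("DDR3-1066", 1066), ("DDR3-2133", 2133),
                    ("DDR4-2133", 2133), ("DDR4-3200", 3200)]:
    print(f"{label}: ~{peak_bandwidth_gbs(rate):.1f} GB/s per channel")
```

DDR3's mainstream launch speed of 1066 works out to roughly 8.5GB/s per channel, while DDR4-2133 lands around 17GB/s, hence "roughly double."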
The standard memory chips are now 4Gb, compared to 2Gb for most DDR3, which makes 4GB single-sided and 8GB double-sided DIMMs the starting capacity. The initial target devices will be servers, where the improved memory density and power savings are needed most, effectively doubling the amount of RAM we're likely to see at price points/configurations similar to current-generation DDR3. Crucial will be releasing a full portfolio of products based on these memory chips, including RDIMMs, LRDIMMs, SO-DIMMs, and UDIMMs (standard and ECC). Mobile and desktop adoption of DDR4 is expected to occur in 2014. One change Crucial pointed out with their UDIMMs can be seen in the photo above: the edge with the 284 pins is no longer perfectly straight. The idea is that inserting a DDR4 DIMM will require less pressure, since you're only pushing in about half of the pins at a time.
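For the curious, the capacity math is straightforward; here's a minimal sketch, assuming the common x8 organization with eight DRAMs per 64-bit rank and no ECC (one rank per side):

```python
# How per-chip density translates into module capacity, assuming x8 DRAMs
# (8 chips per 64-bit rank, no ECC); single-sided = 1 rank, double-sided = 2.
def dimm_capacity_gb(chip_gbit, ranks, chips_per_rank=8):
    return chip_gbit * chips_per_rank * ranks // 8  # Gbit -> GB

print(dimm_capacity_gb(2, 1), dimm_capacity_gb(2, 2))  # DDR3's 2Gb chips: 2GB / 4GB
print(dimm_capacity_gb(4, 1), dimm_capacity_gb(4, 2))  # DDR4's 4Gb chips: 4GB / 8GB
```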
Besides the DDR4 demonstration, Crucial had a display showing their various memory modules over time, from DDR through DDR4. It's always fun to see how far we've come over the past decade or so. At the top we have a DDR DIMM with 184 pins, state-of-the-art circa 2001/2002. Launch speeds were DDR-200 (PC1600), with capacities of 128MB to 1GB being typical over the life of the product. DDR2 moved to 240 pins around 2003, launching at DDR2-400 and moving up to DDR2-1066 by the end of its life cycle; typical capacities ranged from 256MB to 2GB. 2007 brought about the advent of DDR3, and while there are technically DDR3-800 parts, for the mainstream market DDR3 started at 1066 and has officially moved up to DDR3-2133; capacities are generally in the 512MB to 4GB range, with 8GB being the maximum. And last we have DDR4, launching by the end of the year. If the pattern continues, we should see DDR4-4266 by the time DDR5 memory is ready for mainstream use, with capacities starting at 2GB and likely topping out at 16GB (possibly more) per module.
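A minimal sketch of that scaling pattern, using the launch and top official data rates mentioned above (the DDR4-4266 figure is an extrapolation, not a JEDEC spec):

```python
# Launch and top official data rates (MT/s) for each generation mentioned above.
generations = {"DDR": (200, 400), "DDR2": (400, 1066),
               "DDR3": (1066, 2133), "DDR4": (2133, 3200)}

for name, (launch, top) in generations.items():
    print(f"{name}: launched at {launch} MT/s, officially topped out at {top} MT/s")

# Each generation launches at roughly the previous generation's top speed and
# eventually doubles its own launch rate; continuing that trend gives DDR4-4266.
print("Extrapolated DDR4 top speed:", 2 * 2133)
```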
Comments
DanNeely - Thursday, January 10, 2013 - link
Is that showing what's going on in the Frankenstein box? If so, the fact that the memory is showing up as DDR4 seems to imply that the CPU is some sort of engineering sample with a DDR4 memory bus, not a standard chip with a DDR4-to-DDR3 converter in the middle.

Yorgos - Friday, January 11, 2013 - link
You don't need a CPU, you just need the controller and a circuit to feed it with commands and receive/send data. I believe FPGAs/CPLDs are being used in those projects, or some sort of development board (which usually has an FPGA on it).
All those big companies spend R&D money on ASIC test units.
That's the big advantage of Intel over AMD: they print essential parts of the processor and test them. FPGAs and simulators are good for testing, but not as good as testing an implemented circuit.
Also, the cost of taping out a batch of some ASIC circuits is not that high.
"Given that a wafer processed using latest process technologies can cost $4000 - $5000, it is not hard to guess that the increase may significantly affect the costs of forthcoming graphics cards or consumer electronics."s|a
$5k is low compared to the billions of $ in R&D those companies spend.
torsionbar - Sunday, January 27, 2013 - link
Huh? No "converter" is necessary. The article mentions some similar nonsense. Apparently nobody here has ever heard of Fully Buffered dimms. FB-DIMMS allow any kind of memory you want, DDR2, DDR3, DDR4, DDR5, anything, to sit behind the buffer processor. The CPU & memory controller don't know the difference - they're only talking to the buffer processor. This has been around for years. Most large servers use FB DIMMS, even the old Apple Mac Pro used FB DIMMS. They're pretty expensive, because the of the buffer processor, but they allow the system to be memory agnostic.Kevin G - Thursday, January 10, 2013 - link
The problem with the move to DDR4 is that it drops down to a maximum of one DIMM per channel. For mobile platforms this isn't going to be an issue, as there is already a move to solder down memory (as well as CPUs; see Broadwell). Retail desktops can get away with using high capacity DIMMs, and DIY enthusiasts will likely just buy the initial high capacity DIMMs at launch and stick with them for some time.

The one DIMM per channel limitation becomes a problem with servers. For VM hosts it is common to run three registered DIMMs per channel for the added memory capacity, even though bandwidth typically decreases. While DDR4 supports 8-rank DIMMs to double capacity, servers at launch will still see a decrease in overall capacity; it won't be until 8 Gbit dies arrive that DDR4 overtakes DDR3 in terms of capacity. The other means of sidestepping the one DIMM per channel limitation is adding more channels, but the consumer market is set on dual channel for the foreseeable future and servers are currently at quad channel. I do not see a desire from the x86 players to migrate to 6 or 8 channel setups just to increase overall memory capacity, even at the server level.
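A minimal sketch of that capacity argument; the specific configurations here (quad-rank DDR3 RDIMMs at 3 DIMMs per channel, x8 DRAMs with eight chips per rank, no ECC) are illustrative assumptions, not exact figures from the comment:

```python
# Illustrative per-channel capacity comparison under the assumptions above:
# x8 DRAMs, 8 chips per 64-bit rank, no ECC.
def channel_capacity_gb(dimms, ranks_per_dimm, die_gbit, chips_per_rank=8):
    return dimms * ranks_per_dimm * chips_per_rank * die_gbit // 8

print("DDR3, 3 DPC, quad-rank, 4Gb dies:", channel_capacity_gb(3, 4, 4), "GB")  # 48 GB
print("DDR4, 1 DPC, 8-rank,   4Gb dies:", channel_capacity_gb(1, 8, 4), "GB")  # 32 GB
print("DDR4, 1 DPC, 8-rank,   8Gb dies:", channel_capacity_gb(1, 8, 8), "GB")  # 64 GB
```

Under these assumptions, a single 8-rank DDR4 module only passes three DDR3 RDIMMs per channel once 8 Gbit dies arrive.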
name99 - Thursday, January 10, 2013 - link
What's the current state of SMI/SMB? I can't find details, but I would imagine that any real server (Xeon Haswell) will use SMI/SMB --- don't they have to already? And that frees up some flexibility in design: you could have, say, three channels, but the SMB chip at the end of each could be a low-end version that supports a single DIMM or a high-end version that supports 4 DIMMs.
That's, after all, kinda the point of SMI/SMB --- to decouple the CPU from the limitations of the JEDEC RAM bus, and keep that bus limited to as small an area of the motherboard as possible.
More interesting is how long until we see the end of the JEDEC bus. It's obviously already happened in mobile, where vendors have a whole lot more flexibility in how they package and hook up DRAM, and I could see it happening in a few years in PCs. We start with Intel providing SMI/SMB on every x86 chip; then they let it be known that while their SMB chip will support standard JEDEC DIMMs, it will also support some alternative packaging+connection which is lower power at higher performance.
We'll get the usual bitching and whining about "proprietary" and "pity the poor overclocker" and "back in my day, a SIMM was a SIMM, and nothing should ever change, so this alternative sux", but it strikes me as just as inevitable as the move to on-core GPUs.
Kevin G - Friday, January 11, 2013 - link
The SMB used in the Xeon 7500/E7 line and the Itanium 9300/9500 lines was spun off from the FB2-DIMM spec that was proposed but never made it through JEDEC. Intel adapted what they had and integrated the buffer chip as part of the chipset, since redesigning the memory controllers on the Xeon 7500 would have delayed the chip further.

The interesting thing is that the SMB chip has an internal clock speed 6 times that of the effective memory bus speed. I.e., an SMB serving 1066MHz memory runs at 6.4GHz internally, and supporting 1333MHz DDR3 would require the SMB to run at 8GHz. Any future SMB chip would have to have a redesigned serial-to-parallel protocol, especially since DDR4 starts at 2133MHz.
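A minimal sketch of that 6x relationship, using the multiplier and data rates from the comment above; this is just the arithmetic:

```python
# The buffer chip's internal clock is described as 6x the effective memory
# data rate it services.
def smb_internal_ghz(memory_mts, multiplier=6):
    return memory_mts * multiplier / 1000

for rate in (1066, 1333, 2133):
    print(f"Memory at {rate} MT/s -> ~{smb_internal_ghz(rate):.1f} GHz internal")
# ~6.4, ~8.0, and ~12.8 GHz respectively; the DDR4-2133 case is why a future
# buffer would need a reworked serial-to-parallel protocol.
```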
JEDEC not only defines the DIMM format but also the memory protocols used by the chips on the DIMM. So while the DIMM format is in decline due to the rise of soldered memory in the mobile space, JEDEC still has a role in defining the memory technologies used in the industry.
The only move in the mobile space that wouldn't utilize a JEDEC defined memory bus* would be an SoC that entirely uses custom eDRAM that is either on-die or in package.
*Well there is Rambus but they don't have a presence in mobile and haven't scored any design wins on the desktop in ages.
The Von Matrices - Thursday, January 10, 2013 - link
The capacity issue with 1 DIMM per channel shouldn't be a problem since LRDIMMs are available; you can get 4x or more capacity per module with LRDIMMs.

Pneumothorax - Thursday, January 10, 2013 - link
Seeing those DDR DRAM sticks brought up my repressed memories of the 'Dark Ages' of RDRAM and PC133 SDRAM!

extide - Thursday, January 10, 2013 - link
Heh, IMO the dark ages truly were 72-pin and the earlier 30-pin SIMMs!

DanNeely - Friday, January 11, 2013 - link
The dark ages were when you inserted individual ram chips into your mobo's DIP sockets.