22 Comments

  • DanNeely - Thursday, January 10, 2013 - link

    Is that showing what's going on in the Frankenstein box? If so, the fact that the memory shows up as DDR4 seems to imply that the CPU is some sort of engineering sample with a DDR4 memory bus, not a standard chip with a DDR4-to-DDR3 converter in the middle.
  • Yorgos - Friday, January 11, 2013 - link

    You don't need a CPU, you just need the controller and a circuit to feed it with commands and receive/send data.
    I believe FPGAs/CPLDs are used in those projects, or some sort of developer board (which usually has an FPGA on it).
    All those big companies spend R&D money on ASIC test units.
    That's Intel's big advantage over AMD: they fab essential parts of the processor and test them. FPGAs and simulators are good for testing, but not as good as testing an implemented circuit.

    Also, the cost of taping out a batch of ASIC test circuits is not that high:
    "Given that a wafer processed using latest process technologies can cost $4000 - $5000, it is not hard to guess that the increase may significantly affect the costs of forthcoming graphics cards or consumer electronics." (S|A)

    $5k is low compared to the billions of dollars those companies spend on R&D.
  • torsionbar - Sunday, January 27, 2013 - link

    Huh? No "converter" is necessary. The article mentions some similar nonsense. Apparently nobody here has ever heard of fully buffered DIMMs. FB-DIMMs allow any kind of memory you want, DDR2, DDR3, DDR4, DDR5, anything, to sit behind the buffer processor. The CPU and memory controller don't know the difference; they're only talking to the buffer processor. This has been around for years. Most large servers use FB-DIMMs; even the old Apple Mac Pro used them. They're pretty expensive because of the buffer processor, but they allow the system to be memory agnostic.
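
    A minimal sketch of that decoupling, assuming a toy command interface; the class names are hypothetical, not real AMB firmware:

        # Toy model of the FB-DIMM idea: the memory controller speaks only the
        # buffer protocol, and the Advanced Memory Buffer (AMB) translates to
        # whatever DRAM generation actually sits behind it.

        class DDR3Device:
            def read(self, row, col):
                return f"DDR3 burst @ row {row}, col {col}"

        class DDR4Device:
            def read(self, row, col):
                return f"DDR4 burst @ row {row}, col {col}"

        class AdvancedMemoryBuffer:
            """Hides the DRAM generation from the memory controller."""
            def __init__(self, dram):
                self.dram = dram                  # any generation of DRAM

            def handle(self, command, addr):
                row, col = divmod(addr, 1024)     # toy address decode
                if command == "READ":
                    return self.dram.read(row, col)

        class MemoryController:
            """The CPU side: it only ever talks to the buffer."""
            def __init__(self, buffer):
                self.buffer = buffer

            def load(self, addr):
                return self.buffer.handle("READ", addr)

        # The controller code is identical whichever DRAM sits behind it:
        for dram in (DDR3Device(), DDR4Device()):
            print(MemoryController(AdvancedMemoryBuffer(dram)).load(4096))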
  • Kevin G - Thursday, January 10, 2013 - link

    The problem with the move to DDR4 is that it drops down to a maximum of one DIMM per channel. For mobile platforms this isn't going to be an issue, as there is already a move to solder down memory (as well as CPUs, see Broadwell). Retail desktops can get away with using high-capacity DIMMs. The DIY enthusiasts will likely just buy the initial high-capacity DIMMs at launch and stick with them for some time.

    The one-DIMM-per-channel limitation becomes a problem with servers. For VM hosts, it is common to have three registered DIMMs per channel for the added memory support, even though bandwidth typically decreases. While DDR4 supports 8-rank DIMMs to double capacity, servers at launch will experience a decrease in overall capacity; it won't be until 8 Gbit dies arrive that DDR4 overtakes DDR3 in terms of capacity (rough arithmetic below). The other means of sidestepping the one-DIMM-per-channel limitation is adding more channels. The consumer market is set on dual channel for the foreseeable future and servers are currently at quad channel. I do not see a desire from the x86 players to migrate to 6- or 8-channel setups to increase overall memory capacity, even at the server level.
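
    Back-of-envelope arithmetic for that crossover, assuming a simplified module layout (16 x4 data devices per rank, ECC devices ignored); the layouts are illustrative, not exact SKUs:

        # GB-per-channel arithmetic for the DDR3 -> DDR4 capacity crossover
        # described above.

        def dimm_capacity_gb(die_gbit, ranks, devices_per_rank=16):
            """Capacity of one DIMM in GB (simplified layout)."""
            return die_gbit * devices_per_rank * ranks / 8    # Gbit -> GB

        # DDR3 today: 4 Gbit dies, quad-rank RDIMMs, 3 DIMMs per channel
        ddr3 = 3 * dimm_capacity_gb(die_gbit=4, ranks=4)          # 96 GB/channel

        # DDR4 at launch: 4 Gbit dies, 8-rank DIMMs, only 1 DIMM per channel
        ddr4_launch = 1 * dimm_capacity_gb(die_gbit=4, ranks=8)   # 64 GB/channel

        # DDR4 once 8 Gbit dies arrive: overtakes DDR3
        ddr4_8gbit = 1 * dimm_capacity_gb(die_gbit=8, ranks=8)    # 128 GB/channel

        print(ddr3, ddr4_launch, ddr4_8gbit)    # 96.0 64.0 128.0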
  • name99 - Thursday, January 10, 2013 - link

    What's the current state of SMI/SMB?

    I can't find details, but I would imagine that any real server (Xeon Haswell) will use SMI/SMB --- don't they have to already? And that frees up some flexibility in design; you might have, say, three channels, but the SMB chip at the end could be a low-end version that supports a single DIMM, or a high-end version that supports 4 DIMMs.

    That's, after all, kinda the point of SMI/SMB --- to decouple the CPU from the limitations of the JEDEC RAM bus, and keep that bus limited to as small an area of the motherboard as possible.

    More interesting is how long till we see the end of the JEDEC bus. It's obviously already happened in mobile, where vendors have a whole lot more flexibility in how they package and hook up DRAM, and I could see it happening in a few years in PCs. We start with Intel providing SMI/SMB on every x86 chip, then they let it be known that while their SMB chip will support standard JEDEC DIMMs, they will also support some alternative packaging+connection which is lower power at higher performance.
    We'll get the usual bitching and whining about "proprietary" and "pity the poor overclocker" and "back in my day, a SIMM was a SIMM, and nothing should ever change, so this alternative sux", but it strikes me as inevitable as the move to on-core GPUs.
  • Kevin G - Friday, January 11, 2013 - link

    The SMB used in the Xeon 7500/E7 and Itanium 9300/9500 lines was spun off from the FB2-DIMM spec that was proposed but never made it through JEDEC. Intel adapted what they had and integrated the buffer chip as part of the chipset, as redesigning the memory controllers on the Xeon 7500 would have delayed the chip further.

    The interesting thing is that the SMB chip has an internal clock speed 6 times the effective memory bus speed, i.e. an SMB serving 1066 MHz memory runs at 6.4 GHz internally. Supporting 1333 MHz DDR3 memory would require the SMB to run at 8 GHz. Any future SMB chip would need a redesigned serial-to-parallel protocol, especially since DDR4 starts at 2133 MHz (quick arithmetic at the end of this comment).

    JEDEC not only defines the DIMM format but also the memory protocols used by the chips on the DIMM. So while the DIMM format is in decline due to the rise of soldered memory in the mobile space, JEDEC still has a role in defining the memory technologies used in the industry.

    The only move in the mobile space that wouldn't utilize a JEDEC-defined memory bus* would be an SoC that entirely uses custom eDRAM, either on-die or in-package.

    *Well, there is Rambus, but they don't have a presence in mobile and haven't scored any design wins on the desktop in ages.
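
    The clock relationship above as a quick sketch; the 6x ratio is taken from this comment, and the speed grades are the standard ones:

        # SMB internal clock = 6 x effective memory bus speed, per the ratio
        # described above. Shows why DDR4-2133 forces a protocol redesign.

        SMB_RATIO = 6

        for bus_mhz in (1066, 1333, 2133):    # DDR3-1066, DDR3-1333, DDR4-2133
            internal_ghz = bus_mhz * SMB_RATIO / 1000
            print(f"{bus_mhz} MHz bus -> ~{internal_ghz:.1f} GHz internal")

        # 1066 MHz bus -> ~6.4 GHz internal
        # 1333 MHz bus -> ~8.0 GHz internal
        # 2133 MHz bus -> ~12.8 GHz internal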
  • The Von Matrices - Thursday, January 10, 2013 - link

    The capacity issue with one DIMM per channel shouldn't be a problem, since LRDIMMs are available. You can get 4x or more capacity per module with LRDIMMs.
  • Pneumothorax - Thursday, January 10, 2013 - link

    Seeing those DDR DRAM sticks brought up my repressed memories of the 'Dark Ages' of RDRAM and PC133 SDRAM!
  • extide - Thursday, January 10, 2013 - link

    Heh, IMO the dark ages truly were 72-pin, and the earlier 30-pin, SIMMs!
  • DanNeely - Friday, January 11, 2013 - link

    The dark ages were when you inserted individual ram chips into your mobo's DIP sockets.
  • JPForums - Friday, January 11, 2013 - link

    This ;')
  • Nfarce - Thursday, January 10, 2013 - link

    Hey, I resent that! I still have a P4 with 1GB of Samsung RAMBUS PC-800 memory running XP, built ten years ago. It was my primary gaming rig up to the end of 2008, when I built a Core 2 Duo rig. It still works as a PC dedicated to old games for when I feel nostalgic.
  • custom33 - Thursday, January 10, 2013 - link

    I really wonder what year these will become available for mainstream laptops. It wouldn't entirely change my decision, but if I'm getting a laptop in 2014, hopefully it will have DDR4.
  • Beenthere - Thursday, January 10, 2013 - link

    Other than for servers, there is no need for, or advantage to, DDR4 at this time, especially with DDR3 LV DRAM running @ 1.35V and capable of lower-voltage operation whenever Samsung and the other manufacturers desire to do so.

    Test after test of real applications shows that there is no tangible performance gain above 1600 MHz for typical desktop PCs, be they AMD or Intel powered, because DDR3 even at 1333 MHz is not a system bottleneck.
  • JonnyDough - Friday, January 11, 2013 - link

    No, but if you're building a new PC it's better to have DDR4 just because it saves energy. Plus, the manufacturing process will use less silicon, making the chips even cheaper to produce. As long as there is no price fixing... hopefully that advantage will trickle down to the consumer.
  • kyuu - Friday, January 11, 2013 - link

    I would think that DDR4 will be a real boon for integrated graphics, though, at least until they start integrating a fair amount of memory into the CPU itself.
  • Death666Angel - Friday, January 11, 2013 - link

    That's my take on it as well. Intel apparently gets big performance improvements from the on-chip RAM with Haswell. And AMD gets big improvements by going from 1066 to 1866 RAM with their iGPUs. So I don't think having more bandwidth for those is a bad thing.
  • Kevin G - Friday, January 11, 2013 - link

    Looking at roadmaps, the migration to DDR4 and the addition of eDRAM will happen at roughly the same time. One Haswell part will be receiving eDRAM this year, but Broadwell will be the one to really popularize eDRAM. On that same note, Broadwell is looking to be a mobile-only part in a BGA configuration and would be an ideal way to introduce DDR4 to the mobile market. This would equate to a massive increase in bandwidth for mobile devices and move the limiting factor of performance more toward the compute side.
  • menting - Friday, January 11, 2013 - link

    http://www.micron.com/products/dram/ddr3-to-ddr4

    Some small advantages in signaling and noise. Not worth a price premium, but lower power isn't the only thing DDR4 has over DDR3.

    I'm not clear on how the bank groups give faster burst access.
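
    For reference, the burst-access win comes from DDR4 defining two column-to-column delays: back-to-back bursts to different bank groups can issue at the shorter tCCD_S, while bursts within one bank group must wait the longer tCCD_L. A toy model, with illustrative cycle counts rather than a specific speed grade:

        # Toy model of DDR4 bank-group timing: consecutive bursts to different
        # bank groups are spaced tCCD_S apart; bursts to the same bank group
        # must wait the longer tCCD_L. Clock counts below are assumed values.

        TCCD_S, TCCD_L = 4, 6   # column-to-column delays in clocks (illustrative)

        def issue_cycles(bank_groups):
            """Clocks spent issuing a stream of bursts to the given bank groups."""
            return sum(TCCD_S if a != b else TCCD_L
                       for a, b in zip(bank_groups, bank_groups[1:]))

        same_group  = [0] * 8                   # all 8 bursts to bank group 0
        interleaved = [0, 1, 2, 3, 0, 1, 2, 3]  # rotate across 4 bank groups

        print("same group: ", issue_cycles(same_group))    # 7 * 6 = 42 clocks
        print("interleaved:", issue_cycles(interleaved))   # 7 * 4 = 28 clocks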
  • Beenthere - Sunday, February 17, 2013 - link

    DDR4 is of no value on the desktop. It may offer some value for servers or portables, but it's an expensive change and an all-or-nothing one as far as RAM quantity goes: you can't just add RAM like with standard DDR3, you replace it all. Since DDR3 @ 1600 MHz still isn't a system bottleneck on desktop PCs, DDR4 brings nothing to the table.
  • takeship - Sunday, December 1, 2013 - link

    DDR4 will be of huge value on the desktop once IGPs take over the discrete world and we all need as much memory bandwidth as possible, all the time (with reasonable latencies). The current split system/GPU memory setup is going the way of the dodo.
  • TjPjMusic - Monday, June 17, 2013 - link

    The upcoming 8-core Haswell-E processors will reportedly support DDR4.
