36 Comments

  • Shadowmaster625 - Thursday, April 7, 2016 - link

    It's funny how back in 2010 I bought 8 GB of DDR3 for $5 per GB, and now, after 6 years, memory still costs $5 per GB even though the size of the silicon has shrunk by, what, 70%? Greedy ...
  • plopke - Thursday, April 7, 2016 - link

    I have no clue where you're coming from; 16GB of DDR4 is as low as $65 and still dropping.

    Most new memory types start off extremely expensive, then drop in price a lot, and finish off their lifetime riding low supply but lingering compatibility demand, so they go from expensive back to very expensive at end of life. Taking speed into consideration, I think you will soon be able to buy double the amount of DDR4 for the price of DDR3, since DDR4 prices are still going down.

    Not to mention research, new fabrication technology, fixed costs, speed improvements, inflation, a company trying to make money... greedy? Ugh, everybody wants to be rich, but for some reason it's only supposed to happen to them.
  • plopke - Thursday, April 7, 2016 - link

    Forgot the most important one: supply and demand ...
  • Samus - Saturday, April 9, 2016 - link

    Without a doubt, memory has been dropping in price consistently (although incrementally) over the last 10 years. The last spike was late 2008 to early 2009, when DDR3 launched. By contrast, the spike for DDR4 was incredibly short: DDR4 is now cheaper than DDR3, and it only took 4 months. As many have said, supply and demand. Demand for DDR3 surged through much of 2009 because all the platforms hit the market and overall PC demand was high. There were huge energy and performance improvements over DDR2. Nehalem, the first-generation Core microarchitecture, launched, and the IPC gains were huge over Core 2. Xeon and server platforms launched based on the X58/C200 chipsets.

    By comparison, Skylake is the only platform that uses DDR4, and unlike in 2009, it doesn't require it. Half of Skylake platforms are DDR3, including mobile. AMD has no DDR4 platforms, and no GPUs are using it; GPUs have been using DDR3 heavily since its launch in 2008. And with GDDR5 technology and HBM supplementing DDR in the high-performance segment, I doubt DDR4 will find a place in GPU platforms anytime soon unless DDR3 becomes cost-prohibitive.

    All of this is actually good news for Skylake, because with Intel dropping CPU prices below those of Haswell/Broadwell, and DDR4 remaining cheaper than DDR3 because of low demand, anyone building a new PC is obviously going to use Skylake, including OEMs. That's good for the consumer, because it will actually be cheaper than the previous few generations of DDR3-based PCs.

    High supply and low demand is always good for consumers that have demand.
  • DanNeely - Saturday, April 9, 2016 - link

    I disagree with one part of your assessment. With the price gap nearly gone, DDR4 offering double the potential bandwidth of DDR3, and low-end GPUs being notoriously bandwidth-starved, I'd be shocked if the next generation of low-end GPUs don't use DDR4. The only question is whether AMD/nVidia will actually release them this year or wait until next. The 14/16nm die shrink for their high-end parts has to be consuming a very large chunk of their attention this year, but letting the other company leapfrog them in performance in the high-volume market segment would be bad, so it's possible both will rush out a low-end DDR4 chip this year to make sure they're not scooped.
  • niva - Thursday, April 7, 2016 - link

    No, that's less than $5 for an 8 Gb chip, used to make modules with huge capacities (128 GB, as stated in the article). Vastly better, higher-performance memory than what you bought in 2010. Go find the memory you bought in 2010 and check its price today; I'm sure it's selling for less. Though you do bring up a valid point: it used a lot more silicon, so the raw materials cost more.
  • yuhong - Thursday, April 7, 2016 - link

    Technically 8Gbit equals 1GB. The problem is that 4Gbit is currently mainstream.
  • bcronce - Friday, April 8, 2016 - link

    I paid $100 for 32GiB (2x16GiB) of low-latency, high-speed G.Skill DDR4 memory 3 months ago. That's $3.13/GiB, and it would have been closer to $2.50 if I hadn't gone for premium overclocking memory. Prices look like they have gone up a bit since, but they tend to be low around tax season.
  • Pork@III - Thursday, April 7, 2016 - link

    Fake 10nm. Bad, bad Samsung :D
  • theduckofdeath - Friday, April 8, 2016 - link

    Read much?
  • Samus - Saturday, April 9, 2016 - link

    Is he really wrong? Samsung has a habit of dishonesty in various marketing strategies. Instead of calling things what they actually are, they segment products into a "class", i.e. 55"-class TV, 4K-class Blu-ray/DVD upconversion, ultrabook-class PC, 10nm/20nm-class NAND/DDR4, etc.

    Come on, Samsung, what is it: is it "like 10nm" or is it 10nm? Just call it 15nm if that's what it is. It's no secret, just say wtf it is.
  • none12345 - Thursday, April 7, 2016 - link

    In 2008 I bought 2x2 gigs for $46; in 2013 I bought another 2x2 gigs for the same computer for $72.

    You can currently buy 2x8 gigs for ~$55 (the cheapest on Newegg as of this writing is $49), and ~$120 for 4x8 or 2x16 gigs.

    So, it went up and then came back down again.

    I remember back when $100 for a megabyte (4x256K) was a good deal, heh. And don't forget inflation: that would be $300 or so in today's dollars.
  • jjj - Thursday, April 7, 2016 - link

    lol, even you guys go with the 10nm-class BS. It's 18nm, everybody knows that; just say it as it is.
  • frenchy_2001 - Thursday, April 7, 2016 - link

    The "critical dimension" you quote does not represent much anymore either way.
    Since the difficulty of improving process and how late Extreme UV has been, all manufacturers are using multi-patterning and those "critical dimensions" have very different meaning for each (TSMC 16nm is not at the same scale as Samsung 14nm or Intel 14nm).
    A more useful comparison would be for a whole chip (for memory) and getting memory density (bits/sq.mm). Between cell architecture and different node dimensions, this would be a better representation. Anandtech tried this for flash.
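
    To illustrate the metric, a minimal sketch; the die areas below are invented placeholders, since vendors rarely publish real figures:

    # Back-of-envelope bit-density comparison for two DRAM dies.
    # Both die areas below are made-up placeholders, not real figures.

    def bit_density_mbit_per_mm2(capacity_gbit: float, die_area_mm2: float) -> float:
        """Density in Mbit per square millimetre."""
        return capacity_gbit * 1024 / die_area_mm2

    # Hypothetical 8 Gbit dies on two different "nm-class" processes:
    for process, area_mm2 in [("20nm-class", 78.0), ("10nm-class", 60.0)]:
        print(f"{process}: {bit_density_mbit_per_mm2(8, area_mm2):.0f} Mbit/mm^2")

    Density computed this way is comparable across vendors regardless of how each one defines its node name.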
  • jjj - Friday, April 8, 2016 - link

    The process is one thing, the density is another (cell size is the better metric anyway), and cost is an entirely different matter. The topic here was the process.
  • Concillian - Thursday, April 7, 2016 - link

    Let's hope we have some new "Samsung Green" sticks for DDR4 like we did for DDR3.

    For those who don't remember: they were inexpensive, plain-looking sticks of high-latency DDR3-1600 that routinely OC'ed to 2133, or optionally ran at lower speeds with extremely low latency.
  • extide - Thursday, April 7, 2016 - link

    Oooh yes I have like 48+GB of that DDR3 "Samsung wonder ram" at home, in various systems.
  • T1beriu - Thursday, April 7, 2016 - link

    Pfffff... What's this 10nm BS?! This can't be true. Even Samsung admits it: "new DRAM devices are reported to consume 10 – 20% less power than equivalent DDR4 memory ICs made using a 20 nm fabrication process, based on tests conducted by the memory maker"

    This can't be real 10nm.
  • Kristian Vättö - Thursday, April 7, 2016 - link

    It's 10nm-class, i.e. 10-19nm. It's more or less an industry standard to report "X0nm-class" instead of the exact geometry, even though the exact figure isn't a secret.

    The quote in your comment says that the new 10nm-class DRAM devices consume 10-20% less power THAN similar devices on a 20nm process.
  • yuhong - Thursday, April 7, 2016 - link

    Thinking about it, Intel seems to have effectively outlawed x16 chips on DIMMs/SO-DIMMs, making 4GB sticks impossible with 8Gbit DDR4. And of course, you know that single-channel RAM is bad for AMD APUs such as Bristol Ridge. I wonder if 16GB of RAM will ever become mainstream on laptops.
  • DanNeely - Thursday, April 7, 2016 - link

    It's an engineering limit. The more chips you have hanging off a bus, the more the theoretically square wave of the digital signal gets rounded off, and the problem gets progressively worse at higher speeds. A few years ago they weren't sure it would even be possible to have 2 DIMMs per channel with DDR4. An x16 DIMM would put the same load on the bus as two x8s. Servers get their huge RAM capacities by using buffer chips, so each DIMM only puts a single chip's load on the bus; the tradeoff is increased latency.
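
    To put rough numbers on that "rounding off", a first-order RC sketch; every component value here is an illustrative guess, not a real DDR4 parameter:

    # Each chip input adds parasitic capacitance to the bus; modelling the
    # net as a simple RC network, the 10-90% rise time is roughly 2.2 * R * C.
    # All values below are illustrative guesses, not real DDR4 figures.

    DRIVER_R_OHM = 34.0   # assumed output driver impedance
    PIN_C_F = 1.5e-12     # assumed input capacitance per chip load
    TRACE_C_F = 5.0e-12   # assumed fixed trace capacitance

    def rise_time_ns(n_loads: int) -> float:
        total_c = TRACE_C_F + n_loads * PIN_C_F
        return 2.2 * DRIVER_R_OHM * total_c * 1e9

    # DDR4-2400 transfers a bit every ~0.417 ns, so edges must settle fast:
    for n in (4, 8, 16):
        print(f"{n:2d} chip loads -> ~{rise_time_ns(n):.2f} ns rise time")

    The exact numbers don't matter; the point is that rise time grows linearly with the number of loads, which is exactly what eats signal margin at higher clocks.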
  • yuhong - Thursday, April 7, 2016 - link

    I am talking about 4GB sticks using four x16 chips.
  • Arnulf - Friday, April 8, 2016 - link

    ??? You're spouting nonsense ... while DanNeely gave you a wonderful explanation of how things actually work in the real world.

    Admittedly he didn't mention input and line inductance and parasitic capacitance as the reasons for that "rounding off" of the signal edges, but I guess he simply didn't want to get too technical there.
  • extide - Friday, April 8, 2016 - link

    Dude, you guys both don't get it. Most DDR ICs are 8 bits wide, meaning you need 8 of them to make up the 64-bit width of a single memory channel. If you could use 16-bit wide chips, you would only need 4 chips instead of 8.
  • extide - Friday, April 8, 2016 - link

    My Reply was to Arnulf, BTW
  • extide - Friday, April 8, 2016 - link

    Obviously he is talking about the bit width of each IC on the DIMM, because you CAN use sticks with 16 ICs on them ...

    He is saying that when 8Gbit ICs become standard, the smallest DIMM you can make will be 8GB, unless you use 16-bit wide ICs, in which case you could put 4 chips on a stick and make a 4GB module. With 8GB being the smallest stick available, many laptops will ship with a single 8GB stick, at least until 16GB becomes the new mainstream low-end amount of memory. That will actually hurt performance in a lot of cases where integrated graphics is used, because those machines will run single-channel.
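
    The arithmetic, as a quick sketch (the 64-bit channel width is the standard non-ECC figure; the rest just follows from it):

    # Smallest possible DIMM capacity as a function of chip width.
    # A standard (non-ECC) memory channel is 64 bits wide, so one rank
    # needs 64 / chip_width chips; capacity = chip count * chip density.

    CHANNEL_BITS = 64

    def min_dimm_gb(chip_density_gbit: int, chip_width: int) -> float:
        chips_per_rank = CHANNEL_BITS // chip_width
        return chips_per_rank * chip_density_gbit / 8  # Gbit -> GB

    for width in (8, 16):
        print(f"8Gbit x{width}: {min_dimm_gb(8, width):.0f} GB minimum stick")
    # x8 needs 8 chips -> 8 GB minimum; x16 needs only 4 chips -> 4 GB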
  • BrokenCrayons - Friday, April 8, 2016 - link

    The trick to that is simply to hold off on purchasing a laptop until 16GB of RAM becomes commonplace. Since we're in something of a slump in CPU development, that has lately become more doable: upgrades aren't offering large leaps in performance, and software isn't really demanding much compute power in the workloads typically encountered on a low-end laptop.
  • yuhong - Friday, April 8, 2016 - link

    It is not a problem in the short term, though, as 4Gbit DDR4 is still the most common.
  • BeethovensCat - Thursday, April 7, 2016 - link

    On RAM prices: agreed that they went up from 2008 and then came down again. Anyway, being in my mid-40s, I remember what RAM cost in the early 90s. Back then I bought an extra 4MB of RAM for my first desktop and paid $150 for it. Note that it was 4MB, not 4GB! Around the same time, a friend of mine bought one of the first 1GB HDDs. He paid $1000 for it! Enjoy your day!
  • Snake_Doc - Friday, April 8, 2016 - link

    $1000 for a 1GB hdd?! Incredible.
  • yuhong - Friday, April 8, 2016 - link

    The funny thing is that this price did not fall much until about 1996!
  • Arnulf - Friday, April 8, 2016 - link

    Nice article Anton!

    I like the fact that you clarified the "10 nm class" part of the PR announcement, because most sites took it as literally meaning a "10 nm production process" rather than "10-and-some-single-digit-figure nm".
  • 3ogdy - Friday, April 8, 2016 - link

    You forgot to mention that... "In the news today, Samsung's new DDR4 memory chips are produced using 10nm-class manufacturing technology"... You also forgot to mention that this should make DRAM cheaper... yeah... you forgot to mention that about 3 times... in a pathetic 5-paragraph article. Article minimum length issues, or what?
  • Michael Bay - Saturday, April 9, 2016 - link

    Come on, stop living.
  • iwod - Saturday, April 9, 2016 - link

    Really want to see more 2.5D/3D-stacked DRAM for servers. We are still stuck with a common server max of 1 TB of memory; it would be nice if we could scale up to 4 or even 8 TB.
  • andychow - Sunday, April 17, 2016 - link

    Me too. I think the interface is rather different, so we will need to wait a couple of years before we see that. The Fury does not come with a GDDR5 variant, suggesting this. We'll have to stick with 768 GB for now... I'd go down in size if the bus width were higher (the way it's been going).
