Presumably the "Two DIMMs One Channel"-on-a-board layout would preclude these being used in 4-slot consumer boards (which would require effectively 4 DIMMs per channel)? I can't think of any boards off the top of my head that support more than 2 DIMMs per channel without using FBDIMMs.
Intel has validated their 8th and 9th gen desktop cpu's to work with 128gb of memory so that would suggest its possible, it's just up to the motherboard manufacturer to implement it appropriately.
I believe that's using regular (not double) modules with 16x Samsung's new 16 Gb memory packages. You can still use 2 of those per channel on regular consumer motherboards.
I wonder if something that this could be designed for servers using RDIMMS or LRDIMMS. Current cost of 64GB LRDIMMs is more than double that of 32GB RDIMMs. 128GB LRDIMMs are about 4x more expensive than 64GB LRDIMMs. Could be a nice way to increase RAM capacity there without breaking the bank.
It's pretty common for various forms of registered/buffered memory to use x4 width ICs rather than the standard x8, meaning you have 16 per rank rath than 8 per rank with the same capacity per IC. That acheives the same thing in terms of capacity.
Certainly looks like the future of RAM, but like most things, I would wait for v3.0 before jumping in. There's bound to be more power savings, compatibility tweaks, and performance tweaks. When is DDR5 arriving?
I honestly really love it aesthetically. It makes the rest of the motherboard balance out Z-height wise with PCIe cards. I know nobody else cares but there's something about it that I like. It's also an interesting solution; reminds me of early OCZ PCIe SSDs with dozens of chips on them. I would honestly love if these worked on quad and deca DIMM slot boards.
I was pressed for time upon initial scanning and jumped to conclusion. Then I dug a bit deeper. In the conclusion you state that the G.Skill is substantially larger while the other only has a marginal height difference. This makes the ZADAK sound smaller, when in fact, the G. Skill double is actually the same size as the ZADAK single. G.Skill 45 mm height up to 60 for the double ZADAK 60 mm height up to 73 for the double
Anyway. Based on percentage of size increases, yes, the G.Skill grew more, but you may want to consider some clarity in your description as there are a lot of people that just read the conclusion and may be concerned about size.
Clearly the base technology would also scale up to using 16Gb dies for 64GB in one DIMM. The argument for small systems with only two slots still exists.
The number of people who want/need 64GB on a 2-DIMM-slot motherboard can be counted in the thousands, maybe. The number of people who want/need 128GB on a 2-slot motherboard can be counted on your fingers. This is a product without a market.
It also makes me see how much I need to upgrade my own system because I am still on DDR3@2200MHz with tight timings though but still I am sure this is holding me back a lot. My Aidia 64 latency score is like 42ns-43ns and I also thought that was fairly good and would help me get better responsiveness in my games and maybe it does. Like I said though my 2200MHz is probably really on the slow side when you think of DDR4@3200MHz and higher being able to push the systems so much harder.
Why not test these on an AMD Ryzen based motherboard or two and see if they will work? Just because things are only officially supported on one platform or another doesn't always mean that they won't work. Also, Intel vs. AMD is very different in terms of memory support, and while it is more difficult to get DDR4 3633 and above working on the AM4 platform, if it works, it works. 3200 isn't a difficult speed at this point.
I disagree with the final paragraph where the focus is for SFF and motherboards with only two memory slots. If you have four memory slots and you can install 32GB in each slot, 128GB of RAM might be important for some people. What if you have a motherboard with more slots, if you really want/need the RAM and this stuff works, it doubles the allowed amount of RAM on the motherboard.
The official specs for what a MOTHERBOARD supports are often limitations of the technology of the time. How many old machines said "Max 4GB RAM", because there were two slots, and at the time, you never saw 4GB memory modules. I've gone back to some of those old machines, and used 4GB modules to give 8GB of RAM in a machine that theoretically could only take 4GB.
Much of that would be a function of the chipset/BIOS and the memory controller which is on the CPU these days. These double height modules can't directly talk to the memory controller as if they are two modules, because the memory slots themselves and how they talk to the chipset/memory controller/CPU are not different.
That is why I wanted to see at least an attempt to get these to work with a socket AM4 system and see how they show up. Going from 1T or 2T to 2T or 4T for the memory is probably the issue, and if the chipset would even allow it. Asking Asus if they have plans to allow DDR4 to run with a 4T command rate on more boards would be worth the call.
Since one of those DIMMs basically acts like two DIMMs, you probably cannot use two of them on the same channel, since that would require supporting 4 DIMMs on the same channel.
I don't think it works that way. Yes, internally each one is working as if it was two, but how the module talks to the BIOS/chipset/processor is the big question. Does the BIOS need to support a 4T command rate on the memory for these to work, and if so, that explains why support is limited.
He explained in the article that the way memory channels are handled that the traces are daisy chained or combined. These dimms act as 2 dimms slapped onto one card and won't work stacked 2 too a channel (because then you'd have 4 dimms on a channel).
Above and beyond that you also have the fact that now 32GB modules exist which render these far less useful. There are very very few scenarios where you'd want 128GB of ram and not want it to be ECC RAM. Any sort of server and many workstation tasks I'd much prefer to use ECC ram instead.
Because most boards don't run ECC memory. These are a niche product to get 64GB on boards that only have 2 slots. Lots of memory doesn't overclock well. You're making assumptions with your statement that are completely off base.
I've used ECC memory on consumer grade boards. All that happens is that the ECC part of the ECC, Registered RAM gets disabled and the rest of the DIMM module functions like a UDIMM instead of a RDIMM.
"You're making assumptions with your statement that are completely off base."
Yes! Which is why I literally said: "Maybe I'm missinng something here - but I don't understand why people just don't get 32 GB DDR4-2666 ECC Registered DIMMS like this one (https://www.newegg.com/Product/Product.aspx?Item=N... and be done with it?"
Lots of memory don't overclock well is just a broad and generic statement that can also be applied to these modules as well.
The fact of the matter is that DDR4-3000 and DDR4-3200 modules ARE overclocked by default anyways because those speeds aren't part of the JEDEC JESD79-4 spec (p.163).
The point being that a) they exist already and b) you should be able to disable the ECC, registered capabilities of these DIMMs. (I've never had a consumer grade board that didn't know how to ignore the ECC registered capabilities of RDIMMs.)
In other words, you can already do this with existing modules that you can buy, at those speeds, "second" hand (which tells you a little bit about the maturity of the tech).
How do u disable registered capability on a reg. DIMM? They are physically different to UDIMMs. Thats absolutely BS. You can run ECC UDIMM on every board, maybe without ECC working. But you will never run reg. DIMM on Hardware not supporting reg. DIMMs
To the best of my knowledge, you can put ECC RDIMMs in consumer-grade systems where you have overclocking capabilities made apparent and available to you just as people have overclocked Xeons before.
Just because you have RDIMMs doesn't necessarily mean that you have to only use them with a motherboard that supports and requires the use of RDIMMs. (In fact, many motherboards that recommend using RDIMMs will also run with UDIMMs, but usually with a lower installed capacity limit.)
But both 8- and 32-bit SIMMs existed under the same name, so it clearly isn't bus width the name refers to.
For what it is worth, I was wrong for a different reason. Research indicates that the D refers to rows of contacts. In a SIMM, the backside contacts were electrically the same as the frontside contacts, so there was only a single row. In a DIMM, they aren't, so you have dual contact rows. And since this still only has two rows of contacts...
The connection to the motherboard and support in the BIOS and memory controller are the key. More memory on the same memory channel will not boost performance, and will even degrade performance a bit due to needing to connect the additional memory on the same channel.
DDR5(not to be confused with GDDR5) will help. Going to Gen-Z where the memory can connect to the Gen-Z bus might potentially help as well if done right, because you want that memory bus to be able to access more RAM at the same time, so latency will hopefully go down with the next generation of desktop memory.
Could you please mention the names of the motherboards which did not post with these memories? Also could you please try these with Intel 6xxx/ 7xxx series CPUs with 2xx chipsets ( z270, b250).
Any thoughts on how scalable this apporach is? I mean...the obvious issues (heatsink fan clearance being one of them) aside, 4-row-high-DIMMs would look absolutely hilarious. I'd buy them. Just for the joke.
I don't get why there is a need for double height. There are lots of DIMMs in the market, that have 18 ICs per side on a regular DIMM.
I think it's just marketing, to show visually they have something new. The Cooler occupies the space anyway. But don't get, why every Tech-website mention it's neccessary.
I suspect it is all about the memory density. So, rather than trying to get 7nm fab process RAM, these companies are using less expensive chips and just increasing the size of the board to compensate, plus the need to connect the RAM chips on the DIMM. What sort of timings are on these things, 2T, 3T, or 4T for the command rate? How about the latency ratings?
Mh? I talk not about the number of DRAM Dies. I speak just about the size of the PCB. What has lithographie to do with PCB size? Here u have 16 packages per side. There are a lot of normal sized DIMMs outside with that amount of packages.
Interesting -- now if they could just pair them with some 3D nand and allow hybrid RAM / storage like intel wants to do with optane. Perhaps it'd come in a variant that sees 16GB PC 3000 and a slot similar to M.2 with capacities that mirror that of NVMEs thus moving storage over to the RAM bus and freeing up the PCI bus. The modules would be L shaped so that the storage addition completes the form factor thus allowing it to retain the same profile as these taller units, vs having a module hanging off the side.
You can fit 2 SODIMMs using a single locking mechanism on each end within the limits of a mITX board. Surely that would be much easier and more universal. [img]https://lh3.googleusercontent.com/-L0fCpsbFSWA/We5...[/img]
Hi Ian and Gavin! Very helpful article! I’m building a SFF workstation using these modules. I want to remove the heat spreaders but I’m afraid of damaging the DIMMs. I see you guys managed it. How risky is it? Is there anything special I should know about removing the RGB lighting strips? (Yes, everyone, I know it will void the warranty)
50 Comments
mickulty - Wednesday, January 23, 2019 - link
Really interesting article, thanks Gav and Ian! I'd love to see how a configuration using these DC sticks compares to 4x16GB on a 4-DIMM T-topology board, especially in their ability to hit higher speeds.
edzieba - Wednesday, January 23, 2019 - link
Presumably the "Two DIMMs One Channel"-on-a-board layout would preclude these being used in 4-slot consumer boards (which would require effectively 4 DIMMs per channel)? I can't think of any boards off the top of my head that support more than 2 DIMMs per channel without using FBDIMMs.Ej24 - Wednesday, January 23, 2019 - link
Intel has validated their 8th and 9th gen desktop CPUs to work with 128GB of memory, so that would suggest it's possible; it's just up to the motherboard manufacturer to implement it appropriately.
Hul8 - Wednesday, January 23, 2019 - link
I believe that's using regular (not double-height) modules with 16 of Samsung's new 16 Gb memory packages. You can still use two of those per channel on regular consumer motherboards.
Ej24 - Wednesday, January 23, 2019 - link
https://www.anandtech.com/show/13473/intel-to-supp...
schujj07 - Wednesday, January 23, 2019 - link
I wonder if something like this could be designed for servers using RDIMMs or LRDIMMs. The current cost of 64GB LRDIMMs is more than double that of 32GB RDIMMs, and 128GB LRDIMMs are about 4x more expensive than 64GB LRDIMMs. This could be a nice way to increase RAM capacity there without breaking the bank.
brakdoo - Wednesday, January 23, 2019 - link
128 GB and 256 GB DIMMs use TSV (sometimes called 3DS or 3D-stacked in the server business) memory. That's why they are more expensive.
Other than that: this approach doubles the rank count. Typical servers already reach their "maximum rank" on each channel with regular-sized memory.
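As a rough sketch of the rank arithmetic in that comment (a minimal illustration; the 8-rank ceiling is an assumed figure, since real per-channel limits vary by memory controller and DIMM type):

```python
# Rough sketch: total ranks a controller must address on one channel.
# MAX_RANKS_PER_CHANNEL = 8 is an assumed illustrative ceiling.
MAX_RANKS_PER_CHANNEL = 8

def total_ranks(dimms_per_channel: int, ranks_per_dimm: int) -> int:
    """Ranks presented to the memory controller on a single channel."""
    return dimms_per_channel * ranks_per_dimm

# Two ordinary dual-rank server DIMMs per channel:
print(total_ranks(2, 2))  # 4 ranks -> headroom remains
# Two rank-doubled modules per channel:
print(total_ranks(2, 4))  # 8 ranks -> already at the assumed ceiling
```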
mickulty - Wednesday, January 23, 2019 - link
It's pretty common for various forms of registered/buffered memory to use x4-width ICs rather than the standard x8, meaning you have 16 ICs per rank rather than 8, with the same capacity per IC. That achieves the same thing in terms of capacity.
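A quick sketch of that IC-width arithmetic (64 bits is the standard DDR4 data bus width; with identical per-IC density, doubling the IC count per rank doubles the capacity per rank):

```python
# Each rank must fill the 64-bit DDR4 data bus (72 bits with ECC).
DATA_BUS_BITS = 64

def ics_per_rank(ic_width_bits: int) -> int:
    """Number of ICs needed to fill one rank at the given IC width."""
    return DATA_BUS_BITS // ic_width_bits

for width in (8, 4):
    # Same capacity per IC, so twice the ICs means twice the rank capacity.
    print(f"x{width} ICs: {ics_per_rank(width)} per rank")
# x8 ICs: 8 per rank
# x4 ICs: 16 per rank
```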
nathanddrews - Wednesday, January 23, 2019 - link
Certainly looks like the future of RAM, but like most things, I would wait for v3.0 before jumping in. There's bound to be more power savings, compatibility tweaks, and performance tweaks. When is DDR5 arriving?
oddity1234 - Wednesday, January 23, 2019 - link
That's a bizarre existential predicament the sea slug is stuck in.
CheapSushi - Wednesday, January 23, 2019 - link
I honestly really love it aesthetically. It makes the rest of the motherboard balance out Z-height-wise with PCIe cards. I know nobody else cares, but there's something about it that I like. It's also an interesting solution; it reminds me of early OCZ PCIe SSDs with dozens of chips on them. I would honestly love it if these worked on quad and deca DIMM slot boards.
SmCaudata - Wednesday, January 23, 2019 - link
I was pressed for time upon initial scanning and jumped to the conclusion. Then I dug a bit deeper. In the conclusion you state that the G.Skill is substantially larger while the other only has a marginal height difference. This makes the ZADAK sound smaller, when in fact the G.Skill double is actually the same size as the ZADAK single.
G.Skill: 45 mm height, up to 60 mm for the double
ZADAK: 60 mm height, up to 73 mm for the double
Anyway. Based on percentage of size increase, yes, the G.Skill grew more, but you may want to consider adding some clarity to your description, as there are a lot of people who just read the conclusion and may be concerned about size.
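For reference, the percentage arithmetic from the heights quoted above:

```python
# Height growth from single to double, using the figures quoted above.
def growth_pct(single_mm: float, double_mm: float) -> float:
    return (double_mm - single_mm) / single_mm * 100

print(f"G.Skill: {growth_pct(45, 60):.1f}%")  # 33.3%
print(f"ZADAK:   {growth_pct(60, 73):.1f}%")  # 21.7%
```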
DanNeely - Wednesday, January 23, 2019 - link
I was confused too, because the ZADAK DIMMs looked significantly larger than the G.Skill.
DigitalFreak - Wednesday, January 23, 2019 - link
This proprietary solution is pretty much DOA now that true 32GB DIMMs are supported by Intel on Z390 systems. All it needs is a BIOS update.
nevcairiel - Wednesday, January 23, 2019 - link
Clearly the base technology would also scale up to using 16 Gb dies for 64GB in one DIMM. The argument for small systems with only two slots still exists.
The_Assimilator - Wednesday, January 23, 2019 - link
The number of people who want/need 64GB on a 2-DIMM-slot motherboard can be counted in the thousands, maybe. The number of people who want/need 128GB on a 2-slot motherboard can be counted on your fingers. This is a product without a market.
yuhong - Wednesday, January 23, 2019 - link
The DDR4 spec only goes up to 16 Gbit though.
rocky12345 - Wednesday, January 23, 2019 - link
Great write-up, thank you. It also makes me see how much I need to upgrade my own system, because I am still on DDR3 at 2200MHz with tight timings, and I am sure this is holding me back a lot. My AIDA64 latency score is around 42-43ns, and I thought that was fairly good and would help me get better responsiveness in my games, and maybe it does. Like I said, though, my 2200MHz is probably really on the slow side when you think of DDR4 at 3200MHz and higher being able to push systems so much harder.
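As a rough sketch of the raw-bandwidth gap being described (theoretical peak per channel; 8 bytes per transfer assumes the standard 64-bit bus, and real-world throughput is lower):

```python
# Theoretical peak bandwidth of one memory channel:
# transfers per second x 8 bytes per transfer (64-bit bus).
BYTES_PER_TRANSFER = 8

def peak_gbps(mt_per_s: int) -> float:
    return mt_per_s * BYTES_PER_TRANSFER / 1000  # MT/s * bytes -> GB/s

print(f"DDR3-2200: {peak_gbps(2200):.1f} GB/s per channel")  # 17.6
print(f"DDR4-3200: {peak_gbps(3200):.1f} GB/s per channel")  # 25.6
```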
Targon - Wednesday, January 23, 2019 - link
Why not test these on an AMD Ryzen-based motherboard or two and see if they will work? Just because things are only officially supported on one platform or another doesn't always mean that they won't work. Also, Intel and AMD are very different in terms of memory support, and while it is more difficult to get DDR4-3633 and above working on the AM4 platform, if it works, it works. 3200 isn't a difficult speed at this point.
Targon - Wednesday, January 23, 2019 - link
I disagree with the final paragraph, where the focus is on SFF builds and motherboards with only two memory slots. If you have four memory slots and you can install 32GB in each slot, 128GB of RAM might be important for some people. If you have a motherboard with more slots, and you really want/need the RAM and this stuff works, it doubles the allowed amount of RAM on the motherboard.
The official specs for what a MOTHERBOARD supports are often limitations of the technology of the time. How many old machines said "Max 4GB RAM" because there were two slots, and at the time you never saw 4GB memory modules? I've gone back to some of those old machines and used 4GB modules to give 8GB of RAM in a machine that theoretically could only take 4GB.
Alim345 - Wednesday, January 23, 2019 - link
It's not clear whether it is possible to use four sticks. Probably it is not, since those sticks replicate what should happen on the motherboard.
Targon - Thursday, January 24, 2019 - link
Much of that would be a function of the chipset/BIOS and the memory controller, which is on the CPU these days. These double-height modules can't directly talk to the memory controller as if they were two modules, because the memory slots themselves, and how they talk to the chipset/memory controller/CPU, are no different.
That is why I wanted to see at least an attempt to get these working with a socket AM4 system and see how they show up. Going from 1T or 2T to 2T or 4T for the memory is probably the issue, and whether the chipset would even allow it. Asking Asus if they have plans to allow DDR4 to run with a 4T command rate on more boards would be worth the call.
nevcairiel - Wednesday, January 23, 2019 - link
Since one of those DIMMs basically acts like two DIMMs, you probably cannot use two of them on the same channel, since that would require supporting 4 DIMMs on the same channel.
Targon - Thursday, January 24, 2019 - link
I don't think it works that way. Yes, internally each one is working as if it were two, but how the module talks to the BIOS/chipset/processor is the big question. Does the BIOS need to support a 4T command rate on the memory for these to work? If so, that explains why support is limited.
Bp_968 - Thursday, January 24, 2019 - link
He explained in the article that, the way memory channels are handled, the traces are daisy-chained or combined. These DIMMs act as two DIMMs slapped onto one card and won't work stacked two to a channel (because then you'd have 4 DIMMs on a channel).
Above and beyond that, you also have the fact that 32GB modules now exist, which renders these far less useful. There are very, very few scenarios where you'd want 128GB of RAM and not want it to be ECC RAM. For any sort of server and many workstation tasks I'd much prefer to use ECC RAM instead.
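A tiny sketch of that loading constraint (two slots per channel matches typical consumer boards; counting a DC module as two electrical loads is the premise above):

```python
# A typical consumer board wires each memory channel for 2 DIMM loads.
MAX_LOADS_PER_CHANNEL = 2

def channel_ok(modules: int, loads_per_module: int) -> bool:
    """True if the channel can electrically support this population."""
    return modules * loads_per_module <= MAX_LOADS_PER_CHANNEL

print(channel_ok(2, 1))  # True: two ordinary DIMMs
print(channel_ok(1, 2))  # True: one double-capacity DIMM
print(channel_ok(2, 2))  # False: two DC DIMMs look like four loads
```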
yuhong - Wednesday, January 23, 2019 - link
Even Intel doesn't usually update ARK or the datasheet.
alpha754293 - Wednesday, January 23, 2019 - link
Maybe I'm missing something here - but I don't understand why people just don't get 32 GB DDR4-2666 ECC Registered DIMMs like this one (https://www.newegg.com/Product/Product.aspx?Item=N... and be done with it?
It's going to run about the same price and they can just overclock it.
Wixman666 - Wednesday, January 23, 2019 - link
Because most boards don't run ECC memory. These are a niche product to get 64GB on boards that only have 2 slots. Lots of memory doesn't overclock well. You're making assumptions with your statement that are completely off base.
CheapSushi - Thursday, January 24, 2019 - link
Does that mean ECC RAM will NOT WORK AT ALL, or just that the ECC portion (soft error correcting) won't work but it'll show up like regular RAM?
alpha754293 - Thursday, January 24, 2019 - link
I've used ECC memory on consumer-grade boards. All that happens is that the ECC part of the ECC Registered RAM gets disabled, and the rest of the DIMM functions like a UDIMM instead of an RDIMM.
"You're making assumptions with your statement that are completely off base."
Yes! Which is why I literally said: "Maybe I'm missing something here - but I don't understand why people just don't get 32 GB DDR4-2666 ECC Registered DIMMs like this one (https://www.newegg.com/Product/Product.aspx?Item=N... and be done with it?"
"Lots of memory doesn't overclock well" is just a broad, generic statement that can be applied to these modules as well.
The fact of the matter is that DDR4-3000 and DDR4-3200 modules ARE overclocked by default anyway, because those speeds aren't part of the JEDEC JESD79-4 spec (p.163).
Here is a 32 GB module from Micron Technology, DDR4-3200, ECC Registered: https://www.ebay.com/itm/Micron-MTA36ASF4G72PZ-3G2...
The point being that a) they exist already and b) you should be able to disable the ECC/registered capabilities of these DIMMs. (I've never had a consumer-grade board that didn't know how to ignore the ECC/registered capabilities of RDIMMs.)
In other words, you can already do this with existing modules that you can buy, at those speeds, "second" hand (which tells you a little bit about the maturity of the tech).
Bp_968 - Thursday, January 24, 2019 - link
Or you could go with an AMD board and get to keep the ECC features enabled. :)
KarlKastor - Friday, January 25, 2019 - link
How do you disable the registered capability on a registered DIMM? They are physically different from UDIMMs. That's absolute BS.
You can run ECC UDIMMs on every board, maybe without ECC working. But you will never run registered DIMMs on hardware that doesn't support registered DIMMs.
nevcairiel - Wednesday, January 23, 2019 - link
You cannot use ECC registered RAM on consumer motherboards.
CheapSushi - Thursday, January 24, 2019 - link
Yes you can. It depends on the CPU/BIOS.
jordanclock - Wednesday, January 23, 2019 - link
The kind of systems that use ECC RAM aren't the kind of systems that are going to see any overclocking.
alpha754293 - Thursday, January 24, 2019 - link
To the best of my knowledge, you can put ECC RDIMMs in consumer-grade systems where overclocking capabilities are made apparent and available to you, just as people have overclocked Xeons before.
Just because you have RDIMMs doesn't necessarily mean that you have to use them only with a motherboard that supports and requires the use of RDIMMs. (In fact, many motherboards that recommend using RDIMMs will also run with UDIMMs, but usually with a lower installed-capacity limit.)
Lord of the Bored - Wednesday, January 23, 2019 - link
Wouldn't two DIMMs on one card be a quad in-line memory module (QIMM)?
Billy Tallis - Thursday, January 24, 2019 - link
No, because this approach doesn't make the memory bus any wider; it just stacks the memory deeper on the same bus than a normal DIMM would.
Lord of the Bored - Thursday, January 24, 2019 - link
But both 8- and 32-bit SIMMs existed under the same name, so it clearly isn't bus width the name refers to.
For what it is worth, I was wrong for a different reason. Research indicates that the D refers to rows of contacts. In a SIMM, the backside contacts were electrically the same as the frontside contacts, so there was only a single row. In a DIMM, they aren't, so you have dual contact rows. And since this still only has two rows of contacts...
Targon - Thursday, January 24, 2019 - link
The connection to the motherboard and support in the BIOS and memory controller are the key. More memory on the same memory channel will not boost performance, and will even degrade performance a bit due to the need to connect the additional memory on the same channel.
DDR5 (not to be confused with GDDR5) will help. Going to Gen-Z, where the memory can connect to the Gen-Z bus, might potentially help as well if done right, because you want that memory bus to be able to access more RAM at the same time, so latency will hopefully go down with the next generation of desktop memory.
prateekprakash - Thursday, January 24, 2019 - link
Could you please mention the names of the motherboards which did not POST with these modules? Also, could you please try these with Intel 6xxx/7xxx series CPUs on 2xx chipsets (Z270, B250)?
mito0815 - Thursday, January 24, 2019 - link
Any thoughts on how scalable this approach is? I mean, the obvious issues (heatsink fan clearance being one of them) aside, 4-row-high DIMMs would look absolutely hilarious. I'd buy them. Just for the joke.
KarlKastor - Thursday, January 24, 2019 - link
I don't get why there is a need for double height. There are lots of DIMMs on the market that have 18 ICs per side on a regular-sized DIMM.
I think it's just marketing, to show visually that they have something new. The cooler occupies the space anyway. But I don't get why every tech website mentions that it's necessary.
Targon - Thursday, January 24, 2019 - link
I suspect it is all about the memory density. So, rather than trying to get RAM on a 7nm fab process, these companies are using less expensive chips and just increasing the size of the board to compensate, plus there's the need to connect the RAM chips on the DIMM. What sort of timings are on these things: 2T, 3T, or 4T for the command rate? How about the latency ratings?
KarlKastor - Friday, January 25, 2019 - link
Hm? I'm not talking about the number of DRAM dies, just about the size of the PCB. What does lithography have to do with PCB size?
Here you have 16 packages per side. There are a lot of normal-sized DIMMs out there with that number of packages.
Danvelopment - Friday, January 25, 2019 - link
What are the use cases? I would have thought that, by the time you need those sorts of capacities, you would be better served by a quad-channel Xeon.
NoSoMo - Friday, January 25, 2019 - link
Interesting. Now if they could just pair them with some 3D NAND and allow hybrid RAM/storage like Intel wants to do with Optane. Perhaps it'd come in a variant that sees 16GB of PC 3000 and a slot similar to M.2, with capacities that mirror those of NVMe drives, thus moving storage over to the RAM bus and freeing up the PCIe bus. The modules would be L-shaped so that the storage addition completes the form factor, allowing it to retain the same profile as these taller units, versus having a module hanging off the side.
13Gigatons - Wednesday, January 30, 2019 - link
Maybe they could focus on lowering the price? Other than that, what is the use case?
DPete27 - Tuesday, February 12, 2019 - link
You can fit 2 SODIMMs using a single locking mechanism on each end within the limits of an mITX board. Surely that would be much easier and more universal.
[img]https://lh3.googleusercontent.com/-L0fCpsbFSWA/We5...[/img]
ExclamationMediaLLC - Wednesday, July 10, 2019 - link
Hi Ian and Gavin! Very helpful article! I’m building a SFF workstation using these modules. I want to remove the heat spreaders but I’m afraid of damaging the DIMMs. I see you guys managed it. How risky is it? Is there anything special I should know about removing the RGB lighting strips? (Yes, everyone, I know it will void the warranty)