66 Comments
edzieba - Monday, July 15, 2019 - link
Could the Tremont cores on Lakefield be talking to the L3 on the Sunny Cove cores, rather than having their own L3?
HStewart - Monday, July 15, 2019 - link
I'm pretty sure the L3 is on the Atom cores; the source links do not mention Lakefield, but Snow Ridge instead.
R0H1T - Monday, July 15, 2019 - link
Frankly this is one of the more exciting products from Intel in God knows how long. I also remember how a number of people dissed ARM & big.LITTLE, especially on the AT forums; now lo & behold 🙄
Anyway, good luck to Intel. As an AMD fan I'd love Intel to bring more competition in this space & perhaps drive AMD towards bringing their (revised) version of the Cat cores back from the dead, or K12 for that matter!
isthisavailable - Monday, July 15, 2019 - link
I think Apple's A13/14 would be faster.
PeachNCream - Monday, July 15, 2019 - link
It's important for Intel to at least attempt to compete. It drives ARM-derived development harder and gets us better, more useful products that are actually relevant to consumers. With that said, Atom cores can probably find a home in embedded devices and (hopefully) low-end notebook computers that don't need cooling fans. Seriously, once you use a fanless laptop, it's really hard to put up with anything that has a fan. It's like going back a decade.
ads295 - Monday, July 15, 2019 - link
I suppose you think devices with thick bezels on screens must be like going back a decade too.
PeachNCream - Monday, July 15, 2019 - link
Not really. I prefer a certain amount of bezel around a screen on a laptop anyway so I don't have incidental finger smudges on the screen, and in the pricing class of computer I'm prone to purchase (sub-$400), there just aren't serious efforts made to eliminate bezels anyway.
V900 - Wednesday, July 17, 2019 - link
Good call! Nothing wrong with a little bezel.
Aside from the fingerprint issue you mentioned, it also increases the structural stability and makes for a more durable laptop.
Namisecond - Friday, November 1, 2019 - link
You may be surprised. Acer and Asus make laptops that just barely fit into that price range and have made some big strides in reducing bezel size.
The Acer Swift 1 SF114-32-xxxx series can run for less than $300 now. IPS screen and storage upgradeable, too.
V900 - Wednesday, July 17, 2019 - link
Thin bezels are seriously a dumb thing to prioritize. Bezels have numerous advantages, and only cosmetics as a disadvantage.
Unless you need performance from the higher-end designs. Fans never bothered me, especially at lower RPM (I think the ambient noise is calming). You'd probably need to be upgrading from a Sandy Bridge system or older for the performance difference not to be noticeable, or you must be sticking to bursty workloads.
Namisecond - Friday, November 1, 2019 - link
Windows is a bursty workload lol. :)
unclevagz - Monday, July 15, 2019 - link
Unless the cores in Intel Tremont are shrinkwrapped Skylake cores, this has little chance of even catching Qualcomm's latest offerings, let alone the Apple stuff.
mode_13h - Monday, July 15, 2019 - link
Uh, shrinkwrapped? You mean shrunken?
GreenReaper - Tuesday, July 16, 2019 - link
Intel found a way to one-up its original replacement of solder with thermal pads.
HStewart - Monday, July 15, 2019 - link
Tremont is not based off Skylake, but is closer to Sunny Cove, which is an entirely new architexture. I have yet to discover an accurate performance measure between x86 and ARM. Plus this is a completely new architexture, and it would be hard to compare to current CPU designs, especially existing Atoms.
IntelUser2000 - Tuesday, July 16, 2019 - link
Tremont is not Sunny Cove based! Tremont is its own development that builds upon Goldmont Plus. And we don't know any details. Certainly not enough to claim it looks like anything.
Namisecond - Friday, November 1, 2019 - link
Good luck getting Apple to manufacture them for anyone else but themselves.
III-V - Monday, July 15, 2019 - link
I don't recall people ever dissing big/little. I just remember people thinking it would be a pain in the ass to implement.
I also recall you coming up with a lot of "alternative histories and alternative truths" back then too. Seems you haven't changed.
R0H1T - Tuesday, July 16, 2019 - link
>I also recall you coming up with a lot of "alternative histories and alternative truths" back then too.
Like what ~ Intel would be benevolent if AMD disappeared, would even lower their prices because of ARM? Wait, that was the IDF.
>I don't recall people ever dissing big/little.
It's still there, you can go search yourself. "More cores" gimmick & all that.
name99 - Monday, July 15, 2019 - link
big.LITTLE is not just heterogeneous cores, it was a very specific IMPLEMENTATION of that idea. And it wasn't a great implementation. Which is why even ARM has ditched it, replaced with DynamIQ...
You'll get more out of tech sites if you distinguish
- the commenters who know what they are talking about from the idiots
- exactly WHAT is being criticized when (knowledgeable) commenters criticize something. Knowledgeable criticism usually accepts some aspect of an idea is valuable, while also pointing out other parts that are problematic.
Jorgp2 - Monday, July 15, 2019 - link
I feel like low power cores are more exciting in general, since you have to balance performance with power.
mode_13h - Monday, July 15, 2019 - link
No, please leave the cats dead and buried!
If Intel adds AVX to their Atom cores (as the slides suggest they will), I wonder how much area advantage they'll still hold over Zen. If it's only like a factor of 2, then I'd rather have one Zen 2 core with HT than 2 Atom-derived cores.
IntelUser2000 - Tuesday, July 16, 2019 - link
big.LITTLE-like setups have existed as an idea for future compute in academia for a long time.
It was actually the former head of Intel Labs, Justin Rattner, who showed the idea to the public back in 2005. It was called "Platform 2015".
Had 10nm not been so ambitious (they go as far as to admit this in their recent presentation), we might have really seen such setups in 2015-2016.
konbala - Monday, July 15, 2019 - link
ARM's x86 emulation is so good now it is ridiculous; can the new nano lineup beat it? Its efficiency is gonna be life-or-death for x86, I think.
HStewart - Monday, July 15, 2019 - link
I thought for a second this was April 1st; believing in x86 emulation sounds like an April Fools' joke.
mode_13h - Monday, July 15, 2019 - link
Emulation done right is like just-in-time recompilation. There's always a performance penalty vs. fully recompiled code, but it can get pretty close.
HStewart - Monday, July 15, 2019 - link
The big difference between ARM and x86-based CPUs is simply RISC vs CISC, which by definition means more RISC instructions are required for a single CISC instruction. But fortunately, most applications likely use a simple set of instructions. They might be OK for most applications, but more complex applications use higher-level instructions. I am curious about applications designed to work with AVX or other higher instructions. I would think anything that uses machine language would have difficulty. But something like .NET could be made portable to both architextures quite easily. In fact, a .NET interpreted library could be made in native ARM.
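As a rough illustration of the instruction-expansion point above, here is a toy sketch only (the mnemonics and expansion rule are illustrative assumptions, not any real translator's or ISA's output): a single CISC-style instruction that operates directly on memory typically becomes a load, an ALU op, and a store on a load/store RISC architecture.

```python
# Toy model of CISC-to-RISC expansion. The mnemonics and the expansion
# rule are illustrative assumptions, not real ISA or translator output.

def expand_cisc(instr):
    """Expand one 'CISC-style' instruction (op, dest, src) into a list
    of 'RISC-style' load/store-architecture operations."""
    op, dest, src = instr
    risc_ops = []
    if dest.startswith("["):                    # memory destination: load/modify/store
        addr = dest.strip("[]")
        risc_ops.append(("ldr", "tmp", addr))   # load the memory operand into a register
        risc_ops.append((op, "tmp", src))       # do the arithmetic in the register
        risc_ops.append(("str", "tmp", addr))   # write the result back to memory
    else:                                       # register-only op maps roughly 1:1
        risc_ops.append((op, dest, src))
    return risc_ops

# One 'add [mem], reg' becomes three RISC-style instructions:
print(expand_cisc(("add", "[counter]", "eax")))
# [('ldr', 'tmp', 'counter'), ('add', 'tmp', 'eax'), ('str', 'tmp', 'counter')]
```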
Wilco1 - Tuesday, July 16, 2019 - link
.NET already compiles to native Arm, of course! It's only old x86 applications that need translation. And the performance of the translated code is not all that important for Windows applications, which spend much of their time in native Windows libraries.
Microsoft demonstrated good Windows performance on a 2 year old Arm SoC (https://www.youtube.com/watch?v=PaSmZzo3Y_c), and performance has doubled since then. Cortex-A77 will be out in a few months and is about 3 times faster. Emulation is more than fast enough.
Zoolook13 - Friday, July 19, 2019 - link
Emulation done right keeps the recompiled native binaries, speeding up the application the more you run it until it's completely native, like DEC's FX!32.
A similar approach with today's compiler technology and processing power, and largely similar dev libraries on many platforms, could yield close to native speed at the cost of a larger storage footprint. But the compiled-code part of an application is much smaller today; GUI resources etc. are a much larger part of the footprint than back then.
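To make the FX!32-style idea concrete, here is a minimal sketch of the caching pattern, not DEC's actual implementation (the function names and the fake 'translate' step are assumptions): translated blocks are persisted, so later runs skip translation and execute the cached native version directly.

```python
import pickle
from pathlib import Path

# Minimal sketch of a persistent translation cache, FX!32-style.
# 'translate_block' is a stand-in for a real binary translator.
CACHE_FILE = Path("translation_cache.pkl")

def translate_block(x86_block: str) -> str:
    """Pretend to translate an x86 code block to native code (the slow part)."""
    return f"native({x86_block})"

def execute_native(native_block: str) -> None:
    print("executing", native_block)

def load_cache() -> dict:
    return pickle.loads(CACHE_FILE.read_bytes()) if CACHE_FILE.exists() else {}

def run_program(blocks: list[str]) -> None:
    cache = load_cache()
    for block in blocks:
        if block not in cache:
            cache[block] = translate_block(block)  # pay the translation cost once
        execute_native(cache[block])               # every later run uses the native version
    CACHE_FILE.write_bytes(pickle.dumps(cache))    # persist across runs

run_program(["block_A", "block_B", "block_A"])
```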
lmcd - Monday, July 15, 2019 - link
It's not "ARM's x86 emulation," it's a Windows-specific technology. Windows uses a HAL for x86 on x86_64, and they added a similar layer for x86 on ARM. The times when the HAL isn't molasses are when it can drop in an ARM binary instead of an x86, and the binary is doing most of the heavy work. In essence, the "emulation" is fast when it's not emulation.Phynaz - Monday, July 15, 2019 - link
Are you high?
III-V - Monday, July 15, 2019 - link
>Enabling an L3 cache on Atom does two potential things to Intel's design: it adds power, but also adds performance. By having an L3, it means that data in the L3 is quicker to access than it would be in memory, however there is an idle power hit by having L3 present.
I'm pretty sure it's a net reduction in power consumption, no? The "only" real reason to not include a cache of any level/size is cost -- and cost is a really big deal.
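A quick back-of-envelope shows why an L3 can be a net win despite its idle cost (all latency and energy figures below are illustrative assumptions, not measured Tremont numbers): every L3 hit replaces a far more expensive DRAM access.

```python
# Back-of-envelope: average memory access time (AMAT) and energy per L2 miss,
# with and without an L3. All numbers are illustrative assumptions, not
# measured values for any real chip.

l3_hit_rate     = 0.60   # assumed fraction of L2 misses that would hit in an L3
l3_latency_ns   = 12.0   # assumed L3 hit latency
dram_latency_ns = 90.0   # assumed DRAM access latency
l3_energy_nj    = 0.5    # assumed energy per L3 access
dram_energy_nj  = 15.0   # assumed energy per DRAM access (incl. I/O)

# Without L3: every L2 miss goes to DRAM.
amat_no_l3   = dram_latency_ns
energy_no_l3 = dram_energy_nj

# With L3: hits are served quickly; misses still pay the L3 lookup plus DRAM.
amat_l3   = l3_hit_rate * l3_latency_ns + (1 - l3_hit_rate) * (l3_latency_ns + dram_latency_ns)
energy_l3 = l3_energy_nj + (1 - l3_hit_rate) * dram_energy_nj

print(f"AMAT:   {amat_no_l3:.0f} ns -> {amat_l3:.0f} ns per L2 miss")
print(f"Energy: {energy_no_l3:.1f} nJ -> {energy_l3:.1f} nJ per L2 miss")
# Whether this wins overall then depends on the L3's idle (leakage) power,
# which is exactly the trade-off the quoted article text mentions.
```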
mode_13h - Monday, July 15, 2019 - link
Is L3 usually SRAM? Perhaps making it DRAM would be a way to tackle both the cost & power issues, while sacrificing just a bit of performance.
Thunder 57 - Monday, July 15, 2019 - link
That would seriously kill performance.
IntelUser2000 - Tuesday, July 16, 2019 - link
L3 is SRAM unless otherwise indicated.
DRAM for CPU caches is called embedded DRAM, or "eDRAM" for short.
DRAM draws much more power at idle because the circuitry requires power just for refreshing data, which is unnecessary for SRAM.
SRAM is also much faster. eDRAM shows up in a handful of CPUs where absolute capacity trumps everything else, like the eDRAM used in Intel's Iris parts, or IBM's POWER CPUs, which already use so much power anyway.
mode_13h - Wednesday, July 17, 2019 - link
Uh, so help me understand... How is it that modern phones have like 4+ GB of DRAM, and yet you're saying power would be an issue for embedding like 0.1% of that?
IntelUser2000 - Wednesday, July 17, 2019 - link
You don't just put DRAM on-die and call it done. It doesn't work that way. You have to make a CPU-process-specific DRAM, and that's why they call it eDRAM. eDRAM also has a lot lower density compared to regular DRAM. It's still better than SRAM in density, but not by an order of magnitude (and in case it's not clear, 1 order of magnitude = 10x) or more. Intel's eDRAM, for example, had a density advantage of just 3x over their SRAM.
Yes, you have a point about the power compared to system memory, but that power comes on top of being quite a bit slower - slower in both latency and bandwidth. And the density increase isn't enough to displace DRAM anyway.
And reducing power in complex systems requires tackling it in multiple areas. Just because the power is acceptable in DRAM doesn't mean it makes sense to add it to the CPU.
Don't assume things without doing more research.
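Rough arithmetic with the ~3x figure quoted above makes the point (the L3 capacity and die-area numbers are assumptions for illustration only): the eDRAM density win is real but nowhere near the jump people expect from DRAM.

```python
# Back-of-envelope using the ~3x eDRAM-vs-SRAM density figure mentioned above.
# The L3 capacity and die area are illustrative assumptions, not real die data.

sram_l3_mib = 4.0               # assumed SRAM L3 capacity
sram_l3_mm2 = 8.0               # assumed die area occupied by that L3
edram_density_advantage = 3.0   # per the comment above (Intel's eDRAM vs their SRAM)

edram_mib_same_area = sram_l3_mib * edram_density_advantage
print(f"{sram_l3_mm2:.0f} mm^2 as SRAM:  {sram_l3_mib:.0f} MiB of L3")
print(f"{sram_l3_mm2:.0f} mm^2 as eDRAM: {edram_mib_same_area:.0f} MiB of L3")
# A 3x capacity gain per mm^2 is useful, but it is nowhere near a 10x+
# ('order of magnitude') jump, and it comes with refresh power plus higher
# latency, as discussed above.
```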
mode_13h - Wednesday, July 17, 2019 - link
Thanks for the explanation.
> Don't assume things without doing more research.
Well, I wasn't sure so I asked. You didn't have to answer, but I'm glad you did. Thanks.
Santoval - Tuesday, July 16, 2019 - link
L3 cache is always (or practically always) SRAM based. L4 cache is usually DRAM based. SRAM is much more expensive die-area-wise, but it's also quite a bit faster and more power efficient than DRAM. DRAM is not fast enough for an L3 cache.
Perhaps in the future (STT-)MRAM will instead be used for L4 cache, or maybe even L3 cache. MRAM is almost as fast as SRAM but it's much denser (thus much more cache can fit in each mm^2), it's more power efficient and it's also non-volatile. I have no idea if CPU cache non-volatility can become a useful feature, but I imagine it might.
mode_13h - Wednesday, July 17, 2019 - link
Persistent L3 cache would make it more efficient for a CPU to sleep and periodically wake up to do some small amount of work.
However, for security reasons, I'm thinking persistence wouldn't be utilized across reboots, etc.
HStewart - Monday, July 15, 2019 - link
I am very curious about the efforts Intel is making here; I believe there is more happening because there is a major change in chip design. It sounds to me like Lakefield has a fast core for compute and 4 Tremont cores for background tasks, which sounds ideal for a portable, always-on computer.
But Snow Ridge is a server-based product, and I believe it is more than just a Xeon D. There might be a reason for the Xeon Phi discontinuation, and Snow Ridge could be it. With a larger cache and more CPUs in a device, this could be an excellent low-power server. I would not doubt 16 or more cores are likely in a very small space.
Tom's Hardware has an interesting article about Intel's plans past Foveros, and part of it discusses Co-EMIB, which is designed for the datacenter. It sounds like it combines 4 Foveros stacks, each having 8 chiplets - to me that sounds like at least a 32-core system, and that is only with one core per chiplet, which I would assume could be more.
To me, Snow Ridge sounds like the next generation of the C3000 series.
HStewart - Monday, July 15, 2019 - link
Supermicro has a C3xxx-based system aimed at cloud computing:
https://www.supermicro.com/en/products/system/3U/5...
IntelUser2000 - Wednesday, July 17, 2019 - link
Snow Ridge is specifically set to act as a compute node for 5G base stations. It's not a successor to the C3000, which is general purpose.
They'll likely have a Tremont-based successor to the C3000 as well.
abufrejoval - Monday, July 15, 2019 - link
So that's the reason you can score Gemini Lake Atoms or J5005 all of a sudden again?
They used to be near impossible to obtain!
I got one last week; it turned out much more responsive than the J1900 and N3700 I already had, at least on Windows 10. Really nothing to complain about even on a 4K screen; I can't say that an Nvidia 2080 Ti on a 5GHz i7, or an 18-core 4GHz Xeon, is dramatically faster on Firefox or Office.
I had DDR4 SO-DIMMs lying around (1x 32GB and 2x 16GB) that seemed sad to waste (and were priceless a year ago), so I clicked 'order' when a full Mini-ITX ASRock MoBo could be had for much less than the price of that RAM at the time (it turns out it's quite reasonable these days).
Actually, after the really impressive initial tests on Windows I got greedy and ordered another two, to create an oVirt Gluster for functional testing but at zero acoustics and very little power.
Alas, the on-board Realtek has issues with more complicated things on Linux, some of which can be healed by cold-booting (warm boots seem to lose the network on CentOS...)
It's also not generally as snappy on a Linux desktop as it is on Windows, but that's not how I plan on using them.
The common theme on all Atoms since the J1900 (Bay Trail): those 8GB RAM limitations are 'market segmentation lies'; whatever RAM you can physically fit into those SO-DIMM slots, the Atoms will address it all. With DDR3, 16GB was never an issue; with DDR4, 32GB works just fine, too.
Few will actually feel tempted to load a $100 system with 32GB of RAM, but I thought it worth mentioning that you could.
I only wish they'd make these boards with some higher-quality 5Gbit NBase-T onboard NICs too, for say a $20 premium, because that would make them a nicely rounded fanless/noiseless low-power home appliance.
Jorgp2 - Monday, July 15, 2019 - link
Isn't the J5005 NUC discontinued?
abufrejoval - Tuesday, July 16, 2019 - link
Not using a NUC, because those have fans. These are ASRock Mini-ITX boards (€116 with VAT) in a chassis just big enough to house one board with up to 2 SATA SSDs, using a massive heat-sink.
100% silent and typically 6-8 Watts at the wall plug on idle.
The chassis (€40 with VAT) comes with a 65 Watt 12V power brick and an internal 12V -> ATX power conversion unit that isn't optimal at that load point, but who knows what will be in there two years down the road? I prefer modularity.
There are silent conversion kits for NUCs, but those are only worth paying for on high-end variants with i7 or similar.
Some of those configurations (NUC7i7BNH, for example) would be rather nice to have as Mini-ITX, alas Intel won't have it!
I have an i7-7700T (35 Watt) running as a pfSense appliance with a Noctua NH-L9i fan in a chassis like that, and you have to put your ear right next to it to hear anything. Under constant high loads (deep inspection of 400MBit traffic) it becomes noticeable but never a bother. Normal loads and even shorter spikes (several seconds) get completely buffered inside the high mass of the Noctua cooler.
Whereas even with 15 Watt NUCs I found them far too noisy on high loads and far too nervous with the fans: Almost an audible CPU graph... yucks!
If NUCs had Noctuas, I might go for them, especially with Thunderbolt/USB4 replacing PCIe slots. So far they are design over function.
mode_13h - Wednesday, July 17, 2019 - link
In addition to the ASRock board, there's also the Odroid H2, which is Gemini Lake-based and similarly priced. Obviously, case selection is far more limited, but at least you can go smaller than mini-ITX.
BTW, Intel makes certain NUC models for heavy-duty use (i.e. built for sustained high CPU load).
abufrejoval - Tuesday, July 23, 2019 - link
What's nice about the Odroid is an M.2 SSD slot: the ASRock only has SATA, but 4 ports via an ASMedia SATA controller. On the other hand, the entire SoC can't really handle anything much beyond 5-6Gbit anyway, so the major M.2 advantage may be size, and the M.2 price premiums have at least largely disappeared.
It also maintains the 32GB RAM capacity, which is an incredible bonus if you want to run a couple of low-compute yet significant-size (RAM+SSD) virtual appliances at lowest noise/energy. And RAM is so *cheap* these days: I wish they had 4x SO-DIMM Mini-ITX motherboards for 64GB of RAM at €300! (and with ECC)
Just having a 12V power supply is another huge benefit: the 12V-to-ATX conversion units take a lot of space and most likely waste electricity. The Odroid has it; I have not seen any affordable Mini-ITX variants that do.
Of course the Odroid has a lower bin SoC with fewer GPU EUs and 300MHz less clock at the same TDP: Perhaps not a killer criterion, but given a choice, I know where I tend to aim...
The two RealTek NICs may be a mixed blessing: I really want something that goes along with the 5Gbit theme here, either a 2.5 or 5Gbit NBase-T NIC to match what USB 3 does, too.
You can get 5Gbit Ethernet for USB 3.1, but those NICs easily double the price of the entire system. Realtek has a 2.5Gbit chip (supposedly even a 5Gbit one) that won't cost an arm and a leg, and which would be a much better match for these silent SSD systems you can have at affordable prices these days.
Sure, let there be "tape" in the form of >10TB HDDs on an Atom server node, if you need that space. But please, only turn it on during backups, otherwise let's stick to 100% passive and no-noise, please!
What's still missing: ECC. But that costs an arm and a leg, except with AMD. Yet, they don't offer 15-45 Watt TDP APUs for the current Ryzens just yet (or at least not in retail).
Intel overcharges like crazy for the C3000 Denvertons that support it, and Xeon 2000 parts in the 35 Watt TDP range are paper launches, judging from what Google can find in terms of purchasable motherboards.
The market is segmenting in ways that enable few cross-over products or synergies.
You either go down the performance road with normal DIMM sockets and CPU/APUs in the >65 Watt TDP range, or you have 15-35 Watt TDP soldered-on BGA systems with SO-DIMMs that aren't available in PC form factors like Mini-ITX. NUC is the only mobile-technology/desktop-use-case product you can get, and typically that means no control over fans and noise.
Expandability is all external via ThunderBolt/USB4 (good), but that market still has Mac prices.
I don't know if SO-DIMMs as a form factor are really a technical challenge vs. normal DIMMs (shorter traces should always be good at gigahertz speeds), but if they aren't, I'd sure like them to be universally adopted, just because space is at a premium, but shouldn't be in price.
mode_13h - Monday, July 15, 2019 - link
The real reason you couldn't get them was the supply crunch in Intel's 14 nm fabs. As those were lower-margin parts, they went to the back of the queue.
I think Intel's 14 nm supply issues have largely been alleviated by now. Perhaps somewhat due to a dip in demand.
abufrejoval - Tuesday, July 16, 2019 - link
I understood the first part.
But I would not have thought that these issues are over *and* that chips are being delivered to OEMs again: it takes a while for chips to go through fabs and into finished products, I'd say months of lead time.
And it's not like Gemini Lake was generally available all over the place. I really only found one source, and it only had a few.
In any case I count myself lucky I found and got them: They are really nice IT Lego bricks, when VMs or containers won't do!
mode_13h - Wednesday, July 17, 2019 - link
Well, I don't really know what the supply situation is like, but Odroid H2s have been back in stock for a while, I believe.
Lord of the Bored - Monday, July 15, 2019 - link
"beyond that is ‘increased ST Perf, Frequency, Features’ listed around 2023."Worst codename ever.
mode_13h - Monday, July 15, 2019 - link
Actually, if you look at the slide, the codename for the 2023 core is "'Next' Month". Probably a spellcheck autocorrect error, as they obviously meant "'Next' Mont".
IMO, NextMont wouldn't be so bad, except I don't know what you'd call the one after that. Kinda like GCN...
HStewart - Monday, July 15, 2019 - link
Somebody has the wrong document - if you search for 2018 Architexture Day you can find a document, and it states "Next Mont", not "Next Month":
https://www.servethehome.com/wp-content/uploads/20...
Ian Cutress - Tuesday, July 16, 2019 - link
Photo taken at the presentation vs. the after-presentation edited slide deck.
Qasar - Tuesday, July 16, 2019 - link
If I search for "2018 Architexture Day" all I find are articles that have nothing to do with CPUs, but if I search for "2018 Architecture Day" then I find some articles about CPUs... looks like hstewart STILL doesn't know how to spell it right.
HStewart - Tuesday, July 16, 2019 - link
Wrong - all I am indicating is that there is at least one place that does not have "Next Month" but "Next Mont". Somebody is playing a joke on words, as if to say "next month", joking about Intel's recent problems meeting schedules - recent meaning the last two years or so.
I am not saying it is bad here, and I am not saying which one is correct from Intel. But I seriously doubt Intel has a "Next Month".
Lord of the Bored - Wednesday, July 17, 2019 - link
No one is making jokes about Intel's scheduling issues. Just making fun of their typo.
mode_13h - Wednesday, July 17, 2019 - link
So, you're refuting Ian's claim that he took the photo AT THE EVENT?
Spell check autocorrect is a far more plausible explanation than Intel trying to make a joke (which doesn't even make much sense, let alone seem particularly funny) in their actual presentation.
mode_13h - Wednesday, July 17, 2019 - link
Lol... architexture... sounds like some GPU rendering hack, but I actually found a site by that name that seems to sell texture maps for buildings. Makes sense...
Maybe he was listening to this: https://en.wikipedia.org/wiki/Architextures
Lord of the Bored - Tuesday, July 16, 2019 - link
I stand corrected. 'S what I get for ignoring the pictures.
You're right, NextMont wouldn't be bad at all.
nobodyblog - Friday, July 19, 2019 - link
Apple's SoC is faster only for 30 seconds. Then expect almost a 50% drop in speed.
mode_13h - Monday, July 22, 2019 - link
Presumably, you're referring to the iPad, which has limited cooling capacity. Expect to see better sustained performance in a larger device, like a laptop.