49 Comments

  • Gothmoth - Friday, October 29, 2021 - link

    bringing grandpa back....
  • sirmo - Friday, October 29, 2021 - link

    "Bringing Geek back". Ok Boomer.
  • kkromm - Friday, October 29, 2021 - link

    OK, dumbass
  • smokefrog - Saturday, October 30, 2021 - link

    You clowns don't have anything worth saying.
  • YB1064 - Saturday, October 30, 2021 - link

    I hoped someone would have asked about integrating FPGA fabric on processors. It would be nice to have a reasonably priced consumer product that did this.
  • Threska - Friday, October 29, 2021 - link

    Not terribly new. I've dealt with equipment that had an x86 as part of its core, with a bunch of other functionality surrounding it.
  • sharath.naik - Saturday, October 30, 2021 - link

    Agreed... there's no fundamental architectural change here, so you can pretty much say their performance numbers are BS, or just come from cranking up power draw. If Intel needs to show that they have changed, they need to move to an on-chip high-bandwidth RAM cache that can double as the sole RAM in mobile processors, without the need for socketed RAM, at capacities similar to Apple's.

    Not just in some exotic server platform but in all their products.
  • mode_13h - Sunday, October 31, 2021 - link

    You really think people want to upgrade their whole CPU to get more memory? And what if some of the memory goes bad - discard the entire CPU and buy a new one?

    As for server products, probably the most RAM you can pack into a CPU package is < 100 GB. Server platforms already support multiple TB of RAM. So, the in-package stuff can't replace external memory, at least for the substantial majority of customers.

    Besides being more restrictive, in-package memory is also more expensive and won't help much with most tasks other than iGPU performance. That means it'll be a relatively specialized product, either for high-end laptops or maybe extreme gaming rigs and workstations looking to squeeze every last drop of performance from the CPU.
  • FunBunny2 - Sunday, October 31, 2021 - link

    "As for server products, probably the most RAM you can pack into a CPU package is < 100 GB."

    not every application is a clone of Google or Amazon or ... for many industrial/commercial applications, 100GB will do just fine. especially if the application is built with a real relational database, as opposed to the common practice of treating a 'database' as just a more tractable bunch of filesystem files. 5NF will take up, more or less, an order of magnitude less storage than the file-dump approach that is all too common. not only do you get a smaller storage footprint, but you get relational integrity for free; less edit code on clients. yum.
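The storage argument above can be illustrated with a back-of-the-envelope sketch. All table names and byte sizes here are hypothetical, chosen only to show the shape of the saving when repeated data is factored out into its own table:

```python
# Toy illustration of the normalization argument above: a flat "file-dump"
# layout repeats the full customer record on every order row, while a
# normalized design stores each customer once and references it by key.
# All sizes and counts below are made up for illustration.

customer_record_bytes = 200   # name, address, contact info, etc.
order_record_bytes = 40       # order-specific fields
key_bytes = 8                 # foreign key in the normalized design

n_customers = 10_000
orders_per_customer = 100
n_orders = n_customers * orders_per_customer

# Flat layout: every order row carries a full copy of its customer.
flat = n_orders * (order_record_bytes + customer_record_bytes)

# Normalized layout: customers stored once, orders carry only a key.
normalized = (n_customers * customer_record_bytes
              + n_orders * (order_record_bytes + key_bytes))

print(f"flat:       {flat / 1e6:.0f} MB")
print(f"normalized: {normalized / 1e6:.0f} MB")
print(f"ratio:      {flat / normalized:.1f}x")
```

The exact ratio depends entirely on how much repeated data the flat layout carries per row; with wider customer records or more orders per customer, the gap widens toward the order-of-magnitude figure claimed above.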
  • mode_13h - Sunday, October 31, 2021 - link

    > for many industrial/commercial applications, 100GB will do just fine.

    Some, yes. And we know that Intel has already announced a variant of Sapphire Rapids with HBM2, with many details yet to come.

    However, the post I replied to advocated for Intel to include in-package memory for "Not just in some exotic server platform but in all their products."

    In general, the more cores you have, the more RAM you need. I've found that even a simple task like compiling software can use in excess of 1 GB per SMT thread.
  • mode_13h - Sunday, October 31, 2021 - link

    > 1 GB per SMT thread.

    For instance, a 64-core EPYC acting as a build server could use in excess of 128 GB of RAM. Statistically speaking, maybe not. However, if you partition it into a bunch of VMs, then each VM needs to be provisioned that way. In fact, I'd give them at least 2 GB per vCPU.
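The provisioning arithmetic in the comment above works out as follows, using only the figures the commenter gives (1 GB per SMT thread for builds, 2 GB per vCPU for VMs):

```python
# Back-of-the-envelope RAM sizing for a 64-core, 2-way SMT server,
# using the per-thread figures quoted in the comment above.
cores = 64
smt_threads_per_core = 2
threads = cores * smt_threads_per_core   # 128 hardware threads

gb_per_thread_build = 1   # observed for compiling software
gb_per_vcpu_vm = 2        # suggested provisioning when split into VMs

build_server_gb = threads * gb_per_thread_build
vm_partitioned_gb = threads * gb_per_vcpu_vm

print(build_server_gb)    # GB for a fully loaded build server
print(vm_partitioned_gb)  # GB when partitioned into VMs
```

Either figure exceeds the roughly 100 GB estimated earlier as the most that could fit in a CPU package, which is the point being made.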
  • sharath.naik - Monday, November 1, 2021 - link

    You did not read my comment correctly. I specifically said an HBM RAM CACHE with capacities ranging from 16GB to 64GB. That does not eliminate the option of external RAM. For mobile, you can skip external RAM and just use the on-chip RAM cache as the sole RAM. The current architecture with socketed RAM draws too much power and is too slow for the number of cores in current CPUs.
    Just having an HBM RAM cache solves both the power and the performance issue. Apple already showed this with the M1. Not sure what the argument against this is. Intel just does not want to do it in one go, since they would like to milk the market more.
  • whatthe123 - Monday, November 1, 2021 - link

    The M1 Pro and Max don't use HBM; they use 256/512-bit LPDDR5. Apple has patents for everything, but that doesn't mean they're shipping HBM on consumer chips right now. Apple is the largest tech company on earth and even they aren't spending extra for HBM, and you're saying a company 1/10th the size should somehow have more money than Apple?
  • mode_13h - Monday, November 1, 2021 - link

    Having that much RAM acting simply as cache is very expensive, for some applications. It also means you need to power it + any external DRAM, so net power utilization would go up, when both are used.

    And it sounds like you did not read all the points I made. Not being able to upgrade memory without replacing your CPU, and having to replace your entire CPU (+ RAM) when any of the memory goes bad, would be major downsides of moving DRAM in-package.
  • wow343 - Friday, October 29, 2021 - link

    It's about execution rather than plans. Intel has shown very poor execution, so it will take time to really see the difference. I think they are behind, but they are positioned to really deliver if they buy into all that they are saying. But will Intel buy into this, or will it fail spectacularly as they did with 10nm? That's an open question, and it won't be answered until perhaps after even Pat retires. We are talking a decade time frame. A decade from now, are we going to look back at this and say not only did Intel deliver PC cores, but they are absolutely the go-to fab for smartphone manufacturers and custom hardware? Or are we going to be dealing with a fading giant that keeps shrinking market space rather than die space?
  • three_dots--- - Friday, October 29, 2021 - link

    I wouldn't say a decade; I think in 3-5 years it will be pretty clear whether they managed to pull it off. At least they are buying into their own idea, since there has been a lot of INTC insider stock buying this week.
  • wow343 - Friday, October 29, 2021 - link

    Well, it took them 8 years to kind of fix their 10nm process node. Now you are telling me this famously closed company is going to completely change its tune, license its IP and its fabs, give customers the same access to all of this as it does to its own products, and do that in 3-5 years? Well, I would love to believe you, but….
  • whatthe123 - Saturday, October 30, 2021 - link

    Most of that doesn't require much in terms of effort, so I don't know why that part would be surprising. They have been doing collaborative and open source work for years.

    The part nobody trusts them on is getting out a new shrink every year after being stuck on 10nm for so long.
  • wow343 - Saturday, October 30, 2021 - link

    I agree with you that the die shrink is the real question mark, but don't discount the way big companies move. There is an inexplicable slowness in changing direction when you are as big as Intel. Compare with AMD, which almost went bankrupt before Zen. The smaller you are, the more nimble you are.
  • tamalero - Saturday, October 30, 2021 - link

    Pretty sure that the stagnation and their reliance on AMD being down made them not separate.
    They used to have multiple fabs being updated at the same time, so as to always be on top.
  • Blastdoor - Saturday, October 30, 2021 - link

    Yup. The next 2 years represent the pivotal do or die moment for intel executing this plan.
  • nandnandnand - Friday, October 29, 2021 - link

    "So Zettascale in 2027 – it’s a huge internal initiative that's going to bring many of our technologies together for a 1000x gain in five years. That's pretty phenomenal."

    Big if true, and there's only one way to get that kind of increase so fast.
  • mode_13h - Sunday, October 31, 2021 - link

    > there's only one way to get that kind of increase so fast.

    Huh? What's that? Aliens?

    The performance impact of Moore's Law has been quoted as doubling every ~18 months. This works out to roughly a 10x performance increase every 5 years. So, to achieve 1000x, the resulting computer would probably be 100x as big, cost 100x as much ($50B?), and burn 100x as much power. No thanks.

    Let's say they do something miraculous and somehow deliver 100x the perf/$ and perf/W in 5 years. Still, can you *really* call a computer 10x as big, expensive, and power-hungry a win?

    I think they'd do well to set more realistic goals and try to over-deliver by a praiseworthy amount, especially in light of their recent run of over-promising and under-delivering.
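The scaling arithmetic in the comment above can be checked directly, assuming the quoted ~18-month performance doubling:

```python
# How much of a 1000x gain could come from scaling alone in 5 years,
# assuming the ~18-month performance doubling quoted in the comment?
months = 5 * 12
doubling_months = 18

# Perf multiplier from process/architecture scaling alone over 5 years.
scaling_gain = 2 ** (months / doubling_months)
print(f"{scaling_gain:.1f}x from scaling")   # roughly 10x

# Remaining factor that must come from simply building a bigger,
# costlier, more power-hungry machine.
target = 1000
brute_force = target / scaling_gain
print(f"{brute_force:.0f}x more hardware")   # roughly 100x
```

This is the source of the "100x as big, cost 100x as much, burn 100x as much power" figures in the comment.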
  • whatthe123 - Sunday, October 31, 2021 - link

    probably just fudging the numbers. AMD had a 25x-efficiency-in-5-years goal and hit that target with ~5x performance/watt gains compared to Bulldozer. They hit the full 25x with platform efficiency gains from stricter energy regulations and improvements made by laptop OEMs.

    Intel hasn't shipped their discrete Xe GPUs yet, so the matrix accelerators alone are already an enormous performance gain compared to CPUs, even if they're only comparable to what's already coming to the market. ASICs make the fudging possible.
  • mode_13h - Friday, November 5, 2021 - link

    > intel hasn't shipped their discrete Xe gpus yet so the matrix accelerators alone
    > are already an exponential performance gain compared to CPUs

    The 1000x comment was made in the context of zettascale. That means it's referring to the 1-2 EFLOPS Aurora (i.e. Sapphire Rapids + Xe-HPC) as the baseline.
  • nandnandnand - Sunday, October 31, 2021 - link

    Integrating RAM layers directly into the CPU or GPU in a 3D package.
  • mode_13h - Friday, November 5, 2021 - link

    Please show me how that's good for a 100x speedup.
  • DigitalFreak - Friday, October 29, 2021 - link

    Hey Pat, if you really want people to believe Intel has changed the first thing you can do is fire Ryan Shrout.
  • Slash3 - Saturday, October 30, 2021 - link

    This.
  • UltraWide - Saturday, October 30, 2021 - link

    That last Q&A about licensing cores is very interesting. It gives Nvidia's purchase of ARM more credibility.
  • mode_13h - Sunday, October 31, 2021 - link

    Read it closely. He only committed to licensing Intel cores for use in chips made by Intel's own foundry services (see the first 2 sentences of his answer). This is still very different from ARM's business model.
  • FunBunny2 - Saturday, October 30, 2021 - link

    "Moore's Law is alive and well, and as I said in the keynote, until the periodic table is exhausted, we ain't done."

    that's kind of interesting. I guess there's some element(s) out there, not in semi groups, that only Intel has discovered. smart.
  • Threska - Saturday, October 30, 2021 - link

    Graphene is everybody's darling. Or maybe going photonic.
  • back2future - Sunday, October 31, 2021 - link

    In the mass market there's the cost of information, the efficiency advantage of access, and the cost of energy for solving development problems and enabling communication for web support and transfer, and probably more fundamentals. If industrial technology keeps up with Moore's Law (per layer), but system efficiency gets lower because of bandwidth difficulties or administrative/social boundaries and counteracting interests, is the technical advance lower than expected in theory?
  • back2future - Sunday, October 31, 2021 - link

    Graphene transistors may reach switching frequencies in the 100s of GHz (at today's roughly $200/g for high-grade graphene, versus about $100-300/kg for electronic-grade silicon).
    Optical transistors are modelled for comparable switching speeds (maybe even low-THz speeds sometime), but lower power requirements (reduced cooling needs) because of no capacitance changes. (Depending on wavelength, efficiencies of signal laser diodes a few years ago were in a range of about 20-70% (laboratory <85-90%) from ultraviolet to infrared, and laser diodes might last for 10,000h (as mainboard components?), compared to 50k(-100k) hours for LEDs.)
  • mode_13h - Sunday, October 31, 2021 - link

    Optical computers have been the "next big thing" for many decades. I'll believe it when I see it, and it's sure not going to take over Intel's product portfolio a mere 5 years from now!

    Graphene has been another one of these darlings, for well over a decade. Again, what's needed is some kind of demonstration that it's production-ready and a path for mass production.

    Given that none of these technologies has appeared on the roadmap of any foundry, I think it's safe to say they won't be used in volume production this decade.
  • Threska - Sunday, October 31, 2021 - link

    True, and Huawei is added to the list.*

    *The nationalism in the comments is amusing.

    https://youtu.be/9aM3VxW69nw
  • mode_13h - Sunday, October 31, 2021 - link

    > If industrial technology keeps up on Moore's Law (per layer),
    > but system efficiencies getting lower because of bandwidth difficulties
    > or administrative/social boundaries and counteractive interests
    > the technical advance is lower that expected in theory?

    Depends on what applications you're talking about. There are lots of compute-intensive tasks where the heavy parts are sufficiently self-contained that the overheads you're talking about won't play a major factor. There are other types of workloads that won't scale as well, for sure.
  • back2future - Wednesday, November 3, 2021 - link

    Very common and widely known applications are web browsers, caught in the tradeoff between efficiency and security with user interaction (enabling cookies for every web page entered, for example). Highly optimized hardware (network and memory bandwidth, rendering, object detection, etc.) is sent to idle because the system rules against efficiency (which might be solved later with additional, special-purpose hardware, if available).
    Even for established (pro) software it took years, maybe even 1-2 decades, to adjust from single-core platforms to multi-core advantages (with configurable balancing).
    And platforms, while active, are returning to higher idle consumption with more capable newer hardware components (-> faster standby and resume, or improved power management with huge memory?), like Thunderbolt, USB4, PCIe 5, iGPUs and chipsets.
  • mode_13h - Sunday, October 31, 2021 - link

    It's interesting to see they included Charlie Demerjian in the small group of privileged invitees. He's long been a fierce critic of Intel, and I think he's gotten explicitly banned from some Intel events, as a result. I would love to hear what comes of the promised exchange with Intel, but I fear it'll be locked away, behind SemiAccurate's paywall. I sort of wish Intel would stipulate they'll answer his questions, only if he puts the entire exchange on the free part of his site.

    Also, I was glad to see Paul Alcorn on the list. He's long been one of the better writers at Tom's Hardware (and he has also contributed to this site on a few occasions).
  • del42sa - Sunday, October 31, 2021 - link

    yes
  • back2future - Sunday, October 31, 2021 - link

    What will be the 2025 difference between ARM RISC and x86 CISC concepts, with IP blocks on multi-tile chiplets on a socketed computing platform (special IP cores for ARM, like advanced NPUs, versus an x64 emphasis on flexibility with peripherals for customization)?
    Distributions are getting more hardware support through VMs, whether Android or Linux concepts on a Windows base (file systems and partition structures).
    What will RISC-V change in this market?
  • mode_13h - Sunday, October 31, 2021 - link

    > Distributions getting more hardware support through VMs,
    > being Android or Linux concepts on Windows base
    > (file systems and partition structures).

    Delegating hardware support to the host is surely a boon for niche operating systems, like FreeBSD or maybe even more specialized ones. However, I doubt very many such users will use *Windows* as the VM host.

    I think the ability to run Android apps on Windows was a defensive move against Chrome OS.
  • back2future - Sunday, October 31, 2021 - link

    The interesting part will be comparing user experience and performance with hardware passthrough* across the available virtual machines: storage r/w on top of NTFS, exFAT, or ReFS file systems, or direct access to Linux-optimized fs partitions?
    *Compiling a current (07/2021) Chromium OS was said to need 150-300GB of free storage and at least 8GB of RAM for linking; building Android 12.0 from source seems to require about 400GB.
  • alufan - Monday, November 1, 2021 - link

    Wow, so false. This guy doesn't have a sincere bone in his body; his body language and facial expressions tell the truth.
  • mode_13h - Monday, November 1, 2021 - link

    You should think about what it's like to be in meetings all day, every day, and delivering the same sorts of messaging all the time. It's hard to stay energized and enthusiastic, much less passionate about it. And you can't just go off-script. You need to be disciplined and stay on-message. That just saps one's energy even more.

    I don't care what's in his heart. All I care about is what he does. If he runs Intel in a way that it competes based on excellence and execution, and not shady & anti-competitive business practices, then he earns full marks from me.
  • alufan - Tuesday, November 2, 2021 - link

    Well, I think the "special" work they convinced MS to do on Win 11 proves how he is going to do it. That, and of course the fact that he can show slides from Day 1 that put Intel CPUs in a very favourable light. It smacks of the release of Intel's last so-called HPC chip, which was released hurriedly the day before AMD's just so they could be on top of the performance charts for a day, knowing full well they didn't stand a chance in hell of actually beating the AMD part.
  • mode_13h - Friday, November 5, 2021 - link

    > I think the "special" work they convinced MS to do on win 11

    Microsoft and Intel have always been close partners. I don't see anything shady about that, so long as there's nothing in the arrangement that tries to block AMD or ARM. And if there were, I doubt Microsoft would agree to it. MS doesn't want to run well *only* on Intel chips.

    As for the timing of launches and announcements relative to competitors, that's the sad reality of the industry these days. Everyone is trying to steal the spotlight. Nvidia is the king of sabotaging its competitors' launches, and AMD does it too (probably in response to Nvidia?).

    Don't get distracted by the PR posturing. The real test is on execution & delivery. Will Intel of the future only be able to claim the performance crown by burning more power? That's the sort of standard I'm applying.
  • Vink - Friday, December 10, 2021 - link

    Welcome back Pat, and kick ass to all these bastards! thx.
