
  • CajunArson - Friday, August 21, 2020 - link

    "The only downside here is that Intel hasn’t spoken much about the glue that binds it all together. "

    Uh... you apparently missed the last several years of EMIB and Foveros? Not to mention all the products that have already used these interconnects?

    P.S. --> Calling EMIB and Foveros "glue" compared to AMD's 1970s-era copper traces in plastic "glue" is like calling the SpaceX Dragon 2 just another capsule like Mercury 1.
  • hetzbh - Friday, August 21, 2020 - link

    You are mixing up 2 different things: you are talking about glue in terms of vertical "binding" between layers.
    Ian is talking about chip-to-chip communication. If you know AMD tech, think of things like Infinity Fabric.

    BTW, Intel has had experience with "glue" - look at Kaby Lake G, which had a Vega chip in it. In that case it was simply a PCIe connection (and not a fast one, either).
  • Atari2600 - Friday, August 21, 2020 - link

    Indeed. [/Tealc]

    Without a basis for communication, Intel are whistling in the wind here.

    Look at the power budget for AMD's Infinity Fabric when it comes to core vs. uncore. When the number of chiplets rises, so does the uncore power budget (and it's not far off linear, either).

    https://www.anandtech.com/show/13124/the-amd-threa...

    Intel's many-chiplet strategy (especially without a magic communication system) is going to blow the entire power budget on the chiplets talking to each other to say "I've no available power to do any work".
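
    To make that concrete, here's a toy back-of-the-envelope model in Python (every constant is a made-up illustrative assumption, not a measured figure) of how a fixed package budget gets eaten as fabric/uncore power grows near-linearly with chiplet count:

        # Toy model: fixed package power budget split between cores and fabric.
        # All constants below are illustrative assumptions, not real measurements.
        CORES_PER_CHIPLET = 8
        FABRIC_W_PER_CHIPLET = 8.0   # uncore/fabric watts added per chiplet (assumed)
        BUDGET_W = 180.0             # total package power budget (assumed)

        for chiplets in (1, 2, 4, 8):
            fabric_w = FABRIC_W_PER_CHIPLET * chiplets      # near-linear uncore growth
            per_core_w = (BUDGET_W - fabric_w) / (chiplets * CORES_PER_CHIPLET)
            print(f"{chiplets} chiplets: {fabric_w:.0f} W on fabric, "
                  f"{per_core_w:.2f} W left per core")

    Under a fixed budget, every extra chiplet trades core power for interconnect power, which is exactly the concern with a many-chiplet design.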
  • eek2121 - Saturday, August 22, 2020 - link

    AMD is actively working on improving power efficiency in future generations of chips, however.
  • Sahrin - Saturday, August 22, 2020 - link

    Not only that, but AMD has patents on inter-die communication tech dating back to 2003, and this is not part of the AMD64 cross-license. AMD has made massive strides in cutting IF power consumption just between Zen and Zen 2.

    It will be interesting to see the IP 'interconnect' wars.
  • lmcd - Friday, August 21, 2020 - link

    As if either AMD or Intel would deliver a custom interconnect for a low-volume part?
  • eddman - Saturday, August 22, 2020 - link

    The EMIB on Kaby Lake-G is used to connect the GPU to its memory, not CPU to GPU.

    https://www.extremetech.com/wp-content/uploads/201...
    https://www.techspot.com/review/1654-intel-kaby-la...
  • eddman - Saturday, August 22, 2020 - link

    https://fuse.wikichip.org/news/1634/hot-chips-30-i...
  • Ian Cutress - Saturday, August 22, 2020 - link

    hetzbh is correct. I was referring to the fabric interconnect, not the die-to-die connections. It literally talks about high-speed fabric protocols in the next sentence and the rest of the paragraph. It'd be a bit odd to mention just the physical interconnect in that context on its own.

    Besides, you've commented *A LOT* on my articles where I've gone into detail about all the different levels of physical interconnect that Intel have been developing. Literally in the related reading section underneath are several of my articles on those exact topics.

    Who exactly are you trying to fool?
  • psyclist80 - Saturday, August 22, 2020 - link

    It was probably the glue comment; it triggered him. It was a clever play on words considering Intel's previous comments regarding MCM. What a difference 3 years makes.
  • Spunjji - Monday, August 24, 2020 - link

    Comment-and-run, no less. No response, no retraction. It's getting increasingly difficult to tell whether stuff like this is just people getting up in their feelings, or a group purposefully disrupting the comments to provide an illusion of commenters rejecting a reviewer's "bias" :/
  • at_least_3db - Monday, August 24, 2020 - link

    The comment was in part a slap back at Intel's comment about AMD's "glued-together solution" a few years back. It was meant as an insult to AMD, but how the tables have turned.

    Secondly, as others commented, there is a difference between the fabrication technology and the protocol. See AMD's Infinity Fabric for an example of what we want.
  • Quantumz0d - Friday, August 21, 2020 - link

    All show; this time they got someone to make new BS slides to look fresh, and got some new shiny trash to show off. Dafaq is that gamer and creator BS chiplet design, on top with AI? What AI are you going to stuff in, Intel? That Intel HW scheduler with big-little garbage to make up for the big-core SMT performance?

    Man, they are really out of ideas. Ian's "which glue" question is perfectly apt here; we know AMD processors have limitations around the Infinity Fabric and how the IMC works alongside the cache and other IP on the whole Ryzen chip, and how AMD is improving it as they learn. With Intel, none of that has been shown, and where are the products? Frankly, copying AMD is what they are doing now: chiplet design and now this. What else, wait for 7nm? Xe was outsourced to TSMC; if it fails then Raja can retire.

    Let's see what Ice Lake delivers first on the Xeon platform, and how RKL wants to stay relevant with its 5.0GHz clocks but lower core count due to the non-scalable ring bus on the power-hungry 14nm++ design. 14nm++ is a technology feat for sure, competing with AMD parts on 7nm, but it's very old now, and SMT performance is already taking a hit...
  • Lord of the Bored - Friday, August 21, 2020 - link

    "copying AMD is what they are doing now"
    Hey, it kept them relevant in the AMD64 days and bought them enough time to engineer their way out of the Pentium 4-shaped hole they'd dug. Maybe it can save them this time too.
  • nathanddrews - Saturday, August 22, 2020 - link

    Hey, if people can get AMD's class-leading and affordable high core count and next gen features alongside Intel's class-leading single-threaded frequencies, that would officially be the best CPU ever made. So yeah, it would certainly *save* Intel.
  • FunBunny2 - Friday, August 21, 2020 - link

    This talk about chiplets, Intel, AMD, et al., reminds me of my first RS/6000 install. 5 chips, I think... well, the wiki says that the multi-chip versions from the early 90s (when I was involved) had at least 6 chips.

    "The lower cost RIOS.9 configuration had 8 discrete chips—an instruction cache chip, fixed-point chip, floating-point chip, 2 data cache chips, storage control chip, input/output chip, and a clock chip. "
    the wiki

    Kinda the same idea.
  • brucethemoose - Friday, August 21, 2020 - link

    Yep, it's kinda back to the future.
  • vFunct - Saturday, August 22, 2020 - link

    And then it'll all go back to monolithic chips for the generation after this, perhaps through wafer-scale integration like we're seeing with some designs.
  • Alexvrb - Sunday, August 23, 2020 - link

    That's silly. Do you have any idea how expensive a wafer is? If they go monolithic again, it will be after they run out of new process nodes to design for. In the meantime, they're planning on doing what AMD is already doing - chiplets. It lets you tweak parts of the whole without a complete redesign, and it lets you mix and match processes based on what you have ready - you can use what is best for a given design, cost, etc.

    In fact, if you look at their fancy pictures, Intel is thinking of taking it a step further. Which, as others have said, will be very challenging. The "fabric" to make it all work together as one is a major challenge. AMD has spent a lot of time getting it right, even without subdividing their chiplets as Intel is discussing. Further breaking those components up multiplies the challenge. AMD hasn't even released a chiplet-based APU (yet), mainly due to how much power it would consume - Renoir made huge power gains, and they would have burned all that up in fabric power if they had CPU and GPU chiplets instead of a monolithic die. It's why they started with chiplets on the desktop and up.

    Anyway, we'll see, I'm sure future fabric designs will enable ever-increasing flexibility. Intel has a lot of resources to throw at the problem, and AMD has a lot of experience already. So it should be fascinating to see what they both cook up in the coming years.
  • JayNor - Wednesday, October 7, 2020 - link

    Intel's heterogeneous accelerator solutions using CXL are claimed to be simpler than, for example, requiring UPI or IF on all the accelerators.

    It isn't clear to me that CXL would imply greater power, since the biased cache coherency is intended to reduce the traffic between CPU and accelerator.

    https://www.nextplatform.com/2019/09/18/eating-the...
  • PixyMisa - Friday, August 21, 2020 - link

    Even POWER10 has multiple chips - the main CPU (with one or two CPU dies), an L4 cache chip (960MB), and a memory controller chip.
  • Kevin G - Sunday, August 23, 2020 - link

    POWER has been doing MCMs since POWER2. I think POWER3 was the only processor in the lineup that was offered exclusively in a single-chip package.

    Most of those have used traditional wire bonding, and IBM's high-end stuff still uses ceramic packaging. AMD is leveraging wire bonding in EPYC.

    Intel is looking to go the next step with interposers and EMIB as appropriate. That cuts down on power, shaves a few hairs off of latency, and supports higher clocks and wider interfaces. It would be a win-win in a technical sense if thermodynamics weren't so difficult between high-power dies and costs were more reasonable.

    Both AMD and nVidia have indicated interest in using interposers and/or EMIB as appropriate. Using numerous smaller dies to build something like a GPU simply makes sense if you can cool the resulting slab of silicon on a package.

    Really, it's just a matter of time as to who gets a product like this to market first.
  • vFunct - Saturday, August 22, 2020 - link

    Chiplets are different because they're on the same package. Otherwise you can say Intel's 8086 + 8087 did the same thing...
  • TristanSDX - Friday, August 21, 2020 - link

    Chiplets are best only for saving costs by reducing investment, where cores with a high pace of evolution must work with cores with a low pace of evolution. High-perf specialised cores (GPU, ray tracing, AI, CPU) have a high pace of evolution, so they should be integrated in a monolithic way.
  • Flunk - Friday, August 21, 2020 - link

    The reason they're doing this is to reduce die size, which increases yield, even if it means they need a bunch of dies.
  • FunBunny2 - Saturday, August 22, 2020 - link

    "which increases yield"

    Well, it increases the number of dies printed per wafer, but, IIRC, as node size has shrunk, the % of good dies per wafer has decreased, so it's always a see-saw between gross yield and shippable yield. By how much at each step, I don't know. It is legend that printing at a node a step or two or three larger than the current one does wonders for yielding good dies.
  • sor - Saturday, August 22, 2020 - link

    That’s the beauty of chiplets. If you have a massive die with CPU, GPU, I/O, etc. all together, a flaw in any one of these can possibly waste the entire die. Fab four distinct CPU chiplets in the same area instead and you end up with three good CPUs and one bad.

    Additionally you get the ability to mix and match node sizes for each component to their optimal yields.
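
    To put rough numbers on that, here's a quick sketch using the classic Poisson defect-yield model, Y = exp(-D*A); the defect density and die area below are assumptions chosen purely for illustration:

        import math

        D = 0.2        # defects per cm^2 (assumed)
        A_MONO = 6.0   # monolithic die area in cm^2 (assumed)

        y_mono = math.exp(-D * A_MONO)          # one big die: all of it must be clean
        y_chiplet = math.exp(-D * A_MONO / 4)   # quarter-size chiplet

        print(f"monolithic yield:  {y_mono:.1%}")    # ~30.1%
        print(f"per-chiplet yield: {y_chiplet:.1%}") # ~74.1%

    With the same total silicon area and defect rate, roughly three out of four chiplets come out good while more than two thirds of the big monolithic dies don't, and the flawed chiplets can often still be binned down, as the next comment describes.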
  • Alexvrb - Sunday, August 23, 2020 - link

    They have had strategies for dealing with flawed chips for decades. When you get flaws you often still get a lower-tier chip. It depends on how much redundancy there is and where the flaws are, but for a recent monolithic example, fusing off CPU and GPU cores in Renoir produces different models... and those fused-off cores don't have to be in the same place from chip to chip! So the flaws can be in various places. GPUs are another great example. That's why they subdivide them into CUs, and it's also why they don't sell many fully-enabled large-die chips to consumers. They can cut them down as needed, both to meet market needs and to sell a larger percentage of flawed chips.

    Also, just using chiplets doesn't really "solve" this problem, although it does enable additional flexibility in using flawed chips. A good I/O chip ends up in HEDT or server, a slightly flawed one ends up in a dual-chiplet Ryzen, and a more-flawed one still might be OK for a single-chiplet Ryzen or an X570 chipset. Ditto for the chiplets themselves: you have 8-core chiplets with 6 or 4 cores enabled, which could be one fully enabled CCX or two half-working CCXs. Lots of options for flawed chips.
  • Arbie - Friday, August 21, 2020 - link

    I got as far as "2024+". Not saying that future vision isn't important, but Intel's recent ability to project even six months has been disappointing. Their public pronouncements of plans so much more distant are simply uninteresting.

    It's a shame because until about 2017 I believed what they said.
  • Kangal - Saturday, August 22, 2020 - link

    Intel peaked in 2014-2016.
    That's when they were the world's leader in 14nm fabrication, way ahead of Samsung, TSMC, GlobalFoundries, and SMIC. And that's when their Skylake architecture was miles ahead of AMD's Piledriver architecture, and the very outdated ZX C3+ by Zhaoxin. And it was even competitive against ARM, which at the time was only pushing the troubled Cortex A57 on planar 28nm wafers.

    Just think about their first consumer 8-core processor, the i7-5960X from 2014, and compare that to their latest 8-core processor, the i7-10700KF for 2021. There's been notable improvement in terms of power draw/thermals, comparing the old 280W to the new 210W, yet both desktop chips are still classed in the high-watt power category. So what really differentiates them is performance. Within a timeframe of 7 years, they've only managed a rough ~1.30x increase. You can deduce that figure by looking at the Cinebench R20 figures: roughly 5100 points for the new chip compared to roughly 3900 points on the old processor.

    In 2017, AMD practically caught up to Intel. Sure, their Zen1 architecture was inferior to the Skylake architecture at the time, and the 16nm lithography they used was also inferior to Intel's 14nm+ wafers. But with 12nm and Zen+ they closed the gap. And later, with 7nm and Zen2, they overshot Intel by a clear margin. Now AMD is poised to bring together 7nm+ lithography, Zen3, and an RDNA-2 iGPU in a few months, while Intel is still stuck manufacturing their old offerings and promising that better things are on the way. Oh, how the turntables have...

    If you think that's interesting, it's also been said that Zhaoxin plans to take AMD's IP and recent advances in order to manufacture x86 PCs with the aid of the SMIC foundry. It is aimed at the mainland Chinese market, where they are shielded from international lawsuits and asset freezing. They would eventually like to export this to other countries and gain revenue (outside of first-world western markets, presumably). So the 2014-2016 world leader Intel may be matched (or even surpassed) by some no-name competitor. And I'll name-drop RISC-V here too, with the promising-looking SiFive U8 processor.

    And that's ignoring the elephant in the room, the current market leader, ARM. Heck, the proprietary solution made by Apple last year in the form of the Thunder CPU microarchitecture itself is jaw-dropping. But we're expecting to soon see even further advanced lithography from Samsung-TSMC and the ARMv9 architecture.
  • ads295 - Saturday, August 22, 2020 - link

    Damn, a 30% increase in 7 years? Never thought about it that way, but in CAGR terms that's an improvement of less than 4% yearly! That is abysmal by any standard...
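
    For anyone who wants to check the arithmetic (using the Cinebench R20 figures quoted above):

        ratio = 5100 / 3900              # total uplift: ~1.31x
        cagr = ratio ** (1 / 7) - 1      # compound annual growth over 7 years
        print(f"{ratio:.2f}x total, {cagr:.1%} per year")   # 1.31x total, 3.9% per year

    About 3.9% a year, so "less than 4% yearly" checks out.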
  • Quantumz0d - Saturday, August 22, 2020 - link

    SMIC and AMD IP through Hygon, sold back to the West? Lol, not happening. Not even Dems will allow that to happen; the current govt blocked ASML tech, and even if ASML sells to China they will be robbed just like how maglev tech was stolen from Germany. U.S. pride will be rubbed in sewage if that happens. They are already in the TSMC AZ fab deals and the ARM DARPA deals, and Intel is not going to sit idle and watch it happen; they are already into that big-little design for the ULV and LP CPUs and even S desktop SKUs, probably to get the SMT performance up there with AMD, as Ryzen is very powerful in SMT vs Intel. Plus many nations are against China, esp with the current plandemic.

    And on top, RISC-V is a unicorn at this point, lol. People were saying x86 is dead, it's old and all, yet here we are: the EPYC 7742 has been unchallenged for 2 years, and ARM ThunderX2 is not going to take any of their market either, as AMD and Intel will secure years-long contracts. Only Graviton matters since it reaches the masses through AWS EC2, but it's not going to break AMD/Intel at all; the R&D, supply, and customer demand are not there for ARM, and software is an entirely different story. Apple is in their own bubble; it's not a DC or HPC market at all. Nvidia buying ARM is the only concern at this point for both Intel and AMD, but as I said they are not going to sit idle and let it happen. Ice Lake is already going for improvements, and Intel is going the chiplet route for cheaper and faster capture of the market like AMD, copying them essentially since they are out of ideas, having not planned for this while sitting on a gold mine of cash.
  • Kangal - Saturday, August 22, 2020 - link

    The point was that it was in the works.
    Now I'm not saying it's probable, but certainly possible that a Nameless Brand from Mainland China surpasses Intel in the coming years. They've been quite hostile in taking trade secrets, bribing experts, and implementing huge projects.

    Whether both Republicans and Democrats agree on a bipartisan basis is a moot point. And selling it back is the last step: obviously not to first-world western nations, but money talks, so plenty of second-world, third-world, and Eastern nations will bite the bullet. It's the same story with Huawei's success: they were struggling with 3G tech in the Mainland, then out of nowhere they announced cheap international contracts and had all this 4G tech that mimicked what was being developed in Bell Labs. There was huge funding provided to them by the state. We then found out Nortel (one of the world's leading telecoms) was breached several times; they ended up losing market share and value; some of their R&D staff popped up working for Huawei; Nortel declared bankruptcy; and Huawei is now listed as the world's leading telecom manufacturer.

    The world is no longer dependent on x86 and the advancements of Intel/AMD, and no longer dependent on Microsoft; not like they used to be back in 2010 or earlier. So it's likely that ARM will continue improving, to the point where most of the server market share that was trapped under MS running on Intel/AMD may shift radically to running Linux on ARM chipsets. Amazon's offerings have truly been the only proper attempts so far, and they've been mighty successful this early at it, so we will see more of this coming from Amazon and the industry at large. Not sure how you think EPYC is unchallenged; maybe you forgot this is about price, thermals, performance, and support. The server market is big money, and you know who likes money: Apple. They're poised to also make a splash and shake up the market if they choose to do so; they certainly have the hardware and software for it. That's why it would be disingenuous not to mention them, especially with their announcement to scrap Intel.

    RISC-V is a moonshot, but one that's worth mentioning and keeping an eye on. And Intel probably won't recover properly until 2022, when they get their fabs running efficiently. So while Intel hasn't caught up to AMD, and won't for some time, how can we dwell on that when the bigger danger seems to be ARM, where they have No Chance* of catching up, let alone surpassing? This feels like the "iPhone Moment" that inevitably led to the demise of Nokia, BlackBerry, etc., etc.

    PS: I don't think Nvidia will be cleared by antitrust to buy ARM, I think that could fall back into the hands of the British. I know the Japanese aren't happy about the last 5-10 years with the loss of their Asian marketshare and resurgence of Chinese corporations and they probably blame the USA for that happenstance.
  • Zoolook - Monday, August 24, 2020 - link

    Nortel never recovered from the IT crash in late 2000, and finally went under after the 2008 financial crisis. What they did have in LTE patents wasn't anything the other telecoms went after; the portfolio was bought by a consortium including Apple, Microsoft, et al., who wanted to break into the telecom patent pool.
    Souped-up 3G networks came out around 2010, but real 4G networks weren't up and running until 2013, long after Nortel floundered. If anyone wanted leading-edge 4G research they should have hacked Ericsson and Cisco, and maybe they did; we don't know.
    Nortel was strong in fiber optics, and that is also what the Chinese have been acknowledged to have hacked.
  • Spunjji - Monday, August 24, 2020 - link

    "Not even Dems will allow that to happen" 🤡
    "plandemic" 🤡🤡🤡

    I'm so glad that z0d's stunning political insights are never far from view.
  • DareDevil01 - Saturday, August 22, 2020 - link

    It's even more abysmal when you compare it to AMD's year-on-year IPC gains since Zen 1.
  • Spunjji - Monday, August 24, 2020 - link

    Yeah, I'm amused by how far out into the future they're trying to project and the implications that has for their near-term releases.
  • ksec - Friday, August 21, 2020 - link

    Intel has been posting a lot of these over the years with no timeline to ship, so I call all of this marketing more than anything else.

    Contrast that with TSMC, which posts its work and intended shipping times to clients and investors, so they know exactly when something should be ready.

    Not to mention I fail to see how this will save its clients any cost, especially if you consider Intel's margins.
  • Mike Hagan - Friday, August 21, 2020 - link

    Near the end of the 6th paragraph the acronym "IP" is used, but I cannot find it defined anywhere in the article. What does it stand for?
  • IanCutress - Saturday, August 22, 2020 - link

    IP is a standard industry/global acronym for Intellectual Property. In this case we're talking about a portion of an SoC: a CPU core, a GPU core, a TB4 controller, a USB controller, etc.
  • quadibloc - Saturday, August 22, 2020 - link

    And here I thought AMD was only putting 14nm I/O on its chips because it was locked in to that stupid contract with GlobalFoundries.
  • Spunjji - Monday, August 24, 2020 - link

    Even if they weren't, it would probably still save them meaningful amounts of money over the higher costs of manufacturing at 7nm (plus the indirect cost of eating up portions of their wafer starts).
  • KayM - Saturday, August 22, 2020 - link

    No matter what it is... benchmark results are what consumers are looking for.
  • GeoffreyA - Saturday, August 22, 2020 - link

    Can't help but laugh at the terms Marketing comes up with, such as "Purpose Built Client." But I think "Mobile Go-getter" takes the prize.
  • Spunjji - Monday, August 24, 2020 - link

    I enjoyed that one too. I guess the alternative is the Immobile Sit-and-waiter 😂
  • MamiyaOtaru - Saturday, August 22, 2020 - link

    I love how the "gamer" configuration has extra GPU chiplets, as though a gamer would (by choice) ever use something other than a discrete GPU, or would want to forgo extra CPU cores to load up on more anemic Intel integrated graphics.
  • Rudde - Saturday, August 22, 2020 - link

    I prefer the content creator without media acceleration.
  • Spunjji - Monday, August 24, 2020 - link

    Yeah, that's a real ?!? moment
  • DigitalFreak - Sunday, August 23, 2020 - link

    That was my thought as well. For Intel's sake, I hope these were just marketing BS and that they don't actually believe that.
  • Spunjji - Monday, August 24, 2020 - link

    "It's shit, but with enough of it, it's... a lot of shit"
  • JayNor - Saturday, August 22, 2020 - link

    Don't U.S. patents expire after 17 years?
  • KAlmquist - Saturday, August 29, 2020 - link

    U.S. patents used to expire 17 years after being granted, but this was changed in the 1990s in order to comply with WTO rules. Patents in the U.S. now expire 20 years after being filed.
  • JayNor - Saturday, August 22, 2020 - link

    Their FPGAs already have a chiplet system. I can imagine them creating an XeMF chiplet and using the CXL-based Xe Link communication from their Aurora node design.
  • zamroni - Saturday, August 22, 2020 - link

    Chiplets are suitable for desktop and server, where power consumption is not a big concern.
    Even AMD doesn't use chiplets for its laptop processors.
  • JayNor - Sunday, August 23, 2020 - link

    Why is AMD's sprawling chiplet layout considered to be a technological achievement? It seems to me it is just a trade-off between performance and yield. If it solved both issues, shouldn't we be seeing 5.3GHz AMD chips?
  • Spunjji - Monday, August 24, 2020 - link

    It *is* just a trade-off between performance and yield. It's still considered to be a technological achievement because:
    1) Despite the inherent compromises in such a design, it still outperforms its competition from a power/performance perspective.
    2) It puts them in a position where they've already trialled the tech and will be on their 2nd or 3rd generation before Intel hit their 1st.

    We aren't seeing 5.3GHz AMD chips because their core design and/or the manufacturing process don't really support it.
  • JayNor - Tuesday, August 25, 2020 - link

    I'm not so sure the power/performance can be claimed as a virtue of the chiplet design. In fact, the sprawling chiplets require longer traces with higher energy per bit than, for example, Intel's EMIB interconnections to FPGA chiplets.

    Yes, TSMC's 7nm process uses less power than Intel's 14nm process, but I think that has nothing to do with their chiplets.

    I suspect we'll see that the power/performance differences have disappeared with Ice Lake Server, along with the PCIe 4.0, 8-channel memory, and process differences.
  • PeachNCream - Sunday, August 23, 2020 - link

    Weird how this customization at the CPU package level is nearly identical to the customization of compute technology to suit a specific role that has been happening for a very long time. Need more CPU power, add more CPU. Need more graphics, add more GPU. Need more I/O, add more I/O. The only difference is that instead of replacing or adding an expansion card in some ISA/PCI/AGP/PCIe slot, you're seeing it done by Intel at the fab before it reaches the end user. In a way, it's just moving the needle a little regarding who handles that system customization, and only barely at that, since the buyer is going to be the one picking whatever chiplet capabilities they get.

    The more things change, the more they stay the same.
  • ender8282 - Sunday, August 23, 2020 - link

    "a gamer might want to focus purely on the graphics"

    Does Ian know something about Xe that we don't? I can't think of any gamer who would want Intel to focus purely on graphics.
  • Duncan Macdonald - Sunday, August 23, 2020 - link

    A monolithic design like Intel's has the advantage of faster internal communication, but it comes with a number of costs: yield, design effort, and the use of high-performance technology to do slow jobs.
    The yield cost comes about because the larger a chip, the fewer that are perfect; a wafer of AMD CPU chiplets with a 10% failure rate becomes roughly a 40% failure rate with the larger Intel CPUs.
    The design-effort cost comes from the fact that the whole chip has to be re-laid-out for any change that increases the size of an area in the chip; with the AMD chiplet design, changes to the CPU chiplet do not require changes to the I/O die and vice versa. Changing the number of CPU cores is also a major job in a monolithic design, whereas AMD can use the same basic design from 8 cores to 32 cores just by adding more chiplets (Ryzen), or from 8 to 64 cores (Threadripper, EPYC).
    A monolithic design also requires the same technology be used for the whole chip, so slower areas such as the memory interfaces still have to use the same high-speed, high-power process as the CPU cores. AMD is able to use a slower, less power-hungry, and cheaper process (14nm or 12nm) for the I/O die while using the high-performance process (7nm) for the CPU chiplets.
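
    A quick check of that yield arithmetic (assuming four chiplet-sized areas per monolithic die and independently distributed defects):

        good_fraction = 0.9 ** 4   # all four areas of the big die must be defect-free
        print(f"monolithic failure rate: {1 - good_fraction:.0%}")   # ~34%

    That lands at roughly 34%, in the same ballpark as the 40% figure above.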
  • yetanotherhuman - Monday, August 24, 2020 - link

    Now with special Intel Glue-Not-Glue™
  • londiste - Monday, August 24, 2020 - link

    I think most of the failure in what you are describing is about the process node and the failure to manufacture and bring new stuff to market en masse.

    Architecturally speaking - especially the core and iGPU - while Skylake-derived Comet Lake is still what we get on the desktop, Intel has had Ice Lake with Sunny Cove cores and Gen11 iGPUs since last year. Tiger Lake with more improvements is expected early next year, if I remember correctly.

    Zen and Zen+ had shortcomings compared to Skylake-ish designs, and while Zen2 had a nice jump in terms of "IPC" (technically, single-core performance at iso-frequency), Ice Lake and Sunny Cove have proven to be a bit above that, but let down by Intel's failing 10nm process.

    Upcoming Zen3 and Tiger Lake might be closer than we are assuming them to be. Unless something has changed considerably, we are out of luck on the desktop, with Intel unable to compete, but again - when it comes to architecture, Intel really is not behind, or not that far behind.

    ARM is nice, but I would not draw too many conclusions from current news about fast improvements. ARM is undoubtedly improving, growing larger and faster, but the majority of the changes being made follow the same path that more mature performance-oriented ISAs (like x86) have already gone through. ARM CPUs - or rather, SoCs - are also manufactured on bleeding-edge nodes, even ahead of AMD and TSMC's cooperation. 5nm SoCs in phones will be out something like a year before Zen4 gets to the performance variant of the same node.

    When it comes to competition and ARM encroaching on x86's playground from the lower-power angle, do not count out Intel or AMD. Tremont seems to be a noticeable improvement in the Atom line after years of neglect, and that is probably not a coincidence. AMD definitely has the capability to follow; the Jaguars were OK and they can pick up where they left off. While the performance of these pales in comparison to the Lakes, Coves, and Zens, they are also much smaller and much more efficient. Even Intel is apparently able to manufacture Tremont-based SoCs reasonably well on the 10nm node.
  • JayNor - Tuesday, August 25, 2020 - link

    Intel also confirmed a Tiger Lake-H recently with some Linux patches. The recent presentations indicate Tiger Lake's 10nm SuperFin transistors should do well out to 65W.

    Recent leaks indicate Intel will also build a 14nm Rocket Lake-S with PCIe 4.0.
  • edzieba - Monday, August 24, 2020 - link

    "The only downside here is that Intel hasn’t spoken much about the glue that binds it all together"

    Ian, you yourself blogged about the interface three years ago! https://www.anandtech.com/show/11748/hot-chips-int...

    It's the same AIB interface provided to the DARPA CHIPS initiative for implementation by other vendors: https://www.anandtech.com/show/13115/intel-provide...
  • Spunjji - Monday, August 24, 2020 - link

    The "glue" being referred to there is the interconnect architecture - think of AMD's Infinity Fabric - rather than the physical method of bonding the things together, which is what EMIB does.

    They'll need to design a flexible interface through which these various components can communicate at the required speeds while using a minimum of power and area. Until they do that (and I don't doubt they're capable) none of this makes any sense as an actual product.
  • edzieba - Monday, August 24, 2020 - link

    "They'll need to design a flexible interface through which these various components can communicate at the required speeds while using a minimum of power and area"

    Which is AIB (Advanced Interface Bus). Which is what the two articles I linked mentioned.
  • alufan - Monday, August 24, 2020 - link

    Nice to see the site's front page is virtually all Intel again; apart from an AMD B550 motherboard article that's been there for several days, there must be at least 10 articles about Intel and its products on the front page.
