237 Comments

  • abufrejoval - Wednesday, May 8, 2019 - link

    Seeing is believing.

    And then the ability to put more transistors into a die is nothing by itself, unless it yields tangible value.

    2.x density won't deliver 2.x IPC or 1/2.x power consumption.

    Not holding my breath...
  • peevee - Wednesday, May 8, 2019 - link

    And not 1/2 of cost.
    And not even 2x density ("7" vs "10"), I suspect.
  • Bluetooth - Wednesday, May 8, 2019 - link

    Density scales with the square of the linear dimension: 10^2 / 7^2 ≈ 2x
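The arithmetic being invoked here is simple area scaling: treat the node names as linear dimensions and density goes with their squared ratio. A minimal sketch in Python, purely illustrative, since (as the replies note) the "nm" labels are marketing names rather than measured dimensions:

```python
# If "7" and "10" were literal linear feature sizes, density would scale
# with the square of their ratio. Node names are marketing labels, so this
# is arithmetic about the claim, not about real silicon.
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Ideal area-scaling density gain for a linear shrink old_nm -> new_nm."""
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(10, 7))   # ~2.04x, the "2x" claimed above
print(ideal_density_gain(14, 10))  # ~1.96x
```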
  • peevee - Thursday, May 9, 2019 - link

    If only the "7" and "10" had anything to do with reality... for about 10 years now these are purely marketing BS.
  • psychobriggsy - Thursday, May 9, 2019 - link

    Yes, but the MTr/mm^2 figures are a reasonable guideline to density, these are off the top of my head:
    Intel 14nm, TSMC/SS 10nm: About 35-40MTr/mm^2
    Samsung 8nm: ~60MTr/mm^2
    Samsung 7nm: 95MTr/mm^2
    TSMC 7nm: 96MTr/mm^2
    TSMC N7+: 114MTr/mm^2
    TSMC N5: ~170MTr/mm^2
    Intel 10nm: 102MTr/mm^2 but they haven't given any figures for what their shipping 10nm will be
    Intel 7nm: "2X" 10nm, so ~200MTr/mm^2 (likely competing against N5 or N5+)
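A quick ratio check on those ballpark figures. The numbers are the ones quoted in the comment above, not vendor-confirmed values, and the Intel 14nm baseline is taken as the midpoint of the 35-40 range:

```python
# Relative density implied by the rough MTr/mm^2 figures listed above.
densities_mtr_mm2 = {
    "Intel 14nm / TSMC-SS 10nm": 37.5,   # midpoint of the quoted 35-40 range
    "Samsung 8nm": 60,
    "Samsung 7nm": 95,
    "TSMC 7nm": 96,
    "TSMC N7+": 114,
    "TSMC N5": 170,
    "Intel 10nm (claimed)": 102,
    "Intel 7nm (claimed 2x 10nm)": 200,
}
base = densities_mtr_mm2["Intel 14nm / TSMC-SS 10nm"]
for node, d in densities_mtr_mm2.items():
    print(f"{node}: {d / base:.1f}x the 14nm-class baseline")
```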
  • Dolan - Friday, May 10, 2019 - link

    This is theory based on Intel's promises.

    Now little reality:

    Intel 14 = other 14/16 ... https://images.anandtech.com/doci/11170/ISSCC%208....
    Intel 10 = other 10 ... use of SAQP, LE3/4...
    Intel 7 = other 7 ... EUV versions

    Seriously people, it's Intel... They are lying.
  • Azethoth - Saturday, May 11, 2019 - link

    You are confused about how these things compare. Historically nobody else's actual density matched Intel's at the same marketing numbers. Your claims are false and known to be false.
  • Butterfish - Saturday, May 11, 2019 - link

    @Dolan Your graph doesn't show anything that supports your argument; if anything it actually shows Intel has smaller transistors. Even if you don't like the MT/mm2 metric, use the traditional CPP x MMP and plug in the figures from your graph: Intel is still about 1.4X denser, very similar to the 1.5X density advantage reported in the real world.
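The CPP x MMP metric mentioned above is the classic pitch-based proxy: a standard cell footprint is roughly contacted poly pitch times minimum metal pitch times track height, so the product of the two pitches tracks relative logic density. A small sketch; the pitch values are commonly cited approximations for 14/16nm-class nodes and are assumptions here, not numbers read off the linked graph:

```python
# CPP x MMP as a relative density proxy. Smaller pitch product -> denser logic
# (for comparable track heights). Pitch values below are approximate,
# commonly cited figures, used only to illustrate the ~1.4x claim.
def relative_density(cpp_a_nm, mmp_a_nm, cpp_b_nm, mmp_b_nm):
    """How much denser process A is than process B by the pitch-product proxy."""
    return (cpp_b_nm * mmp_b_nm) / (cpp_a_nm * mmp_a_nm)

# Intel 14nm (~70nm CPP, ~52nm MMP) vs a foundry 14/16nm (~78nm CPP, ~64nm MMP)
print(relative_density(70, 52, 78, 64))  # ~1.37x, in line with the ~1.4x above
```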
  • 0ldman79 - Sunday, May 19, 2019 - link

    It also shows that twice the L2 takes up 50% more space on the supposedly weaker process.

    Their transistors are similar.

    It doesn't show the actual chip density, though. Intel's 10nm chip density is something like 20-30% higher than TSMC's 7nm (ballpark); however, TSMC is shipping 7nm product. Intel is not.

    At the end of the day which one is tighter doesn't really matter, it matters whether the product works as advertised. 10nm vs 7nm, maybe Intel's 10nm really is a better design, but they don't work. Yields were in the single digits for Cannon Lake.
  • Spunjji - Wednesday, May 22, 2019 - link

    Nailed it. A node that won't yield or perform to spec is a bad node, no matter how dense.
  • Smell This - Thursday, May 9, 2019 - link

    I continue to believe that the original cell libraries at 10nm were dinked, and the Chipzillah 14nm "Process-Architecture-Optimization" cadence further went crapola. AMD 28nm SHP 'Excavator' cell libraries were/are comparable to 14nm, and certainly less dense than Kaby 14nm+ and subsequent optimizations.

    I think the standard high-density libraries from 14nm to 10nm are 20-33% "less dense" depending upon block/tool mods.
  • Wilco1 - Wednesday, May 8, 2019 - link

    Also Intel CPUs traditionally only achieve a small fraction of the maximum theoretical density. For example Kirin 980 has ~93 million transistors/mm^2, while a 14nm Xeon achieves just 15.8 million transistors/mm^2.

    So 7nm TSMC is a whopping 6 times more dense, and that despite Xeon having significantly more cache (which is the most dense). Even if Intel 10nm delivers its original promise, it will still be more than 2 times off TSMC 7nm.
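For reference, the achieved-density figures quoted here are just published transistor count divided by die area. A short sketch using the commonly cited numbers, assuming the 14nm Xeon in question is the 22-core Broadwell-based part, whose usual figures reproduce the 15.8 value:

```python
# Achieved density = published transistor count / die area.
# Counts and areas are the commonly cited figures for these chips.
chips = {
    "Kirin 980 (TSMC 7nm)":         (6.9e9, 74.13),  # transistors, die area mm^2
    "22-core Xeon E5 (Intel 14nm)": (7.2e9, 456.0),
}
for name, (transistors, area_mm2) in chips.items():
    print(f"{name}: {transistors / area_mm2 / 1e6:.1f} MTr/mm^2")
# -> roughly 93 and 15.8 MTr/mm^2, the ~6x gap referred to above
```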
  • peevee - Wednesday, May 8, 2019 - link

    Its original promise was over 100 MT/mm2 in a balanced SRAM/logic combo. It remains to be seen what it will actually be in volume parts.
    But judging from "10nm", if their "7nm" volume production is expected in 2021, pilot production should be happening now, and it is nowhere in sight. Meanwhile TSMC starts volume on EUV "7+" and "6nm" very soon.

    I wish at least the press would switch from those fake "nm" to MT/mm2 x GHz (SRAM or 50/50 or 60/40, does not really matter as long as it is common); we'd all be talking on a much more real, physical level, and not this marketoidal BS.
  • Wilco1 - Wednesday, May 8, 2019 - link

    TSMC 7+nm is already in production. TSMC 5nm started risk production a while ago with volume production next year - that means 7+nm iPhones this year and 5nm next year.

    Even the MT/mm^2 are easy to manipulate by giving numbers for a low-track library which isn't useful in the real world. Hence my point that Intel doesn't ever get anywhere near the claimed densities for their processes. They specifically invented their density metric as that was the only way 10nm looked competitive with TSMC 7nm. However TSMC has shipped hundreds of millions of chips with 90+ MT/mm^2. Let's see when Intel ships a single one that does 90 MT/mm^2.
  • peevee - Thursday, May 9, 2019 - link

    "TSMC 7+nm is already in production."

    Note I wrote "volume", not "production". What volume products are on 7+ now?
  • Wilco1 - Thursday, May 9, 2019 - link

    Next iPhone is said to use 7+nm. That means full scale production right now to get 50+ million units fabbed.
  • peevee - Tuesday, May 14, 2019 - link

    Phone which will be released in 4.5 months? Pure speculation about the volume production yet.
  • levizx - Friday, May 10, 2019 - link

    Kirin 985/990 and A13 are in volume production now. Otherwise TSMC won't be able to ship the chips before August for an October consumer device launch.
  • peevee - Thursday, May 9, 2019 - link

    "Even the MT/mm^2 are easy to manipulate by giving numbers for a low-track library which isn't useful in the real world."

    As long as it is the same thing, the metric is still infinitely more useful than completely misleading "nm". Even pure SRAM would be useful all by itself as caches have to take a lot of space on 5GHz parts.

    Also note I wrote "x GHz" there. Of course it is much much easier to make dense 1-2GHz than 5GHz. I suspect it is what is happening to Intel's "10 nm" now - they will have 2GHz Y (and maybe U) mobile parts and Xeons, not 5GHz desktop parts.
  • wumpus - Thursday, May 16, 2019 - link

    Only the L1 cache really needs to operate at 5GHz (the L2 will probably be pretty close as well). By the time you get out to L3 or so, the densities should be comparable.

    High density 6T SRAM is likely to be the best way to compare density (even if the "good stuff" is 7.5T).
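One way to do that comparison is by published high-density 6T SRAM bit-cell areas, which vendors disclose at ISSCC/IEDM. The cell areas below are approximate figures from those disclosures and should be treated as ballpark values, not exact library data:

```python
# Cross-node comparison by high-density 6T SRAM bit-cell area (approximate
# published figures; smaller cell = denser node).
hd_sram_cell_um2 = {
    "Intel 14nm": 0.0588,
    "Intel 10nm": 0.0312,
    "TSMC 7nm":   0.027,
    "TSMC 5nm":   0.021,
}
base = hd_sram_cell_um2["Intel 14nm"]
for node, cell in hd_sram_cell_um2.items():
    print(f"{node}: {cell:.4f} um^2/bit, {base / cell:.2f}x denser than Intel 14nm")
```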
  • Nimrael - Thursday, May 9, 2019 - link

    What BS... You wrote some numbers and do not understand their meaning.
    First of all, you compare HD cells of a mobile chip with HP or even uHP cells of desktop dies.
    Second, TSMC's "5nm" is in fact not 5nm. Even without taking into consideration that the ASML NXE:3400B has a resolution of 13nm (and no 3400C has been shipped), 5nm (real, not PR) requires a scanner with an NA (numerical aperture) of 0.5-0.6 instead of the 3400B's 0.33.
  • Wilco1 - Thursday, May 9, 2019 - link

    Actually I do fully understand the meaning. TSMC server dies on 10nm (eg. Centriq) do achieve 3 times the density of current Intel 14nm servers. Future 10nm Intel servers will get at most 40 MT/mm^2, nowhere near the claimed 100 MT/mm^2. So while 7nm chips like Kirin 980 achieve close to 100MT/mm^2, the density claims from Intel are simply marketing bullshit.

    And nobody claimed that 5nm means exactly 5nm, the nm figure has not been accurate for more than a decade. It's simply a label which can be used as a density approximation.
  • levizx - Friday, May 10, 2019 - link

    Actually NO, you DON'T understand what you are talking about. You are talking about ONE implementation of a specific process against a range of the other.
    Kirin 980 and A12 use HD libs. Intel will not actually use those on their CPUs, and neither will AMD, so you are effectively comparing Intel 10nm with N7 when you should compare it to N7 HPC. Radeon VII and Zen 2 both have MUCH lower density.
  • Wilco1 - Friday, May 10, 2019 - link

    Actually I do. Intel will never release CPUs using their high density libraries so even talking about Intel high density libs is misleading, and using them for density comparisons is outright lying.

    Server chips made on TSMC (such as Centriq) are only 25% lower density than eg. Kirin 980. Zen 2 density figures should be available soon, I expect the density to be similar to Centriq since there will be a LOT of SRAM on the chiplet.
  • Butterfish - Saturday, May 11, 2019 - link

    Centriq isn’t made by TSMC. It use Samsung’s 10nm Low Power Early library, and is only 20% denser than 14nm Skylake server chip despite Centriq using a low power high density library on “10nm” node
  • Wilco1 - Saturday, May 11, 2019 - link

    Centriq is 3 times denser than 14nm Skylake, look up the numbers.
  • levizx - Monday, May 13, 2019 - link

    Why are you fixating on the hypothetical Centriq when Radeon VII and Ryzen are right in front of your eyes? You said yourself Intel will never use HD libs, so why do you bring up a mobile SoC, which IS on HD libs, when you talk about density at all?
  • levizx - Monday, May 13, 2019 - link

    In other words, Intel CPUs' density is a design choice, not a process feature. You can expect much higher density with Intel's FPGA, IoT, and GPU chips.
  • Gondalf - Friday, May 10, 2019 - link

    Don't be a kid, Wilco, try to have some proof about silicon manufacturing.
    You know that a server SKU cannot be as dense as a phone SoC, otherwise the
    power density on die would not allow Intel to respect the golden standards of long term reliability that
    OEMs ask for.
    Bet the Zen 2 core is not as dense as TSMC's claims about 7nm? Bet actual Zen's main defect is in its
    dense library? That prevents high clock speeds on 14nm and was the main reason
    for the very little success of the actual Epyc.
    Try to be smart please. A very low power SoC is one thing, a high power SKU is another story.
  • Wilco1 - Friday, May 10, 2019 - link

    Seriously, is Centriq not a high performance high power server CPU???
  • Butterfish - Saturday, May 11, 2019 - link

    Centriq uses Samsung's 10nm LPE (Low Power Early) process. It is basically made like a scaled-up mobile chip. And your claim that Centriq has 3X the density of Intel is also false. According to another AnandTech article it was 45.2MTr/mm2 compared to 37.5MTr/mm2 on Intel's 14nm Skylake XCC server chip.
  • Wilco1 - Saturday, May 11, 2019 - link

    Your 37.5 MT/mm^2 figure is the marketing density of the 14nm process, so that's not the correct figure to use for Skylake. Check out https://en.wikipedia.org/wiki/Transistor_count for actual densities achieved in the real world; Skylake does barely 16MT/mm^2.

    10nm from Samsung and TSMC are fairly close, the same is true for 7nm EUV processes, so which is used does not matter much. Centriq beats Skylake on performance and has much higher base and sustained frequencies than Skylake. And yet you try to claim it's a mobile chip?
  • Butterfish - Saturday, May 11, 2019 - link

    It uses the same type of library as a mobile process, that is the point, and since Qualcomm abandoned the project most performance figures found online are just their marketing materials, which hardly prove it can do better than Intel in the real world.

    If you want a high density Intel 14nm chip, look at their Stratix 10 FPGA line: 17 billion transistors in 560 mm2, slightly over 30 MTr/mm2.
  • blu42 - Monday, May 13, 2019 - link

    @Butterfish Centriq uses 10LPE since it doesn't need anything else to achieve its 2.2/2.6GHz. What are the clocks of 48-core Cascade Lake Xeons again (hint: if they're below, say, 3GHz will you consider them mobile chips)? And why does Centriq need to do *better* than Skylake in order to be considered in the same category?
  • Butterfish - Friday, May 24, 2019 - link

    The turbo is 1.2GHz lower though. And Intel's process node is proven to be able to clock significantly higher by their consumer-facing products. There is also no third party benchmark that confirmed Centriq's advertised performance. Two CPUs can be clocked the same but perform quite differently. Also, we are talking about process node tech, not final chips. If you argue LPE has little impact on the performance of a cherry-picked, lower-clocked, high core count server chip, so does density. Finally, I have shown Intel does have a high density library for 14nm that they use on their FPGA chips, which debunks Wilco1's entire conspiracy theory, which is what this thread is really about.
  • Jorgp2 - Wednesday, May 8, 2019 - link

    Transistors are measured differently based on who's counting them.
  • HStewart - Wednesday, May 8, 2019 - link

    So true, and also a lower nm rating from one manufacturer does not mean more transistors compared to another manufacturer.
  • Lord of the Bored - Friday, May 10, 2019 - link

    Which doesn't change the fact that Intel is SUPER late with their process improvements.
  • peevee - Thursday, May 9, 2019 - link

    Sorry, transistors are transistors. But for what the marketoid industry did to the nanometer, they should all be sued for fraud. Too bad our FTC does not have any brains, only lawyers.
  • ats - Thursday, May 9, 2019 - link

    Not actually true. How you count transistors can vary widely because there are reasonable differences of opinion on how you should count fingered and parallel devices.
  • levizx - Friday, May 10, 2019 - link

    Nope, you are wrong. Those measures listed are ONE specific method - Intel's. That's the transistor density for a standard 2-input NAND cell and a scan flip-flop logic cell.
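For context, the metric being described is Intel's 2017 proposal (published by Mark Bohr): a weighted mix of a small NAND2 cell and a large scan flip-flop cell, intended to represent a typical logic mix. A sketch of that formula follows; the 0.6/0.4 weights are from the published proposal as I recall it, while the cell areas and flip-flop transistor count are placeholders, not real library values:

```python
# Intel's proposed logic-density metric: weighted NAND2 + scan flip-flop.
# Transistors per um^2 is numerically equal to MTr/mm^2 (1 mm^2 = 1e6 um^2).
def logic_density_mtr_mm2(nand2_area_um2: float, sff_area_um2: float,
                          nand2_transistors: int = 4,
                          sff_transistors: int = 36) -> float:
    """0.6 * NAND2 density + 0.4 * scan-FF density, in MTr/mm^2."""
    return (0.6 * nand2_transistors / nand2_area_um2 +
            0.4 * sff_transistors / sff_area_um2)

# Placeholder cell areas for illustration only (not a real 7nm-class library):
print(logic_density_mtr_mm2(nand2_area_um2=0.05, sff_area_um2=0.30))  # ~96
```

The drive-strength and track-height objections raised in the next reply apply here: the result changes depending on which library variant's cells you plug in.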
  • ats - Friday, May 10, 2019 - link

    For the moment, I'll ignore the fact that your reply is incongruent with the discussion... What drive strength for the 2-input nand, what power level for the 2 input nand, etc?

    To the larger actual point of the chain, I am entirely correct. There are effectively infinite ways to structure the actual transistors of a circuit and myriad methods to count the resultant structure as far as number of transistors. Do you just pick the smallest single cell/circuit that will work or do you use multiple parallel devices? Do you finger a device (effectively multiple transistors) or just make a bigger device? etc.
  • ZolaIII - Wednesday, May 8, 2019 - link

    I agree with your rough estimate that TSMC 7 nm, even with a 9T lib, is 3x denser than Intel's 14 nm with a high performance lib. That brings it down to 1.5x per gate, which actually means that if Intel can pull off a 2.2x increase per gate (projected, while the achieved gain is always smaller) it would be in a (small) lead, but TSMC rushed its first gen 7 nm. Samsung's 7nm is comparable to Intel's 10 nm (they all really are 10 nm), and so is TSMC 7nm+. TSMC 5 nm (which is a completely new node) will have a 25~30% size advantage over the competition until Intel launches 7 nm, but in the meantime Samsung will be developing, and will probably have ready by Q2 2021, its 5 nm GAA. FinFET has reached its maximum; what we are seeing now is extension by force with EUV, which also isn't mature enough...
  • Wilco1 - Wednesday, May 8, 2019 - link

    TSMC 7nm, Samsung 7LPP, and Intel 10nm are all close to 100 MT/mm^2 theoretical density. TSMC 7+/6nm are about 20% denser at 114 MT/mm^2. TSMC 5nm does 175 MT/mm^2, while Samsung 5LPE gets 126 MT/mm^2. See https://www.semiwiki.com/forum/content/8157-tsmc-s... . So TSMC 5nm is far ahead both in density and timescales.
  • levizx - Thursday, May 9, 2019 - link

    Only because TSMC uses a 6T lib and Samsung uses 6.75T. 7LPP and 7FF+ have otherwise the same size.
  • Maxiking - Thursday, May 9, 2019 - link

    You literally have no clue what you're talking about.

    Intel 10 nm - 100M per mm2
    TSMC "7"nm - 66M per mm2

    Go troll elsewhere.
  • Wilco1 - Thursday, May 9, 2019 - link

    Really? Go and check the density of real 7nm chips on https://en.wikipedia.org/wiki/Transistor_count

    Kirin 980 does 93 MT/mm^2. Now shut up and get back into your cave.
  • Maxiking - Friday, May 10, 2019 - link

    Yes, really, you have no clue what you are talking about. Every node has several versions of it, like high density, high performance, ultra high performance etc and their aim for different products, like x86 cpus, ARM cpus and so on. And because you are an uneducated internet fighter, you just cross compare different versions of the nodes. So go back to your cave, you wiki fighter with no education, bye. Comparing ARM with x86, gg. You knowledge so stronk.
  • Maxiking - Friday, May 10, 2019 - link

    they* Excuse other typos, written on smartphone.
  • Wilco1 - Friday, May 10, 2019 - link

    Nobody cares how many special secret transistor libraries exist, if they aren't used in actual chips they don't count. Get back to me when Intel releases a chip that achieves their promised 100 MT/mm^2, until then you simply have no point.
  • ZolaIII - Thursday, May 9, 2019 - link

    All of those (N7, N7+, N6, Samsung 7 nm, 5 nm) are "7 nm" (10 really) with denser routing libs. I also noted how TSMC N5 is a completely new node. F*ck the theory; I calculated based on the primary 9T lib, or better said, split Intel's high performance lib in half to match the real difference per gate. What good is theoretical density when Intel's nodes won't ever see anything other than an HP lib? TSMC is also behind in GAA, and FF isn't beneficial (much) any more in many aspects (power consumption, performance; it basically leaks too much). TSMC 5 nm is a win regarding density but unfortunately they don't manufacture SRAM, where this win would pay off the most; it's not beneficial for HP, but they could win big time if they manage to score GPU & FPGA contracts along with top mobile phone SoCs...
  • ajc9988 - Thursday, May 9, 2019 - link

    What are you talking about, FF isn't useful anymore? GAA doesn't get used until around 3nm for TSMC and Samsung (5nm if it really must be used), but they are also looking at ways to incorporate FF into SoS structures. Also, TSMC manufactures both AMD and Nvidia GPUs at the moment. They manufacture AMD's upcoming CPUs. They have been doing high powered ARM products for ARM servers for years.

    So WTH are you talking about?
  • ZolaIII - Friday, May 10, 2019 - link

    Things you actually need brains to understand.
    FinFET was never good for many things to start with. It isn't good for analogue at all, nor mixed circuits for that matter, which means not good for transceivers, MOSFETs et cetera. It was an Intel child altogether, a quest and design methodology to ensure the highest possible peak clocks: two fins instead of one. Does that sound familiar? And then let's tie even more FETs together (high performance lib) to ensure even better drain so that the poor thing could hit even more MHz. The strategy was wrong from the beginning altogether. FinFET brought a modest bump (200 MHz) over planar in terms of what is industrially considered a sustainable leak for a structure as complex as a transistor, but with an almost 2x power cost in the idle state; it also enabled higher density of around 20~25% compared to planar in terms of possible miniaturisation, but again with much higher design and manufacturing costs. AMD's (and IBM's) original idea with SOI was a much better one overall, but they didn't have enough money to push it to the end, and neither did GlobalFoundries. It remains to be seen how good Gate All Around will be, but FinFET really needs to die for many reasons.
  • Arsenica - Wednesday, May 8, 2019 - link

    One word: heat

    You cannot really compare a mobile chip with a power consumption of ~5W to a 150W server chip. When the transistor counts for AMD's 7nm and Intel's 10nm chips are released you will see that their density is comparable.
  • name99 - Wednesday, May 8, 2019 - link

    This would be rather more convincing if Intel didn't ALSO make 5W chips...
    Which have very similar (low...) densities:

    https://en.wikichip.org/wiki/intel/microarchitectu...
  • RSAUser - Thursday, May 9, 2019 - link

    The main problem there is that the architecture for their mobile chips is based on the desktop one; it's basically an optimized desktop chip with way lower power limits, so not really comparable.
    The mobile SoCs are basically 5W for the entire thing, with 5W basically being max draw, while Intel's figure is usually just the CPU and GPU together going to 5W.
    Then take into account performance differences; density isn't everything, but it sure does help a lot.
  • name99 - Thursday, May 9, 2019 - link

    (a) "it's basically an optimized desktop chip with way lower power limits "
    Whose fault is that? Apple manages to design three rather different CPU cores just for 2018 (Vortex, Tempest, Chinook). There is nothing STOPPING Intel from a better, more appropriate, design for their low power end, obtained through a recompile of their higher end microarchitecture!

    (b) If even Intel's lowest power actually shipping cores don't really use that 100MTr/mm^2 metric, then WHY THE FSCK is it Intel's metric of choice?
    I don't go around telling the world repeatedly that real soon now I'll be running a 4 minute mile, then when someone calls me out, say "Well, I'm not really a runner you know, I actually spend all my time reading and using my computer"!
  • Arsenica - Thursday, May 9, 2019 - link

    "There is nothing STOPPING Intel from a better, more appropriate, design for their low power end,"

    Actually: Economics

    Developing a new process with low power metal layers for an ultra-low volume part makes no financial sense to Intel. The Pentium 4405Y is the only part with a >7W TDP and that was a chromebook-only part, so it doesn't even make sense to make the die smaller, as a chromebook has plenty of PCB space for a less dense chip.

    If some phone manufacturer were to select such a part then it may make sense to actually make the die physically smaller, but as x86 never caught on as a phone architecture, Intel has no financial incentive to satisfy your MT/mm^2 fetishism.
  • ZolaIII - Friday, May 10, 2019 - link

    Lack of brains stops them... Switching from the HP lib to a UHD lib gives a 2.5x increase in MT/mm², and with it a more than sizable price reduction per functional die and a power reduction of more than 2x. It's not a "new" anything, just a software lib and design approach. The cost is that it won't work at 4+ GHz, but it will happily work at 2 GHz and top out at around 3 GHz, which is more than enough for anything mobile.
  • Irata - Thursday, May 9, 2019 - link

    Curious what the density of Radeon VII is - that is a very large high TDP chip on TSMC 7nm.
  • Zizy - Thursday, May 9, 2019 - link

    GPUs are always denser than CPUs for a variety of reasons, but yeah, TSMC does have very high density in actual products, while Intel's (and previously AMD's or IBM's) chips have much lower density than what the process is nominally capable of.
  • Santoval - Thursday, May 9, 2019 - link

    CPU cache is not the most dense, it is the *least* dense part of a chip. That's due to SRAM's 6 transistors per cell, which is the most common SRAM cell structure. DRAM, by contrast, is much denser.
  • name99 - Thursday, May 9, 2019 - link

    Since DRAM is not present on the chips in question, the relevance of your point is unclear...
    Hell, why not go all in and complain that SRAM is less dense than flash?
  • Santoval - Thursday, May 9, 2019 - link

    DRAM has often been used as L4 cache for CPUs, in the form of eDRAM. Intel makes such CPUs, and IBM also makes them. I hope the relevance is clearer now.
  • peevee - Thursday, May 9, 2019 - link

    Even in these rare cases, DRAM is not on the same chip.
  • Zizy - Thursday, May 9, 2019 - link

    You are mixing storage and transistor density here. SRAM is not dense in capacity/mm but it is dense in transistors/mm.
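The distinction in numbers, as a rough sketch: a 6T SRAM array packs a lot of transistors per mm^2 but comparatively little capacity per mm^2. The bit-cell area below is an approximate Intel 14nm high-density figure and the calculation ignores array periphery (decoders, sense amps), so treat it as illustrative only:

```python
# Transistor density vs storage density of a 6T SRAM array (array cells only,
# periphery ignored). Cell area is an approximate published 14nm HD figure.
CELL_AREA_UM2 = 0.0588
TRANSISTORS_PER_BIT = 6

bits_per_mm2 = 1e6 / CELL_AREA_UM2            # 1 mm^2 = 1e6 um^2
transistors_per_mm2 = bits_per_mm2 * TRANSISTORS_PER_BIT

print(f"{transistors_per_mm2 / 1e6:.0f} MTr/mm^2 of SRAM array")    # ~102
print(f"{bits_per_mm2 / 8 / 2**20:.1f} MB/mm^2 of cache capacity")  # ~2.0
```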
  • Santoval - Thursday, May 9, 2019 - link

    You are technically right, however since we are talking about CPU *caches*, their effective density (how many transistors are required for each KB) is what matters. Transistors for logic are different, since you can't easily quantify logic transistors with a simple KB or MB number. Logic is quite a bit more complex.
    The comment of Wilco I replied to mentioned that "CPU cache is the densest part of the CPU". Densest in what sense? CPU caches take up a huge amount of die space and millions of transistors merely for up to a few MB of cache. That doesn't sound remotely "dense" to me.
  • Wilco1 - Thursday, May 9, 2019 - link

    No - caches are significantly more dense than logic. Denser both in the sense of transistors per area as well as bits/area due to being extremely tightly packed and repeated millions of times. Logic wastes a lot of area due to being irregular and all the routing required.

    DRAM is denser than SRAM on a per-bit basis, and flash is much denser again. But that's irrelevant to this discussion given only IBM uses eDRAM.
  • psychobriggsy - Thursday, May 9, 2019 - link

    I think he was trying to say that you can have lower density SRAM - however this is only used in register files and L0/L1 caches in the performance critical areas.

    That goes the other way too. IBM was using eDRAM as on-die L4 cache for density reasons.
  • Smell This - Thursday, May 9, 2019 - link

    Short cell libraries are optimized for high density and low power, and make up roughly 70% of the die size in the graphics and 'uncore' areas.

    I think 'cache' cell libraries are much less dense, more power 'hungry' and optimized for speed.
  • Wilco1 - Thursday, May 9, 2019 - link

    SRAM is more dense than logic. There are multiple SRAM cell choices with different area, power and performance characteristics. L1 cache is typically less dense than L2 or L3 cache because it needs to be fast.
  • ats - Friday, May 10, 2019 - link

    There isn't anything on a chip that is as transistor dense as SRAM. SRAM cells are specifically designed and optimized for size. In many cases, SRAM cells utilize process rules that cannot be used for anything else due to the vast amount of resources spent on optimizing, verifying, and testing the SRAM cells. Nothing else gets the same amount of resources which allows them to characterize the process performance for the SRAM cells and their arrays to a level that simply isn't achievable with logic.

    Even the highest performance SRAM cells tend to be significantly more dense than logic.
  • Calin - Friday, May 10, 2019 - link

    Density is measured in terms of transistors per square millimeter, and high thermal power, higher level complexity, hot spots tend to force a decrease in transistor density (due to cooling issues).
    SRAM by comparison has few hotspots, has a very regular structure and can be easily optimized, ...
    SRAM is dense in transistors (even if expensive in terms of actual memory size compared to physical size).
  • Bulat Ziganshin - Thursday, May 9, 2019 - link

    Seems that it's purely a difference in transistor counting methodology - at https://en.wikipedia.org/wiki/Transistor_count we see:
    - 28-core Xeon Platinum 8180 is 8B transistors
    - 32-core AMD Epyc is 19B transistors

    Moreover, the 32-core AMD Epyc occupies 768 mm^2, while the 22-core Xeon occupies 456 mm^2, both on 14 nm technology. So, it seems that Intel counts about half as many transistors for the same thing.
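Working those quoted figures through (transistor count divided by die area, using the counts and areas exactly as cited above; note the 8B count is quoted for the 28-core part while the 456 mm^2 area is quoted for a 22-core die, so take this as a rough check only):

```python
# Densities implied by the transistor counts and die areas quoted above.
dies = {
    "Xeon (8B transistors, 456 mm^2 as quoted)":  (8e9, 456.0),
    "Epyc (19B transistors, 768 mm^2 as quoted)": (19e9, 768.0),
}
for name, (transistors, area_mm2) in dies.items():
    print(f"{name}: {transistors / area_mm2 / 1e6:.1f} MTr/mm^2")
# -> ~17.5 vs ~24.7 MTr/mm^2 using the numbers as quoted
```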
  • Wilco1 - Thursday, May 9, 2019 - link

    No - the densities are comparable and close: densities are 17.5 MT/mm^2 for Xeon and 24.7 MT/mm^2 for Epyc. There is no difference in counting that can explain Intel's lower density.

    As to why Epyc has more transistors - these are very different microarchitectures on different processes. Both chips have huge amounts of SRAM. The numbers of transistors per bit depends on the process and cell design. It's quite possible for the GF process to require 8 or even 10 transistors per bit while the Intel process can use 6 per bit.
  • Zizy - Thursday, May 9, 2019 - link

    Well, you can take a look at a floorplan and consider just cache, ignoring the rest - you know the area from the image, you know cache size.

    I don't recall these numbers as I haven't seen any in a long time, but it should be easy to get MB/mm^2 of the same L3 cache on both chips.
    This would be a relevant comparison of actual process density, assuming both require the same number of transistors per MB and have comparable design goals in terms of frequency, size and whatnot.
  • Wilco1 - Thursday, May 9, 2019 - link

    There is no need to do that - the SRAM densities for various cells are widely published already. Logic density is where things differ the most due to routing, contacts, diffusion breaks, metal layers, design rules, tracks etc.

    So to compare density fairly and without the marketing BS we must compare actual chip density. It's best to compare like with like (eg. server with server) so that the amount of cache, frequency etc are similar and you get a fair comparison.
  • peevee - Thursday, May 9, 2019 - link

    EPYC is not a single chip, it is a multi-chip package.
    https://images.anandtech.com/doci/11551/epyc_tech_...
  • shompa - Thursday, May 9, 2019 - link

    One reason why Intel can clock higher than, for example, ARM is dark areas on the die. Intel can't pack transistors as densely and still clock 1-2GHz higher.
  • ZolaIII - Friday, May 10, 2019 - link

    The reason why x86 cores can clock higher comes down to two things: the design of the core (longer pipeline) and the high performance routing library that basically ties more fins together in a single contact to ensure better conductivity and drain. The sustainable leakage limit for FinFET structures on silicon is around 2 GHz no matter how you route it; that's why efficient server processors, even from Intel, still have base frequencies around that limit...
  • HStewart - Thursday, May 9, 2019 - link

    "Seeing is believing."

    The articles and slides from Intel state "Shipping in June", and June is a month away, so I think it is time to start believing. It is coming, and the days of blaming 14nm and Skylake are over. Not only that, so is all the Spectre/Meltdown stuff - all part of the past come June.
  • AshlayW - Thursday, May 9, 2019 - link

    So I can buy a high-performance 10nm Intel CPU at ~4.5GHz in June (also at a fair price)? Cool. But I think I'll still get a 3700X. :)
  • Bulat Ziganshin - Thursday, May 9, 2019 - link

    Almost. Yes, but it will be announcements (add 3-6 months for real availability) of 2-4 core Ice Lakes for notebooks.
  • peevee - Thursday, May 9, 2019 - link

    Meaning 2GHz, not 4.5. Huge difference in terms of power required and dissipated.
  • HStewart - Thursday, May 9, 2019 - link

    I believe mobile, which is Intel's primary market, is first, but it sounds like desktop/server is coming later. Keep in mind that with Sunny Cove, performance per GHz is expected to increase. The rumors about Intel 10nm appear to be wrong - the ones stating that desktop/server were delayed to 2021 or so.

    So we will find out the specs once Intel officially releases the product specs. I would not doubt higher than 4 cores on notebooks.
  • psychobriggsy - Thursday, May 9, 2019 - link

    Heh. No.

    You'll be able to buy a 2C/4C (or 6C!) mobile CPU (15W) that might have single core turbo to 4GHz, but base clocks under 3GHz.
  • arashi - Thursday, May 9, 2019 - link

    Of course Intel marketing tells us to believe.
  • Bulat Ziganshin - Thursday, May 9, 2019 - link

    The Spectre etc. stuff is about the fundamentals of CS. It's not an Intel-specific bug; researchers just started with the most popular CPU on the market. So, this type of backdoor will be with us until scientists find a model that combines modern speed with better security. Until then, we will see only patches against the particular attacks published so far. And then new attacks - it's an endless endeavor.
  • Lord of the Bored - Friday, May 10, 2019 - link

    Seeing is believing, and it isn't June yet.
    I'll believe it when I see it.
  • eastcoast_pete - Wednesday, May 8, 2019 - link

    Thanks! Eagerly awaiting news whether Intel shows any samples of their new chips to demonstrate that their process is at least working at the laboratory level. Given the debacle with their previous/current 10 nm node, some samples of actual silicon would be a lot more convincing than another set of PowerPoint slides.
  • saratoga4 - Wednesday, May 8, 2019 - link

    >Intels 14+ and 14++ processes extracted more than 20% more performance (from Broadwell to Whiskey Lake) from the process since its inception.

    Shouldn't that be from Skylake to Whiskey Lake? Broadwell and Skylake were both 14nm (non-plus).
  • Zizy - Thursday, May 9, 2019 - link

    Yeah, they are mixing some ISA gains from BW->SL into that number. The actual process performance improvement is lower.
  • tommo1982 - Wednesday, May 8, 2019 - link

    I believe I have become an AMD fan. Reading INTEL related news doesn't excite me, whereas AMD's is interesting. I'm not even reading most of the news about INTEL. Partly because their naming sense is horrible. I don't know which 'Lake' they are going on about anymore.
  • wiyosaya - Wednesday, May 8, 2019 - link

    Agreed. At least they could follow the rainbow in their lake scheme, that would make some sense, IMO.
  • Opencg - Wednesday, May 8, 2019 - link

    I'm excited to see what AMD and Intel release this year, but as a gamer it's a lot easier to be excited for AMD. Mainly this is because they have more ground to gain or may be willing to do what their competition won't. While the 9900k is king, AMD has promised on-par performance and this will likely be true. And in graphics, Nvidia won't be coming out with anything exciting this year. GPU prices have been relatively high and it will be interesting to see what AMD offers with the cost reduction tech in Navi. The Navi releases this year will also foreshadow a second Navi release next year that will go head to head with Nvidia's next generation. If Nvidia continues to make poor gaming GPU designs AMD could even overtake them. Exciting times indeed.
  • RSAUser - Thursday, May 9, 2019 - link

    Rumors state Navi had issues, so expect it to run hot again, but probably at a competitive price point as GDDR6 is a lot cheaper than HBM2.
  • eva02langley - Thursday, May 9, 2019 - link

    Who cares if it is not really more efficient than Vega 7nm; as long as the price/performance ratio is similar to Polaris, I will be happy and many people will be too.

    I don't mind paying 75W more for a GPU at full load. It is literally peanuts in power saving... and that is at a worst case scenario.
  • Opencg - Thursday, May 9, 2019 - link

    If it has issues I would look at the super obvious and highly probable reason before I would say "expect it to run hot": it's multiple chips trying to present as a single GPU. The real goal here is to do better than SLI / CrossFire have done in the past. I remember some titles doing worse with 2x GPUs than with one. The main point is that you want programmers and old games to be able to use Navi like it's a single GPU without heavy losses. I think AMD will bring us closer than we have ever been, but it will still be a highly variable performer depending on the titles you bench. Reviewers will have their work cut out, as consumers will want to check EVERY game's benchmarks.
  • Targon - Friday, May 10, 2019 - link

    Navi doesn't use a chiplet/CCX design, it's a normal GPU.
  • HStewart - Wednesday, May 8, 2019 - link

    Just for information, "Lake" names are supposed to be internal and not for the public. What matters is how the products do - desktop computers are a bad place to judge Intel by. Compare a high end 4GHz+ notebook now to the ones that first came out with Skylake, at only 2GHz or so and half the cores; that is the difference between the newer 14nm and the original 14nm.
  • Manch - Thursday, May 9, 2019 - link

    Just stop with the BS you shill. The lake names have been mentioned plenty in slides made available to the public. While the + &++ offered 10% improvement each time, the only reason we get 4Ghz & double the cores is bc Intel has to now compete with AMD since the release of Zen.
  • Arbie - Thursday, May 9, 2019 - link

    Trying to start a flame war? Your rudeness adds nothing to your position nor to the thread.
  • Manch - Thursday, May 9, 2019 - link

    Whatever. Every damn article this dude starts with the shilling. Its annoying as hell.
  • Korguz - Thursday, May 9, 2019 - link

    arbie, you obviously have NOT seen the various articles he has posted in.. each one of them is pro intel, .. most of it.. seems to be made up by him, or twisted to make intel look better than they are... take a look at the one about micron buying the fab from intel for 1.4 or 1.5 billion for example.... he is pretty much the intel fanboy poster child on here
  • HStewart - Thursday, May 9, 2019 - link

    This is real information - from personal experience - my Y50 has a 2.4GHz quad core, while my 8th generation Dell XPS 15 2in1 goes up to 4GHz - both are downstairs and both are 14nm quad cores. That is real information.
  • Targon - Friday, May 10, 2019 - link

    Real information only applies to true product releases. Anything that talks about the Intel release schedule more than two months in advance is really just repeating the general lies that Intel keeps putting out there to try to keep the stock price up.

    2015: 10nm is on track
    2016: 10nm is on track
    2017: 10nm is on track
    2018: We have 10nm products(dual core laptop chips which no one has actually seen)
    2019: We will have 10nm products this year(low volume laptop chips MAYBE, might be real, might show up in November/December in products).

    Since we can't build our own laptops, even if Intel was shipping to OEMs on the day of the launch, it would take six months before those products would get into the hands of consumers. Intel won't have high end 10nm desktop parts out before 2021, so there won't be much improvement over the 9900k for at least another year and a half.
  • Targon - Friday, May 10, 2019 - link

    When a company makes promises for four years and doesn't deliver, and new products are not seen as terribly innovative or a big step forward, that DOES tend to make people not believe anything that company has to say. The biggest jump for Intel was the 8 core consumer chips, but they do run hot and require a lot of power, plus the TDP rating is misleading to say the least.

    If Intel ever starts to deliver on its promises again, people might go back to paying attention. Right now, 10nm, for all the potential, may not deliver better performance compared to 14nm+++++. Laptop chips won't show up in products for 3-6 months after they come out as well, so by that time, any excitement would probably be gone.

    The other thing, and many people keep missing this, is that new fab process with theoretical improvements mean nothing to consumers if these things do not result in improvements that people care about. New fab process...does it improve base/boost/turbo speeds? Does it lower power demand in a way that will help(lower CPU power but then putting a 4k power hungry screen into a laptop won't help most people). Does battery life really matter to most people if they leave their laptop plugged in all the time?

    That is why EUV means nothing for users, because by itself it won't actually improve CPU performance or lower power draw or anything like that. It is important in the long run, but if it doesn't provide any true benefits to those using computers, then no one should get worked up about it.
  • wiyosaya - Wednesday, May 8, 2019 - link

    Ah poor Intel. It must be tough to be on the defensive these days.
  • imaheadcase - Wednesday, May 8, 2019 - link

    Yah a company working towards goals it already wanted. So defensive.
  • HStewart - Wednesday, May 8, 2019 - link

    This is not defensive - this is an offensive attack. But keep in mind this is only one part of it - what is not stated here is that the "Shipping in June" part (which is technically still Q2) is not only machines with 10nm (not to be confused with Cannon Lake) but an entirely new architexture called Sunny Cove.

    To me, Sunny Cove has the possibility of being not evolutionary but revolutionary in design. It is just a technical hunch, but I see the new addition of a second store unit as a major revolution coming. What makes it so exciting is that it is in addition to the dual loads and the existing store unit - and they separated the load/store and other load/store units - which means they can run parallel load/store operations in the same clock cycle. To me this is a hidden jewel inside the chip, but it could be not as great as I believe.

    I would not doubt there are other parts of Sunny Cove that also have significant changes to affect performance. So one thing to keep in mind - June is not just the 10nm process coming but also a new architexture. In Intel's terms this means the June release will be Tick and Tock together. This could be extremely important. Intel is not stupid and they are not sitting around while AMD and ARM are challenging them from both sides.
  • Korguz - Thursday, May 9, 2019 - link

    so says the intel fanboy.... defending his beloved intel... " but an entirely new architexture called Sunny Cove" too bad sunny cove.. is NOT an " entirely new architecture " ( again.. the X SHOULD BE A C ) its just an update to the SAME architecture intel has been using for the last 10 or 15 years now... just HStewart is too much of a fanboy to see it. " Intel is not stupid and they are not sitting around " ahh but they HAVE been... and have now.. just gotten up off their butt, and are playing catch up...
  • Opencg - Thursday, May 9, 2019 - link

    i mean no, they are not sitting around at all. they were in relative stagnation for a while. but intel is famous for throwing money all over the place. and having a blunder make them look really bad here and there, only to come back even stronger when another plan comes to fruition.

    as for the end of your comment well it appears you are so stupid that even the most basic of understandings of architecture evades you. in this case you are too dumb for me to even reason with so ill just let you be. have a nice day
  • Korguz - Thursday, May 9, 2019 - link

    not sitting around at all ?? really ?? how long were we stuck at quad core on the mainstream desktop ?? they " could " have added more cores.. but they didnt.. until zen came out.. then all of a sudden.. mainstream desktop from intel came out with more cores... " relative stagnation " only cause they had NO reason to innovate or do anything more than give us 10% performance increases year over year... opencg sure if you say so... sounds like you are a bit of an intel fanboy your self... sunny cove is still based on the same architecture as the previous cpus.. just updated... but please.. post a link that says other wise, so i can read it as well...
  • Opencg - Thursday, May 9, 2019 - link

    im hardly an intel fanboy. right now i have my money on amd. this is mostly to do with multi chip architectures and how they can overcome the dwindling performance / price gain we are destined to see from process nodes. amd used a limited r/d budget to form an interconnected approach that targets multiple product types and takes into account fabrication stagnation. infinity fabric is used to make zen into super high core count versions while the single die zen processors hit multiple sub markets at once. navi also uses infinity fabric.

    dont get me wrong the coming 2019 products are very exciting. but the most exciting thing to me is the continued advancement of performance / price to consumers in the face of the end of moore's law via single chip fabrication techniques. and amd has shown that it started preparing for this at the most ideal time. a smart company indeed.

    but i dont fanboy out. just like with nvidia having a great 900 and 1000 series architecture followed by a bad joke of a 2000 series, my favoritism ends with bad calls by any company
  • Targon - Friday, May 10, 2019 - link

    Navi won't use Infinity Fabric, I don't know where people are getting that idea. AMD did look forward when it came to the design for Zen, and for the new GPU architecture that will follow, while Intel is still limited by tunnel vision.
  • HStewart - Thursday, May 9, 2019 - link

    I personally have no relationship with Intel; actually you would think that I would not like them. In the early 90's I was invited to Intel for development and at the time they did not need an Assembly Language developer; they wanted C++ at the time, which I am very proficient in now. I did meet one of the designers of the P5 (Pentium) and I asked if they could virtualize the 386, and at that time they could not, but they can now. I personally did not like the area. I was writing a DPMI driver for an OS at the time. It never happened, but just because I prefer and respect Intel products does not call me a fanboy - just to let you know, I have 30 years in computers and am not just a kid playing games.

    One thing for sure is that Sunny Cove is just as much a new architexture as AMD's Zen is compared to older versions of AMD CPUs. Sunny Cove is not Skylake version 2 - please look at the diagrams; new units and cache units are part of its design.

    It will be soon time for AMD to play catchup - Sunny Cove is going kick AMD off the map. I expect that and time will show it will do so.
  • Korguz - Thursday, May 9, 2019 - link

    " respect Intel products does not call me a fanboy " oh really?? then WHY do you ALWAYS praise intel, no matter what, but yet.. bash AMD ?? that is what a fan boy does.. you see intel as the god of the cpu, and amd as nothing.. how is sunny cove a new architecture ( notice the C ?? ) FYI.. zen, compared to AMDs prev architecture ( bulldozer ), is very different, so it is new.. while sunny cove.. as you have stated.. new units and cache units are part of its design, but not a complete redesign as Zen is over Bulldozer

    Sunny Cove is going kick AMD off the map. yea sure.. and you know this how ???
  • HStewart - Thursday, May 9, 2019 - link

    It is just a feeling about Sunny Cove, a technical hunch, and right now that is all we have to go by - it is because of the technical changes in the architexture. Yes, it is possible the design could mess up and have issues - but Intel has spent a lot on this and I believe by its design it is going to be quite a performer. But the important part is that, like everything else on the net, it is only a personal opinion. You have yours and I have mine - so live with it - time will tell.

    If you want to know why Sunny Cove is a new architexture, look at the design charts from the article provided earlier by AnandTech. To me updating the cache is not a big deal - but increasing the units from 8 to 10, and other changes so it handles more instructions per clock cycle, is an architexture change. Sunny Cove is vastly improved over Skylake.

    I don't know much or even care much about AMD stuff - only that, from what I know from results, Intel has faster per-core performance than AMD, and Intel indicates that Sunny Cove improves single core performance further. Also, AMD's answer to issues is just to add more cores, while Intel is improving the core design. It sounds from AMD fans like Zen is better than Bulldozer, but how much better on single core, I have no idea.
  • Korguz - Thursday, May 9, 2019 - link

    yes time will tell.. and my hunch.. it wont be as great as you seem to think it will be...

    " but increasing the units from 8 to 10, and other changes so it handles more instructions per clock cycle, is an architexture change " that is NOT an architeCture change ( yet again.. change the X to a C Hstewart ) that is just updating/tweaking an existing architeCture..
    " Also, AMD's answer to issues is just to add more cores, while Intel is improving the core design " and amd hasnt been doing the same thing with zen+ and zen 2 ??? zen IS better than bulldozer, amd went from not being able to compete with intel.. to being on par, or faster than intel depending on what was being run on the cpus....

    FYI.... you should care more about amd stuff.. if it wasnt for zen, we would STILL be stuck with quad core on the mainstream, and probably paying even MORE for intels cpus than we are now...
  • rrinker - Friday, May 10, 2019 - link

    But for what purpose? What, pray tell, MAINSTREAM use case really needs more than 4 cores? Enthusiast gaming is not mainstream. Massive parallel machine learning is not mainstream. Mainstream is Joe and Jane Smith browsing Facebook and sharing cat pictures at home, and working on some spreadsheets and word documents at work. In terms of performance, a computer from 5 years ago is more than adequate for that. MY work laptop has an I7, 2 core with hyperthreading. I regularly run multiple VMs plus the base system (it does have 32GB RAM at least) with no issues. So tell me again how I need 16 or 32 cores in a machine at this level?
    Even my newest machine at home, now 2 years old, I went with an I5 instead of I7, and didn't even install a discrete GPU of any sort. It's my hobby workbench computer. The I5 using Intel's not so awesome graphics handles EDA for my circuit design with no problem. It handles the 3D CAD program I use for designing model railroads just fine as well. I have several programming IDEs installed on it. I have electronic tools such as a USB based logic analyzer, that stuff all works fine. In fact, it probably would have been fine with an I3. I'd consider these sorts of applications outside of mainstream use, yet they all run perfectly fine with 4 cores. I haven't had to wait for any computer to catch up to me for years now. As in, the bottleneck in response time has not been the machine itself, always something external - internet speed, network speed, slow site.
    Hey, it's great marketing, why settle for 4 when you can have 16, but for MAINSTREAM use it doesn't really buy you anything other than bragging rights.
    For use cases where the huge core count actually DOES help - that's a completely different argument. Though it seems this is lost on many people. Maybe because everyone reading all the enthusiast web sites starts to think what they do IS mainstream. It's not. For every killer gaming rig that can outcompute NASA from 5 years ago, there are probably 1000 or more machines sitting in office cubicles that would probably be just as effective for the end user if they had cheap Celerons in them. THAT'S the mainstream.
  • Xyler94 - Friday, May 10, 2019 - link

    You fell for Intel's reasoning to keep giving us 4 cores on Mainstream for so long.

    What downside is there to an efficient 16 core processor on the mainstream high-end? If it does extremely well in gaming, it's just pure power. It's like saying you'd rather have a Mazda 3 than a Ford GT in a track race, because you're not a racing enthusiast.

    More power to us is never a bad thing, why are you trying to complain that AMD is trying to give us more for less money? That is the dumbest thing I can imagine someone saying. "Oh, I'm okay with what we got now, don't innovate and give us something more, please. It's not needed for my specific use case"
  • Targon - Friday, May 10, 2019 - link

    You really don't see it, but there are a LOT of processes that run in the background, and as time goes on, there are going to be more. You have multiple web pages open in different tabs, plus this and that, and suddenly, things feel sluggish when switching between running programs. Four cores is enough for NOW, but then again, dual-core should have been dead six years ago due to how much is going on behind the scenes.

    For those who build their own systems, we tend to be enthusiasts, not just those playing games, but those actually doing a LOT on their computers. Four cores feels sluggish to me at this point compared to 8 cores, and I'm not gaming on my computer all that much.
  • Targon - Friday, May 10, 2019 - link

    You don't care much about AMD stuff....considering that AMD will be using a more advanced fab process than Intel for at least the next three years, and that AMD is about to release a full line of chips that will beat Intel in IPC and "core for core", plus offer more cores on top of that SHOULD make you wake up to what is going on.

    All this talk about future hopes and dreams for Intel, when AMD will have the new generation going out in the next two months SHOULD make people stop waiting for Intel to actually follow through on their promises of 10nm.
  • AshlayW - Thursday, May 9, 2019 - link

    "It will be soon time for AMD to play catchup - Sunny Cove is going kick AMD off the map. I expect that and time will show it will do so."

    Oh dear. Sunny Cove is likely 5-10% higher IPC than Skylake, realistically. Zen2 is likely 5-10% higher IPC than Zen1, realistically. I'm being very modest to keep expectations in check. In reality: Sunny Cove will likely hold a similar IPC lead over Zen2 to the one Skylake holds over Zen1 right now.

    Did Skylake 'kick AMD off the map'? Nah. Zen1 and now Zen+ are really hurting Intel's bottom line. Intel has a small IPC advantage and similar perf/watt using a vastly better process (14nm+++ >>>> GF 12nmLP), and they're still losing market share. Unless Intel can learn to price its products fairly, and actually offer innovation in the desktop PC space, they're going to keep losing share, even with this fancy new architecture.

    Ryzen is outselling Intel Core by a fairly significant amount, and companies are going to make the switch to Rome in the coming months, as no one wants that 400W Intel turd.
  • Nimrael - Thursday, May 9, 2019 - link

    Zen 2's IPC improvement will mostly be in SIMD FP (the FPU)...
    while Sunny Cove will have its L1 caches increased. Before posting some fantard "analytics", better get familiar with the facts.
  • HStewart - Thursday, May 9, 2019 - link

    Keep in mind you are only seeing Ryzen on desktops and not in the mobile space. I would also say from personal experience that my 8th gen quad core i7 is about 20 to 50% faster than my 4th gen quad core,
    which is actually from before Skylake.

    I never mentioned Skylake knocking AMD off the map - but I believe Sunny Cove will give AMD some trouble.

    I'm not Intel, but if I were in their current situation, I would come back with a product that would never have its performance questioned again. And make this process discussion obsolete. Also make desktop processors a non-issue - meaning there is no difference between desktop and mobile CPUs, that a high end desktop CPU can be placed in a laptop without issue. If the 10nm power requirements are done right then it does not matter anymore.

    Just remember Intel has been hit hard by AMD and knows it, and don't be foolish enough to believe that when the new products come out it is going to be like the Cannon Lake product - this is a different product coming in June.
  • Korguz - Thursday, May 9, 2019 - link

    of course it is.. because of the minor improvements to IPC intel has done between the 4th gen and 8th gen... i can say the same between my 1st gen I7 930, and my current I7 5930k ...

    " meaning there is no difference between desktop and mobile CPUs, that a high end desktop CPU can be placed in a laptop without issue " sure there is.. you cant put a high end desktop cpu in a notebook and expect it to run the exact same.. desktop.. practically unlimited cooling.. where a notebook.. has limited cooling, how many watts each uses.. is also very different....
  • Lord of the Bored - Friday, May 10, 2019 - link

    "I'm not Intel, but if I were in their current situation, I would come back with a product that would never have its performance questioned again."

    What kind of product would be so good as to completely destroy forever all attempts at benchmarking? I mean, if going from P4 to Core didn't do it, nothing will.
  • Targon - Friday, May 10, 2019 - link

    Since Intel won't have 10nm desktop parts until 2021, and AMD won't have 7nm laptop chips with Zen2 cores until late this year or early 2020, everything remains to be seen how much of an effect Sunny Cove will have. After four years of Intel promising 10nm and not delivering, I wouldn't bet on when we will see any of these new Intel chips in actual products that people can buy. Intel may not have much of an improvement in clock speeds as well, because you can't trust ANYTHING Intel says at this point, it is all smoke and mirrors, and until something appears out of the smoke, that's all Intel has. We can't even count on power draw of shipping products being lower with Intel 10nm, even though it should be expected.
  • Manch - Thursday, May 9, 2019 - link

    It could be an article about an early-2000s flip phone and you would find a way to praise Intel and bash AMD. Just STFU already.
  • Arbie - Thursday, May 9, 2019 - link

    Calm down and be polite. HStewart is just stating what he sees in the designs. Calling everyone who does that a "fanboy" degrades discussion, and BTW doesn't reflect well on your own intelligence.
  • sa666666 - Thursday, May 9, 2019 - link

    You're obviously not familiar with HStewart and his pathetic ramblings on all things Intel. If not, well just realize that he is the local shill/troll that stirs up this crap whenever Intel is mentioned. In his world, Intel is the best thing since sliced bread, and can do no wrong. End of line.

    If you _are_ familiar with his trolling and still support him, then I will add that you're just as bad. Many people on this site are sick of his constant drivel, and the one thing that this site _really_ needs is a blocklist feature. Logging in and checking on new articles would be much more enjoyable if I knew I didn't have to see his constant incoherent, illogical ramblings.
  • HStewart - Thursday, May 9, 2019 - link

    Please don't get personal on this site - I would hate for this site to become like WCCFTech.
  • HStewart - Thursday, May 9, 2019 - link

    I think you are the one trolling on this site; do you have any interest in purchasing an Intel CPU in the future? Unless an AMD discussion brings up anti-Intel stuff, I at least stay out of it. Grow up.
  • Korguz - Thursday, May 9, 2019 - link

    " I at least stay out of it " there is a BS line right there.... HStewart.. you bash amd ANY chance you get, while praising intel any chance you get...
  • HStewart - Thursday, May 9, 2019 - link

    Also, I have a Samsung phone and a Samsung tablet - to say I am all Intel is foolish. To you, if you are not AMD then you are all Intel. I do have an AMD GPU in my Dell XPS 15 2-in-1. End of line.

    I would love a blocklist for discussions like this too - maybe an Intel-only section and an AMD-only section, but that would remove the pathetic hope of some people - expecting to bad-mouth Intel products to push AMD on people is just immature.
  • Korguz - Thursday, May 9, 2019 - link

    " expecting to bad mouth intel products to push AMD down people is just immature." again.. you are one to talk HStewart
  • Irata - Friday, May 10, 2019 - link

    Please do tell me how you'd buy an Intel based mobile phone.

    I know there are tablets with an Atom core (running Android and Windows), but saying "hey, my mobile phone (or lawn mower for that matter) does not use an Intel CPU, so this means I am not a fan" is pretty ridiculous.
  • Targon - Friday, May 10, 2019 - link

    You must have missed that Intel makes a lot of promises that don't come true. If and when Intel gets products out the door and into the hands of consumers, that's when we can properly compare the products and see if they live up to the promises.

    It isn't bashing the Intel products that is going on, it is bashing those who believe Intel whenever a roadmap gets put out there, because 10nm was on track in 2015 for a 2016 release! Think about it, 10nm is four years late, and every year, Intel has claimed that 10nm is on track.

    With that said, no one disputes that the 9900k is the top performing chip out there at the moment, but 10nm Intel has been reported as not performing as well as 14nm Intel in terms of clock speeds. As such, even if 10nm desktop chips were to come out this year, there is a good chance that they might not be competitive with the 9900k.
  • HStewart - Thursday, May 9, 2019 - link

    Here is a promise: if people like you stop bashing Intel with pro-AMD / anti-Intel comments in Intel articles, I will not promote Intel in AMD-related articles - the only exception being if someone bashes Intel in that article.

    Please discuss things logically about the relevant products.
  • Korguz - Thursday, May 9, 2019 - link

    only IF you do.. but you wont... cause you cant... case in point.. the post about amd and the cray super computer
  • Xyler94 - Friday, May 10, 2019 - link

    If Intel does something worth criticizing, then they deserve it. And they wholeheartedly deserve it for stagnating innovation since the Core series began. AMD pushed Intel to make the huge leap with the Core arch; if it wasn't for the Athlon X2, there'd be no amazing Core series.

    That's what competition breeds. Innovation. Why don't you praise AMD for their accomplishments also? Because that would make Intel look bad, right?
  • HStewart - Thursday, May 9, 2019 - link

    I completely understand; I wish people would leave AMD out of Intel discussions. There is no question that Intel has made mistakes with the 10nm process - but that is not causing them to go down - Intel is evolving from a primarily desktop market to a more mobile market. AMD is actually good for Intel, because it keeps them on their toes and moving forward.

    What would be ideal is to think about what this means for the future. It obviously means that Intel knows they have been hit hard by the 10nm failures of Cannon Lake and also by Spectre/Meltdown - but did they fold their hands and just cry? No, instead they hired new people, got rid of obsolete labs and corrected their mistakes.

    I have 30 years of experience in computers and know where Intel comes from. I would just be upset if some company made a CPU that was ARM compatible without ARM's permission. Did Apple do that? Not sure, maybe.
  • Manch - Thursday, May 9, 2019 - link

    So Arbie and HStewart are the same person. Got it
  • Manch - Thursday, May 9, 2019 - link

    You promote Intel regardless. I only have one PC with an AMD CPU, but good lord you're full of shit. If you had 30 years of experience as you say, then you'd know x86 has been a shared effort since the 70s and a majority of the x64 base is AMD's work, not Intel's. People don't like you or your comments because you're blatantly dishonest. Most of us like competition between the two because it pushes both to put out a better product. You, in the meantime, shill for Intel regardless of the facts and spam the forums. Just STFU and go away.
  • sa666666 - Friday, May 10, 2019 - link

    Exactly. I don't think most people care about which is better _at the moment_, Intel or AMD. I have systems based on both architectures, and will continue buying both on their merits. Right now AMD is better in quite a few areas. Intel has been better in the past, and will likely recover and come back again.

    To be clear, this isn't an Intel vs. AMD debate. It's a "we're all sick of the dishonesty and bullshit coming out of your mouth" debate. It is all about you, HStewart; you are annoying, and I believe intentionally so (aka, a troll).
  • Korguz - Thursday, May 9, 2019 - link

    " I have 30 years experience in computers " but yet.. you cant spell architecture correctly....
  • Lord of the Bored - Friday, May 10, 2019 - link

    "I would just be upset if someone company wrote a cpu that was ARM compatible without ARM's permutation. Did Apple do that - not sure maybe."

    But AMD has Intel's permission, and has had it going back to the original 8086. And AMD was even nice enough to give Intel permission to copy THEIR processors too. Adopting AMD64 doubtless really hurt Intel, but it was a strong sign of how dire the straits they'd found themselves in were.

    On the other hand, Intel is about the only company on Earth that DIDN'T clone AMD's very popular AM2900 series of parts. So that's a nice thing.

    And on the third hand, if Intel hadn't licensed the 8086 to someone for second-sourcing, we wouldn't care about that device's descendants, because the IBM 5150 would have used something from TI's 9900 series or Motorola's then-new 68000 instead. And both of those were actually better architectures than the 8086. So perhaps there's some fairness in being mad, because AMD cheated us out of a TMS9900-based IBM PC.
  • Targon - Friday, May 10, 2019 - link

    Don't forget Zilog, who was huge back in the 1970s into early 1980s. Z80 assembly was actually very elegant compared to 6502 or even 65816.
  • Lord of the Bored - Saturday, May 11, 2019 - link

    I thought 16-bit was one of the goalposts for the 5150 design team, so they weren't going to use an 8-bit microprocessor.
  • Jorgp2 - Wednesday, May 8, 2019 - link

    2x the GPU performance seems underwhelming.
  • FreckledTrout - Wednesday, May 8, 2019 - link

    Intel will have 7nm in 2021, which is going to be similar to TSMC's 5nm - which is already in risk production? Crazy to see Intel behind in process tech. Is Intel's 7nm process going to use GAAFET or will it still be FinFET? I do wonder if Intel is going to hit similar issues scaling up frequencies with 7nm FinFET as they did with 10nm using quad patterning.
  • Yaldabaoth - Wednesday, May 8, 2019 - link

    "Learnings"
    Really, Intel?
  • SaturnusDK - Wednesday, May 8, 2019 - link

    So Intel 7nm will compete against TSMC 5nm+ and Intel 7nm+ (or 7nm++) will compete against TSMC 3nm.

    Doesn't look like Intel is going to catch up to the competition any time soon, if at all.
  • Opencg - Wednesday, May 8, 2019 - link

    In terms of performance/price it probably won't make a huge difference outside of power-limited and poorly cooled mobile products. Look at Nvidia RTX vs Radeon VII: they offer roughly the same performance on different nodes at the same consumer price.
  • Irata - Friday, May 10, 2019 - link

    The thing is that Intel has so far always been a node (or even two) ahead of the competition - even now. This gave them lower power consumption, higher clocks and more room for additional features on the die.

    This will only change once AMD releases Ryzen 3000 on 7nm.
  • HStewart - Wednesday, May 8, 2019 - link

    Maybe some people are too young to remember the frequency wars of the Pentium 4 vs AMD days. The same nm from two different foundries may mean entirely different performance.
  • Korguz - Thursday, May 9, 2019 - link

    yes.. and guess what HStewart... intel LOST that MHz war .... didn't they ??
  • Opencg - Thursday, May 9, 2019 - link

    Did they though? I mean it's kind of a stupid thing to talk about in the first place. Pentium 4-era Intel was known for winning the MHz war but having much worse "IPC" than AMD. So Korguz, I guess that makes you wrong, and then wrong again for even bringing it up. lol
  • Korguz - Thursday, May 9, 2019 - link

    Opencg.. actually.. AMD did win the mhz war.. they were the 1st to the 1 ghz mark... while intel still had the better IPC so maybe we are thinking different wars ? hehehehe
  • Opencg - Thursday, May 9, 2019 - link

    We're talking about Pentium 4, bro, as stated in the comment you replied to. Reading comprehension much, bro?
  • Korguz - Thursday, May 9, 2019 - link

    heh.. yep different wars... oops... and different eras.... my mistake..
  • Lord of the Bored - Friday, May 10, 2019 - link

    P4 had higher clocks than A64 early on. But they rapidly ran into thermal issues that placed hard caps on how fast they could go as the A64 caught up and ran right past them. 'S why the P4 stopped being sold by clock rate and started being sold by model # instead.

    The long pipeline, and the consequent devastating performance hit they took every time the branch prediction circuit guessed wrong, didn't help matters, though.
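
    Back-of-envelope (the stage counts and rates below are illustrative assumptions, not measured P4/A64 figures), here's roughly how a long pipeline turns mispredictions into lost throughput:

        # Rough effective-CPI estimate: base CPI plus branch-misprediction stalls.
        # All numbers are illustrative assumptions, not measured data.
        def effective_cpi(base_cpi, branch_fraction, mispredict_rate, penalty_cycles):
            # CPI = base CPI + (branches/instruction) * (mispredict rate) * (flush penalty)
            return base_cpi + branch_fraction * mispredict_rate * penalty_cycles

        short_pipe = effective_cpi(1.0, 0.20, 0.05, 12)   # assume a ~12-cycle flush
        long_pipe  = effective_cpi(1.0, 0.20, 0.05, 25)   # assume a ~25-cycle flush

        print(f"short pipeline: {short_pipe:.2f} cycles/instruction")   # ~1.12
        print(f"long pipeline:  {long_pipe:.2f} cycles/instruction")    # ~1.25

    Double the flush penalty and the same mispredict rate costs roughly twice as many stalled cycles - which is why the deep NetBurst pipeline leaned so hard on its branch predictor.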
  • HStewart - Thursday, May 9, 2019 - link

    But Intel learned from that and came up with the Core i series, which dominated CPUs and almost made AMD go out of business. Intel has also learned from its mistakes with 10nm, and time will shortly show how much.
  • Korguz - Thursday, May 9, 2019 - link

    and if amd went out of business.. guess what.. we, as consumers.. would ALL lose...

    time will tell IF intel has learned anything... so we shall see....
  • Irata - Thursday, May 9, 2019 - link

    The reason why they almost made AMD go out of business had little to do with technical merit. They managed to keep AMD out of the market long enough using dirty / illegal tactics until they could catch up on the technology front.
  • arashi - Thursday, May 9, 2019 - link

    All Intel learnt is that bribes go a long way to market dominance and crippling your competitors.
  • HStewart - Thursday, May 9, 2019 - link

    There is nothing keeping customers from another computer - OEMs trust Intel - that is why AMD is mostly desktop. AMD fans beg OEMs to make laptops with AMD, but most AMD buyers build their own desktops and hope other people will buy AMD instead of Intel. OEMs understand this because they have been burned in the past.
  • Korguz - Thursday, May 9, 2019 - link

    yes there is... intel bribed, gave kickbacks, and threatened various companies NOT to use amd's products.. that's why amd almost went out of business.. NOT for the reasons you state ... you don't remember intel having to pay amd.. what was it.. 1.25 billion a few years ago in the antitrust case amd filed against them ???
  • Irata - Friday, May 10, 2019 - link

    Forget the antitrust cases worldwide that Intel lost? In the P4 days, they paid both OEMs and large retailers to *not* offer any AMD-based products. This, combined with thinly veiled threats that OEMs might find themselves on the short end of the stick when it comes to Intel CPU supplies if they did not stick to Intel.
  • arashi - Saturday, May 11, 2019 - link

    I see 30 years in Intel marketing means gaining the ability to have selective amnesia and terrible writing skills since the money did the actual talking.
  • Xyler94 - Thursday, May 9, 2019 - link

    I don't think you remember, but it took until the Core series to actually beat AMD in technical terms. Intel bribed manufacturers to not use AMD parts, not because Intel was superior, but because it wasn't. Dell was famously paid off to not use Opteron CPUs for their datacenter servers, and it nearly caused them to lose all their customers because people wanted Opteron. And when Dell released Opteron CPUs, Intel cut their bribe money.

    Intel only held its market lead due to shady tactics, not because of "domination". AMD was first with a compatible x86-64 instruction set, a 64-bit extension able to run 32-bit code as well. And AMD was first to true multi-core CPUs with the Athlon 64 X2. There's a reason why to this day the 64-bit extension is still referred to as "AMD64".

    You really are clueless about the actuality behind Intel. I love Intel processors, but the thought of an affordable 12 or 16 core CPU is why I may go AMD for my next CPU.
  • Irata - Thursday, May 9, 2019 - link

    They did for both the P3 (frequency) and the P4 (performance). They still won the battle, but that did not have much to do with technical merit.

    Still, a very shrewd business decision as the fines they had to pay were nothing compared to the financial gains they made by staying the dominant player.

    Long term though, AMD having to sell their fabs turns out to have been a blessing for them.
  • Targon - Friday, May 10, 2019 - link

    Yea, incompetence by government regulators allowed that to happen. Intel should have been FINED by the government to the tune of $20 billion, not counting what they paid to AMD in the settlement.
  • Targon - Friday, May 10, 2019 - link

    You mean the original Athlon vs. Pentium 3? Pentium 4 1.5GHz being roughly the same performance as a Pentium 3 running at 1.1GHz.
  • Notmyusualid - Thursday, May 9, 2019 - link

    @ Saturnus - meanwhile 9900K is the best CPU on the planet.

    Does that SOUND like 'catching up' to you? Honestly?
  • peevee - Thursday, May 9, 2019 - link

    Is it though? For the price of a 9900K you can buy a 12-core Threadripper where SMT adds around +50% (vs HT adding almost nothing on Intel), giving roughly 50% more MT performance and twice the memory throughput.
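
    Quick toy math, plugging in those uplifts as assumptions (none of these numbers are benchmark results, and whether they hold in real workloads is a separate question):

        # Toy MT-throughput comparison under the assumed scaling factors above.
        def mt_throughput(cores, per_core_perf, smt_uplift):
            # relative throughput = cores * per-core perf * (1 + SMT/HT uplift)
            return cores * per_core_perf * (1 + smt_uplift)

        eight_core  = mt_throughput(8,  1.10, 0.00)   # assume ~10% higher per-core perf, ~no HT gain
        twelve_core = mt_throughput(12, 1.00, 0.50)   # assume +50% from SMT

        print(f"12-core vs 8-core MT: {twelve_core / eight_core:.2f}x")   # ~2.05x with these inputs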
  • Notmyusualid - Thursday, May 9, 2019 - link

    Price doesn't concern me *that* much.

    9900K in my laptop is running 5.2GHz, and rocks my socks. Stable as Fort Perch Rock.

    If I can buy TR in a laptop and run 5.2GHz, I'll buy it. No fanboism here.
  • Notmyusualid - Thursday, May 9, 2019 - link

    ...in addition, according to these guys:

    https://www.notebookcheck.net/Review-AMD-Ryzen-Thr...

    I beat out that 12-core CPU on Cinebench R15 (assuming you mean 2nd gen too) by +16% Single Core, yet lose by 29% multicore (their scores).

    Whilst I am a BIG fan of multicore CPUs (incidentally I just got rid of a Xeon 14c/28t 2960 v4), unless you have both the IPC *AND* the GHz, you just can't feed applications / GPUs as much as some require. My buddy's 6700K SPANKED my 14 cores in gaming. Even with him on a 1080 and me on a 1080 Ti.

    And I haven't even begun to play with limits yet, just clicked the default 'Level 2 O/C', and I also beat out their 'Desktop' results too.
  • Irata - Thursday, May 9, 2019 - link

    Considering the high amounts of power (and especially cooling) the i9-9900K needs at those frequencies, I very seriously doubt it is running @ 5.2 GHz in a laptop.
  • Notmyusualid - Friday, May 10, 2019 - link

    @ Irata - Ha! I'm *almost* happy you said that!

    Sir, I have no need to lie to you.

    Quote: "...fire resistant, liquid-crystal polymer fan is built with 0.2mm blades, sleeve bearings and 3-phase fan control to create less friction and circulate air more efficiently. The Area-51m fans occupy an area of 95x105mm with a thickness ranging from 19mm to 21.5mm and can push over 25 CFM in open air conditions—something normally seen only in desktops." https://www.dell.com/en-us/shop/cty/pdp/spd/alienw...

    IF this performance seems unusual to you, then it seems that for once I won the 'silicon lottery'.

    Finally, my SPECviewperf 13 results from last night (at 1080p resolution):

    3dsmax-06 162.8
    catia-05 111.72
    creo-02 181.04
    energy-02 20.46
    maya-05 260.01
    medical-02 53.04
    showcase-02 107.57
    snx-03 15.69
    sw-04 84.09

    Not bad for a laptop, AND only RTX2070M, not 2080 - didn't need it, just needed more CPU in this purchase.

    Finally - as for power, 1x 330W & 1x 280W PSUs are required. Not convenient, I would agree.
  • Korguz - Friday, May 10, 2019 - link

    post screen shots of all this.. as typing them.. you could type anything... also.. a screen of CPUZ would prove it...
  • Notmyusualid - Saturday, May 11, 2019 - link

    @ Korguz - no idea how to post a screenshot in this awful forum format, but I can give you this CPU-Z window taken just now:

    https://valid.x86.fr/7vbwpj
  • Targon - Friday, May 10, 2019 - link

    Look at the timeline: Ryzen 7 first showed up in March of 2017, and even with immature OS support, Ryzen 7 beat the 7700K (the top end consumer chip at the time) in six out of eight areas. It took until 2018, with the release of the 9900K, for Intel to actually regain a full leadership position in terms of performance.

    With Ryzen third generation being officially unveiled on May 27th, with specs and release dates, the dominance of the 9900k is expected to end. Exact performance in the real world, where independent benchmarks and tests can be performed will follow that release, but then, Intel isn't expected to regain ANY superiority on the desktop for at least two to three years.

    Intel will still have the best laptop chip through 2019 due to AMD not having 7nm laptop chips set for this year, but then, once AMD has 7nm laptop chips out, will Intel have any area of superiority?
  • DannyH246 - Wednesday, May 8, 2019 - link

    Yawn....nothing to see here. Just hot air and marketing crap.

    It’s so funny!!! Intel now seem to release a new “innovation” presentation every month. Lol

    They will say anything to try and take attention from AMD. They look and sound desperate.

    Sad.
  • imaheadcase - Wednesday, May 8, 2019 - link

    You mean a roadmap that is normal? Or do you mean roadmaps that AMD failed to follow as well?
  • Targon - Friday, May 10, 2019 - link

    Intel is behind by four years for 10nm, so it makes sense to doubt any Intel roadmap until we see Intel being fairly close to releasing products CLOSE to when they were expected. A schedule that slips by 3-4 months is fairly normal when it comes to engineering of new products; a schedule that slips by 3-4 years is fairly horrendous. Btw, over the years when 10nm just wasn't showing up, where were the improvements to IPC that would show the core design was actually being worked on?
  • Tkan215 - Wednesday, May 8, 2019 - link

    Intel still believes it can deceive. AMD will be aggressive, like they mentioned. They have known Intel would come back ever since Zen 1.
  • HStewart - Wednesday, May 8, 2019 - link

    The article mentions 10nm shipping in June. Not too far away to find out.
  • Xyler94 - Thursday, May 9, 2019 - link

    In what flavour though?

    A slow laptop CPU or high-performance Core i7/i9?
  • HStewart - Thursday, May 9, 2019 - link

    My guess: a high-performance, new-architecture mobile system that also has long battery life - i7 quad core and likely higher. Plus a new breed of Lakefield device - extreme portability, long battery life and single-core speed that is high instead of slow.

    Keep in mind process is not everything; I believe the big surprise will be Sunny Cove, which also means all the HW mitigations are done as well. Not to mention integrated Gen11 graphics at Nvidia GT 1030 speeds.
  • HStewart - Thursday, May 9, 2019 - link

    Desktop supposedly comes 6 months later, which lines up with the beginning of 2020, but who knows, Intel may have yet another surprise in the works.
  • Xyler94 - Friday, May 10, 2019 - link

    So you're guessing based on what exactly? The last time Intel "released" 10nm, it was a super low-powered chip in a single laptop meant for China. I really don't share your enthusiasm over Sunny Cove, because of the Haswell to Skylake era: Skylake was not that huge of an improvement over Haswell, even though it was promised to be. Sure, it was more efficient and overclocked a bit better, but in terms of gaming it didn't help much more.

    Sunny Cove cores will be like Skylake cores, a small improvement. I don't expect great things from it... But I'd love to be proven wrong.
  • arashi - Saturday, May 11, 2019 - link

    Based off his 30 years in Intel marketing.
  • aryonoco - Thursday, May 9, 2019 - link

    So, ignoring the nonsensical nm numbers by everyone, based on their own admission Intel is at least 2 years behind TSMC in using EUV?

    This is crazy. When FinFETs came around, Intel had a solid 2-3 year head start over everyone else in the industry. Now they are 2-3 years behind? What happened?!

    This really is worse than the NetBurst debacle, and it's also much more costly and time consuming to rectify.
  • edzieba - Thursday, May 9, 2019 - link

    "Now they are 2-3 years behind? What happened?!"

    Same as EUV for the past decade: cost continues to increase to implement, DUV continues to provide more value for the same output (i.e. every year EUV is pushed back is a year of SAQP improvements). Samsung, TSMC, and GloFo (until they dropped out entirely) have all progressively pushed back EUV implementation again and again, and even TSMC have stepped back from all-EUV to "we'll do it in at least one layer I guess" for a future process.
  • HStewart - Thursday, May 9, 2019 - link

    Nothing like NetBurst, except possibly in saying what is coming next, which is Sunny Cove in June.
  • Nimrael - Thursday, May 9, 2019 - link

    First of all, Zen 2 will NOT use any EUV process - only the 193nm DUV CLN7FF.
    Even the 13.5nm EUV CLN7FF+ uses only 4 (four) of 14 layers in EUV; CLN6FF, up to 5 of 14.
    Intel's 10nm DUV process is approximately equal to Zen 2's CLN7FF.
  • peevee - Thursday, May 9, 2019 - link

    "This really is worse than the NetBurst debacle"

    Not much of a debacle; the P4 sold better than Athlons, and nearly 4GHz 15 years ago was very impressive. On 90nm no less. Still ~real nm back then.
  • YoloPascual - Thursday, May 9, 2019 - link

    You said in 2012 that we would have 10nm in 2015. Not gonna believe you anymore.
  • crotach - Thursday, May 9, 2019 - link

    It's missing 14+++, 14++++ and 14+++++ on the roadmap!
  • Nimrael - Thursday, May 9, 2019 - link

    You're missing your brain too...
  • HStewart - Thursday, May 9, 2019 - link

    The 14nm series is going away with this stuff. But Intel's packaging allows it to be used for IO and such.
  • Irata - Thursday, May 9, 2019 - link

    I think what many are forgetting is that, regardless of which process comes out when, one of Intel's big advantages was being ahead as far as process is concerned, sometimes two nodes ahead of the competition. This gave them a lot of headroom over the competition as far as power consumption, clock speed and (space for) features were concerned.

    Even right now, they are still ahead of AMD process wise, but when AMD moves to TSMC 7nm with their main CPU, this is the first time in a long time that Intel is behind.

    Even if / when they catch up, they will be going up against the competition using comparable processes.
  • shompa - Thursday, May 9, 2019 - link

    One of the main reasons Intel will have more chips available is that Apple will start to switch to ARM. That frees up almost 10% of Intel's capacity.
    And the real reason for Intel's CPU constraints over the last year is that 10nm has not come online and there was no more 14nm fab space. To "compete", Intel doubled the cores on mainstream chips/servers, meaning they can produce 50% fewer chips. It's that easy, but fanboys think that Intel somehow sells more chips even if fewer x86 chips overall are sold. Add to that AMD, which will also take 20% market share from Intel.
  • RSAUser - Thursday, May 9, 2019 - link

    I doubt Apple will "switch to ARM" any time soon. They might for the next generation or two add an ARM chip for low performance stuff (e.g. the touch bar they did it for), but they can't make it exclusive yet.
  • peevee - Thursday, May 9, 2019 - link

    They can. Their high-performance cores in the A12 are very impressive and almost a year old; for a laptop/desktop chip they can have more of them, clocked higher - and a next-gen design.

    All they really need is a good x64-to-ARMv8.2 (8.3 for the next core?) binary compiler, which is much, much harder than their PowerPC-to-x86 transition, as modern x64 with AVX2 is a far more complex instruction set. I bet they have secretly been working on it for a few years now.
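
    To illustrate the idea only - this is a toy, not how Apple or anyone else actually does it, and the opcode/register mappings are arbitrary assumptions:

        # Toy static binary-translation sketch: map a few x86-64-style instructions
        # onto ARMv8-style equivalents. The register mapping (rax->x0, rbx->x1) is
        # arbitrary; real translators must also handle flags, the memory model,
        # SIMD, self-modifying code and much more.
        X86_TO_ARM = {
            ("mov", "rax", "rbx"): "mov x0, x1",
            ("add", "rax", "rbx"): "add x0, x0, x1",
            ("ret",):              "ret",
        }

        def translate(block):
            # Translate a list of already-decoded instructions, or fail loudly.
            out = []
            for insn in block:
                if insn not in X86_TO_ARM:
                    raise NotImplementedError(f"no mapping for {insn}")
                out.append(X86_TO_ARM[insn])
            return out

        print(translate([("mov", "rax", "rbx"), ("add", "rax", "rbx"), ("ret",)]))
        # -> ['mov x0, x1', 'add x0, x0, x1', 'ret']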
  • Targon - Friday, May 10, 2019 - link

    You give up software compatibility in the switch from Intel to ARM based chips. If Apple were to do that, those who actually use their computers for more than web browsing and e-mail would drop Apple at that point. It was a huge issue when Apple made the jump to Intel, and when Apple switched from MacOS 9 to MacOS X due to software incompatibility.
  • peevee - Wednesday, May 22, 2019 - link

    "You give up software compatibility in the switch from Intel to ARM based chips."

    Nope. Read what I wrote. Read up on their transition from Moto 680xx to PowerPC, from PowerPC to x86. Same thing applies here, just a good binary code compiler.
  • Daeros - Thursday, May 9, 2019 - link

    Why not? They have the performance already from an in-house design:

    https://www.tomsguide.com/us/new-ipad-pro-benchmar...
  • HStewart - Thursday, May 9, 2019 - link

    Apple going ARM on the Mac would be a serious mistake; until we see development tools for iOS on the iPad Pro, we will never have this happen. Plus people still have x86-based apps on the Mac, and also dual boot with Windows or run Fusion.

    It will be a while before Apple can do this.

    That sounds like an AMD fanboy dream, taking 20% of the market from Intel. Maybe on the dying desktop.
  • Lord of the Bored - Friday, May 10, 2019 - link

    Apple's navigated architecture changes before. They shouldn't have been able to change over from 68x00 to PowerPC by your logic, but they did it. Or from PowerPC to 80x86, for that matter.
  • Targon - Friday, May 10, 2019 - link

    Steve Jobs....Tim Cook is NOT a visionary, and does not inspire confidence. Also, it's been 18 years since the move to OS X on the Mac, do you really expect that Apple users would be able to cope with their existing programs not working on a new generation of Apple products?
  • Lord of the Bored - Saturday, May 11, 2019 - link

    Jobs wasn't the guy in the trenches making things work.
    And that is sort of the thing: Apple's done this twice WITHOUT breaking software.
  • Xyler94 - Friday, May 10, 2019 - link

    How the heck did you manage to bring AMD in a discussion about Apple's ARM cores?
  • peevee - Wednesday, May 22, 2019 - link

    "Apple going ARM on Mac would be serious mistake, until we see development tools for iOS on iPad Pro "

    What are you blabbing about? Who the hell needs to develop on iOS?

    Replacing Intel with their own cores is just a binary x64-to-ARMv8 compiler away. On macOS. They did it twice already: Moto to PowerPC, PowerPC to x86.
  • saayeee - Thursday, May 9, 2019 - link

    What are the 1272, 1274, 1276 in the charts?!
  • peevee - Thursday, May 9, 2019 - link

    Intel process codes.
  • vortmax2 - Thursday, May 9, 2019 - link

    "Murthy made it clear that Intel wants to introduce a Moore's Law-like gain at the beginning of a new process."

    Shouldn't they be calling it Moore's Theory now?
  • Lord of the Bored - Friday, May 10, 2019 - link

    Moore's debunked hypothesis.
  • HStewart - Thursday, May 9, 2019 - link

    This article is wonderful news, and it is going to be interesting to find out what actually comes in June. I was expecting some article from AMD to try to steal the thunder from Intel, which has typically happened in the past.

    I am sure Intel is working very hard to make this happen, given all the complaints about the 10nm / Cannon Lake process and also the more important and more visible issue of the Spectre/Meltdown fixes. Sites like this have a lot of desktop gamers, and the 10nm / 7nm stuff is important here, but to the average user it is not important.

    Personally, I was thinking of getting an iPad with a cellular connection, but then I thought about how I'd use an iPad, and for me Windows applications are of major importance - so I thought of a Surface Go, and then I thought about performance - so for me the ideal next computer would be a Lakefield-based tablet with a cellular connection. That would be the best of both worlds, with performance for single-threaded apps and low power for portability.
  • sa666666 - Thursday, May 9, 2019 - link

    Hmm, strange how the now 'unimportant' things are ones that Intel is no longer dominant at. You didn't have that opinion when Intel was at the top in those areas.

    Your bias is really showing here; just admit it.
  • HStewart - Thursday, May 9, 2019 - link

    I guess adding more ALUs and a store unit to the architecture is no longer important. Everyone has an opinion, but concentrating on a minor part of the industry such as the desktop does not seem important. Have you been to Best Buy lately and seen the number of laptops vs desktops? 10 years ago there was at least a full row of them, but not any more.

    You can't consider the parts that one purchases from Amazon, Newegg or other places as representing the entire industry.

    It does not matter whether it is Intel or AMD; just go to Best Buy and compare laptops vs desktops. That sounds like the more important area to focus on to me. You may be right - I'm not sure - that AMD might dominate in desktops at this point in time, but how important is that really in an industry where the majority of computers are now mobile?

    The following link shows an example of the statistics. Desktops have dropped from 157 million to 88 million between 2010 and 2019 (2018-2023 estimates on this chart), and 2023 is estimated at 79.5 million, which is about half, while laptops have gone from 201 million to 171 million and tablets have gone from 19 million to 136 million. From the chart and information, I believe it is PC-based.

    www.statista.com/statistics/272595/global-shipments-forecast-for-tablets-laptops-and-desktop-pcs/

    Not sure how a 2-in-1 like my Dell XPS 15 2-in-1 is accounted for in this.
  • Irata - Thursday, May 9, 2019 - link

    " I was expecting some article from AMD to try to steal the thunder from Intel which typically happens in the past."

    Seriously - as I stated earlier, being a fan is not an issue, but having an extremely warped view of reality is.

    What happened whenever AMD announced a new product (or was about to)? Intel demoed or announced a super awesome, much faster product.

    Remember Computex 2018, when AMD was to present Threadripper 2 (an actual product)? Intel demoed their "upcoming" 28-core 5 GHz CPU... except that it was never released anywhere near those specs - it came much later, in very limited quantity, at high prices, needing special mainboards...
  • silverblue - Friday, May 10, 2019 - link

    ...and needing a special water chiller if you dared to overclock it anywhere near the 5GHz shown in the presentation. 28 cores at 5GHz is one thing; 2.3kW power consumption is quite another.
  • DannyH246 - Wednesday, May 15, 2019 - link

    Anandtech helpfully publishes pages and pages about all the new things Intel will be bringing to market in the next 3-5 years, but NOTHING about the latest round of security flaws, e.g. ZombieLoad.

    How about no thanks Intel. Not interested. Fix your existing crap before telling us about all the new stuff you’re working on.
  • HardwareDufus - Thursday, May 9, 2019 - link

    this is all just a PR spoiler because AMD is going to start shipping Zen3 soon.... It will be ready for back2school.
  • SaturnusDK - Friday, May 10, 2019 - link

    Zen2 or Ryzen 3rd generation but yes.
  • RealBeast - Thursday, May 9, 2019 - link

    And we'll see full HW security mitigations finally in the 7++?
  • HStewart - Thursday, May 9, 2019 - link

    Hardware mitigations are not related to process - the Sunny Cove architecture will have hardware mitigations in it - please stop spreading these lies. This is not WCCFTech.
  • silverblue - Friday, May 10, 2019 - link

    I'm not sure that's what RealBeast is referring to, rather how long it's taking to build them in.

    If this was WCCFTech, you'd have been memed off the face of the earth by now.
  • arashi - Saturday, May 11, 2019 - link

    This is not an Intel marketer's forum either. Please stop spreading lies here.
  • Manch - Thursday, May 9, 2019 - link

    Anandtech, y'all need an edit button and some forum moderation, because good lord, the spam and shills are too much.
  • zodiacfml - Friday, May 10, 2019 - link

    They are capable of 7nm in 2021 but could easily delay it if their products continue to be popular at 10nm.
  • MDD1963 - Tuesday, May 21, 2019 - link

    After being 2-3 years late on 10 nm, and still not yet producing a single desktop on that process....Intel has the audacity to discuss steps beyond that? <sigh!> ....uh, ok.....
