Thank you for your great review. Looks like we should not expect much from these new 10nm CPUs for consumers in the near future; maybe in Q1 2020 with a 10nm++ generation. 2019 is going to be in AMD's favor!
A 12- or 16-core Ryzen with a 13% IPC increase, at power equivalent to the i9-9900K, is not going to go well for Intel. Seems like they'll be able to compete with the AMD processors of 2019 around late 2020 at the earliest.
Take a look at the SPEC 2006 benchmarks and make the comparison to the A76 (Snapdragon 855): it beats this Intel SKU (@2.2 GHz) in most cases with only half the power used. Once SVE arrives alongside NEON SIMD, CISC is doomed.
Unfortunately we don't know how AMD's new CPUs perform; we have only cherry-picked results, nothing more. We know even less about power consumption. Are we certain AMD's 7nm cores will be winners over the 12nm ones? AMD is unhappy about clock speed, for example, so the IPC advantage will likely vanish. IMO AMD is painting too bright a future to be trusted. TSMC's process is not perfect at all; otherwise Nvidia would be on it right now.
Lying about future products is grounds for lawsuits from shareholders (and possibly criminal charges in many places), so that's quite unlikely. We do have one indication of power draw from Zen 2, from the live Cinebench demo where an 8-core Zen 2 chip matched the 9900K's score at ~50W lower power. Of course we don't know how clocks will scale, nor the clock speed that test was run at, and it's relatively well established that Cinebench is a workload where AMD does well. Still, TSMC 7nm is proven good at this point, with several large-scale SKUs shipping on it (Apple A12, A12X, among others). Even if these are all mobile low-power chips, they're very high performance _and_ low power, which ought to fit Zen 2 well. Also, matching the 9900K's Cinebench score means that either IPC has improved massively, SMT scaling on Zen 2 is ~100%, or clocks are quite high. Likely it's a mix of all three, but they wouldn't reach that score without pretty decent clocks.
Ignoring any Zen IPC improvement whatsoever, process improvements alone this year would make them competitive with Intel going forward. All they need to do is ramp up the clock frequency a bit without a TDP penalty and they have an automatic win...
Intel Ice Lake for performance laptops should be out by Christmas 2019. Then we will see if there are any IPC improvements in this new architecture. Probably not much...
I think that Intel needs 10nm for data centers, for higher core counts and profit, and their production focus will be on that area rather than consumer desktop PCs. I don't see a 10nm competitor to the 9700K/9900K until 2020.
Sunny Cove and Willow Cove are intermediate designs until the release of Ocean Cove, the "brand new" CPU architecture Jim Keller was hired to lead the design of. Since Ocean Cove has not yet appeared in Intel's schedule, it either means that it will not be ready before at least 2022 or that Intel is just being secretive.
Or it might just be Golden Cove. Since Golden Cove will apparently be Intel's next new design, if it is not actually Ocean Cove, then Ocean Cove will not be released until 2023 at the earliest (at 7nm). That's because Intel has never released two new designs one after the other without an optimization in between. It's also possible that Intel will just "pull a Skylake" and, rather than use a new design for Golden Cove, simply re-optimize. In that case Ocean Cove should be released in 2022, right after Golden Cove.
So far, quantum is looking like a dead end. Maybe for specialized coprocessors in cryo environments in 10 years, but not for general-purpose computing AT ALL.
There are much better, actually realistic directions for general-purpose computing on non-Von Neumann architectures, and that is where the future lies now that Moore's law is firmly dead and buried.
There is no release information about desktop Ice Lake. But I would not be surprised to see Ice Lake on the desktop by then. It is going to be fun to compare new laptops and even desktops at that time.
But keep in mind that, to Intel, the desktop market is a minor market, and once performance is back up, I would not be surprised if we see little difference between desktop and mobile chips.
We don't know how well Ice Lake / Sunny Cove will perform, but no matter how well it performs, AMD will still have a market lead of 6 to 7 months (assuming a release of Zen 2 based Ryzen CPUs in May or June and an Intel HVM release of Sunny Cove in December). This assumes that Intel does not screw up again and move back the launch of Sunny Cove into 2H 2020, which would be frankly catastrophic, at least for their client wing. Their 14nm process has been milked dry; they can no longer extract any more performance from it.
"This is an M.2 module, which means it could be upgraded at a later date fairly easily."
No, you can't. Lenovo only allows Wi-Fi/Bluetooth cards with their custom firmware in their systems. If you boot the system with a standard (say, Intel) Wi-Fi card, it refuses to boot.
That's the reason I stopped buying Lenovo laptops, despite liking their build and design.
It's obviously not great when you are stuck at 2.2 GHz while the previous-gen CPU with the same capabilities (except the AVX-512) can go up to 3.4 GHz. I bet the 8130U would've been faster even if configured at a 10 W TDP.
...and before jumping on me about that "stuck at 2.2GHz", let me report this: under certain loads the locked frequency is slower than the unlocked one. What does this mean? It most probably means that the unlocked frequency makes the CPU run hot, throttle, and then try to balance between temperature and power consumption.
And a subnote on this: I think Intel should stop pushing the AVX instructions. It doesn't work as intended, and it's not needed in most cases, especially when you have to design 256-bit buses for 512-bit data transfers on a low-power CPU. It also takes a lot of space on the die, it taxes the cache buses, and it's useless when you disable your iGPU (which is a good SIMD machine, but not hUMA) and have a dGPU up all the time just rendering your desktop. They should try focusing on HSA/hUMA on their CPUs+iGPUs instead of integrating wide SIMD instructions inside their cores.
Nope. Even with the simplest data set, where AVX-512 can do twice the work of AVX2 per cycle, the frequency has to drop significantly (~30% on a Xeon Gold 5120, for example), so the upper limit is more like a 40% gain. And that's PURE AVX-512 code, which you won't get in real life. Assuming 50% AVX2 and 50% AVX-512 code (a very generous assumption for non-datacentre usage), you'll have maybe a 5% net gain.
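Rough math behind those numbers (a back-of-envelope sketch; the 2x-per-cycle and ~30% downclock figures are the assumptions above, not measurements):

#include <stdio.h>

int main(void) {
    /* Assumed: AVX-512 does 2x the work per cycle, at ~30% lower clock. */
    double clock_penalty = 0.7;
    double pure_avx512 = 2.0 * clock_penalty;   /* 1.4x: the ~40% ceiling */

    /* 50/50 mix of AVX2 and AVX-512 work, in baseline time units. */
    double t_best  = 0.5 / 1.0 + 0.5 / pure_avx512;            /* clock recovers instantly */
    double t_worst = 0.5 / clock_penalty + 0.5 / pure_avx512;  /* downclock sticks */

    printf("pure AVX-512: %.2fx\n", pure_avx512);               /* 1.40x */
    printf("50/50, fast recovery: %.2fx\n", 1.0 / t_best);      /* ~1.17x */
    printf("50/50, sticky downclock: %.2fx\n", 1.0 / t_worst);  /* ~0.93x */
    return 0;
}

A realistic mixed workload lands between those two bounds once license-switching overhead is counted, which is how you end up at single-digit net gains.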
Normally I try to read the whole article (and I *am* looking forward to reading the rest of it) but I already have 2 comments:
1. Maybe this review has been in progress for quite a while, but you can definitely buy the NUC8i3CYSM NUC on Amazon, at least in the US. It is shipped and sold by Amazon, not some random 3rd party, too. It is expensive ($530) and can only be bought with up to 8GB of soldered-down RAM, but you can buy it.
2. While the Wi-Fi card is M.2, Lenovo (like HP and others) usually restricts what Wi-Fi cards can be used with a BIOS/UEFI whitelist. I guess this might not apply to a China-only model, but I wouldn't just assume that the card can be upgraded down the line unless you've already verified this is possible.
I would chalk up the system responsiveness to the GPU and the low screen resolution. When moving from a Dell XPS 15 9560 laptop with a 1080p screen to an otherwise identical 4K model, I noticed a severe loss of performance in the Windows UI. The reality is that the Intel iGPUs in even Kaby Lake processors are simply not enough to provide a smooth experience on high-res laptops. The 1080p experience was really smooth, however. You can also force certain apps to use the dedicated Nvidia graphics, or simply choose to run at a non-native 1080p, and it speeds up the UI drastically.
The first dual-core laptop came out in 2005 with the AMD Athlon 64 X2 4800, so it's just weird to me that 14 years later dual cores are still something being made, especially on such a dense process.
I think I had one of those in a Sharp laptop. It had horrible VIA S3 graphics, but a beautiful, bright display. It was my last 4:3 laptop, an end of an era for me.
The majority of laptops are still dual-core. I have to check our laptop orders when we place them to make sure my boss and our vendor aren't screwing up ordering them.
Bored with laptops; I want a large foldable phone with a projected keyboard so I can forget about these bulky, heavy things. OK, fair enough, glasses are way better, but those will take a while longer.
@Ian: Thanks for the deep dive, and giving the references for background! One comment, three questions (they're related): In addition to being very (overly) ambitious with the 10 nm process, I was particularly struck by the "fused-off integrated graphics" and how Intel's current 10 nm process apparently just won't play nice with the demands in a GPU setting. Question: Any information or rumors on whether that contributed to AMD going the chiplet route for Ryzen going forward? In addition to improving yields, that also allows for heterogeneous manufacturing nodes on the same final chip, so that can get around that problem. Finally, any signs that Intel may go down that road in its upcoming mainstream chips? Any updates on what node they will make their much-announced dGPUs on? Probably won't be this or a related 10 nm process.
Lastly, and maybe you and Andrei can weigh in on this: TSMC's (different) 7 nm process seems to work okay for the (smaller, different) "iGPUs" in Apple's A12/A12X, Huawei's newest Kirin and the new Snapdragon. Any insight/speculation on which steps of Intel's 10 nm process cause the apparent incompatibility with GPU usage scenarios?
That's why I asked about the apparent incompatibility of GPU-type dies with Intel's 10 nm process. Isn't it curious that this seems to be the Achilles' heel of Intel's process? I wonder if their future chips with "iGPU" will use a chiplet-type approach, with the CPU parts on 10 nm and the GPU on 14 nm++++, or however many + generations it'd be on by then. The other big question is what process their upcoming high-end dGPU will be on. Unless Intel lets TSMC make that for them, too.
"Fast forward several months later, to May 2018, and we still had not heard anything from Intel."
Anton covered their statement in April, where they indicated they weren't shipping volume 10nm until sometime in 2019, and that they would instead release another 14nm product, Whiskey Lake, in the interim. https://www.anandtech.com/show/12693/intel-delays-...
>AMD XXXXX (XM/XT, XXW)
Thanks, Ian, for reminding us in every article that we are reading a Purch Media product, or a clueless editor. Don't forget, the 386 was a 0-core CPU. No, it doesn't bother me as a reader; it bothers me as an engineer who designs and studies digital circuits. But hey, you can't have it all; it's hard to find someone who is capable of running Windows executables AND knows his way around computer architecture.
i interpreted it as, ... "I disagree with the distinction between 'modules' and 'cores' that is made when some journalistic endeavours mention AMD's 'Construction' architecture microprocessors. I find the drawing of a line based on FPU counts inaccurate- disingenuous even- given that historic microprocessors such as the renowned Intel 80386 did not feature an on-chip FPU at all, an omission that would under the definitions used by this journalist in this article cause the '386 to be described as having 'zero cores'. The philosophical exercise suggested by such a definition is, based upon my extensive experience in the industry of digital circuit design, repugnant to my sensibilities and in my opinion calls into question the journalistic integrity of this very publication!" ... or something like that (automatically translated from Internet Hooligan to American English, tap here to rate translation)
Interesting. So basically no real possibility for desktop improvement until 2020 at least. They really are giving AMD a huge window to take the performance crown. Zen 2 is due to ship this year, right?
And don't forget: there are many dual/quad-core Intel PCs (let's say from the Q6600 and Sandy Bridge era up to the 7700K) that are finally going to be upgraded with the new Ryzen launch, and those PCs won't be upgraded again for another 3+ years.
The lower end of that range has been upgrading for years. The upper end has no real reason to upgrade unless they're doing something other than gaming, since current games don't benefit from the higher core counts much.
I'm in the middle with a 4790K, and still see myself on track for a nominal 2022 upgrade; short of games' CPU demands growing significantly or unexpected hardware failures, I don't see any need to bring it forward. The additional cores will be nice for future-proofing, but what I'm mostly looking forward to is all the stuff outside the CPU.
My notional want list is 10Gb Ethernet, PCIe 4 (5?) to the GPU and SSD, a 50/50 USB 3.x Type-A/Type-C mix, and DDR5. The first of these is starting to show up on halo-priced mobos.
PCIe 4 is rumored to be launching this year on AMD, although from the leaks so far it's not clear if it'll only reach the first x16 slot for the GPU or be more widely available (maximum trace lengths are short enough that anything other than an M.2 slot close to the CPU will probably need signal boosters, increasing costs).
Dual USB-C is starting to show up on a few boards, but widespread availability is likely to be blocked until the hardware to handle flipping the connector moves from a separate chip into the chipset itself.
DDR5 is supposed to start shipping in very limited quantities this year, but will be another year or two before reaching consumer devices.
My guess is late 2020/early 2021 before all the hardware I want is finally available, which fits well with the nominal 8-year lifespan I'm targeting for my system's core components.
What is the point of DDR5? It's going to be beyond overpriced at launch for negligible performance gain. As for USB-C, you can find cases with front connectors.
I don't think TSMC would give anybody except their customer (AMD) an expected shipping date. Also, while we don't know how the new AMD processors will perform, we already know that Intel's 10 nm tech was both late and hasn't performed so well. BTW, I am currently running all PCs around me on Intel chips, so no fanboy here. This disappointing 10 nm fiasco is bad for all of us, as we need Intel to egg on AMD and vice versa. If one of them drops behind, the other one gets lazy.
I don't recall AMD ever being in that position before. Even with the Athlon they were outmanned in all areas except performance. Unfair business practices by Intel and an inability to keep up with demand on the manufacturing side took away any lead AMD had at the time. On top of that, they were never competing price-wise; AMD chips were sold for a fair amount less. I only recall one CPU being priced similarly to Intel's top dog, and it was dropped to 30% less a few months later.
"CLWB attempts to minimize the compulsory cache miss if the same data is accessed temporally after the line is flushed if the same data is accessed temporally after the line is flushed. "
Just two things.
1. The article should have been split into at least two parts, separated by 3 to 7 days: the first part being Intel's 10nm, the second part being Cannon Lake and how it performs.
2. Basically, Cannon Lake sucks. Let's hope Ice Lake will not disappoint.
This incarnation of 10nm is only ever going to be seen in this particular chip, so it's really quite closely related. The production-grade 10nm we're getting at the end of 2019 is already going to be one step up from that.
Wow, this is why I visit Anandtech; the deep dives are truly deep dives, unlike how some other "tech blog" sites pawn off articles as "deep dives" when all they do is regurgitate information off of official technical slides. Kudos Ian!
"The CPU area is instead attached at three points, and there is an additional pad to stop the chassis from rubbing against the heatpipe. This means that the chip height combined with the PCB and the heatsink is enough to start to worry how the chassis brushes up against the internals? Interesting."
This isn't an uncommon practice. Laptop bottom panels can flex so the placement of pads is quite typical. Even my old Core2 Dell Latitude e6400 has pads on the heat pipe.
This whole situation raises the question: what could Intel have gotten out of 65nm, 32nm, 22nm, etc., had they run each of them for five generations?
I wonder if they'll do similarly on the 10nm process, punt the first time or two then knock it out of the park. Skylake was a beautiful success. Maybe Sunny Cove will be the same for 10nm.
The point is that Intel now needs better uarch designers a lot more than process designers. Yes, 10nm improvements are hard work and an interesting read... but users ultimately only care about end performance and perf/$, not die sizes, transistors/mm² or manufacturing margins. If Zen 2 blows the doors off CFL, would anybody even care about Intel's process advantage? Hell no.
Doubt this is even an "if" at this point. Curious to see if *Cove cores can keep Zen 4 and later from running away too much. Only time will tell, but Intel bringing in guys like Keller can't possibly be a bad thing. And in spite of their disastrous former attempts at building a dGPU, I fully expect Intel to make it happen this go around.
The problem is, do you believe 7nm would be any different? Unless they implement EUV directly, I don't see it. Intel will be forced, like AMD, to go fabless, because their node will not be better than the competition's. On top of that, it will most likely be behind schedule too.
Great job again, though this Intel junk doesn't really warrant it. Looks like they're paying Lenovo just to use Cannon Lake, usable chips that came from tuning manufacturing. The performance is where I expected it to be. I still stand by my theory that Intel is reluctant to spend, leaving their engineers stressing over whether they can produce 10nm products without new equipment. Anyway, it's a dead horse. AMD will be all the rage in 2019.
"Intel is reluctant to spend" To the contrary: throwing money at the problem is exactly what they're doing. Have you tracked their CAPEX these past few years? "AMD will be all the rage for 2019." I think that's basically a given.
The reports were pretty vague, and I don't remember them spending substantial money except the news that they're spending for more capacity on 14nm. AMD was pretty lukewarm for me last year. I'm certain that this year will be a lot stronger for AMD, until Intel and Nvidia start taking their customers more seriously.
Even for a company Intel's size, spending north of $12B a year isn't penny-pinching. I know their revenue and margins are massive, but their failings haven't been a lack of spending since SB. They've been progressively spending more than ever.
I looked into the transistor density of different nodes, and particularly the claim that Intel 10nm will feature "100 million transistors per square millimeter." Intel seems to historically lag in transistor density: their 22nm comes out at ~8 million per mm², while the competing 28nm from GlobalFoundries is ~13 and TSMC's is ~12. Moving on to 14nm, all foundries double their transistor density: Intel goes to 15M/mm², GF to 24 (on a node bought from Samsung), and TSMC's 16nm also to 24M/mm². TSMC's 7nm node has a density of ~40M/mm². Now, Intel has made two statements (both found on the first page of the article): 1. 100 million transistors per mm², or a 5.7x improvement. 2. A 2.7x improvement in density over 14nm, which gives 55M/mm². 55M/mm² would be consistent with Intel's claim of beating TSMC's 7nm. Next, I'm assuming my calculations about Intel's transistor density are wrong and that both of Intel's claims are true. In that case Intel's current 14nm would be 27M/mm². Now of course we can't assume my calculations about GF and TSMC are correct either, so we are left without any conclusion.
I jumped the gun too early and didn't proceed to page two, which explains a lot of the same things I tried to explain, but uses actual node data and not chip sizes.
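For what it's worth, Intel's two headline claims can be checked against each other using Intel's own published peak figures (100.8 and 37.5 MTr/mm² are Intel's stated numbers, not values derived from shipping chips):

#include <stdio.h>

int main(void) {
    /* Intel's published peak library densities, in MTr/mm^2. */
    double intel_10nm = 100.8;
    double intel_14nm = 37.5;
    printf("%.2fx\n", intel_10nm / intel_14nm);   /* ~2.69: the "2.7x" claim */
    return 0;
}

Whole-die densities (total transistors divided by die area, as estimated above from shipping chips) come out far lower because real dies are dominated by SRAM, I/O and unfilled area, so the two kinds of numbers can't be compared directly.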
Yep, they're not the only ones optimizing libraries. They're trying to muddle transistor density with design compilation. While this is fair, it's not taking into account that others are working both halves of the problem as well. Clearly meant to be misleading.
How is the fan noise on the PN60? Mine makes a pretty loud whine all the time, and temperatures regularly cross 80°C under full load... My 4010U Brix PC is whisper-quiet by comparison.
As someone said earlier in this thread, I think we miss opportunities when moving to a new process every two years. The mishap that Intel had just showed us how much better a process can become if you give your engineers time. 14nm started late, with some low-clocked parts; we had some Broadwell chips that ran at a 3.3 GHz base. Then Skylake came, and the 6700K brought 4 GHz at quite high power. Then the 7700K came, and another tweak to the process improved clocks, so we got a 4.5 GHz boost. After that, things moved up in core counts (which should've happened a long time ago, but without competition...), and we got the 8700K and now the 9900K with turbo up to 5 GHz. Until now, only 32nm with Sandy Bridge came close to the 5 GHz mark. Now, with a lot of time to tweak, they have become so confident in the 14nm process that they released a 5 GHz stock CPU. The financials tell the true story. Even if we cry about 10nm, the truth is that things can move forward without a new process. It is actually cheaper to prolong the life of a certain process, and if they can add enough improvements from generation to generation, they can afford to launch a new process only once every 4-5 years.
Indeed, we probably have to get used to a lot of +++ processes. During the architecture day, the new Intel people (old AMD people) mentioned they are decoupling the architecture from the process. That means they can make progress on IPC as well, rather than just pushing clocks on the same core over and over...
Unfortunately, the SB derivatives seem to need a significant overhaul; "tocks" of late haven't exactly brought meaningful IPC gains. Hopefully the deeper and wider *Cove designs are a step in the right direction. I just don't like that Intel seems to be taking an approach not dissimilar to the Pentium 4 the last time AMD reared its head. Only this time, a major departure in microarchitecture and a steady process advantage aren't waiting in the wings. Even with the *Coves, I think AMD may be able to build enough steam to solidly overtake them. There's no reason Zen 4 and onward couldn't go deeper and wider too, especially looking at power consumption on the front and back ends of the Zen core versus the uncore mesh. I think Zen derivatives will try going wider first; it actually might make the high core-count parts significantly more power efficient. It could also scale better than post-SB designs did, if Agner Fog's analysis is anything to go by. Multiple CPU die masks and uncore topologies incoming? Wouldn't surprise me.
Well, yeah, they can be improved upon over time, but that doesn't cut production costs like a process shrink does. Improving the process can increase yields and increase performance, but only by a limited percentage. A process shrink increases the number of chips from a wafer by a much higher amount, even if there are more defects.
Well, that was the way it worked up until the 14nm process.
With 10nm at Intel, they had far too many defects, and the process failed to give the returns they wanted for quite a while. That had as much to do with the quality of the wafers before production as it did the production process itself. They had to push the wafer producers to higher levels of purity in order to fix that. I'm fairly sure TSMC would have had the same issues with their 7nm, but Intel had already pushed the wafer production to higher levels of purity because of their problems, so TSMC was able to take a couple extra steps ahead because of that.
These days, we're going to see each step smaller take longer and longer to get right, because of these same hurdles. As things get smaller, impurities will have a higher and higher impact on production. We may not get as far as some are hoping, simply because we can't get silicon as pure as necessary.
"Another takeaway is that after not saying much about 10nm for a while, Intel was opening up. However, the company very quickly became quiet again."
The history page is great. But I have to wonder if the ultimate conclusion is that the best thing, for both Intel and the world, is for them to STICK to the STFU strategy, and for journalists to stick to enforcing it.
One thing that's incredibly clear from all this is that Intel are utterly lousy at forecasting the future. Maybe it's deliberate lies, maybe it's just extreme optimism, maybe it's some sort of institutional pathology that prevents bad news flowing upward?
Regardless, an Intel prediction for beyond maybe two years out seems to be utterly worthless. Which raises the question: why bother asking for them, and why bother printing them? Look at that collection of technologies from the 2010 slide that were supposed to be delivered over the next nine years. We got computational lithography, and that's about it. Certainly no III-V or germanium or nanowires. Interconnects (Foveros and EMIB?), well, yeah, in about as real a form as 10nm. 3D refers to what? Die stacking, or 3D structures? Either way, nothing beyond the already extant FinFETs. Dense memory? Well, yeah, there's Optane, but that's not what they had in mind at the time, and Optane DIMMs are still crazy specialized. Optical interconnect? Occasional mutterings about on-die photonics, but nothing serious yet.
Now on the one hand you could say that prediction is hard. How much better would IBM, or TSMC, or Samsung, have done? On the other hand (and this is the point) those companies DON'T DO THIS! They don't make fools of themselves by engaging in wild claims about what they will be delivering in five years. Even when they do discuss the future, it's in careful measured tones, not this sort of "ha ha, we have <crazy tech> already working and all our idiot competitors are four years behind" asinine behavior.
I suspect we'd all be better off if every tech outlet made a commitment not to publish or discuss any Intel claims regarding more than two years from now. If you're not willing to do that, you might as well just call yourself "Home of Free Intel Advertising". Because it's clear that's ALL these claims are. They are not useful indications of the future. They're merely mini Intel ads intended to make the competition look bad, with ZERO grounding in reality beyond that goal.
While you're correct that the media is ignorantly doing just that for the most part, at least this article provides context on what Intel is trying to do in obfuscating the numbers, versus TSMC and Samsung, who haven't stumbled the same way. Some of the Foveros "magic" is certainly not being knocked down enough when people don't understand what it's intended to do. 2.5D, 3D, MCMs, and TSVs all overlap but cover different issues. I blame the uneducated reader more than anything. Good material is out there, and critical analysis between the lines is under-present. "Silicon photonics" was a big catchphrase in calls a few years ago, but it's quiet now. Hype, engineering, and execution are all muddied by PR crap. Ian is, however, due credit for at least showing meaningful numbers. It's more in the reader's hands now. Your last remarks really aren't fair to this article, even if they bear a certain degree of merit in general. Sometimes lies are needed to help others understand the truth, though...
I believe the point of this Cannon Lake is to get AVX-512 out to developers. What would be interesting, if possible, is for Intel to release Covey Lake on both 14nm and the new 10nm. I would expect Covey Lake to bring a significant speed increase compared to current 14nm chips even on 14nm, and the 10nm version to be an increase as well; combining Covey Lake and the new 10nm+ should be quite amazing.
One test I would like to see is a benchmark that runs in both AVX2 and AVX-512, to see the difference. There must be a reason why Intel is making the change.
Cheap Cannon Lake is not designed to get AVX-512 into dev hands. That's the dumbest thing ever. And "Covey Lake"? Please read the article before commenting. There are a few good blog posts and whitepapers out there analyzing and detailing SIMD across the AVX varieties. For most things, AVX-512 isn't as big a deal as earlier SIMD extensions were. It has some specialized uses as it is novel, but vectorizing code and optimizing compilers to maturity is slow and difficult. There are fewer quality code slingers and devs out there than you would expect. Comp sci has become littered with an unfortunate abundance of cheap, low-quality talent.
Yes, it's going to take a while before people use AVX-512, but just think about it: twice the bits. I was the same way about 64-bit in the early 64-bit days, thinking primarily that it would make programs larger and wasn't necessary. As a developer for three decades, one thing I have seen 64-bit do is make developers lazy: more memory means less to worry about in algorithms when going to large arrays.
As for Sunny Cove, it's logical that more units in the chip are going to make a difference. Of course, Cannon Lake does not have Sunny Cove, so it does not count. The big difference will be seen when Covey Lake CPUs come out and we see how they compare with Cannon Lake, and even Kaby Lake and the associated competitors' chips.
One thing on Covey Lake and the upcoming 7nm from Intel: there is no doubt that the designers made a mistake with Cannon Lake's 10nm. Intel realizes that and has created new fabs and also a new design architecture. There was no real reason for Intel to release Cannon Lake, but it's good to see that the next generation is more than just a node change; it includes the Covey Lake architecture change.
I am more curious about the manufacturing node. Zen (14/12nm from GF) has 12 metal layers. Cannon Lake has 13 metal layers, with 3 quad-patterned and 2 dual-patterned. How would these impact the yield and manufacturing time of production? I think the 3 quad-patterned layers will hurt Intel in the long run.
More short-run I would say actually. EUV is coming to simplify and homogenize matters. This is a patch job. Unfortunately, PL analysis and comparison is not an apples-to-apples issue as there are so many facets to implementation in various design stages. A broader perspective that encompasses the overall aspects and characteristics is more relevant IMHO. It's like comparing a high-pressure FI SOHC motor with a totally unrelated low-pressure FI electrically-spooling DOHC motor of similar displacement. While arguing minutiae about design choices is interesting to satisfy academic curiosity, it's ultimately the reliability, power-curve and efficiency that people care about. Processors are much the same. As a side note, I think it's the attention to all these facets and stages that has given Jim Keller such consistent success. Intel's shaping up for a promising long-term. The only question there is where RISC designs and AMD will be when the time comes. HSA is coming, but it will be difficult due to the inherent programming challenges. Am curious to see where things are in ten or fifteen years.
Good point and question! With the GPU functions apparently simply not compatible with Intel's 10 nm process, does anyone here know of any GPUs out there that use quad patterning at all?
@Ian or @Andrei Is dealII missing from the spec2006fp results table for some reason? Is this just a typo/oversight, or is there some reason it's being omitted?
Great write-up, but isn't this backwards on the third page? "a 2-input NAND logic cell is much smaller than a complex scan flip-flop logic cell" versus "90.78 MTr/mm^2 for NAND2 gates and 115.74 MTr/mm^2 for Scan Flip Flops". The NAND cell is smaller than the flip-flop cell, yet there are more flip-flop transistors than NAND transistors in a square millimeter? Or am I missing something?
A 2-input CMOS NAND cell consists of just 4 transistors, while a scan flip-flop cell consists of a couple dozen or more, depending on the design. The metric counts transistors per mm², not cells per mm²: the flip-flop's fixed cell overheads (power rails, cell boundary, contacts) are amortized over many more transistors, so its transistor density comes out higher even though each individual cell is much larger.
The NAND cell is smaller because it consists of fewer transistors.
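For reference, the headline 100.8 MTr/mm² figure for Intel's 10nm is the published weighted metric over exactly those two cell types:

#include <stdio.h>

int main(void) {
    /* Intel's weighted density metric (Bohr, 2017):
       60% weight on 2-input NAND cells, 40% on scan flip-flops. */
    double nand2 = 90.78;     /* MTr/mm^2 */
    double sff   = 115.74;    /* MTr/mm^2 */
    printf("%.2f MTr/mm^2\n", 0.6 * nand2 + 0.4 * sff);   /* ~100.8 */
    return 0;
}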
It would be great if you guys could get a CNL sample in the hands of Agner Fog. He might be able to answer some of the micro-architecture questions through his tests.
Awesome review, great in depth content and well explained. Considering the amount of work this entailed, it's clear why these reviews don't happen every day. Thanks
I'll just add... many folks are saying AMD should kick arse. They should, but Intel has been in this situation before: they messed up the 90nm process (probably not so badly that the chips were unusable), and it opened the door to AMD and its Athlon 64. What did AMD do? Messed it up in turn with slow development and poor design choices. Hopefully they'll capitalize this time so that we get an actual duopoly, rather than the monopoly on performance we've had since Intel's 65nm chips.
"Relationship with Intel: Chiang told us that, given Intel's strong support during the shortage, it would be awkward to tell Intel if he chose to come out with an AMD-powered product. "It's very hard for us to tell them 'hey, we don't want to use 100 percent Intel,' because they give us very good support," he said. He did not, however, make any claims that Intel had pressured him or the company."
Yeah right, Intel is winning because they have better tech... /sarcasm
Great article! The title is a bit misleading, given that it is much more than just a review. I found the historical perspective on Intel's processes most interesting. Other reporting often just repeats whatever comes out of the PR department of some company and leaves the readers to compare for themselves with other reports; better reporting highlights some of the contradictions; but rarely do we see such a comprehensive overview.
The 8121U would be interesting to me to allow playing with AVX512, but the NUC is too expensive for me for that purpose, and I can wait until AMD or Intel provide it in a package with better value for money.
@Ian Cutress: Great article; it's going to become an all-time classic. And kudos for mentioning SemiAccurate and Charlie for his work and inside information (and guts).
But really, how many days, weeks or even months did it take to finish it?
@Ian, This was a valuable article and it is clipped to Evernote. Thanks!
Without becoming Seeking Alpha, you could add another dimension or two to the history and future of 10nm: cost per transistor and amortizing R&D costs. At Intel's November 2013 investor meeting, William Holt strongly argued that Intel would deliver the lowest cost per transistor (slide 13). Then-CFO Stacey Smith and other execs also touted this line for many quarters. But as your article points out, poor yields and added processing steps make 10nm a more expensive product than the 14nm++ we see today. How will that get sold and can Intel improve the margins over the life of 10nm?
Then there's amortizing the R&D costs. Intel has two independent design teams, in Oregon and Israel. In the good old tick-tock days, each team used to own a two-year process node and a new microarchitecture. The cost of two teams over five-plus years without mainstream 10nm products yet is huge; likely hundreds of millions of dollars. My understanding is that Intel, under general accounting rules, has to write off the R&D expense over the useful life of the 10nm node, basically on a per-chip basis. Did Intel start amortizing 10nm R&D with the "revenue" from Cannon Lake starting in 2017, or has all of the accrued R&D yet to hit the income statement? Wish I knew.
Anyway, it sure looks to me like we'll be looking back at 10nm in the mid-2020s as a ten-year lifecycle. A big comedown from a two-year tick-tock cycle.
BigMamaInHouse - Friday, January 25, 2019 - link
Thank you for your Great reviews.Look like we should not ecpect much from those new 10nm CPU's for cunsumers for new future, maybe in Q1 2020 with 10++ gen.
2019 going to be on AMD's Favor!.
jaju123 - Friday, January 25, 2019 - link
12 or 16 core Ryzen with a 13% IPC increase, at equivalent power to the i9-9900k is not going to go well for Intel. Seems like they'll be able to compete with the AMD processors of 2019 around late 2020 at the earliest.ZolaIII - Friday, January 25, 2019 - link
Take a look at the Spec 2006 benchmark and make the comparation to A76 (Snapdragon 855) it beats this Intel SKU (@2.2 GHz) In most cases with only half the power used. When SVE NEON SIMD lies in CISC is doomed.Gondalf - Friday, January 25, 2019 - link
Unfortunately we don't know how perform AMD new cpus, only cherry picked results nothing more.Even less we know about power consumption. Are we certain AMD 7nm cores will are winner over 12nm ones?? AMD is unhappy about clock speed for example, so the IPC advantage will be likely vanished.
IMO AMD is painting a too bright future to be trusted. TSMC process is not perfect at all, instead of Nvidia should be on it right now.
levizx - Saturday, January 26, 2019 - link
Rubbish written in garbled words.KOneJ - Sunday, January 27, 2019 - link
What exactly are you trying to babble about here?Valantar - Sunday, January 27, 2019 - link
Lying about future products is grounds for lawsuits from shareholders (and possible criminal charges many places), so that's quite unlikely. We do have one indication of power draw from Zen2, from the live Cinebench demo where an 8-core Zen2 chip matched the 9900K's score at ~50W lower power. Of course we don't know how clocks will scale, nor the clock speed that test was run at, and it's relatively well established that Cinebench is a workload where AMD does well. Still, TSMC 7nm is proven good at this point, with several shipping large-scale SKUs on it (Apple A12, A12X, among others). Even if these are all mobile low-power chips, they're very high performance _and_ low power, which ought to fit Zen2 well. Also, the Cinebench score matching the 9900K means that either IPC has improved massively, SMT scaling on Zen2 is ~100%, or clocks are quite high. Likely it's a mix of all three, but they wouldn't reach that score without pretty decent clocks.Samus - Thursday, January 31, 2019 - link
Ignoring any Zen IPC improvement whatsoever, process improvements alone this year would make them competitive with Intel going forward. All they need to do is ramp up the clock frequency a bit without a TDP penalty and they have an automatic win...Makste - Saturday, March 6, 2021 - link
Lol😆Vegajf - Friday, January 25, 2019 - link
Icelake desktop will be out 3q 2020 from what I hear. We will have another 14nm refresh before then though.danwat1234 - Friday, January 25, 2019 - link
Intel ice Lake for performance laptops should be out by 2019 christmas. Then we will see if there are any IPC improvements in this new architecture. Probably not much...BigMamaInHouse - Saturday, January 26, 2019 - link
I think that Intel need 10nm for Data-centers for higher core count and profit, and their production focus will be on this area and not consumer desktop PC's.I don't see 9700K/9900K 10nm competitor until 2020.
Santoval - Monday, January 28, 2019 - link
Sunny Cove and Willow Cove are intermediate designs until the release of Ocean Cove, the "brand new" CPU architecture Jim Keller was hired to lead the design of. Since Ocean Cove has not yet appeared in Intel's schedule it either means that it will not be ready before at least 2022 or Intel is just being secretive.Or it might just be Golden Cove. Since Golden Cove will apparently be Intel's next new design, if the it is not actually Ocean Cove, then Ocean Cove will not be released until 2023 at the earliest (at 7nm). That's because Intel has never released two new designs one after the other without an optimization in-between. It's also possible that Intel will just "pull a Skylake" and rather than use a new design for Golden Cove they will just.. re-optimize it. In that case Ocean Cove should be released in 2022, right after Golden Cove.
Trevor08 - Friday, February 1, 2019 - link
For intel's sake (and ours), I hope they're working furiously on quantum CPU's.peevee - Monday, February 4, 2019 - link
So far, quantum is looking like a dead end. Maybe for specialized coprocessors in cryo environments in 10 years, but not for general-purpose computing AT ALL.There are much better, actually realistic directions for general-purpose computing on non-Von Neumann architectures, and that is where the future lies now that Moore's law is firmly dead and buried.
HStewart - Saturday, January 26, 2019 - link
There is not release information about desktops on Ice Lake. But I would not doubt that Ice Lake on desktop at that time. It going to be fun to compare new laptops and even desktops at that time.But keep in minor to Intel desktop market is a minor market and once performance is up, I would not doubt we will not see any difference in desktop vs mobile chips
Santoval - Monday, January 28, 2019 - link
We don't know how well Ice Lake / Sunny Cove will perform, but no matter how good it performs AMD will still have a market lead of 6 to 7 months (assuming a release of Zen 2 based Ryzen CPUs in May or June and an Intel HVM release of Sunny Cove in December).This assumes that Intel does not screw up again and moves back the launch of Sunny Cove into 2H 2020, which would be frankly catastrophic, at least for their client wing. Their 14nm process has been milked dry, they can no longer extract any more performance from it.
James5mith - Friday, January 25, 2019 - link
"This is an M.2 module, which means it could be upgraded at a later date fairly easily."No, you can't. Lenovo only lets wifi/bluetooth cards with their custom firmware in their systems. If you boot the system with a standard (say Intel) wifi card, it refuses to boot.
That's the reason I stopped buying lenovo laptops despite liking their build and design.
jeremyshaw - Friday, January 25, 2019 - link
They've stopped doing that since about ~2 years ago.levizx - Saturday, January 26, 2019 - link
Welcome to 2015.Gondalf - Friday, January 25, 2019 - link
For now they have nothing out in cpu departement, so i don't see any AMD bright year in front of us.I remember you we are already in 2019.
vegajf51 - Friday, January 25, 2019 - link
Icelake Desktop 3q 2020, intel will have another 14nm refresh before then.HStewart - Saturday, January 26, 2019 - link
Intel is expected to release 10nm+ with Covey Lake by Christmas seasons. This canon lake chip is just a test chip.pugster - Friday, January 25, 2019 - link
Thanks for the review. While the performance is not great, what about the power consumption compared with the 8130U?Yorgos - Friday, January 25, 2019 - link
it's not great obviously when you are stuck at 2.2GHz, while the prev gen cpu with the same capabilities(except the avx) can go up to 3.4GHz.I bet the 8130 would've been faster even if configured at 10Watt TDP.
Yorgos - Friday, January 25, 2019 - link
...and before jumping on me about that "stuck at 2.2GHz" let me report this:in certain loads the locked freq is slower than the unlocked one.
What does this mean? it most probably means that the unlocked freq makes the cpu run hot, throttle and then try to balance between temperature and consumption.
and a subnote on this. I think Intel should stop pushing the AVX instructions. It doesn't work as intended, it's not needed in most cases, especially when you have to design 256bit buses for 512bit data transfer on a low power cpu. Also it takes a lot of space on the die, it taxes the cache buses and it's useless when you disable your igpu(which is a good SIMD machine but not hUMA) and you have a dGPU up all the time just rendering your desktop.
They should try focusing on HSA/hUMA on their cpus+igpus instead of integrating wide SIMD instructions inside their cores.
0ldman79 - Saturday, January 26, 2019 - link
Thing is when AVX2 and AVX512 are used the performance increase can be rather massive.PCSX2, PS2 emulator, runs identically between my 3.9GHz Ivy Bridge Xeon (AVX) and my 2.8GHz i5 Skylake mobile (AVX2).
AVX2 makes several games playable. You can choose your plugin and the AVX plugin cannot play Gran Turismo 4 @ 2.8GHz, the AVX2 plugin can.
You may not find it useful, others do.
HStewart - Saturday, January 26, 2019 - link
It would be interesting to see the emulator re-factor to work with AVX 512 - it would like be twice the speed of AVX 2levizx - Sunday, January 27, 2019 - link
Nope, even with the simplest data set where AVX512 can perform twice the speed of AVX2 per cycle, the frequency has to drop significantly (~30% on Xeon Gold 5120 for example), so the upper limit is more like 40% gain. And that's PURE AVX512 code, you won't get that in real life. Assuming 50% AVX2 and 50% AVX512 code - that's a very generous assumption for non-datacentre usage, you'll have a 5% net gain.levizx - Sunday, January 27, 2019 - link
5%~20% net gain, depending on how the scaling works.MrCommunistGen - Friday, January 25, 2019 - link
Normally I try to read the whole article (and I *am* looking forward to reading the rest of it) but I already have 2 comments:1. Maybe this review has been in progress for quite a while, but you can definitely buy the NUC8i3CYSM NUC on Amazon, at least in the US. It is shipped and sold by Amazon not some random 3rd party too. It is expensive ($530), and can only be bought with up to 8GB of soldered down RAM, but you can buy it.
2. While the Wi-Fi card is M.2, Lenovo (like HP and others) usually restricts what Wi-Fi cards can be used with a BIOS/UEFI whitelist. I guess this might not apply to a China-only model, but I wouldn't just assume that the card can be upgraded down the line unless you've already verified this is possible.
jaju123 - Friday, January 25, 2019 - link
I would chalk up the system resonsiveness to the GPU and the low screen res. When moving from a Dell XPS 15 9560 laptop with 1080p screen resolution to an otherwise identical 4K model, I noticed a severe loss of performance in the windows UI. The reality is that Intel iGPUs in even kaby lake processors are simply not enough to provide a smooth experience on high res laptops. The 1080p experience was really smooth, however.You can also force certain apps to the use the dedicated nvidia graphics, or simply choose to run at a non-native 1080p and it speeds up the UI drastically.
hansmuff - Friday, January 25, 2019 - link
Wow, this is an excellent article. Packed with knowledge and facts, well written; a real gem. Thank you!FreckledTrout - Friday, January 25, 2019 - link
Its weird to see a dual core even in a laptop on the new 10nm process. I would have expected dual cores to disappear with Intel's 10nm or AMD's 7nm.FreckledTrout - Friday, January 25, 2019 - link
The first dual core laptop came out in 2015 with the AMD Athlon 64 X2 4800 so it's just weird to me 14 years later it's still something being made especially with such a dense process.FreckledTrout - Friday, January 25, 2019 - link
Damn no edit.... in 2005 I meant.jeremyshaw - Friday, January 25, 2019 - link
I think I had one of those in a Sharp laptop. It had horrible VIA S3 graphics, but a beautiful, bright display. It was my last 4:3 laptop, an end of an era for me.Icehawk - Saturday, January 26, 2019 - link
Majority of laptops are still DC, I have to check our laptop orders when we place them to make sure my boss and our vendor aren’t screwing up ordering them.ianmills - Friday, January 25, 2019 - link
Intel probably thought the same as you! Remember the reason this was released was so that Intel could tell its investors it was shipping 10nm partsdanwat1234 - Friday, January 25, 2019 - link
Agreed.jjj - Friday, January 25, 2019 - link
Bored with laptops, want a large foldable phone with a projected keyboard so i can forget about these bulky heavy things. Ok, fair enough, glasses are way better but those will take a while longer.eastcoast_pete - Friday, January 25, 2019 - link
@Ian: Thanks for the deep dive, and giving the references for background! One comment, three questions (they're related): In addition to being very (overly) ambitious with the 10 nm process, I was particularly struck by the "fused-off integrated graphics" and how Intel's current 10 nm process apparently just won't play nice with the demands in a GPU setting. Question: Any information or rumors on whether that contributed to AMD going the chiplet route for Ryzen going forward? In addition to improving yields, that also allows for heterogeneous manufacturing nodes on the same final chip, so that can get around that problem. Finally, any signs that Intel may go down that road in its upcoming mainstream chips? Any updates on what node they will make their much-announced dGPUs on? Probably won't be this or a related 10 nm process.Lastly, and maybe you and Andrei can weigh in on that: TSMC's (different) 7 nm process seems to work okay for the (smaller) different "iGPUs" in Apple's 12/12x, Huawei's newest Kirin and the new Snapdragon. Any insight/speculation which steps of Intel's 10 nm process cause the apparent incompatibility with GPU usage scenarios?
Thanks!
Rudde - Saturday, January 26, 2019 - link
AMD has lauched huge 7nm desktop graphics cards (2 server and Radeon VII). AMD does not seem to have any problems making gpus on TSMC 7nm.eastcoast_pete - Sunday, January 27, 2019 - link
That's why I asked about the apparent incompatibility of GPU-type dies with Intel's 10 nm process. Isn't it curious that this seems to be the Achilles heel of Intel's process? I wonder if their future chips with " iGPU" will use a chiplet-type approach, with the CPU parts in 10 nm, and the GPU in 14 nm++++ or however many + generations it'd be on. The other big question is what process their upcoming high-end dGPU will be in Unless, Intel let's TSMC make that for them, too.velanapontinha - Friday, January 25, 2019 - link
Every time I read Kaby G I'm instantly stormed by a Kenny G theme stuck in my head, and it ruins the rest of my day.Please stop.
skis4hire - Friday, January 25, 2019 - link
"Fast forward several months later, to May 2018, and we still had not heard anything from Intel."Anton covered their statement in April, where they indicated they weren't shipping volume 10nm until sometime in 2019, and that they would instead release another 14nm product, whiskey lake, in the interim.
https://www.anandtech.com/show/12693/intel-delays-...
Yorgos - Friday, January 25, 2019 - link
>AMD XXXXX (XM/XT, XXW)Thanks Ian for reminding us is every article, that we are reading a Purch media product, or a clueless editor.
Don't forget, 386 was o 0 core CPU.
No, it doesn't bother me as a reader, it bothers me as an engineer who designs and studies digital circuits. But hey you can't have it all, it's hard to find someone who is capable at running windows executables AND know his way in comp. arch..
Ryan Smith - Friday, January 25, 2019 - link
While I'm all for constructive feedback, I have to admit I'm not sure what we're meant to be taking from this.Could you please articulate in more detail what exactly is wrong with the article?
KateH - Saturday, January 26, 2019 - link
i interpreted it as,...
"I disagree with the distinction between 'modules' and 'cores' that is made when some journalistic endevours mention AMD's 'Construction' architecture microprocessors. I find the drawing of a line based on FPU counts inaccurate- disengenous even- given that historic microprocessors such as the renowned Intel 80386 did not feature an on-chip FPU at all, an omission that would under the definitions used by this journalist in this article cause the '386 to be described as having 'zero cores'. The philosophical exercise suggested by such a definition is, based upon my extensive experience in the industry of digital circuit design, repugnant to my sensibilities and in my opinion calls into question the journalistic integrity of this very publication!"
...
or something like that
(automatically translated from Internet Hooligan to American English, tap here to rate translation)
Ryan Smith - Saturday, January 26, 2019 - link
"tap here to rate translation"5/5 stars. Thank you!
KOneJ - Sunday, January 27, 2019 - link
Bingo.Spunjji - Tuesday, January 29, 2019 - link
Truly magnificent.KateH - Saturday, January 26, 2019 - link
but please, if OP is interested in taking a whack at "articulating" i'd love to see what that looks like and how my translation faredMidwayman - Friday, January 25, 2019 - link
Interesing. So Basically no real possibility for desktop improvement until 2020 at least. They really are giving AMD a huge window to take the performance crown. Zen 2 is due to ship this year, right?BigMamaInHouse - Friday, January 25, 2019 - link
And dont forget- there are many Dual/Quad core (lets Say from Q6600 ~SandyBridge to 7700K ) Intel PC's that gonna be upgraded finally with new Ryzen launch and those PC won't we upgraded for another 3+ Years,DanNeely - Sunday, January 27, 2019 - link
The lower end of that range has been upgrading for years. The upper end has no real reason to upgrade unless they're doing something other than gaming, since current games don't benefit from the higher core counts much.I'm in the middle with a 4790K; and still see myself on track for a nominal 2022 upgrade; short of games growing CPU demands significantly or unexpected hardware failures I don't see any need to bring it forward. The additional cores will be nice for future proofing; but what I'm mostly looking forward to is all the stuff outside the CPU.
My notional want list is 10GB ethernet, PCIe4(5?) to the GPU and SSD, 50/50 USB 3.x A/C mix, and DDR5. The first of these is starting to show up on halo priced mobos.
PCIe4 is rumored to be launching this year on AMD, although from the leaks so far it's not clear if it'll only reach the first x16 slot for the GPU or be more widely available (maximum trace lengths are short enough that anything other than M.2 on a not-dimm will probably need signal boosters increasing costs).
Dual USB-C is starting to show up on a few boards; but widerspread availability is likely to be blocked until the hardware to handle flipping the connector moves from a separate chip into the chipset itself.
DDR5 is supposed to start shipping in very limited quantities this year, but will be another year or two before reaching consumer devices.
My guess is late 2020/early 2021 before all the hardware I want is finally available; which fits well with the nominal 8y lifespan I'm targeting for my systems core components.
shadowx360 - Friday, February 1, 2019 - link
What is the point of DDR5? It's going to be beyond overpriced at launch for negligible performance gain. As for USB-C, you can find cases with front connectors.Gondalf - Friday, January 25, 2019 - link
Ask to TSMC, we have not any real date of shipment. Moreover we don't know how the new SKUs will perform.eastcoast_pete - Saturday, January 26, 2019 - link
I don't think TSMC would give anybody except their customer (AMD) an expected shipping date. Also, while we don't know how the new AMD processors will perform, we already know that I Intel's 10 nm tech was both late and hasn't performed so we'll. BTW, I am currently running all PCs around me on Intel chips, so no fanboy here. This disappointing 10 nm fiasco is bad for all of us, as we need Intel to egg on AMD and vice versa. If one of them drops behind, the other one gets lazy.eastcoast_pete - Saturday, January 26, 2019 - link
Damn autocorrect and no edit!just4U - Saturday, January 26, 2019 - link
I don't recall AMD ever being in that position before. Even with the Athlon they were outmanned in all areas except for performance. Unfair business practices by Intel and a inability to keep up with demand on the manufacturing side took away any lead AMD had at the time. On top of that they were never competing price wise. Amd chips were sold for a fair amount less. I only recall one cpu being priced similar to Intel's top dog and it was dropped down to 30% less a few months later.edzieba - Friday, January 25, 2019 - link
"CLWB attempts to minimize the compulsory cache miss if the same data is accessed temporally after the line is flushed if the same data is accessed temporally after the line is flushed. "Probably unintentional, but appropriate!
Spunjji - Tuesday, January 29, 2019 - link
I liked that one, too.
iwod - Friday, January 25, 2019 - link
Just two things:
1. The article should have been split into at least two parts, separated by 3 to 7 days: the first part being Intel's 10nm, the second part being Cannon Lake and how it performs.
2. Basically, Cannon Lake sucks. Let's hope Ice Lake will not disappoint.
nevcairiel - Saturday, January 26, 2019 - link
This incarnation of 10nm is only ever going to be seen in this particular chip, so it's really quite closely related. The production-grade 10nm we're getting at the end of 2019 is already going to be one step up from that.
iwod - Saturday, January 26, 2019 - link
Yes, but the article is way too long for a single read.
saylick - Friday, January 25, 2019 - link
Wow, this is why I visit AnandTech; the deep dives are truly deep dives, unlike how some other "tech blog" sites pawn off articles as "deep dives" when all they do is regurgitate information off of official technical slides. Kudos, Ian!
austinsguitar - Friday, January 25, 2019 - link
I bet AMD has them shaking in their boots...
PeachNCream - Friday, January 25, 2019 - link
"The CPU area is instead attached at three points, and there is an additional pad to stop the chassis from rubbing against the heatpipe. This means that the chip height combined with the PCB and the heatsink is enough to start to worry how the chassis brushes up against the internals? Interesting."This isn't an uncommon practice. Laptop bottom panels can flex so the placement of pads is quite typical. Even my old Core2 Dell Latitude e6400 has pads on the heat pipe.
KOneJ - Sunday, January 27, 2019 - link
Excellent point. I think I remember this on an old Toshiba C55D E-1200 APU and a Dell Latitude D610 based on a Dothan Pentium M.
0ldman79 - Friday, January 25, 2019 - link
This whole situation raises the question: what could Intel have gotten out of 65nm, 32nm, 22nm, etc., had they run each of them for five generations?

I wonder if they'll do similarly on the 10nm process: punt the first time or two, then knock it out of the park. Skylake was a beautiful success. Maybe Sunny Cove will be the same for 10nm.
StrangerGuy - Friday, January 25, 2019 - link
The point is Intel now needs better uarch designers a lot more than process designers. Yes, the 10nm improvements are hard work and an interesting read... but users ultimately only care about end performance and perf/$, not die sizes, transistors/mm², or manufacturing margins. If Zen 2 blows the doors off CFL, would anybody even care about Intel's process advantage? Hell no.
KOneJ - Sunday, January 27, 2019 - link
Doubt this is even an "if" at this point. Curious to see if *Cove cores can keep Zen 4 and later from running away too much. Only time will tell, but Intel bringing in guys like Keller can't possibly be a bad thing. And in spite of their disastrous former attempts at building a dGPU, I fully expect Intel to make it happen this go around.
eva02langley - Sunday, January 27, 2019 - link
The problem is, do you believe 7nm would be any different? Unless they implement EUV directly, I don't see it. Intel will be forced, like AMD, to go fabless because their node will not be better than the competition's. On top of that, it will most likely be behind schedule too.
zodiacfml - Saturday, January 26, 2019 - link
Great job again, though this Intel junk doesn't really warrant it. Looks like Intel is paying Lenovo just to use Cannon Lake, usable chips that came from tuning the manufacturing.

The performance is where I expected it to be.
I still stand by my theory that Intel is reluctant to spend, leaving their engineers stressing over whether they can produce 10nm products without new equipment.
Anyways, it is a dead horse. AMD will be all the rage for 2019.
KOneJ - Sunday, January 27, 2019 - link
"Intel is reluctant to spend"To the contrary: throwing money at the problem is exactly what they're doing. Have you tracked their CAPEX these past few years?
"AMD will be all the rage for 2019."
I think that's basically a given.
zodiacfml - Sunday, January 27, 2019 - link
The reports were pretty vague, and I don't remember them spending substantial money apart from the news that they're spending on more 14nm capacity.

AMD was pretty lukewarm for me last year. I'm certain that this year will be a lot stronger for AMD, until Intel and Nvidia start taking their customers more seriously.
KOneJ - Sunday, January 27, 2019 - link
Even for a company Intel's size, spending north of $12B a year isn't penny-pinching. I know their revenue and margins are massive, but their failings haven't been a lack of spending since SB. They've been progressively spending more than ever.
YoloPascual - Saturday, January 26, 2019 - link
bUt 10nm iNtEL iS bEtTeR tHaN 7nm TSMC riGhT?
KOneJ - Sunday, January 27, 2019 - link
Shouldn't your alias be yOlOpAsCuAl, wannabe troll?
dgingeri - Saturday, January 26, 2019 - link
With Intel recently releasing the "F" SKUs for processors that don't have integrated graphics, I would think this processor would be a Core i3-8121FU.
KOneJ - Sunday, January 27, 2019 - link
ROFL, mate. Though a UF line-up honestly wouldn't surprise me with where MCMs, TSVs, yields, iGPUs, and core counts are seemingly headed.
Piotrek54321 - Saturday, January 26, 2019 - link
I would love an article on how quantum mechanical effects have to be taken into account at such small nodes.
KOneJ - Sunday, January 27, 2019 - link
I would love to see the mathematics of quantum mechanics cleaned up to be more elegant and less Newtonian in nature.
Rudde - Saturday, January 26, 2019 - link
I looked into the transistor density of different nodes, and particularly the claim that Intel 10nm will feature "100 million transistors per square millimeter."
Intel seems to historically lag in transistor density: 22nm has ~8 million per mm², while the competing 28nm from GlobalFoundries has ~13 and TSMC's has ~12.
Moving on to 14nm, all foundries double their transistor density: Intel goes to 15M/mm², GF to 24 (on a node licensed from Samsung), and TSMC's 16nm also to 24M/mm².
TSMC's 7nm node has a density of ~40M/mm².
Now Intel has made two statements (both found in the first page of the article):
1. 100 million transistors per mm² or a 5.7x improvement.
2. A 2.7x improvement in density over 14nm, which gives 55M/mm². 55M/mm² would be consistent with Intel's claim of beating TSMC's 7nm.
Next, I'm assuming my calculations about Intel's transistor density are wrong and that both of Intel's claims are true. In that case, Intel's current 14nm would be 27M/mm². Of course, we can't assume my calculations about GF and TSMC are correct either, so we are left without any conclusion.
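(One way to sanity-check the two claims is to back out the 14nm baseline each one implies; a minimal sketch, taking only the 100 MTr/mm² headline figure as input. The ~37 MTr/mm² result matching Intel's published 14nm library density is the likely resolution: library density sits well above any whole-chip estimate, since real dies also contain SRAM, I/O, and whitespace.)

```c
#include <stdio.h>

int main(void)
{
    const double claimed_10nm = 100.0;  /* Intel's headline MTr/mm^2 */

    /* Back out the 14nm baseline implied by each claimed factor. */
    printf("5.7x implies a baseline of %.1f MTr/mm^2\n", claimed_10nm / 5.7);  /* ~17.5 */
    printf("2.7x implies a baseline of %.1f MTr/mm^2\n", claimed_10nm / 2.7);  /* ~37.0 */

    /* ~37 MTr/mm^2 lines up with Intel's published 14nm *library*
     * density (~37.5), which is far above whole-chip estimates derived
     * from die sizes, because real dies also contain SRAM, I/O and
     * plain whitespace. */
    return 0;
}
```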
Rudde - Saturday, January 26, 2019 - link
I jumped the gun and didn't proceed to page two, which explains a lot of the same things I tried to explain, but uses actual node data rather than chip sizes.
smalM - Saturday, January 26, 2019 - link
Page two doesn't use actual node data; it uses Intel propaganda ;-)
KOneJ - Sunday, January 27, 2019 - link
Yep, they're not the only ones optimizing libraries. They're trying to muddle transistors with design compiling. While this is fair, it's not taking into account that others are working both halves of the problem as well. Clearly meant to be misleading.
sidm2k11 - Saturday, January 26, 2019 - link
How is the fan noise on the PN60? Mine makes a pretty loud whine all the time, and temperatures regularly cross 80°C at full load... My 4010U Brix PC is whisper-quiet by comparison.
alacard - Saturday, January 26, 2019 - link
Well that was a wonderfully intricate review. Thank you.
yeeeeman - Saturday, January 26, 2019 - link
As someone said earlier in this thread, I think we miss opportunities when moving to a new process every two years. The mishap that Intel had just showed us how much better a process can become if you give your engineers time. 14nm started late, with some low-clocked parts; we had Broadwell chips that ran at a 3.3 GHz base. Then Skylake came, and the 6700K brought 4 GHz at quite high power. Then the 7700K came, and another tweak to the process improved clocks, so we got a 4.7 GHz boost. After this, things moved up in core counts (which should've happened a long time ago, but only did once there was competition...) and we got the 8700K and now the 9900K with turbo to 5 GHz. Until now, only 32nm with Sandy Bridge came close to the 5 GHz mark. With a lot of time to tweak, they have become so confident in the 14nm process that they released a 5 GHz stock CPU.

The financials tell the true story. Even if we cry about 10nm, the truth is that things can move forward without a new process. It is actually cheaper to prolong the life of a given process, and if they can add enough improvements from generation to generation, they can afford to launch a new process only once every 4-5 years.
Dodozoid - Saturday, January 26, 2019 - link
Indeed, we probably have to get used to a lot of +++ processes. During the architecture day, the new Intel people (old AMD people) mentioned they are decoupling the architecture from the process. That means they can make progress on IPC as well, rather than just pushing clocks on the same core over and over...
KOneJ - Sunday, January 27, 2019 - link
Unfortunately, SB derivatives seem to need a significant overhaul; "tocks" of late haven't exactly brought meaningful IPC gains. Hopefully deeper and wider *Cove designs are a step in the right direction. I just don't like that Intel seems to be taking an approach not dissimilar to the Pentium 4 the last time AMD reared its head. Only this time, a major departure in micro-architecture and a steady process advantage aren't waiting in the wings. Even with the *Coves, I think AMD may be able to build enough steam to solidly overtake them. There's no reason that Zen 4 and onward couldn't go deeper and wider too, especially looking at power consumption on the front and back ends of the Zen core versus the uncore mesh. I think Zen derivatives will try wider first; it might actually make the high core-count parts significantly more power efficient. It could also scale better than post-SB designs did, if Agner Fog's analysis is anything to go by. Multiple CPU die masks and uncore topologies incoming? Wouldn't surprise me.
dgingeri - Saturday, January 26, 2019 - link
Well, yeah, they can be improved upon over time, but that doesn't cut production costs like a process shrink does. Improving the process can increase yields and performance, but only by a limited percentage. A process shrink increases the number of chips from a wafer by a much higher amount, even if there are more defects.

Well, that was the way it worked up until the 14nm process. With 10nm at Intel, they had far too many defects, and the process failed to give the returns they wanted for quite a while. That had as much to do with the quality of the wafers before production as it did with the production process itself. They had to push the wafer producers to higher levels of purity in order to fix that. I'm fairly sure TSMC would have had the same issues with their 7nm, but Intel had already pushed wafer production to higher levels of purity because of their problems, so TSMC was able to take a couple of extra steps ahead because of that.

These days, we're going to see each smaller step take longer and longer to get right, because of these same hurdles. As things get smaller, impurities will have a higher and higher impact on production. We may not get as far as some are hoping, simply because we can't get silicon as pure as necessary.
name99 - Saturday, January 26, 2019 - link
"Another takeaway is that after not saying much about 10nm for a while, Intel was opening up. However, the company very quickly became quiet again."The history page is great. But I have to wonder if the ultimate conclusion is that the best thing, for both Intel and the world, is that they STICK to the STFU strategy? And that journalist stick to enforcing it.
One thing that's incredibly clear from all this is that Intel are utterly lousy at forecasting the future. Maybe it's deliberate lies, maybe it's just extreme optimism, maybe it's some sort of institutional pathology that prevents bad news flowing upward?
Regardless, an Intel prediction for beyond maybe two years seems to be utterly worthless. Which raises the question -- why bother asking for them, and why bother printing them?
Look at that collection of technologies from the 2010 slide that were supposed to be delivered over the next nine years. We got Computational Lithography, and that's about it. Certainly no III-V or Germanium or Nanowires. Interconnects (Foveros and EMIB?), well, yeah, in about as real a form as 10nm. 3D refers to what? Die stacking? Or 3D structures? Either way, nothing beyond the already extant FinFETs. Dense Memory? Well, yeah, there's Optane, but that's not what they had in mind at the time, and Optane DIMMs are still crazy specialized. Optical Interconnect? Occasional mutterings about on-die photonics, but nothing serious yet.
Now on the one hand you could say that prediction is hard. How much better would IBM, or TSMC, or Samsung, have done? On the other hand (and this is the point) those companies DON'T DO THIS! They don't make fools of themselves by engaging in wild claims about what they will be delivering in five years. Even when they do discuss the future, it's in careful measured tones, not this sort of "ha ha, we have <crazy tech> already working and all our idiot competitors are four years behind" asinine behavior.
I suspect we'd all be better off if every tech outlet made a commitment not to publish or discuss any Intel claims reaching more than two years out. If you're not willing to do that, you might as well just call yourself "Home of Free Intel Advertising". Because it's clear that's ALL these claims are. They are not useful indications of the future. They're merely mini Intel ads intended to make the competition look bad, with ZERO grounding in reality beyond that goal.
KOneJ - Sunday, January 27, 2019 - link
While you're correct that the media is ignorantly doing just that for the most part, at least this article provides context on what Intel is trying to do in obfuscating the numbers versus TSMC and Samsung, who haven't stumbled the same way. Some of the Foveros "magic" is certainly not being knocked down enough when people don't understand what it's intended to do. 2.5D, 3D, MCMs, and TSVs all overlap but cover different issues. I blame the uneducated reader more than anything. Good material is out there, and critical analysis between the lines is in short supply. "Silicon photonics" was a big catch-phrase in earnings calls a few years ago, but is quiet now. Hype, engineering, and execution are all muddied by PR crap. Ian is, however, due credit for at least showing meaningful numbers. It's more in the reader's hands now. Your last remarks really aren't fair to this article, even if they bear a certain degree of merit in general. Sometimes lies are needed to help others understand the truth, though...
HStewart - Saturday, January 26, 2019 - link
I believe that this Cannon Lake is about getting AVX-512 out to developers. What would be interesting, if possible, is for Intel to release Covey Lake on both 14nm and the new 10nm. I would expect Covey Lake to bring a significant speed increase over current 14nm chips even if it stays on 14nm, with 10nm adding more on top; combining Covey Lake and the new 10nm+ should be quite amazing.

One test I would like to see is a benchmark that runs in both AVX2 and AVX-512 to see the difference. There must be a reason why Intel is making the change.
KOneJ - Sunday, January 27, 2019 - link
KOneJ - Sunday, January 27, 2019 - link
Cheap Cannon Lake is not designed to get AVX512 into dev hands. That's the dumbest thing ever. And "Covey Lake"? Please read the article before commenting. There are a few good blog posts and whitepapers out there analyzing and detailing SIMD across AVX varieties. For most things, AVX512 isn't as big a deal as earlier SIMD extensions were. It has some specialized uses as it is novel, but vectorizing code and optimizing compilers to maturity is slow and difficult. There are fewer quality code slingers and devs out there than you would expect. Comp sci has become littered with an unfortunate abundance of cheap, low-quality talent.
HStewart - Sunday, January 27, 2019 - link
OK, for the people misunderstanding AVX-512 - it appears to be 2x as fast as AVX2:
https://www.prowesscorp.com/what-is-intel-avx-512-...
Yes, it's going to take a while for people to use AVX-512 - but just think about it: twice the bits. I was the same way about 64-bit in the early days, thinking primarily that it would just make programs larger and wasn't necessary. As a developer for three decades, one thing I have seen 64-bit do is make developers lazy: with more memory, there is less to worry about in algorithms when going to large arrays.
As for Sunny Cove, it's logical that with more units in the chip it is going to make a difference - but of course Cannon Lake does not have Sunny Cove, so it does not count here. The big difference will be seen when Covey Lake CPUs come out and we can compare them with Cannon Lake - and even with Kaby Lake and the associated competitors' chips.
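(To make the "twice the bits" argument concrete: a minimal sketch of the same loop written with AVX2 and with AVX-512 intrinsics, assuming a compiler flag such as -mavx512f; the function names are illustrative and remainder handling is omitted. Each AVX-512 iteration processes 16 floats where AVX2 handles 8, though whether that becomes 2x real speed depends on memory bandwidth and, on some chips, frequency throttling.)

```c
#include <immintrin.h>

/* Add two float arrays with AVX2: 8 floats (256 bits) per iteration. */
void add_avx2(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
    /* tail elements omitted for brevity */
}

/* Same loop with AVX-512: 16 floats (512 bits) per iteration, i.e.
 * half the iterations. The "2x" is vector width, not guaranteed speed. */
void add_avx512(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
    }
    /* tail elements omitted for brevity */
}
```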
HStewart - Sunday, January 27, 2019 - link
One thing on Covey Lake and Intel's upcoming 7nm: there is no doubt that the designers made a mistake with Cannon Lake's 10nm - Intel realizes that and has built new fabs and also a new design architecture. There was no real reason for Intel to release Cannon Lake at all, but it is good to see that the next generation is more than just a node change - it includes the Covey Lake architecture change.
qcmadness - Saturday, January 26, 2019 - link
I am more curious about the manufacturing node. Zen (14/12nm from GF) has 12 metal layers. Cannon Lake has 13 metal layers, with 3 quad-patterned and 2 dual-patterned. How do these impact yield and time to manufacture? I think the 3 quad-patterned layers will hurt Intel in the long run.
KOneJ - Sunday, January 27, 2019 - link
More short-run, I would say, actually. EUV is coming to simplify and homogenize matters; this is a patch job. Unfortunately, PL analysis and comparison is not an apples-to-apples issue, as there are so many facets to implementation across the various design stages. A broader perspective that encompasses the overall aspects and characteristics is more relevant IMHO. It's like comparing a high-pressure FI SOHC motor with a totally unrelated low-pressure FI electrically-spooling DOHC motor of similar displacement. While arguing minutiae about design choices is interesting to satisfy academic curiosity, it's ultimately the reliability, power curve, and efficiency that people care about. Processors are much the same. As a side note, I think it's the attention to all these facets and stages that has given Jim Keller such consistent success. Intel is shaping up for a promising long term. The only question there is where RISC designs and AMD will be when the time comes. HSA is coming, but it will be difficult due to the inherent programming challenges. Am curious to see where things are in ten or fifteen years.
eastcoast_pete - Sunday, January 27, 2019 - link
Good point and question! With the GPU functions apparently simply not compatible with Intel's 10 nm process, does anyone here know whether any GPUs out there use quad-patterning at all?
anonomouse - Sunday, January 27, 2019 - link
@Ian or @Andrei: Is dealII missing from the spec2006fp results table? Is this just a typo/oversight, or is there some reason it's being omitted?
KOneJ - Sunday, January 27, 2019 - link
Great write-up, but isn't this backwards on the third page?
"a 2-input NAND logic cell is much smaller than a complex scan flip-flop logic cell"
"90.78 MTr/mm^2 for NAND2 gates and 115.74 MTr/mm^2 for Scan Flip Flops"
The NAND cell is smaller than the flip-flop cell, yet there are more flip-flop transistors per square millimeter than NAND transistors?
Or am I missing something?
Rudde - Sunday, January 27, 2019 - link
A 2-input NAND cell is only four transistors, while a scan flip-flop cell contains twenty or more, depending on the design. The quoted figures count transistors per mm², not cells per mm²: the flip-flop cell is physically larger, but it packs many more transistors into its area, so its transistor density comes out higher.
The NAND cell is smaller because it contains far fewer transistors.
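(The two quoted figures do reconcile with the headline number via the weighted density metric Intel's Mark Bohr proposed in 2017, which mixes a small NAND2 cell at 60% and a complex scan flip-flop cell at 40%; a quick check with the article's own numbers:)

```c
#include <stdio.h>

int main(void)
{
    /* Library transistor densities quoted in the article, MTr/mm^2. */
    const double nand2 = 90.78;   /* 2-input NAND */
    const double sff   = 115.74;  /* scan flip-flop */

    /* Bohr's metric: weight the small cell 60% and the complex cell 40%. */
    printf("weighted density = %.2f MTr/mm^2\n", 0.6 * nand2 + 0.4 * sff);
    /* Prints 100.76, i.e. Intel's "100 million transistors per mm^2". */
    return 0;
}
```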
KOneJ - Sunday, January 27, 2019 - link
It would be great if you guys could get a CNL sample in the hands of Agner Fog. He might be able to answer some of the micro-architecture questions through his tests.
dragosmp - Sunday, January 27, 2019 - link
Awesome review; great in-depth content, and well explained. Considering the amount of work this entailed, it's clear why these reviews don't happen every day. Thanks!
dragosmp - Sunday, January 27, 2019 - link
I'll just add... many folks are saying AMD should kick arse. They should, but Intel has been in this situation before - they messed up the 90nm process, probably not so badly that the chips were unusable, but it opened the door to AMD and its Athlon 64. What did AMD do? Messed it up in turn with slow development and poor design choices. Hopefully they'll capitalize this time, so that we get an actual duopoly rather than the monopoly on performance we've had since Intel's 65nm chips.
eva02langley - Sunday, January 27, 2019 - link
Euh... You mean this...?
https://www.youtube.com/watch?v=osSMJRyxG0k
Anti-competitive tactics? They bought the OEM support to prevent competition.
And, just lately, this came up...
https://www.tomshardware.com/news/msi-ceo-intervie...
"Relationship with Intel: Chiang told us that, given Intel's strong support during the shortage, it would be awkward to tell Intel if he chose to come out with an AMD-powered product. "It's very hard for us to tell them 'hey, we don't want to use 100 percent Intel,' because they give us very good support," he said. He did not, however, make any claims that Intel had pressured him or the company."
Yeah right, Intel is winning because they have better tech... /sarcasm
eva02langley - Sunday, January 27, 2019 - link
Even better...
https://youtu.be/osSMJRyxG0k?t=1220
AntonErtl - Sunday, January 27, 2019 - link
Great article! The title is a bit misleading, given that it is much more than just a review. I found the historical perspective on Intel's processes most interesting: other reporting often just repeats whatever comes out of some company's PR department and leaves readers to compare for themselves with other reports; better reporting highlights some of the contradictions; but rarely do we see such a comprehensive overview.

The 8121U would be interesting to me for playing with AVX512, but the NUC is too expensive for that purpose, and I can wait until AMD or Intel provide it in a package with better value for money.
RamIt - Sunday, January 27, 2019 - link
Need gaming benches. This would make a great CS:S laptop for my daughter to game with me on.
Byte - Monday, January 28, 2019 - link
Cannon Lake, 2019's Broadwell.
f4tali - Monday, January 28, 2019 - link
I can't believe I read this whole review from start to finish...
And all the comments...
And let it sink in for over 24hrs...
But somehow my main takeaway is that 10nm is Intel's biggest graphics snafu yet.
(Well THAT and the fact you guys only have one Steam account!)
;)
NikosD - Monday, January 28, 2019 - link
@Ian Cutress
Great article - it's going to become an all-time classic - and kudos for mentioning SemiAccurate and Charlie for his work and inside information (and guts).
But really, how many days, weeks, or even months did it take to finish it?
bfonnes - Monday, January 28, 2019 - link
RIP Intel
CharonPDX - Monday, January 28, 2019 - link
Insane to think that there have been as many 14nm "generations" as there were "Core architecture" generations before 14nm.
ngazi - Tuesday, January 29, 2019 - link
Windows is snappy because there is no graphics switching. Any machine with the integrated graphics completely off is snappier.
Catalina588 - Wednesday, January 30, 2019 - link
@Ian, this was a valuable article, and it's been clipped to Evernote. Thanks!

Without becoming Seeking Alpha, you could add another dimension or two to the history and future of 10nm: cost per transistor and amortizing R&D costs. At Intel's November 2013 investor meeting, William Holt strongly argued that Intel would deliver the lowest cost per transistor (slide 13). Then-CFO Stacey Smith and other execs also touted this line for many quarters. But as your article points out, poor yields and added processing steps make 10nm a more expensive product than the 14nm++ we see today. How will that get sold, and can Intel improve the margins over the life of 10nm?
Then there's amortizing the R&D costs. Intel has two independent design teams, in Oregon and Israel. In the good old tick-tock days, each team owned a two-year process node and a new microarchitecture. The costs for two teams over five-plus years without mainstream 10nm products are huge - likely hundreds of millions of dollars. My understanding is that Intel, under general accounting rules, has to write off the R&D expense over the useful life of the 10nm node, basically on a per-chip basis. Did Intel start amortizing 10nm R&D with the "revenue" for Cannon Lake starting in 2017, or is all of the accrued R&D yet to hit the income statement? Wish I knew.
Anyway, it sure looks to me like we'll be looking back at 10nm in the mid-2020s as a ten-year lifecycle. A big comedown from a two-year TickTock cycle.
bananaforscale - Thursday, January 31, 2019 - link
A single 10nm SKU, and it has the GPU fused off? Why bother even taping it out when you're moving to a different process node anyway?
Trevor08 - Friday, February 1, 2019 - link
For Intel's sake (and ours), I hope they're working furiously on quantum CPUs.
talktowendys - Saturday, February 2, 2019 - link
This is the best processor to work on. I use this processor myself; it is the best technology. You can check our blog.
El Sama - Monday, February 4, 2019 - link
Maybe it will be great once 10nm+++++++ is released?
cheshirster - Saturday, June 22, 2019 - link
So what is the actual density of Cannon Lake?