Speaking off the cuff a bit, I'm a bit surprised that Intel picked the 21st for this announcement. With that being the day of the solar eclipse, it's become a mini-holiday for a lot of people. This is particularly the case in Intel's backyard (Oregon), where the zone of totality is only 30 miles or so from Intel's Oregon facilities. Intel's not doing an on-site meeting here or in California (thankfully), but I can only imagine a lot of people in Oregon would have rather had that day off.
Meanwhile it means that Intel will be competing with the eclipse for public attention. Outside the US that won't be a problem, but inside the US that's another story. Only Intel could pick a fight with the Moon, I suppose...
Maybe they don't want huge attention, considering the relative disappointment that was Kaby Lake. Not that it's a bad proc by any means, but there was little to no IPC increase, just errata fixes and a clock speed bump, and very little in the way of new features. Will Coffee Lake provide something more substantive than requiring a new motherboard? Or will it be another proc that offers no one an incentive to upgrade?
On another note, I'm disappointed because I won't be stateside until the 27th. Wish I could see the eclipse. Whoever decided to do this on that day, well, your name will be cursed at the water coolers and local bars.
I was thinking the same thing. Maybe Intel is taking a page out of Donald's playbook and taking advantage of a distraction (the solar eclipse) with the announcement of what may amount to virtually nothing exciting.
I dunno, rumor seems really pretty solid that they're substantially ramping the core count this time round. Quite a major thing, and a pretty easy one to sell as well, you'd think. (Much easier than 5-10% IPC.)
W00t! Let's all have a round of applause and congratulate Intel on its fantastic achievement of no longer holding back consumer CPU performance to maximize profits. How amazing!
Hah, they are still holding back. First, they do have core counts increasing, which they had already planned years ago, with almost perfect timing to counter AMD. Then they have been sitting on Cannon Lake, Ice Lake, and 10nm. Maybe they really do have issues on the 10nm node, but honestly I think they see more profit in not moving there too fast, so those issues are taking their sweet time to get resolved. I'm watching to see how fast Ice Lake lands; that will tell me how much Intel has been holding back from producing the best product they can.
Funny thing is, people still wanted to buy a 4c/8t Intel i7 for more than $330 when they could have had 90% of that chip, in the name of the Ryzen 5 1500X, for half the price.
Now those same people drool over the i7 8700K when they could already have the 8c/16t Ryzen 7 1700X for $299. Intel loves that kind of blind customer: spit on them and they still say thanks and hand over some bills.
Actually if IPC is your jam, Apple Hurricane is king. Around 15 to 25% higher than Intel. But Intel still wins (for now...) on frequency.
We shall see what the A11 brings, but I expect it will be a whole lot more exciting than Coffee Lake's expected ~3% IPC boost and ~300MHz higher turbo frequencies. (Note how Intel now always talks about how great the Sysmark improvement is? Sysmark, by DESIGN, is not a CPU benchmark, it's a system benchmark. So it picks up improvements in things like RAM speed, flash speed, faster PCI, even more cores/better hyperthreading. All that's perfectly legit IF you're interested in the naive question "how fast is this computer for office work" --- but that's NOT the same thing as asking "how fast is this CPU on single-threaded performance".)
What you mean is not best at IPC but best at single-threaded performance.
Sure, the A10 might have a strong ALU, but ALU performance is secondary; it is basically only used to drive program flow. 99% of number crunching is done on the SIMD units, and ARM is still at 128-bit throughput while Intel is already (prematurely IMO, but someone's gotta do it) pushing 512-bit.
A strong ALU makes a strong impact in something like a mobile phone, based on the typical usage scenarios, but that chip won't do too well in a prosumer situation. Basically OK for content consumption, too weak for content creation.
Granted, very few people out there do any content creation on iOS devices, maybe a few illustrators sketching and a few musician wannabes mixing rudimentary music.
ALU performance is still *by far* the most important aspect of CPU performance, and on x86 the extent to which SIMD matters is somewhat inflated, because using the SIMD instructions for serial floating point code is generally a good idea. However, that use-case does not extend to larger SIMD batch sizes.
Now, certainly number crunching matters, and SIMD is OK for that, but many if not most workloads contain fairly little SIMD - and I'm going to speculate that it's not going to be hugely valuable to extend SIMD support *in general*. Keep in mind further that workloads that really benefit from larger SIMD batch sizes tend to benefit orders of magnitude more from GPUs and parallelization, for the same reasons.
Larger SIMD batch sizes have downsides and costs too; it's not a free lunch - even when you aren't using them! So it's really not clear if e.g. AVX-512 is a good idea in the first place, and that goes even more so for further increases thereafter.
Right now the A10 has three 128-bit wide vector units; comparable Intel cores today have two 256-bit wide units. This shows in the numbers --- dense linear algebra IPC for the A10 is about 3/4 the value for Intel.
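To put rough numbers on that claim (an idealized peak-width calculation only -- real dense linear algebra IPC also depends on FMA ports, latencies, and cache, so treat this as a sanity check, not a measurement):

```python
# Idealized peak SIMD width per cycle, using the unit counts quoted above
a10_bits = 3 * 128    # A10: three 128-bit NEON-class vector units
intel_bits = 2 * 256  # Skylake-class: two 256-bit AVX2 units

ratio = a10_bits / intel_bits
print(ratio)  # 0.75 -- consistent with the "about 3/4" figure above
```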
It's unclear where Apple will go with this. As others have pointed out, integer performance is by far the more challenging (and important) part of modern CPU design. MOST (not all, but most) FP work (certainly that which is amenable to ever wider vectorization) is best run on systems that better exploit the substantial regularity in most FP algorithms: DSPs for extremely regular algorithms and data flow, GPU-like designs for less regular but substantially decoupled data flow. I've said before that the only reason it makes sense for Apple to drop the IMG GPU is to replace it with something better, and that something better would be a throughput engine, designed to handle throughput code with extremely low overhead for shifting from the main latency cores to the throughput cores and back. Meanwhile, on the latency cores, we shall see if Apple adopts SVE (initially on top of NEON hardware, but with an eye to deprecating NEON).
Point is - FP performance is mostly not interesting any more if your goal is to understand these designs and computing in general. Boasting about how well a latency core handles a throughput operation just shows that you don't understand the engineering big picture.
- the way Intel has handled growing its SIMD units at every stage, from MMX through the various SSEs to the clusterfck of all the different AVX versions on different CPUs, is hardly something to be proud of. The main thing it has achieved is to ensure that almost no wide-spread code ever uses the best Intel SIMD capabilities, because Intel uses those to segment markets; instead, wide-spread code targets the lowest common denominator. Multiple NEON units (like Apple has done) are a better intermediate-term solution because everybody wins, with generic code working well on both the high end and the low end. SVE right now seems like an even better solution, allowing the same code to work on everything from the lowest 128-bit wide implementation to as high as you practically want to go (2048 bits wide).
It remains to be seen how this will play out, but I could see Apple moving to SVE soon. Maybe not this year. (The drama for the A11 would appear to be dropping 32-bit support, and adding the Apple throughput engine. And while who knows what is being done behind the scenes, publicly LLVM's SVE support is mostly still at the design/prototype stage.) But maybe next year on 7nm?
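For anyone unfamiliar with what makes SVE different: the vector length is not baked into the instruction encoding, so one binary runs on any hardware width. A toy Python model of that "vector-length-agnostic" loop (an analogy, not real SVE code; all names here are made up):

```python
# Toy model of an SVE-style vector-length-agnostic loop: the *same* loop
# body works whether the hardware implements 128-bit or 2048-bit vectors,
# because the width is queried at run time rather than fixed at compile
# time (unlike SSE/AVX/AVX-512, where each width is a separate code path).

def vla_add(a, b, vector_bits):
    lanes = vector_bits // 32          # 32-bit float lanes per vector
    out = []
    i = 0
    while i < len(a):
        # a predicate masks off the tail, so no scalar cleanup loop is needed
        active = min(lanes, len(a) - i)
        out.extend(a[i + j] + b[i + j] for j in range(active))
        i += lanes
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [10.0, 20.0, 30.0, 40.0, 50.0]

# Identical results on a narrow and a wide implementation:
print(vla_add(a, b, 128))   # [11.0, 22.0, 33.0, 44.0, 55.0] (two passes)
print(vla_add(a, b, 2048))  # same list, in a single pass
```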
As for discussions of "musician wannabe's", really dude? (a) It's an engineering discussion. You also omitted, among others, photographers and video editors/manipulators. (b) The discussion refers to CPUs not how they are used. Scenarios like recognition (think real-time translation), AR and computational photography, or games may not be prosumer, but they certainly can demand powerful computation and are part of what Apple is targeting. (c) It's inevitable that Apple will at some point move their desktops (and likely even their data-warehouses) to ARM. The wins are just too compelling in terms of more control of the pacing of innovation, while TSMC charges a lot less than Intel. Which means all those markets that you claim as not existing on iOS are still within Apple's design ambit.
I am quite sure that Apple would love to scale up their CPUs for their Mac computers, but I do not think they (and Tim Cook in particular) are ready to reenact their PowerPC to x86 move once again. They will probably play it safe and stay on x86, out of fear of scaring away their users and due to the massive porting effort that would require, from MacOS itself to the most trivial MacOS program.
And now that the x86-64 market is no longer a monopoly, with the competition between Intel and AMD heating up, they have even less of a reason to move. Competition means lower prices from Intel, sooner or later, and lower CPU prices mean higher profit margins (which is the reason they would move to an ARM ISA in the first place) - since there is no way they are going to drop their prices.
As much as I would love to see an ARM-based Mac, I highly doubt it is going to happen; outside the very low power budgets of phones and tablets, they cannot beat Intel's performance. They are also never going to develop a high-TDP Apple CPU for another customer; although that would mean more profits, it is not Apple's style.
So the only path from which we can expect a higher-power (or rather mid-power, in the 15-20 W range, for low-power laptops), wider ARM CPU is ARM themselves. That should be their goal in the mid to long term, in order to expand their market, but are they going to do it?
They use AMD GPUs. Now that AMD has a decent CPU, I wonder if Apple will go the way of the consoles and have a semi-custom design made for their laptops. It would provide packaging advantages so they can make the next MacBook Air even thinner.
Meanwhile on the other side of the world we have TSMC saying in March "Samsung announced late last year it plans to use EUV in a 7nm process that could be in production by 2019. “We believe we will be the first one” to use EUV in volume production, said TSMC’s Woo with risk production starting by June 2018."
and more recently "TSMC used an unnamed “novel resist” chemical to replace five immersion masks with one EUV mask at pitches ranging from 26-30nm. Liu said the company currently expects EUV could compress as many as 16 immersion masks to four or five."
with 5nm in 2019...
And this stuff seems to all be happening on schedule. TSMC Board meeting today approved US$3,153.6 million for various capital spending. (Don't think this $3 billion is for the whole of 2017. They seem to have these meetings four to six times a year and EVERY ONE OF THEM ends with a resolution to spend somewhere from $1 to $4 billion!)
From what I have heard, people believe that Intel has the most EUV tools in the field. And Samsung and TSMC have been a bit looser with their naming recently than Intel has been.
EUV has been "coming soon totally for realz this time guys 100% sure" so many times, I will simply not waste time listening to the marketing drivel until they have the initial production wafers rolling out the fab.
You realize the above statement doesn't make you a wise and careful analyst? It makes you someone who can't distinguish between the statements of various enthusiastic amateurs and the statements of people who actually matter. If people like E S Jung (head of Samsung Foundry) or Mark Liu (Co-CEO of TSMC) say they'll be introducing EUV in 2018, that's rather different from random conference attendees saying "well, this works, that doesn't, we're optimistic we can get it working in a few years"...
You know they all buy the EUV machines from the same place, ASML. That's Intel, TSMC, Samsung, GF, etc. They are all honestly on pretty similar footing as far as how far away production is. Of course there are secrets at each shop, but ASML knows all of them...
Pretty much -- I mean, pre-i7, basically all of the brand names ran across multiple process nodes. Core 2 was on 65/45, P4 was on 180, 130, 90, 65... P3s went from at least 250 to 130, etc.
Hah, Intel appear to have added another “optimization” stage to their Process->Architecture->Optimization flow, so they can’t even stick to that. Coffee Lake is basically going to be take 3 of the Skylake microarch, but with six (gasp!) cores for the mainstream. They really know how to piss on their customers and tell them it’s raining.
Well, finally getting 6 cores on mainstream, and, from what I have also heard, 4 true cores on U-series parts as well, IS good news no matter how it's spun.
In 2020, 8K video products will be in production, so I'm waiting for Tiger Lake or later. I suppose 45 W H-series processors will come with 8 cores, DDR5, PCIe 4, full UHD resolution, HDMI 2.1, and USB 3.2.
Ahh, just like the AMD64 days. The new Pentium D, err, Coffee Lake, being a crappy incremental update, just like the old days.
So what's Intel's next "Core" (i.e. the kind of successor that Core was to NetBurst) architecture in 2019-20 going to look like? How are they going to respond to the incoming IBM process that AMD is going to be using, and its 5 GHz target clock speed?
It's exciting times to be in the computing space again. I really wonder how things will turn out and how Intel will respond. I think if there is one thing that anyone can say, it's good for there to finally be competition again. I built my 6700k desktop last year so I could run more ram over my old 2600k build, and I really wish I had waited until now for a nice TR setup. So nice to have choices.
Jim Keller went back to AMD from 2012 till 2015. Jim Keller, the god of CPUs. If you knew that, you knew pain was coming, and investing in the stock when AMD reached low levels was a brilliant idea.
I've been following it for the past two decades. AMD sucked for the past 8 years. I know Jim went back, and I also invested way back in '12, and have traded it off and on since '04. I'm really hoping these take off on the server side of things, because as a consumer of these server-grade chips, it has been frustrating seeing the pathetic improvements in the v1-v4 E5 Xeons over the years. The only thing you were able to do was go one SKU down at best for your midrange servers, which was a savings of about $100-200. Nothing spectacular.
As for you Hurr Durr, first, this isn't /g/, and second, practically the entire semiconductor industry collaborated on the 7nm process node. Samsung, IBM and TSMC have thrown their weight and investments into the process because they all plan on using it for their products, which means everything from Samsung SSDs to AMD CPUs and GPUs. With that being said, I wouldn't plan on it being a dud. Intel is having significant problems bringing up yields on their 10nm node, and here we are with 14nm+++. Honestly, I originally thought AMD spinning off their fabs would be disastrous, but it seems it was a smart move instead, because they ended up collaborating with the rest of the industry.
All this said, supposedly this 14nm node they are on was designed for 3 GHz clock speeds, so if their new 7nm shrink actually does run at 5 GHz like it's said to, then we could be seeing the first consumer CPUs that regularly hit 6 GHz overclocks. Wouldn't that be nice? With all this being said, it remains to be seen if AMD can stop screwing up everything else and continue to innovate rather than being a one-hit wonder. With how much money Intel throws into R&D, there will have to be more than just clock speed and core scaling from their modular architecture. Hopefully their IPC also improves.
The 7nm node does look very good on paper, better than Intel's 10nm, but of course we have to wait and see. However, if the 7nm node does work out well, Zen 2 should be a really good CPU to compete with Ice Lake. I do believe Intel's 10nm issues are like a flat tire they choose not to fix; they are milking 14nm due to no competition, as they have simply been at 10nm way too long for any other answer.
Intel has been sitting on 10nm and Ice Lake and is going to fight back hard over the next 12 months.
AMD has a pretty awesome 7nm process coming up from GLOFO that could be better than Intel's 10nm which will be a first.
Every node ever has to look better than what Intel operates on at the moment, at least on paper, because Intel is for all intents and purposes the benchmark here. Who would even get excited if you came out and said something like "we really threw down and produced this marvel of technology, which ekes out a 5% yield/power improvement over a meh TSMC process"?
I wanted to add AMD's next-gen GPU, Navi: if you read up on it, it is a beast of a concept, though the devil is in the execution. You will see why Vega's design has some compromises, so that AMD didn't have to do a full redesign for Navi. Using the Infinity Fabric to tie GPUs together sharing low-latency HBM2 will be Zen all over again, fighting against Nvidia's monolithic dies.
Intel's new core design is Ice Lake on 10nm, and it was expected in 2019, but I'm pretty sure they have had this tape-in ready for a while, so expect it to hit in the second half of 2018. I hope they do start announcing 2018 dates; it might get AMD to push GloFo harder on 7nm. Should make for a fun couple of years.
There is a lot of confusion with Intel's code names. Coffee Lake refers to all 8th generation CPUs, but the 8th generation is both 14nm++ *and* 10nm. The 14nm++ CPUs are the ones released this year, under the code name "Kaby Lake refresh", and the 10nm ones are "Cannon Lake". All clear? Coffee Lake = Kaby Lake refresh + Cannon Lake. Oh, and by the way, 14nm++ performs better than the first iteration of 10nm (according to slides from Intel's manufacturing day).
There are only 3 factors I would consider for this new processor line. First, will there be a Xeon ECC option? Second, will Intel match AMD's 64 PCIe lanes? Third, will any low-core-count version be released, since my workload is light and I do not need anything more than 4 cores?
It honestly sounds like the AMD ThreadRipper 1900X is almost perfect for your workload. Why do you want to wait for Intel to offer something they have no interest in offering? Intel prefers to highly segment their offerings, so getting all of that in one seems unlikely.
All of AMD's current processors support ECC memory if you get an appropriate motherboard, so, the 1900X offers:
- 60 usable PCIe lanes
- quad-channel ECC RAM
- 8 cores / 16 threads that boost from 3.8 GHz to 4.2 GHz (with XFR)
- easily overclockable, if more is needed
15%? They lie. There has been no such generational increase since Sandy Bridge. It has been 0% here, 5% there for the last 6 years, and there is no indication it is getting any better.
Do you remember the time when CPU performance increased 50% every year? More than doubled in 2 years? I do.
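The arithmetic behind that complaint, for anyone who wants it spelled out:

```python
import math

def years_to_double(annual_gain):
    """How long compounding at a given annual rate takes to double."""
    return math.log(2) / math.log(1 + annual_gain)

# 50%/year (the era the comment remembers) vs ~5%/year (recent generations)
print(round(years_to_double(0.50), 1))  # ~1.7 years to double
print(round(years_to_double(0.05), 1))  # ~14.2 years to double

# six generations of ~5% compound to only ~34% total improvement
print(round(1.05**6 - 1, 2))  # 0.34
```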
It is all about architecture now, which has simply hit the physics barrier. A completely new computer architecture is needed, and it is totally possible (I have developed one, not von Neumann-derived) to eventually increase performance/W by an order of magnitude.
The architecture to watch is ARM, particularly as done by Apple. They are still managing to deliver 30 to 50% increases year after year. And while these A-series chips are only equal to the i3 and i5 mobile x86 lines, they also use less than 5 watts for the entire tablet SoC, and less than 3 watts for the iPhone version, which includes the GPU, which is itself improving by 50 to 75% a year.
I would imagine that if Apple decided to use this for something else, they could figure out a way to work it with 10 Watts, and blow past the mobile i7 too.
Nope. ARM has exactly the same architectural problems, and in fact follows the same improvement pattern as Intel, only a few years later. For example, the A73 is almost the same as the A72 in terms of performance per watt (more integer but less FP performance, in fact). You cannot fool physics, which prevents further performance and efficiency improvements in the current architecture. But you can change the architecture, in a way that is compatible with all current software (normal software, without the need to rewrite everything, as for OpenCL/GPU computing, for example), while replacing CPUs, GPUs, DSPs, ISPs and to a point even single-function blocks (the latter with somewhat lower efficiency, but not 10x-100x lower like the current generation of CPUs). Current CPU designers are just stuck on the path set in the 1940s, while technology has made it completely obsolete.
They are talking SKU to SKU -- so like a 6700K vs a 7700K, so they are including higher clockspeed in there as well -- not just IPC. (And as we know, 6700K -> 7700K was essentially only clockspeed)
Isn't Coffee Lake coming exclusively to desktop (as hexa and quad core CPUs) and higher end mobile parts (as quad core 15 & 28 W CPUs and up to hexa core 45 W CPUs), with Cannon Lake (10 nm) releasing only mobile small die dual core 6 & 15 W CPUs? I thought I remembered seeing that on an Intel roadmap a while back but it's possible things have changed...
Yaldabaoth - Tuesday, August 8, 2017 - link
Well, if Intel is the Sun, and AMD is the Moon, then... WAITAMINNIT! [Gasp]
Manch - Wednesday, August 9, 2017 - link
AMD 290 or 480 is to AMD 390 or 580 as Kaby Lake is to _______
a. Coffee Lake
b. Sky Lake
c. a & b
d. a & b with the caveat that you have to buy a new MB....
answer: ...letnI uoy no emahS .yletanutrofnu krow lliw srewsna eseht fo ynA
Manch - Wednesday, August 9, 2017 - link
So just more cores? :/
Manch - Wednesday, August 9, 2017 - link
This ^^^^ LMFAO!!
jospoortvliet - Thursday, August 10, 2017 - link
I easily believe they held back on adding cores but not a new process. That just doesn't fit their MO.
marc1000 - Wednesday, August 9, 2017 - link
lol great remark, Intel competing with the Moon for attention!
Gastec - Sunday, August 20, 2017 - link
Indeed, and only a fool would pick a fight with a force of Nature :)
Buk Lau - Tuesday, August 8, 2017 - link
How much harder can you squeeze out of a tube of toothpaste aka 14nm Intel?
HammerStrike - Wednesday, August 9, 2017 - link
If IPC is your jam Intel is still the king. Just because I don't like them doesn't mean they don't have the best architecture.
We shall see what the A11 brings, but I expect it will be a whole lot more exciting than Coffee Lake's expected ~3% IPC boost and ~300MHz higher turbo frequencies.
(Note how Intel now always talks about how great the Sysmark improvement is? Sysmark, by DESIGN, is not a CPU benchmark, it's a system benchmark. So it picks up improvements in things like RAM speed, flash speed, faster PCI, even more cores/better hyperthreading.
All that's perfectly legit IF you're interested in the naive question "how fast is this computer for office work" --- but that's NOT the same thing as asking "how fast is this CPU on single-threaded performance".)
What you mean is not best at IPC but best as single-threaded performance.
ddriver - Wednesday, August 9, 2017 - link
Sure, A10 might have a strong ALU, but ALU performance is secondary, it is basically only used to drive program flow. 99% of number crunching is done on the SIMD units, and ARM is still at 128 bit throughput while intel is already (and prematurely IMO but someone gotta do it) pushing 512 bit.A strong ALU makes a strong impact in something like a mobile phone, based on the typical usage scenarios, but that chip won't do too well in a prosumer situation. Basically OK for content consumption, to weak for content creation.
Granted, very few people out there do any content creation on ios devices, maybe a few illustrators sketching and few musician wannabes mixing rudimentary music.
emn13 - Wednesday, August 9, 2017 - link
ALU performance is still *by far* the most important aspect of CPU performance; and on x86 the extend to which SIMD matters is somewhat inflated because using the SIMD instructions for serial floating point code is generally a good idea. However, that use-case does not extend to larger SIMD batch sizes.Now, certainly number crunching matters, and SIMD is OK for that, but lots if not most of workloads contain fairly little SIMD - and I'm going to speculate that it's not going to be hugely valuable to extend SIMD support *in general*. Keep in mind further that workloads that really benefit from larger SIMD batch sizes tend to benefit orders of magnitude more from GPUs and parallelization for the same reasons.
Larger SIMD batch sizes have downsides and costs too; it's not a free lunch - even when you aren't using them! So it's really not clear if e.g. AVX 512 is a good idea in the first place, and that goes even more so for further increases thereafter.
name99 - Wednesday, August 9, 2017 - link
Right now the A10 has 3 128bit wide vector units, comparable Intel today has 2 256 wide units. This shows in the numbers --- dense linear algebra IPC for A10 is about 3/4 the value for Intel.It's unclear where Apple will go with this. As others have pointed out, integer performance is by far the more challenging (and important) part of modern CPU design. MOST (not all, but most) FP work (certainly that which is amenable to ever wider vectorization) is best run on systems that better exploit the substantial regularity in most FP algorithms, so DSPs for extremely regular algorithms and data flow, GPU-like designs for less regular but substantially decoupled data flow. I've said before that the only reason it makes sense for Apple to drop the IMG GPU is to replace it with something better, and that something better would be a throughput engine, which would designed to handle throughput code with extremely low overhead for shifting from the main latency cores to the throughput cores and back.
Meanwhile on the latency cores we shall see if Apple adopts SVE (initially on top of NEON hardware, but with an eye to deprecating NEON).
Point is
- FP performance is mostly not interesting any more if your goal is to understand these designs and computing in general. Boasting about how well a latency core handles a throughput operation just shows that you don't understand the engineering big picture.
- the way Intel has handled growing its SIMD units at every stage, from MMC through various SSEs to the clusterfck of all the different AVX versions on different CPUs is hardly something to be proud of. The main thing it has achieved is to ensure that almost no wide-spread code ever uses the best Intel SIMD capabilities because Intel uses those to segment markets, instead wide-spread code targets the lowest common denominator.
Multiple NEON units (like Apple has done) is a better intermediate term solution because everybody wins, with generic code working well on both the high end and the low-end. SVE right now seems like an even better solution, allowing the same code to work on everything from the lowest 128-bit wide implementation to as high as you practically want to go (4096-wide I think).
It remains to be seen how this will play out, but I could see Apple moving to SVE soon. Maybe not this year. (The drama for the A11 would appear to be dropping 32-bit support, and adding the Apple throughput engine. And while who knows what is being done behind the scenes, publicly LLVM's SVE support is mostly still at the design/prototype stage.) But maybe next year on 7nm?
As for discussions of "musician wannabe's", really dude?
(a) It's an engineering discussion. You also omitted, among others, photographers and video editors/manipulators.
(b) The discussion refers to CPUs not how they are used. Scenarios like recognition (think real-time translation), AR and computational photography, or games may not be prosumer, but they certainly can demand powerful computation and are part of what Apple is targeting.
(c) It's inevitable that Apple will at some point move their desktops (and likely even their data-warehouses) to ARM. The wins are just too compelling in terms of more control of the pacing of innovation, while TSMC charges a lot less than Intel. Which means all those markets that you claim as not existing on iOS are still within Apple's design ambit.
Santoval - Wednesday, August 9, 2017 - link
I am quite sure that Apple would love to scale up their CPUs for their Mac computers, but I do not think they (and Tim Cook in particular) are ready to reenact their PowerPC to x86 move once again. They will probably play it safe and stay on x86, out of fear of scaring away their users and due to the massive porting effort it would require, from macOS itself down to the most trivial macOS program. And now that the x86-64 market is no longer a monopoly, with the competition between Intel and AMD heating up, they have even less of a reason to move. Competition means lower prices from Intel, sooner or later, and lower CPU prices mean higher profit margins (which is the reason they would move to an ARM ISA in the first place) - since there is no way they are going to drop their own prices.
As much as I would love to see an ARM-based Mac, I highly doubt it is going to happen; outside the very low power budgets of phones and tablets, Apple cannot beat Intel's performance. They are also never going to develop a high-TDP Apple CPU for another customer, since although that would mean more profits, it is not Apple's style.
So the only place we can expect a higher-power (or rather mid-power, in the 15 - 20 W range, for low-power laptops), wider ARM CPU from is ARM themselves. That should be their goal in the mid to long term, in order to expand their market, but are they going to do it?
Manch - Thursday, August 10, 2017 - link
They use AMD GPUs. Now that AMD has a decent CPU, I wonder if Apple will go the way of the consoles and have a semi-custom design for their laptops. It would provide packaging advantages so they could make the next MacBook Air even thinner.
Lolimaster - Wednesday, August 9, 2017 - link
The IPC difference is 5-7%.
name99 - Wednesday, August 9, 2017 - link
Ha! Meanwhile, on the other side of the world, we have TSMC saying in March:
"Samsung announced late last year it plans to use EUV in a 7nm process that could be in production by 2019. “We believe we will be the first one” to use EUV in volume production, said TSMC’s Woo with risk production starting by June 2018."
and more recently
"TSMC used an unnamed “novel resist” chemical to replace five immersion masks with one EUV mask at pitches ranging from 26-30nm. Liu said the company currently expects EUV could compress as many as 16 immersion masks to four or five."
with 5nm in 2019...
And this stuff seems to all be happening on schedule. TSMC Board meeting today approved US$3,153.6 million for various capital spending.
(Don't think this $3 billion is for the whole of 2017. They seem to have these meetings four to six times a year and EVERY ONE OF THEM ends with a resolution to spend somewhere from $1 to $4 billion!)
Yojimbo - Wednesday, August 9, 2017 - link
From what I have heard, people believe that Intel has the most EUV tools in the field. And Samsung and TSMC have been a bit looser with their naming recently than Intel has been.
edzieba - Wednesday, August 9, 2017 - link
EUV has been "coming soon, totally for realz this time guys, 100% sure" so many times that I will simply not waste time listening to the marketing drivel until they have the initial production wafers rolling out of the fab.
name99 - Wednesday, August 9, 2017 - link
You realize the above statement doesn't make you a wise and careful analyst? It makes you someone who can't distinguish between the statements of various enthusiastic amateurs and the statements of people who actually matter.
If people like E. S. Jung (head of Samsung Foundry) or Mark Liu (co-CEO of TSMC) say they'll be introducing EUV in 2018, that's rather different from random conference attendees saying "well, this works, that doesn't, we're optimistic we can get it working in a few years"...
extide - Monday, August 14, 2017 - link
You know they all buy the EUV machines from the same place: ASML. That's Intel, TSMC, Samsung, GF, etc. They are all honestly on pretty similar footing as far as how far away production is. Of course there are secrets at each shop, but ASML knows all of them...
Lolimaster - Wednesday, August 9, 2017 - link
Those gens, years ago, would simply have been called higher-numbered SKUs:
Skylake-Kaby Lake-Coffee Lake-Cannon Lake
6700K-6750K-6770K-6790K
Lord of the Bored - Wednesday, August 9, 2017 - link
I still remember when we didn't use cryptic business-speak like SKU. We used words like model and revision.
Manch - Wednesday, August 9, 2017 - link
Wouldn't Kaby Lake just have been a stepping back in the day?
extide - Monday, August 14, 2017 - link
Pretty much -- I mean, pre-i7, basically all of the brand names ran across multiple process nodes. Core 2 was on 65/45nm, the P4 was on 180, 130, 90, and 65nm... P3s went from at least 250nm down to 130nm, etc.
r3loaded - Wednesday, August 9, 2017 - link
Hah, Intel appear to have added another “optimization” stage to their Process->Architecture->Optimization flow, so they can’t even stick to that. Coffee Lake is basically going to be take 3 of the Skylake microarchitecture, but with six (gasp!) cores for the mainstream. They really know how to piss on their customers and tell them it’s raining.
extide - Monday, August 14, 2017 - link
Well, finally getting 6 cores on mainstream -- and I have also heard of 4 true cores coming to U-series parts as well -- IS good news, no matter how it's spun.
minde - Wednesday, August 9, 2017 - link
In 2020, 8K video products will be in production, so I'll wait for Tiger Lake or later. I suppose that 45W H-series processors will come with 8 cores, DDR5, PCIe 4, full UHD resolution, HDMI 2.1, and USB 3.2.
Lolimaster - Wednesday, August 9, 2017 - link
The Ryzen 1700 8-core already draws a tiny bit less than 45W if you disable turbo and apply a bit of undervolt :D
minde - Wednesday, August 9, 2017 - link
Mobile H, 45W
damianrobertjones - Wednesday, August 9, 2017 - link
Capitals and English can be fun! 4K hasn't exactly taken off as yet.
TheUsual - Saturday, August 12, 2017 - link
I was thinking of buying a laptop this fall. If I can get a 6-core vs a 4-core, I'm all for that.
Runiteshark - Wednesday, August 9, 2017 - link
Ahh, just like the AMD64 days. The new Pentium D, err, Coffee Lake: a crappy incremental update, just like the old days.
So what's Intel's "Core" (the successor to NetBurst) architecture of 2019-20 going to look like? How are they going to respond to the incoming IBM process that AMD is going to be using, and its 5GHz clockspeed target?
These are exciting times to be in the computing space again. I really wonder how things will turn out and how Intel will respond. If there is one thing anyone can say, it's that it is good for there to finally be competition again. I built my 6700K desktop last year so I could run more RAM than my old 2600K build, and I really wish I had waited until now for a nice Threadripper setup. So nice to have choices.
Lolimaster - Wednesday, August 9, 2017 - link
Maybe a bit of a failure to read between the lines on your part. Jim Keller went back to AMD from 2012 to 2015 - Jim Keller, the god of CPUs. If you knew that, you knew pain was coming, and investing in the stock when AMD reached its lows was a brilliant idea.
Runiteshark - Wednesday, August 9, 2017 - link
I've been following it for the past two decades. AMD sucked for the past 8 years. I know Jim went back, and I also invested way back in '12, and have traded it on and off since '04. I'm really hoping these take off on the server side of things, because as a consumer of these server-grade chips, it has been frustrating seeing the pathetic improvements in the v1-v4 E5 Xeons over the years. The only thing you were able to do was go one SKU down at best for your midrange servers, which was a savings of about 100-200. Nothing spectacular.
As for you, Hurr Durr: first, this isn't /g/, and second, practically the entire semiconductor industry collaborated on the 7nm process node. Samsung, IBM and TSMC have thrown their weight and investments into the process because they all plan on using it for their products, everything from Samsung SSDs to AMD CPUs and GPUs. With that being said, I wouldn't plan on it being a dud. Intel is having significant problems bringing up yields on their 10nm node, and here we are with 14nm+++. Honestly, I originally thought AMD spinning off their fab would be disastrous, but it seems it was a smart move instead, because they ended up collaborating with the rest of the industry.
All this said, supposedly the 14nm node they are on was targeted at 3GHz clockspeeds, so if their new 7nm shrink actually does run at 5GHz as claimed, then we could be seeing the first consumer CPUs that regularly hit 6GHz overclocks. Wouldn't that be nice? With all this being said, it remains to be seen whether AMD can stop screwing up everything else and continue to innovate rather than being a one-hit wonder. With how much money Intel throws into R&D, there will have to be more than just clockspeed and core scaling from their modular architecture. Hopefully their IPC also improves.
FreckledTrout - Wednesday, August 9, 2017 - link
The 7nm node does look very good on paper - better than Intel's 10nm - but of course we have to wait and see. However, if the 7nm node does work out well, Zen 2 should be a really good CPU to compete with Ice Lake. I do believe Intel's 10nm issues are like a flat tire they choose not to fix: they are milking 14nm due to no competition, as they have simply been at 10nm way too long for any other answer. Intel has been sitting on 10nm and Ice Lake and is going to fight back hard over the next 12 months.
AMD has a pretty awesome 7nm process coming up from GLOFO that could be better than Intel's 10nm which will be a first.
I have my popcorn ready.
Hurr Durr - Wednesday, August 9, 2017 - link
Every node ever has to look better than what Intel operates on at the moment, at least on paper, because Intel is for all intents and purposes the benchmark here.
Who would even get excited if you came out and said something like "we really threw down and produced this marvel of technology, which ekes out a 5% yield/power improvement over a meh TSMC process"?
FreckledTrout - Wednesday, August 9, 2017 - link
I wanted to add AMD's next-gen GPU, Navi: if you read up on it, it is a beast of a concept, though the devil is in the execution. You will see why Vega's design has some compromises - so AMD didn't have to do a full redesign for Navi. Using the Infinity Fabric to tie GPUs together sharing low-latency HBM2 will be Zen all over again, fighting against Nvidia's monolithic cores.
Hurr Durr - Wednesday, August 9, 2017 - link
They always somehow forget to mention that Keller left AMD again... for Tesla! Really shows you how bad things must have been.
And IBM has been dead for a while already.
jospoortvliet - Thursday, August 10, 2017 - link
Right, just like he left Apple and things went to shit. Oh wait, Apple still beats everybody else... Jim built a team and a basic design, then left the finishing touches to others. Just as planned.
Hurr Durr - Wednesday, August 9, 2017 - link
>incoming IBM process
Any IBM process at this point in time is a guaranteed dud. So AMD will be stuck with their new pedestrian architecture for ten years again.
FreckledTrout - Wednesday, August 9, 2017 - link
Intel's new core design is Ice Lake on 10nm, and it was expected in 2019, but I'm pretty sure they have had this tape-in ready for a while, so expect it to hit in the second half of 2018. I hope they do start announcing 2018 dates; it might get AMD to push GLOFO harder on 7nm. Should make for a fun couple of years.
lefty2 - Wednesday, August 9, 2017 - link
There is a lot of confusion around Intel's code names. Coffee Lake refers to all 8th generation CPUs, but 8th generation is both 14nm++ *and* 10nm. The 14nm++ CPUs are the ones released this year, under the code name "Kaby Lake refresh", and the 10nm ones are "Cannon Lake". All clear? Coffee Lake = Kaby Lake refresh + Cannon Lake.
Oh, and by the way, 14nm++ performs better than the first iteration of 10nm (according to slides from Intel's manufacturing day).
hingsunanand - Wednesday, August 9, 2017 - link
There are only 3 factors I would consider for this new processor line: 1st, will there be a Xeon ECC option; 2nd, will Intel match AMD's 64 PCIe lanes; 3rd, will any low-core-count version be released, since my workload is light and does not need anything more than 4 cores.
coder543 - Wednesday, August 9, 2017 - link
It honestly sounds like the AMD Threadripper 1900X is almost perfect for your workload. Why wait for Intel to offer something they have no interest in offering? Intel prefers to highly segment their offerings, so getting all of that in one part seems unlikely.
All of AMD's current processors support ECC memory if you get an appropriate motherboard, so the 1900X offers:
- 60 usable PCIe lanes
- quad-channel ECC RAM
- 8 cores / 16 threads that boost from 3.8GHz up to 4.2GHz (with XFR)
- easily overclockable, if more is needed
It's not a quad-core part, of course.
AnandTech article: http://www.anandtech.com/show/11678/amd-threadripp...
peevee - Wednesday, August 9, 2017 - link
15%? They lie. There has not been such a generational increase since Sandy Bridge. It has been 0% here, 5% there for the last 6 years, and there is no indication it is getting any better.
Do you remember the time when CPU performance increased 50% every year? More than doubled in 2 years? I do.
It is all about architecture now, which has simply hit the physics barrier. A completely new computer architecture is needed, and it is totally possible (I have developed one, not von Neumann-derived) to eventually increase performance/W by an order of magnitude.
melgross - Wednesday, August 9, 2017 - link
The architecture to watch is ARM, particularly as done by Apple. They are still managing to deliver 30 to 50% increases year after year. And while these A-series chips are only equal to the i3 and i5 mobile x86 lines, they also use less than 5 watts for the entire tablet SoC, and less than 3 watts for the iPhone version - including the GPU, which is itself improving by 50 to 75% a year.
I would imagine that if Apple decided to use this for something else, they could figure out a way to work it at 10 watts, and blow past the mobile i7 too.
Past that, who knows?
peevee - Wednesday, August 9, 2017 - link
Nope. ARM has exactly the same architectural problems, and in fact follows the same improvement pattern as Intel, only a few years later. For example, the A73 is almost the same as the A72 in terms of performance per watt (more integer but less FP performance, in fact).
You cannot fool physics, which prevents further performance and efficiency improvements in the current architecture. But you can change the architecture, in a way that is compatible with all current software (normal software, without the need to rewrite everything, as for OpenCL/GPU computing for example), while replacing CPUs, GPUs, DSPs, ISPs and to a point even single-function blocks (the latter with somewhat lower efficiency, but not 10x-100x lower like on the current generation of CPUs).
Current CPU designers are just stuck on the path set in the 1940s, while technology has made it completely obsolete.
extide - Monday, August 14, 2017 - link
They are talking SKU to SKU -- so, a 6700K vs a 7700K -- which means they are including higher clockspeed in there as well, not just IPC. (And as we know, 6700K -> 7700K was essentially only clockspeed.)
bodonnell - Wednesday, August 9, 2017 - link
Isn't Coffee Lake coming exclusively to desktop (as hexa and quad core CPUs) and higher end mobile parts (as quad core 15 & 28 W CPUs and up to hexa core 45 W CPUs), with Cannon Lake (10 nm) releasing only mobile small die dual core 6 & 15 W CPUs? I thought I remembered seeing that on an Intel roadmap a while back but it's possible things have changed...