252 Comments

  • gregounech - Monday, November 6, 2017 - link

    The rumours were true :O
  • nathanddrews - Monday, November 6, 2017 - link

    Indeed they were. I am truly stunned!
  • mapesdhs - Tuesday, November 7, 2017 - link

    I bet Alex Jones predicted it 3 years ago. ;D
  • pogostick - Tuesday, November 7, 2017 - link

    What surprises me about this is remembering how Intel straight up lied about it.
  • patrickjp93 - Tuesday, November 7, 2017 - link

    Intel didn’t lie. The response to the rumor was that no such deal had been signed. It’s quite possible for Intel to move from the ink drying to bringing a heterogeneous package to market in the span of a few months. Don’t forget why Intel is the king of the computing world. It has the engineering prowess to beat IBM and Oracle even though they used to be the kings of supercomputing.
  • pogostick - Tuesday, November 7, 2017 - link

    No, as npz explained, they were already making this. Intel's official response, as noted in a Barron's piece was this:

    "The recent rumors that Intel has licensed AMD's graphics technology are untrue."

    So, claiming that Intel did not lie is technically correct, but practically speaking it was a lie. The crux of the question was if AMD would be making money from Intel. To me, this is Intel officially and willingly misleading the public. It confirms a history of lacking principles for Intel to so freely choose to further erode their public trust like this. It's a bit disgusting, actually. Boy who cried wolf, and all that.

    My takeaway is that Intel will always be full of shitake.
  • Pinn - Tuesday, November 7, 2017 - link

    Do you work at AMD or Nvidia?
  • pogostick - Tuesday, November 7, 2017 - link

    No. Have I said something that offended you?
  • mikato - Monday, November 20, 2017 - link

    It sounds like from the article that Intel will be buying AMD chips, not licensing technology.
  • Scummie - Tuesday, November 28, 2017 - link

    One does not need to license technology to purchase components using the technology. Do you license all of the technology in your car when you purchase it?
  • haknukman - Tuesday, November 7, 2017 - link

    Intel is king of raping consumer!
  • tipoo - Monday, November 6, 2017 - link

    Interesting development for sure...So maybe we could see 13" ultrabooks with Radeon graphics that challenge the MX150 first?
    I'm down for a 13" rMBP with the 8th gen ULV quads, and these Radeon graphics.
  • fmq203 - Monday, November 6, 2017 - link

    An AMD Ryzen 5 2500U with Vega would perform the same as an Intel i7-8550U + MX150 combo in graphics applications. This would perform better than that; it's not made to compete with upcoming AMD laptop products, but to serve a higher-end market that AMD can't enter right now.
  • IGTrading - Monday, November 6, 2017 - link

    How weird is it to see Intel not naming or showing AMD's brand or logo anywhere in the presentation video and slides?!

    Really?? No branding, nothing, when AMD provides the biggest chip on the package and is the INVENTOR and FOUNDER of HBM & HBM2?!

    Intel is absolutely the worst corporation we've seen in the past 30 years and we're happy they're not in the oil business, considering how badly they can fail and how dirty they can behave :)
  • CajunArson - Monday, November 6, 2017 - link

    If you had the reading comprehension of a third-grader you would have seen that Intel mentions AMD repeatedly in its official press release and even quotes AMD executives: https://newsroom.intel.com/editorials/new-intel-co...

    I've seen your senile and frankly bigoted posts on other websites too.
  • BrokenCrayons - Monday, November 6, 2017 - link

    Heh, proof that the Internet has a ddriver (just under a different name) for every topic.
  • ironargonaut - Monday, November 6, 2017 - link

    I had to look at the name twice; I thought it was ddriver.
  • Notmyusualid - Tuesday, November 7, 2017 - link

    @ ironargonaut

    On reading this news, I was about to look at the calendar, and then I realized it's too cold for it to be April.

    I pass an Intel fab on the way to work each morning (Hillsboro, OR), and I DREAM of getting a look inside, at what new tech they have... sometimes I bump into their execs at a local cafe, but they have so far given nothing away. Good lads.
  • Ranger1065 - Tuesday, November 7, 2017 - link

    He certainly appears to have made an indelible impression on some folks.
  • jabber - Monday, November 6, 2017 - link

    Trust me, there are far far worse corporations out there than Intel. Some real nasty scum out there. Intel isn't anywhere near.
  • haknukman - Tuesday, November 7, 2017 - link

    Oh really? Like saying Infinity Fabric is "glued"? And now what is Intel doing? Defend that, you stooge!
  • dromoxen - Monday, November 6, 2017 - link

    That's maybe because they are buying the shizzle from RTG, which may also be spun out from AMD within the next two months, bought up by a VC/Saudi buyout. That may be why Intel is happy to deal with them. And don't discount the renewed ARM Mali gfx long-term threat ....
  • patrickjp93 - Tuesday, November 7, 2017 - link

    Hynix invented HBM/2, and it's a ripoff of Intel's and Micron's HMC. Quit being a prima donna.
  • haknukman - Tuesday, November 7, 2017 - link

    Yep, Intel thinks we're stupid. Everyone in tech knows it is AMD's HBM2 and Radeon.
  • Krysto - Monday, November 6, 2017 - link

    I agree. But his comment exemplifies how most people will think now about this whole situation. Ryzen being almost as good on performance plus needing to get a powerful dedicated GPU might have pushed them towards the Ryzen + Radeon combination. Now they will think "hey, I wanted Intel anyway, and now I can get it with highly integrated AMD GPUs, too!" Win-win.

    Sigh. Such a stupid move by AMD.
  • ash9 - Monday, November 6, 2017 - link

    It's a custom part, as Intel refers to 'It', "In close collaboration, we designed a new semi-custom graphics chip" . Do you see AMD's logo on the Xbox or PlayStation????????
  • IGTrading - Monday, November 6, 2017 - link

    No, but Microsoft actually makes the chips (AMD only designed them), and Microsoft also had complete decision power when it came to the chip architecture design (memory, bus and so on).

    Here AMD makes the chips and sells them to Intel.

    Also AMD invented, founded and supported HBM and HBM 2.

    On this package, AMD is responsible for the 2 largest chips and Intel is responsible for the x86 processor and the interconnect.
  • Trackster - Monday, November 6, 2017 - link

    Microsoft does not make any chips as they do not own any foundries.
  • tamalero - Monday, November 6, 2017 - link

    Neither does Apple, yet they have their own proprietary designs.
    They simply buy capacity and production from fabs around the world (Samsung, TSMC, etc.).
  • IGTrading - Tuesday, November 7, 2017 - link

    You understood it wrong mate :)

    What you're saying means AMD and Qualcomm make no chips because they don't have any foundries :)

    To try and spell it out more clearly: Microsoft makes the chips which it designed with AMD's help and IP.

    Microsoft designs the chips. Microsoft writes the drivers. Microsoft integrates the chips into the build. Microsoft pays TSMC to manufacture the chips. Microsoft tests the chips. Microsoft pays AMD for IP. Microsoft pays AMD for the assistance with the drivers and the rest.

    TSMC manufactures the chips for Microsoft, which then pays AMD for the IP and the semi-custom work.

    Here AMD makes the chips: AMD designs, AMD tests, AMD optimizes, AMD writes the drivers, AMD pays TSMC to manufacture the chips, AMD packages the chips and prepares them to be connected to Intel's processor, AMD sells the chips it has made to Intel and Intel pays AMD for each chip.

    That's an AMD chip sitting on Intel's processor, and it is messed up that Intel doesn't clearly give AMD credit anywhere in the marketing materials.

    Intel just mentions AMD in text, like a technical clarification, almost like the fine print on corporate contracts while all marketing screams INTEL :)

    It's really a mess IMHO.
  • awesomegamer919 - Monday, November 6, 2017 - link

    AMD will not make any of the chips, they do not own any foundries. Either GloFo or Intel will make the GPU and Samsung/SK/Micron the HBM.
  • patrickjp93 - Tuesday, November 7, 2017 - link

    Intel makes its own HBM2. Don’t forget Intel and Micron invented stacked memory with HMC.
  • SetiroN - Monday, November 6, 2017 - link

    These are exactly the opposite of 13" ultrabook parts.
  • tipoo - Monday, November 6, 2017 - link

    15" laptops with smaller motherboards for more batteries would also be good.
  • IGTrading - Monday, November 6, 2017 - link

    According to one of my colleagues : "Intel is being a funny ass" :)

    Intel : " we designed a new semi-custom graphics chip, which means this is also a great example of how we can compete and work together"

    IGTrading team : "When it comes to graphics and semi-custom, you've designed squat. Failed twice. And spent a few billion on the failures."

    #comebackoftheday
  • CajunArson - Monday, November 6, 2017 - link

    Oh yeah, Intel "failed"... its way to a $4.5 billion profit last quarter.

    Tell me how long it takes the voices in your head... I mean the "IGTrading Team"... to match that failure.
  • tamalero - Monday, November 6, 2017 - link

    Market control and the CPU business have nothing to do with their graphics failures.

    They have tried many times to do graphics cards; they are BAD.

    I mean, no one remembers Larrabee?
  • PeachNCream - Monday, November 6, 2017 - link

    I dunno why anyone would remember Larrabee since it was canceled before it was ever sold as a consumer GPU. The design is still kinda around in those Intel compute card things like Knights Landing or whatever. That's not something the average person is going to end up using so it not being memorable isn't a big deal.

    Anywho, for being a failure at GPUs, the fact that Intel graphics are the huge majority of consumer computers doesn't really scream failure. Performance might not be all that great, but you can't really turn around without seeing a computer that has Intel graphics in it.
  • daku123 - Tuesday, November 7, 2017 - link

    *cough* Linux *cough*
  • ZeDestructor - Tuesday, November 7, 2017 - link

    Intel iGPUs are lovely on Linux. Perf is a bit shit (such is the life of being fully opensource), but it's there, it works, it's mostly opensource. Can't say the same for nV, and I have no idea about AMD.
  • PeachNCream - Tuesday, November 7, 2017 - link

    I can only speak for basic office tasks, web browsers, video playback, and a little bit of light gaming (a few native games and a couple of things through WINE that I really love), but my experience with Linux and Intel graphics hasn't been bad. They're not gaming GPUs and they perform worse than under Windows, but that's true of Nvidia and AMD also... graphics performance under Linux just isn't comparable from any company. So don't use your computer for games. Buy a PS4 or something.
  • Notmyusualid - Tuesday, November 7, 2017 - link

    @ CajunArson

    I'm not greedy, I only want ONE of those billions...
  • haknukman - Tuesday, November 7, 2017 - link

    Profit has nothing to do with shitty Intel HD graphics.
  • Zeratul56 - Monday, November 6, 2017 - link

    Never thought I would see the day. I guess the earth is about to freeze over.

    On a serious note, this would be great for on-the-go gamers. I use my Skylake tablet to game quite a bit on the airplane. If I could get a device to boost the graphics just a little bit, that would be nice.
  • blppt - Monday, November 6, 2017 - link

    +1 on this, I had to check the date to make sure this wasn't April.
  • IGTrading - Monday, November 6, 2017 - link

    Remember how many slammed Fudzilla.com, HardOCP.com and SemiAccurate.com for breaking this news back in February 2017 ?!

    We hope everybody will politely apologize and #readmore before speaking on matters they don't master.

    February 2017 : http://www.fudzilla.com/news/processors/42806-inte...

    May 2017 : http://www.fudzilla.com/news/graphics/43663-intel-...

    They didn't get all the details right, but they got the bottom line.

    This is Intel's official admission that INTEL CANNOT BUILD GPUs.

    It's done and over. They've wasted over 10 billion on failed projects only to end up begging AMD to help them.

    The "poetic" words are well chosen on our behalf : Intel is desperate to cut nVIDIA's sales and profits therefore all those tens of millions of laptops with nVIDIA graphics are going to suddenly dissappear.

    Intel does this because it sees nVIDIA as a much bigger threat than AMD and they hope that by cutting their revenue down, they will put pressure on nVIDIA's R&D budgets.
    For nVIDIA, it remains to be seen how this plays out.

    For AMD this is a God-send:

    1) AMD's Raven Ridge will power AMD-based laptops and bring revenue to the company (that was previously 0% in the Thin&Light market)

    2) AMD will still be present on the low end notebook market

    3) AMD ensures THEIR GPUs are in 80% of the laptops sold in the whole world, displacing nVIDIA with a serious revenue increase.

    4) AMD's software optimization base suddenly improves if developers make the effort to take advantage of all the compute abilities brought by AMD's iGPU

    There will also be a downside: Intel may decide to completely annihilate AMD in the low-end market, because obviously the CPUs that do not end up in notebooks between 400 USD and 700 USD will be moved lower on the scale, where AMD's Bristol Ridge can barely face them.

    This is one of the strangest partnerships we've seen in the past 15 to 20 years, and it goes to show that Intel is capable of anything in its attempt to annihilate the competition.

    Moreover, considering Intel's past history, we hope AMD's net gain from such a strategy is significant, because Intel is always dangerous and literally up to no good : https://www.youtube.com/watch?v=osSMJRyxG0k
  • Gondalf - Monday, November 6, 2017 - link

    " INTEL CANNOT BUILD GPUs " bold statement dude, come on :). The real issue are the patents, i know this, you know this. "INTEL CANNOT BUILD PROFITABLE GPUs" sounds much better.
    A good GPU is all but definitively not rocket science, for example a good cpu is much complex to realize, by a wide wide margin.
  • IGTrading - Monday, November 6, 2017 - link

    Intel lost more than 10 billion USD on failed GPU projects and ALWAYS FAILED to beat even the worst-performing GPU from both AMD and nVIDIA.

    It has nothing to do with profitability.

    When you have all the money in the world and spend years and years on the project and the result is such a failure that it fails to beat the low end of the competition, that means you can't build GPUs, no matter if they're profitable or not.

    IMHO
  • Santoval - Tuesday, November 7, 2017 - link

    You appear to be unaware that Intel's iGPUs employed some Nvidia patents between 2011 and 2017. That was the result of a cross-licensing agreement between Nvidia and Intel which lapsed in early 2017 (hell, it is even mentioned in this article). So Intel was restricted patent-wise since at least 2011, and they decided to buy GPUs from AMD instead of beating the same patent horse from Nvidia.

    You need to realize that Nvidia did not provide Intel with their "crown jewel" GPU patents, but rather patents for basic GPU functionality (just as Intel did not grant them an x86 license on the CPU side). So, since almost all GPU related patents had already been picked by Nvidia, AMD or ARM, and Intel could only work with a very limited set of Nvidia patents, Intel's iGPUs would *always* be handicapped. Intel, despite their very significant resources, could not reinvent the (GPU) wheel.
  • pSupaNova - Tuesday, November 7, 2017 - link

    "So, since almost all GPU related patents had already been picked by Nvidia, AMD or ARM, and Intel could only work with a very limited set of Nvidia patents, Intel's iGPUs would *always* be handicapped. Intel, despite their very significant resources, could not reinvent the (GPU) wheel."

    Intel has been in this game for years; they could have built up their own IP. Apple has managed to do it.
  • patrickjp93 - Tuesday, November 7, 2017 - link

    Apple has not managed to do it. It just bought out Imagination Technologies to avoid a lawsuit over that, you ditz.
  • IGTrading - Tuesday, November 7, 2017 - link

    Apple ?! Buying Imagination Technologies ?!

    Where did you read that ?!
  • IGTrading - Tuesday, November 7, 2017 - link

    Very interesting point Santoval.
  • haknukman - Tuesday, November 7, 2017 - link

    So Intel has been stupid all these years! Good point!
  • Hixbot - Monday, November 6, 2017 - link

    If Intel are desperate for a decent SOC GPU, why would AMD willingly help? You would think AMD could reap the benefit that Intel cannot compete with AMD's Zen/Vega APUs. In this case AMD is helping Intel to compete with AMD in the APU market. I don't understand why AMD would invite the competition. They could have had the only x86 APU with decent GPU and CPU performance.
  • lmcd - Monday, November 6, 2017 - link

    AMD cannot compete with Intel's experience with creating platforms. Intel also has done much better historically with interconnects. Intel's CPUs also always hit at or under their TDP, and usually are at a smaller die area as well.
  • IGTrading - Monday, November 6, 2017 - link

    AMD invented HyperTransport, formerly known as LDT, which Intel later copied.

    AMD invented x86-64 which Intel later licensed from AMD.

    AMD invented Infinity Fabric "glue" which Intel now copies.

    AMD invented HBM and HBM2 memory which Intel now uses.

    AMD invented Mantle which was completely integrated into DirectX 12 which Intel now uses .

    AMD invented TSV which Intel now uses on everything including its high end SSDs.

    AMD produced the first processors with copper interconnects which Intel later started using.

    AMD basically invented the iGPU and the APU while Intel beat them to the manufacturing punch.

    So where did Intel innovate lately ... in the past 40 years ... since they invented x86?!

    AMD cannot compete ?!

    Like AMD's Opteron did not completely destroy Intel's Itanium and its disastrous EPIC architecture?! :)
  • awesomegamer919 - Monday, November 6, 2017 - link

    > AMD invented Infinity Fabric "glue" which Intel now copies.

    EMIB =/= IF. EMIB is by definition very different, allowing Intel to slap nearly anything together, whilst IF has only been shown to work with CPU and GPU dies, not HBM.
  • blppt - Monday, November 6, 2017 - link

    Wasn't HyperTransport something that AMD acquired from DEC?
  • blppt - Monday, November 6, 2017 - link

    Also, I'm pretty sure that DX12 has absolutely nothing to do with Mantle---that would be Vulkan. It (DX12) has a similar "closer to the metal" mentality, but other than that...
  • boozed - Monday, November 6, 2017 - link

    The DEC technology that AMD used was the Alpha's DDR memory bus. HT was developed by a consortium of which AMD was a member.
  • IGTrading - Tuesday, November 7, 2017 - link

    AMD founded the HT Consortium to gain industry support for HyperTransport which it invented.

    AMD is also the President of the HT Consortium.

    To say HyperTransport was developed by the HT Consortium and AMD was "a member" is plain wrong.

    AMD invented, used successfully and popularized HyperTransport and THEN founded the HT Consortium.

    Guys, Google is free to use. :)

    #allweneedislove and to #readmore
  • Morawka - Tuesday, November 7, 2017 - link

    AMD did not invent HBM nor HBM2. Intel actually owns and makes HBM2 through Micron. AMD buys it from Micron among others.
  • boozed - Monday, November 6, 2017 - link

    Itanic was designed to compete outside the x86 arena and was a flawed concept from the start. It would have died just as quickly even in Opteron's absence, IMO.
  • ABR - Tuesday, November 7, 2017 - link

    Opteron (and Intel's ensuing copy) defined the entire market direction that has reduced *all* non-x86 CPUs to a small niche.
  • ABR - Tuesday, November 7, 2017 - link

    Server CPUs, that is. :-)
  • mapesdhs - Tuesday, November 7, 2017 - link

    It died because it was incredibly late and the first version was poor, not because it was objectively a bad idea. Had it come out on time, with the performance of Itanium2, nobody would be complaining.
  • Cliff34 - Monday, November 6, 2017 - link

    The proof of the pudding is in the eating. I ain't bashing AMD, but until they come out with a solid CPU for the laptop market, it is hard to say they are competing in this arena at all. I do hope that their next generation of CPUs and their platform for laptops can compete with Intel. But until there is proof, for laptops I am sticking with Intel.
  • blppt - Tuesday, November 7, 2017 - link

    They already have launched Ryzen+Vega APU for notebooks. Availability seems scarce at the moment though.
  • ZeDestructor - Tuesday, November 7, 2017 - link

    HyperTransport isn't all that special.. fast, low-latency, serial, cache-coherent interconnect, just like InfiniBand and any number of supercomputer links, which are even older. Hell, on its own, HyperTransport isn't very useful - it's only when you bring in the NUMA that it starts mattering. Intel didn't go NUMA till the Nehalem generation, at which point they brought out QPI, which looks the same because it solves the same problems as what HT solves.

    In terms of ISA extensions (x86_64 is just an ISA extension on top of x86), Intel has been by far the most prolific, with MMX, SSE1-4, AVX, FMA, AES-NI, TSX-NI and so on. Meanwhile AMD has 3DNow! (which nobody really used since it was basically MMX+), SSE4A (which nobody really uses, to the point where AMD are deprecating it), AMD-V/IOMMU (same thing as VT-x and VT-d respectively), SVM (super new, so no wide usage yet)

    Infinity Fabric isn't all that special either - cache coherent interconnect on PCIe.. I wonder where I saw that first.. oh, in CAPI. Intel went the UPI route instead, which is somewhat less flexible (can't remap UPI links to PCIe), but should work just as well. Internal to the CPU, Intel uses router-based meshes rather than rings, which while not as good as rings (that AMD uses) currently, should get better as core counts go ever higher.

    Intel and Micron have been working on HMC for about as long as AMD and SK Hynix have been working on HBM, and have shipped at around the same times. Alongside that they've also been working on Optane (which I suspect will be much better in NVDIMM form than NVMe) and EMIB.

    Mantle is mostly DICE's work, not AMD's, and as awesomegamer919 pointed out, is now Vulkan. DX12 and Metal are completely independent projects. I'll give AMD credit for supporting DICE and advertising the hell out of it though, cause that kicked MS into building DX12.

    TSV is most certainly not an AMD invention - that honour goes to some madmen in the 60s: "The first TSV was patented by William Shockley in 1962,[4] although most people in the electronics industry consider Merlin Smith and Emanuel Stern of IBM the inventors of TSV, based on their patent “Methods of Making Thru-Connections in Semiconductor Wafers” filed on December 28, 1964 and granted on September 26, 1967." IBM strikes yet again as being a leader in IC design.

    Intel invented High-k Metal gate tech, and later on were the first to ship FinFETs. And SOI was more of an IBM thing, which AMD made good use of over the years.

    AMD did not produce the first CPUs with copper interconnects. IBM and Motorola did in 1997.

    AMD did not invent iGPUs - that honour goes to consoles I believe. Simple example off the top of my head is the PS2's Emotion engine after a die shrink from two chips to a single unified chip. For a more modern example, you have the OMAP2420 (used in the Nokia N95, for example) which has a more "integrated" integrated GPU. AMD only wins the x86+iGPU award, and that's pretty shitty as awards go.

    So where did Intel innovate you ask? Well, everywhere really, and more often than not driving x86 forward more so than AMD has. The only really, really major technical wins AMD has are x86_64 and integrating the memory controller. Big wins, certainly, but nothing that Intel couldn't have done on their own.

    Doesn't mean AMD can't compete, or even beat Intel, but overall, Intel has been the leader more often than not. That's just the facts, unfortunately.

    On the subject of Itanium, you should really go talk to anyone who's worked with Itanium at a really low level (like OS or compiler development). Itanium is an AMAZING microarchitecture, let down completely by x86 lock-in, a crap initial release, crap x86 emulation on the first gen (which I have been told was a big part of why the initial release was crap, and was later ditched completely), and being basically HP-only. These days, you can see many of the lessons learnt with Itanium in the core design of the Sandy Bridge lineage, in particular how they can achieve 4 FLOPs/cycle, and the usefulness of HyperThreading.

    PS: you should really have checked that AMD actually invented/popularised most of the things you give AMD credit for, since x86 isn't the only CPU microarchitecture out there.
  • ZeDestructor - Tuesday, November 7, 2017 - link

    Damn.. this went a tad longer than I expected...
  • IGTrading - Tuesday, November 7, 2017 - link

    I like your post, mate. It is factually correct for the most part (and I say that because I didn't check every fact, since most of them I know already and I see you depicted them correctly).

    But you present disparate facts that don't have a direct connection with our scenario of the innovation competition between AMD and Intel in the x86 space.

    Referring to that, AMD invented, developed, supported and brought to market far more technologies than Intel, which Intel initially bashed and then copied.

    That's what I'm trying to say.

    Copper interconnects: AMD introduces successfully, Intel follows later without success (7 years of Pentium 4-based CPUs)

    IMC: AMD introduces very early and successfully, Intel follows much, much later

    HyperTransport: AMD introduces, Intel follows much later

    APUs: AMD introduces, Intel launches FIRST but with bad performance

    HBM: AMD introduces with limited success, Intel copies much later

    and so on ...

    Intel did one thing and one thing only: x86.

    Then they quickly tried to move away from it and failed, while Cyrix taught them a hardware lesson.

    Then they again tried to move away with EPIC and failed incredibly painfully, being beaten by AMD on the x86 side and by IBM, Fujitsu, Sun and NEC on all other sides.

    They tried graphics for almost a decade, lost 11 billion USD and gave up before ANY product got to the market.

    They have tried parallel computing for almost 5 years now, and for 5 years especially nVIDIA (but even AMD) beat them so badly that they've spent 15 billion on Altera to try and diversify a bit.

    They tried competing with GloFo and TSMC and failed miserably as a fab.

    All they've achieved was through bribery and illegal activities being sentenced in over 5 countries on 3 different continents: https://m.youtube.com/watch?v=osSMJRyxG0k

    Trying to paint Intel in a decent light is hard mate :)
  • ZeDestructor - Tuesday, November 7, 2017 - link

    *sigh*

    Copper interconnects: IBM beat everyone cause THEY FREAKING INVENTED IT. AND shipped it in volume with various Power ISA stuff and maybe even System/390. I believe nV and ATi GPUs were also fabbed at IBM at the time, which puts AMD anywhere from 2nd to 4th place in volume shipments of copper interconnects. Intel was perfectly fine with their 180nm-65nm nodes; it's just that NetBurst was a massive fail. Also, ignoring Pentium M there, I see, which beat the crap out of everything else on the market, both Intel and AMD, if you were willing to overclock.

    HT/IMC: counterpoint: AMD weren't able to get FSB fast enough so they had to go for an IMC+HT setup. Besides, AMD is hardly the first there: all kinds of embedded chips (most notably ARM and MIPS) had IMCs for embedded and mobile devices, like routers, phones, PDAs. As I said, x86 isn't the only architecture out there.

    APUs: mobile and embedded (including consoles) got there YEARS earlier than both.

    HBM: Intel was about a year later there, I'll grant that.

    On the x86 ISA front, Intel has led that far more than AMD. SSE and AVX may not be as visible as x86_64, but they're equally important, especially AVX. AVX-512 is twice as fast as AVX2, which is twice as fast as AVX1, which is twice as fast as SSE4, which is I believe ~3x as fast as "base" x86. This isn't the work of a company resting on its laurels; this is very deep, very serious cutting-edge engineering. Of course, the industry being convergent perf-wise means that IBM added similar stuff to POWER, and ARM did the same to some ARM cores.

    As I said in my original comment, Itanium (EPIC) is a great architecture - arguably the best ISA+architecture combo ever released in the history of computing, but the launch fail, combined with the x86 platform being given a massive boost from x86_64, combined with the effectively Intel-HP only approach killed Itanium very very dead.

    Graphics.. meh. AMD never even tried building a GPU from scratch like Intel - they just bought ATi for billions instead. If I had to call a company not innovative there, I'd say it's AMD, not Intel. Besides, have you even looked at how complicated building a GPU is? And all the patent licensing? Shits HARD, yo.

    Parallel computing-wise, Xeon Phi is pretty cool and functional. The problem there is the nVidia/CUDA monopoly, which is stifling AMD just as much as Intel. Also, did you know that Apple was the lead company behind OpenCL that AMD joined in wholesale later when they gave up on AMD Stream SDK due to developers ignoring it in favour of CUDA?

    Up until about 3 years ago, Intel fabs were strictly Intel-only. Intel wasn't even trying to compete with TSMC, GloFo and Samsung. Even now, Intel is still a ways ahead of TSMC, GloFo and Samsung in actual measurements and switching performance (some 30% denser with about 10% better perf), but them doing their own naming makes 'em look worse. Seriously, TSMC "12nm" is basically the exact same dimensions as TSMC "16nm FF+".. and bigger than Intel 14+ and 14++. If Intel actually opens its fabs up to all and sundry, things will get very interesting, very, very fast.

    The bribery and other illegal activities while true, shouldn't undermine their technical achievements when technical achievements were achieved. Intel is still a terrible company in some respects though, I have to say (their stance on x86 licensing in particular disgusts me).
  • Jaybus - Tuesday, November 7, 2017 - link

    IBM did indeed invent copper interconnects, but not for Power ISA stuff. The first systems shipped with CPUs using copper interconnects were the 6th-gen S/390 mainframes in 1999. As a matter of fact, many of the innovations in microprocessor design were scaled-down mainframe concepts. Not trying to diminish Intel's or AMD's efforts! The scaling down WAS the innovation. The concept of using multiple different processors combined into a single unit has long been the standard in the mainframe world, only they are implemented as MCMs (multichip modules) that have a number of central processors, clock chips, storage controllers, vector processors, specialized I/O processors, etc. Intel and AMD are now taking the heterogeneous processor design and scaling it down to fit on a single chip. Not necessarily a new concept, but definitely a huge step in the right direction. Combining an AMD GPU and an Intel CPU on the same silicon is just the tip of the iceberg. Why bother with AVX when you can include a full-blown FPGA?
  • smilingcrow - Tuesday, November 7, 2017 - link

    Yet for ten years they didn't put out any decent CPUs except for a few small niches and nearly went bankrupt!
  • IGTrading - Tuesday, November 7, 2017 - link

    Yes smilingcrow ... In a way ... But not for 10 years ... More like 5 years ...

    Phenom II was a good chip in desktop and server, and Bulldozer, which had its weak sides, was only launched in 2011 I think ...

    So not 10 years ... Just 5 years ...

    But not even 5 ... Because we have Jaguar which was extremely successful selling in hundreds of millions of consoles.

    Also AMD Mullins was an absolute monster compared to Atom and this is why Intel bribed ALL the tablet makers in the world (losing 4 billion USD per year) to not make a single tablet with AMD Mullins.

    So yeah ... Many amazing AMD products did not get to us, but that doesn't mean AMD did not make "any decent CPUs" for the past 10 years.

    They had them. Offered them. Intel bribed and we never got them.
  • patrickjp93 - Tuesday, November 7, 2017 - link

    Omniscale predates Infinity Fabric by 2 years.

    AMD did not invent TSVs. Micron and Intel did when they invented HMC 3 years before HBM

    Intel is not using a glue interconnect.

    APIs which Intel helped them invent.

    First with copper interconnects? Oh hell no. IBM beat both of them to it by 4 years.

    No, Intel produced the first APU a full year ahead of AMD.
  • ironargonaut - Monday, November 6, 2017 - link

    You mean like when they put a thermal switch on the core so that you wouldn't fry the part if you made a mistake? Or like the case I saw where a heat sink (non-Intel spec) literally fell off a friend's CPU when the clamp became brittle and broke? They were up to no good, saving people from having to buy new processors. AMD, on the other hand... copied Intel's lead after it was pointed out they had no such feature. So save your selective memory for comment sections on the kiddie blogs where they might believe you.
  • ZeDestructor - Tuesday, November 7, 2017 - link

    You should check my looong-ass comment I wrote just above.. most of the claims he makes aren't even AMD wins...
  • IGTrading - Tuesday, November 7, 2017 - link

    For Intel it was necessary like hell, especially on their overheating Pentium 4s.

    AMD had the feature, the motherboard manufacturers did not implement it.

    What you remember was a publicity stunt made by Tom Pabst.
  • peevee - Tuesday, November 7, 2017 - link

    "Intel does this because it sees nVIDIA as a much bigger threat than AMD "

    Why? They don't even compete directly.
  • CaedenV - Monday, November 6, 2017 - link

    It's Intel and AMD's plan to fight global warming
  • rtho782 - Monday, November 6, 2017 - link

    Erm... wow...
  • nagi603 - Monday, November 6, 2017 - link

    So hell CAN freeze over...
  • negusp - Monday, November 6, 2017 - link

    This is fucking amazing.

    I just nutted reading the headline.
  • jrs77 - Monday, November 6, 2017 - link

    Was I in a coma and it's April 1st again?!? Not that I would complain about an i7 with an integrated Radeon graphics and big onboard-HBM. It would finally be an evolution to my i7-5775C.
  • haukionkannel - Monday, November 6, 2017 - link

    Well, well, an AMD GPU made in Intel factories. Maybe the fastest and most efficient AMD GPU ever released!
    It would be really nice to see AMD GPUs made by Intel one day, but this is the second-best option.
  • Ej24 - Monday, November 6, 2017 - link

    I don't think Intel will be fabbing the GPUs; they will be packaging them on the substrate with their CPU, but AMD's GPU will more than likely still be made by GlobalFoundries.
  • patrickjp93 - Tuesday, November 7, 2017 - link

    If AMD is intelligent they’ll let Intel fab it themselves if they so choose. GloFo is a nightmare for AMD to work with, and the profit per chip will be the same either way for AMD. It’s just less business overhead for AMD to let Intel fab it.
  • Lolimaster - Monday, November 6, 2017 - link

    Even Vega is extremely efficient, just not at the Vega 64/56 stock clocks. Check those Vega 56 cards at 1.2GHz with an undervolt...
  • smilingcrow - Monday, November 6, 2017 - link

    Every chip is more efficient if you reduce the clocks and especially the voltage.
    The issue is that Vega is aimed at the power-hungry high-end gaming GPU market, where performance is king and efficiency much less so.
    In other words, it's a failure compared with the competition on both counts, no matter how people try and spin the data.
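    To put a rough number on that: dynamic power scales roughly with frequency times voltage squared, so even a modest undervolt pays off disproportionately. A minimal back-of-the-envelope sketch (the clocks and voltages below are made-up illustrative numbers, not measured Vega figures):

        # Rough dynamic-power scaling: P ~ f * V^2 (switched capacitance assumed constant).
        # The operating points below are hypothetical placeholders, not measured Vega data.
        def relative_power(freq_mhz, volts, ref_freq_mhz, ref_volts):
            """Power at (freq_mhz, volts) relative to a reference operating point."""
            return (freq_mhz / ref_freq_mhz) * (volts / ref_volts) ** 2

        # Hypothetical stock point: 1536 MHz @ 1.20 V; undervolted point: 1200 MHz @ 1.00 V
        print(round(relative_power(1200, 1.00, 1536, 1.20), 2))  # ~0.54, i.e. roughly half the power

    Performance only drops roughly linearly with the clock in that sketch, which is why the perf-per-watt picture changes so dramatically at lower clocks and voltages.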
  • sirmo - Monday, November 6, 2017 - link

    GloFo's 14nm process doesn't scale well with clocks. It is much more efficient at moderate clocks than Vega 64 would make it seem.

    Also Vega architecture (and AMD's GCN architecture as a whole), scales much better with less CUs. You can see this by just comparing Vega 56 and 64.. the extra CUs don't really improve the performance that much. GCN and Vega suffer from lower stream processor utilization the higher in SPs you go. This means that sub 32CU configurations should perform much more efficiently especially if you ensure they are also clocked at the 14nm sweet spot clocks. I do not think Nvidia has an efficiency edge in this case.
  • tuxRoller - Monday, November 6, 2017 - link

    So you think the issue is their command processor? How did you come to this conclusion?
  • neblogai - Tuesday, November 7, 2017 - link

    The issue is likely the drivers, with the driver team being too busy to get drivers ready for the new Vega gaming features. Gaming Vega should use primitive shading for faster culling, so that the geometry engines properly feed the CUs. But such important Vega features (primitive shaders and the draw-stream binning rasterizer) are not working in games currently. Vega 64's design comes with 16 CUs per compute engine, so it suffers the worst; Raven Ridge comes with up to 11 CUs, and these new semi-custom parts for Intel look to be 12 CUs per compute engine (24 CU / 2). You can check this guy's channel for some interesting speculation: https://www.youtube.com/watch?v=m5EFbIhslKU
  • vladx - Tuesday, November 7, 2017 - link

    You'd be wrong there; Nvidia also fabs certain Pascal models on GloFo's 14nm process and those are indeed more power-efficient than AMD's.
  • JasonMZW20 - Tuesday, November 7, 2017 - link

    Stream processor usage is fine when they're used correctly, which is up to developers and various low-level API tech (Vulkan, DX12), as well as the driver team at AMD. Nvidia prefers an ROP-heavy design for consumer graphics, while AMD likes to balance their designs out (64CU/64ROP). ROPs are always bandwidth limited, so unless you've drastically reduced bandwidth requirements across the architecture, it's generally not wise to unbalance the design and favor ROPs. Nvidia does it because they've worked diligently on DCC and various other compression techniques, as well as triangle culling without overdraw along with tile-based deferred rendering.

    Vega has something called NGG, which is their Next-Generation Geometry engine (a more programmable geometry engine that isn't limited to fixed function ops), and is somewhat confusingly called Primitive Shaders. Fixed-function geometry is also included in hardware, and functions at 4 primitives/clock via 4 geometry units (hence why Vega performs like an upclocked Fiji in many cases), which is what most games not designed for NGG will use. NGG is capable of more than 17 primitives/clock and can be combined with DSBR. I don't think NGG is currently enabled, but even if it was, it'd need developer support to be used as Primitive Shaders are basically programmable geometry shaders that don't go through the fixed-function geometry engines (NGG has a new pathway that probably leverages the compute power of Vega).

    I've noticed in Wolfenstein 2 that power consumption is quite a bit lower at 1600MHz/1.05v vs DX12 benchmarks running at 1575MHz/1.031-1.043v. We're talking about 50W lower (185-193w in Wolf 2 at max settings vs 243w in Superposition/1080p Extreme at 1575MHz/1.025-1.031v). So, clearly, Wolf 2 on Vulkan is using some of Vega's new features like Rapid Packed Math and triangle culling (DSBR), which help its efficiency and throughput. GPU usage is always 99-100% in both cases, so I don't think there's an issue with stream processor utilization. Games like GTA V, though, clearly have an issue with Vega and it'll be up to AMD to solve that.
  • artk2219 - Monday, November 6, 2017 - link

    I wonder the manner in which AMD will be screwed over by Intel this time.
  • IGTrading - Monday, November 6, 2017 - link

    Best question we've heard today.

    Insightful!

    #devilsinthedetails and Intel is his twin. :)
  • haknukman - Tuesday, November 7, 2017 - link

    The only way Intel screws AMD is by not paying for the chips!
  • TEAMSWITCHER - Monday, November 6, 2017 - link

    "Teaser trailers" for movies have more information.
  • TEAMSWITCHER - Monday, November 6, 2017 - link

    Sorry, I spoke too soon. This story seems to be developing in real time!!!
  • Rubinhood - Monday, November 6, 2017 - link

    Weirdest news of the day. It's also an open admission on Intel's part that they can't do good enough integrated graphics.

    While I'm not interested in on-board GFX, integrated memory (as opposed to discrete RAM sticks) is fascinating as that could more than double the mem bandwidth. Integrate enough HBM to last during the lifetime of the processor, something like 24 to 32 GB, and we might have the next leap in performance.
  • A5 - Monday, November 6, 2017 - link

    I'm guessing there will be a workstation product with 32GB of HBM2 once the 8GB stack production gets going, but you'll pay very dearly for it.

    A HEDT consumer product with 4 or 8 GB of HBM2 as a local cache would be interesting though.
  • jordanclock - Monday, November 6, 2017 - link

    The only thing I'm taking from this is that Intel doesn't think the investments into the Iris/Iris Pro level performance for each generation of their GPUs is worth it. Intel has done a fantastic job at making GPUs considering it isn't their forte and they are just glorified co-processors.
  • woggs - Monday, November 6, 2017 - link

    As the article itself says, it's more a recognition that chip size can't keep growing. There is a yield cliff somewhere beyond ~500mm2. Intel does not make discrete GPUs (and probably doesn't want to... yet). If you want to be cynical about Intel, then maybe it is recognition that they can't make a discrete GPU as good as AMD's or Nvidia's.
  • blppt - Monday, November 6, 2017 - link

    There's also the possibility that Intel's lucrative portable devices business was about to take a serious hit now that Ryzen has closed the performance/watt gap -AND- AMD's upcoming APUs were going to offer that plus a Vega GPU far superior to anything Intel was including on die.

    I guess the biggest win for AMD here is Intel admitting (apparently) that they couldn't come up with anything competitive GPU-wise to avoid having to resort to buying from a direct competitor.
  • Gondalf - Monday, November 6, 2017 - link

    I think the biggest loser here is Nvidia. AMD is fairly happy to supply GPU silicon to Intel, given the absolute dominance of Nvidia in laptops.
    AMD will win more mobile GPU share through the alliance with Intel. Intel will be really happy to satisfy Apple.
    Honestly speaking, Nvidia is becoming too big lately and it is a smart move by Intel to stop this company from growing more. The very hard competition in automotive makes AMD the perfect partner in GPUs to counter Nvidia.
  • Hixbot - Monday, November 6, 2017 - link

    Yet if AMD had withheld Vega for their own APUs, Apple may have knocked on AMD's door instead. AMD had a chance to have the only x86 APU with decent graphics; now that Intel will have AMD graphics, AMD's own APU will no longer be in a class of its own.
  • blppt - Monday, November 6, 2017 - link

    AMD does have a virtual monopoly on the console market tho.
  • mapesdhs - Tuesday, November 7, 2017 - link

    Not really, since NVIDIA specifically chose to ignore that market. It's not as if NVIDIA tried to get into it but was kept out somehow. They just decided to focus elsewhere instead. History suggests the money in console gfx isn't that great, especially if a product has to price-slide to remain competitive or attractive to consumers for whatever reason.
  • PeachNCream - Tuesday, November 7, 2017 - link

    NVIDIA's graphics were used in the original Xbox. :D
  • drothgery - Tuesday, November 7, 2017 - link

    And the PS3
  • Pinn - Tuesday, November 7, 2017 - link

    That's mario from Odyssey saying HI.
  • patrickjp93 - Tuesday, November 7, 2017 - link

    Intel can’t? Hah! It’s not allowed to. Even AMD, Nvidia, ARM, and Imagination have to cross-license like crazy to stay legal. No one ever wanted to let Intel get enough IP to be a threat. All things being fair with no patents in the way, Intel would wipe the floor with everyone. IBM learned that lesson the hard way.
  • darkchazz - Monday, November 6, 2017 - link

    They finally realized how terrible their own iGPUs are?
  • guidryp - Monday, November 6, 2017 - link

    This isn't an iGPU competitor, it's a dGPU competitor.

    It should be performing at close to GTX 1060 level.
  • jordanclock - Monday, November 6, 2017 - link

    Where are you getting that from?

    Seriously, the only thing we know is that Intel will be putting AMD GPUs on one model or another and you're making the jump to saying it will perform at the level of a 120W GPU.
  • ToTTenTranz - Monday, November 6, 2017 - link

    A single HBM2 stack will provide 200-256GB/s.
    From the bandwidth provided to the GPU we can already guess the GPU's performance will at least be close to GPUs with similar VRAM bandwidth, namely RX570/580 and GTX1060.

    For lower performance brackets, we already have AMD with Ryzen Mobile and Intel with Iris Pro.
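    For reference, the 200-256GB/s figure is just the standard bus-width arithmetic for a single HBM2 stack (1024-bit interface at 1.6-2.0 Gb/s per pin); nothing below is specific to this product:

        # Per-stack HBM2 bandwidth = bus width (bits) * per-pin data rate (Gb/s) / 8 bits per byte.
        BUS_WIDTH_BITS = 1024  # one HBM2 stack

        for pin_rate_gbps in (1.6, 2.0):  # common HBM2 speed grades
            bandwidth_gb_s = BUS_WIDTH_BITS * pin_rate_gbps / 8
            print(f"{pin_rate_gbps} Gb/s per pin -> {bandwidth_gb_s:.0f} GB/s per stack")
        # 1.6 Gb/s per pin -> 205 GB/s, 2.0 Gb/s per pin -> 256 GB/s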
  • DanNeely - Monday, November 6, 2017 - link

    That's a very poorly supported assumption IMO. The ram being on package doesn't mean anything other than the GPU being fast enough that going out to system ram would be a bottleneck; something that happens at much lower performance levels. A single HBM2 stack could be 2-3x overkill; but is still the smallest size available.

    With this apparently being a 45W part the GPU is almost certainly going to be too power limited to reach anything approaching RX570 performance. Something similar to the GTX950M equivalent that AMD's supposedly targeting for their own APU is much more likely. Maybe 1 or 2 steps up the product line if they'll have a process shrink in place before this product comes out.
  • MrSpadge - Monday, November 6, 2017 - link

    HBM and the bandwidth it provides are expensive, so I don't expect them to use a factor of 2-3 more than they need.
  • DanNeely - Monday, November 6, 2017 - link

    HBM1 is obsolete, and there's no such thing as a half-wide HBM2 stack. Massive overkill or not, a single stack is the minimum bandwidth size. Potentially they could underclock the RAM/bus for a bit of power savings; but since one of the big selling points of HBM was that its data bus only needed a tiny fraction of GDDR's power, there's not a lot to be gained there.
  • patrickjp93 - Tuesday, November 7, 2017 - link

    Considering Intel makes its own HBM2 for its FPGAs, it very easily could spin a half-wide (or just cut the clocks clean in half) as long as AMD allowed them to use a custom memory controller.
  • MrCommunistGen - Monday, November 6, 2017 - link

    I responded to jordanclock just above, but my response could just as easily have been to you.

    Based on eyeballing and guestimation, the GPU die size could easily be in the Polaris 10 or half-Vega64 range.

    That's not to say performance will end up there though. I'd guess that for efficiency clocks will be significantly lower.
  • jordanclock - Monday, November 6, 2017 - link

    Power limits are going to be a much bigger hurdle than die size. You're comparing lukewarm apples to flaming oranges.
  • Notmyusualid - Tuesday, November 7, 2017 - link

    +1
  • MrCommunistGen - Monday, November 6, 2017 - link

    I'm sure someone can actually do the math (both faster and better than I can) but assuming the mockup is generally correct, that GPU die is in the ballpark for being the size of a Polaris 10 die.

    According to an earlier Anandtech article, the size of an HBM2 die is 91.99mm^2. Just eyeballing it I'd say that GPU die is *easily* more than twice the size of the HBM2 die. Polaris 10 is 232mm^2. Of course they can't just use an RX580. It'd most likely be a Vega variant of some sort.

    Coming at it from the other side, a Vega64 die (I'd say "Vega 10", but now mobile Ryzen is using that name) is 486mm^2 and has 4096 shaders. Halving that die gives 243mm^2. Once again it isn't that simple. There are fixed function blocks which can't just be halved -- and of course there is the question of whether or not Intel has them keep all those blocks or not as part of their custom design. Still, this yields 2048 cores which is the same as RX570.

    Regardless of how this shakes out, very interesting development now that it is confirmed.
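    Writing the same eyeball estimate out as numbers (the die areas are the public figures quoted above; assuming shaders scale linearly with die area is a crude simplification, since fixed-function blocks don't shrink proportionally):

        # Crude die-area guesstimate following the reasoning above.
        # Reference figures (public): HBM2 die ~92 mm^2, Polaris 10 = 232 mm^2,
        # Vega 64 = 486 mm^2 with 4096 shaders. Linear shader-per-area scaling is an assumption.
        HBM2_DIE_MM2 = 91.99
        POLARIS10_MM2 = 232
        VEGA64_MM2, VEGA64_SHADERS = 486, 4096

        eyeballed_gpu_mm2 = 2.5 * HBM2_DIE_MM2  # "easily more than twice" the HBM2 die
        half_vega_mm2 = VEGA64_MM2 / 2
        half_vega_shaders = VEGA64_SHADERS * half_vega_mm2 / VEGA64_MM2

        print(f"eyeballed GPU die: ~{eyeballed_gpu_mm2:.0f} mm^2 (Polaris 10 is {POLARIS10_MM2} mm^2)")
        print(f"half a Vega 64: ~{half_vega_mm2:.0f} mm^2, ~{half_vega_shaders:.0f} shaders (RX 570's count)")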
  • MrCommunistGen - Monday, November 6, 2017 - link

    Oh, I was also going to say:
    All these hypotheses are based on them using GloFo 14nm. There's nothing in the article that definitively indicates whether they would or wouldn't.

    It's a little early for it to be GloFo 12nm, but I don't think it is impossible. I don't follow fab news that closely, but according to the press release for 12nm they say they're looking to release products in 2018 -- though they aren't more specific than that, and that could mean Q4 2018 for all I know.

    I also see TSMC 16nm being a possibility since Sony and MS are using that for the consoles.

    I know that others have more or less ruled it out (and I think this is FAR from likely) but I think it'd be really neat to see it fabbed on Intel 14nm(++?). Just pie in the sky fantasy. Back during the Hawaii days I had a flight of fancy wondering what it would be like if AMD built Hawaii or really just *any* GPU on Intel 22nm.
  • DanNeely - Monday, November 6, 2017 - link

    It might be about half the size of a Vega64 die but performance is going to be a lot lower because they can't come close to clocking it as high due to power limits. Vega64 is a 295W part vs 45W for this combo.

    And while we can't know the exact amount of area taken by fixed-function hardware, and while I don't think AMD has spoken on it recently, nVidia has said that about half of the GP107 die is fixed-function hardware for video. That works out to about 1.6bn transistors, or ~1/8th of the Vega64 die's 12.5bn. OTOH, depending on how the lashup's done, some of that could be offloaded to video decoders on Intel's chip.
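    For the arithmetic behind that (the ~3.3bn GP107 transistor count is the commonly cited public figure, supplied here rather than taken from this thread):

        # Sanity check of the fixed-function estimate above.
        gp107_total_bn = 3.3    # commonly cited GP107 transistor count
        vega64_total_bn = 12.5  # Vega 64 transistor count quoted above
        fixed_function_bn = gp107_total_bn / 2  # "about half of the GP107 die"
        print(f"~{fixed_function_bn:.2f}bn transistors, "
              f"~1/{vega64_total_bn / fixed_function_bn:.1f} of Vega 64's {vega64_total_bn}bn")
        # -> ~1.65bn, roughly 1/7.6 (call it ~1/8) of the Vega 64 die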
  • psychobriggsy - Monday, November 6, 2017 - link

    The current leaks suggest a 24CU (1536 shader) design, likely Polaris derived, at up to 1190MHz. That's around 3.6 TFLOPS, although that might be boost performance, with normal performance around 3.1 TFLOPS, which is above RX560 but below RX570. I guess it's GTX 1050 (Ti?) level.

    There's also a slightly cut down variant with 22CU and around 1GHz clocks.

    I guess lower TDP variants, if allowed under the contract with AMD, would just clock the GPU even lower - 800MHz, 600MHz even, until it used a suitable amount of power.
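    Those TFLOPS figures follow from the usual shaders x 2 FLOPs/clock x clock formula; a quick check (the CU counts and clocks are the rumoured values above, not confirmed specs, and the ~1011 MHz base clock is an assumed value chosen to land near 3.1 TFLOPS):

        # FP32 throughput = CUs * 64 shaders/CU * 2 FLOPs per clock (FMA) * clock.
        # CU counts and clocks are the rumoured figures above; 1011 MHz is an assumed base clock.
        def tflops(cus, clock_mhz, shaders_per_cu=64):
            return cus * shaders_per_cu * 2 * clock_mhz * 1e6 / 1e12

        print(f"24 CU @ 1190 MHz: {tflops(24, 1190):.2f} TFLOPS")  # ~3.66 (boost)
        print(f"24 CU @ 1011 MHz: {tflops(24, 1011):.2f} TFLOPS")  # ~3.11 (assumed base)
        print(f"22 CU @ 1000 MHz: {tflops(22, 1000):.2f} TFLOPS")  # ~2.82 (cut-down variant)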
  • Gondalf - Monday, November 6, 2017 - link

    You have got the point :). This "thing" looks disruptive to Nvidia's mobile GPU product stack.
    No more dGPUs in the future, apparently, but Intel CPUs and AMD GPUs packed together. The victim? Nvidia.
  • patrickjp93 - Tuesday, November 7, 2017 - link

    Oh I’m sure Intel will leverage them both to provide integrated solutions on the cheap and keep competition fierce and prices low.

    Never underestimate Intel’s ability to innovate and be business savvy. Just like IBM, you’ll turn around and find you’re already in Hell and Intel knifed you days ago.
  • Dribble - Tuesday, November 7, 2017 - link

    Other sites have some performance figures - it's about a 560, so probably slower than a 1050. Still really good, but it's not really going to be killing the discrete GPU laptop market.
  • patrickjp93 - Tuesday, November 7, 2017 - link

    Give Intel time to work with AMD's driver team. The one thing Intel has that AMD can leverage here is an army of programmers. A proper code-sharing and NDA arrangement could help AMD get moving on unlocking those Vega features we still haven't seen come out.
  • wiineeth - Monday, November 6, 2017 - link

    Is this the real life? Is it just fantasy?
  • Exodite - Monday, November 6, 2017 - link

    Caught in a landslide,
    No escape from reality.

    ...

    I'll see myself out.
  • mapesdhs - Tuesday, November 7, 2017 - link

    I turned off the light. :D
  • "Bullwinkle J Moose" - Monday, November 6, 2017 - link

    Wait.... what?

    Is it just for laptops?
    Integrated AMD graphics in a few desktop chips could be nice for small form factor builds
  • "Bullwinkle J Moose" - Monday, November 6, 2017 - link

    Currently Laptop Only !
  • "Bullwinkle J Moose" - Monday, November 6, 2017 - link


    The end of the video over at HotHardware makes it appear that Intel may haz plans for small formfactor desktop computers as well

    Quadcore Compute sticks with 8GB Ram and AMD graphics might be nice for older smart TV's

    Newer sets could integrate something similar
  • phoenix_rizzen - Monday, November 6, 2017 - link

    Many NUC-sized things use mobile chipsets. This could make for an interesting upgrade for the MacMini, for example. And HTPCs. And Macbooks. And, and, and...
  • Ktracho - Monday, November 6, 2017 - link

    My guess (and it's nothing more than that) is that this is specifically for Apple, and that Apple as the instigator is behind getting (forcing?) Intel and AMD to work together. It would help explain why there has been no new Mac Mini in such a long time, but now all of a sudden there seems to be a new model in development.
  • patrickjp93 - Tuesday, November 7, 2017 - link

    Oh no. Look at the bigger picture. Intel cuts its development costs and pits Nvidia and AMD against each other to provide the best integrated solutions for its designs. This is all about Intel leveraging the competition on the cheap. Strategically it’s brilliant, and now AMD’s in a real bind since it’s stuck using interposers in its own designs. Maybe Navi will be packaged by Intel using EMIB instead of AMD paying for interposers. And what of the successor to the V100? Nvidia definitely doesn’t want to be dealing with 1000 mm sq. interposers.

    Intel is laughing all the way to the bank on this, and Hynix and AMD paid and will continue to pay for it.
  • haknukman - Tuesday, November 7, 2017 - link

    Intel is laughing all the way to the bank, but depositing into AMD's account, hahaha!
  • e36Jeff - Monday, November 6, 2017 - link

    Intel a few months ago: AMD is gluing together their server CPUs. Glue is terrible, just horrible. It results in unpredictable performance

    Intel today: We are going to glue AMD GPUs onto our CPUs because glue is amazing. Hell, we are going to glue our entire processors together! We love glue!

    You have to wonder if there were a bunch of Intel engineers that knew this was in the pipeline that just facepalmed as hard as they could when they saw the infamous "glue" marketing slide.
  • CajunArson - Monday, November 6, 2017 - link

    Disingenuously pretending that EMIB is equivalent to some wires thrown on a plastic PCB just shows how ignorant you are of the underlying technology that's at play here.

    Additionally, I don't see Intel using a cheap PCB wiring scheme to connect together copies of the same chip because its fabs aren't very good. I see Intel innovating an entirely new high-speed interconnect that allows for completely different types of silicon that aren't even made on the same lithographic process to be integrated in an extremely efficient manner.
  • e36Jeff - Monday, November 6, 2017 - link

    Did you miss the picture of the package? Or maybe the comments from Ian? Let me quote the article for you: "The Intel chip is a long way away from the AMD chip, which would suggest that these two are not connected via EMIB if the mockup was accurate."

    Unless Intel is releasing false images of the product the GPU is not connected via EMIB and is at best on an interposer, but probably just an MCM.
  • extide - Monday, November 6, 2017 - link

    It is implying that the GPU + HBM are connected by EMIB, but not the CPU + GPU.
    Based on this I would assume that the HBM memory is memory only for the GPU, and the CPU will use standard DDR4.
  • e36Jeff - Monday, November 6, 2017 - link

    well, potentially it could be reaching through the substrate to get to the HBM, as that would still probably be faster and lower latency than jumping to the mobo for DDR4, but I think most likely it is segregated the way you describe. If what I've read elsewhere is correct, and the Intel CPU retains its Intel iGPU then potentially it could use the HBM as an L4 cache/iGPU cache when the AMD GPU is inactive, but that seems like it might be a level of integration that's not likely at this time.
  • IGTrading - Monday, November 6, 2017 - link

    Jeff, give the guy a break :)

    He's an Intel fanboy with a bit of a cranky attitude since seeing the news this morning.

    He'll be ok.... he'll throw a few more posts with insults and missing technical arguments, flaunt the daily dose of mediocrity and then catch the bus.
  • CajunArson - Monday, November 6, 2017 - link

    Oh yeah.. AMD invented through-silicon vias?

    In what fantasy world did that occur? Here I thought it had nothing to do with AMD whatsoever.. and I'm right:

    "The first TSV was patented by William Shockley in 1962,[4] although most people in the electronics industry consider Merlin Smith and Emanuel Stern of IBM the inventors of TSV, based on their patent “Methods of Making Thru-Connections in Semiconductor Wafers” filed on December 28, 1964 and granted on September 26, 1967.

    However, it wasn't until the late 1990s that the term "Through Silicon Via" was coined by Dr. Sergey Savastiouk, the co-founder and current CEO of ALLVIA Inc.as part of his original business plan. From the beginning, the vision of the business plan was to create a through silicon interconnect since these would offer significant performance improvements over wire bonds. Savastiouk published two articles on the topic in Solid State Technology, first in January 2000 and again in 2010. The first article “Moore’s Law – The Z Dimension” was published in Solid State Technology magazine in January 2000.[5] This article outlined the roadmap of the TSV development as a transition from 2D chip stacking to wafer level stacking in the future. In one of the sections titled Through Silicon Vias, Dr. Sergey Savastiouk wrote, “Investment in technologies that provide both wafer-level vertical miniaturization (wafer thinning) and preparation for vertical integration (through silicon vias) makes good sense.” He continued, “By removing the arbitrary 2D conceptual barrier associated with Moore’s Law, we can open up a new dimension in ease of design, test, and manufacturing of IC packages. When we need it the most – for portable computing, memory cards, smart cards, cellular phones, and other uses – we can follow Moore’s Law into the Z dimension.” This was the first time the term "through-silicon via" was used in a technical publication."

    https://en.wikipedia.org/wiki/Through-silicon_via#...

    Tell me again how AMD invented rice pudding you senile wanker.
  • IGTrading - Monday, November 6, 2017 - link

    Chill bro :) You don't know what you're talking about.

    Copper has been used for 9000 years or more, but that doesn't mean all inventions using copper were actually designed to build the 1 GHz AMD Athlon :)

    The concept of TSV may have been imagined a long time ago and reiterated a few times, but AMD used it in computing to connect different chips to substrates and other chips. Other companies used it to connect identical silicon, which is a more limited application of TSV that had also been reiterated a few times before.

    AMD managed to successfully connect different dies to different chips in different products (GPUs, CPUs, server CPUs) with superior results (to monolithic designs). Credit to them.

    I know, you can use Wikipedia. Congrats! :)

    But Rivas invented the hydrogen engine, not the fuel cell powered vehicles, even though they both work with hydrogen and both engine concepts power cars :)
  • ZeDestructor - Tuesday, November 7, 2017 - link

    "Invent" has a very different meaning to "popularise". Maybe you should read a dictionary sometime?
  • IGTrading - Tuesday, November 7, 2017 - link

    Bad choice of words on my behalf.

    It should have been "invents first x86 processor that uses copper interconnects" .

    The same for the rest.

    I am talking about AMD's wins in its competition with Intel on the x86 field.
  • YoloPascual - Tuesday, November 7, 2017 - link

    So Intel's version of Infinity Fabric?
  • IGTrading - Monday, November 6, 2017 - link

    GREAT POINT :)

    There is the possibility of Intel sniffing the AMD glue and liking the effect :)

    Also, how dirty are they for taking all the credit and not putting AMD's logo or branding on the presentation video or photo slides ?!

    In the end, AMD makes the biggest chip on the package and AMD is the inventor and founder of HBM and HBM2 .

    AMD presenting HBM : https://www.youtube.com/watch?v=se9TSUfZ6i0
  • haknukman - Tuesday, November 7, 2017 - link

    More like Intel sniffing AMD ass and liking it!
  • haknukman - Tuesday, November 7, 2017 - link

    AMD glue intel super glue hahahha!
  • Lolimaster - Monday, November 6, 2017 - link

    Now, with Ryzen and its evolutions, it makes little sense to give Intel Radeon graphics. Right now, with the 12nm AMD CPUs, they can offer a powerful 4-6 core + SMT APU to Apple for the MacBooks/iMacs.
  • neblogai - Tuesday, November 7, 2017 - link

    Powerful - yes. But not as power efficient at low loads. This integration makes sense for ultra-slim, portable, but still capable machines. And AMD will be able to use its own designs for pure-gaming laptops that are cheaper, but last less time when browsing/playing video on battery.
  • R3MF - Monday, November 6, 2017 - link

    One question: How soon can I have a ryzen/vega APU sporting a single stack of HBM2 on package?
  • vladx - Tuesday, November 7, 2017 - link

    2020
  • rtho782 - Monday, November 6, 2017 - link

    >The use of HBM2 is not unsuprirising – Intel has successfully integrated HBM2 into its Altera EMIB-based products so we would suspect that this is not going to be overly difficult.

    "Not Unsurprising" implies that it is surprising, which the rest of the sentence conflicts with.
  • ATC9001 - Monday, November 6, 2017 - link

    This does make it difficult to read, I'm sure you meant "not surprising" OR "unsurprising"...but the double negative is not un-misused and makes things clunky!
  • TobiWahn_Kenobi - Monday, November 6, 2017 - link

    Pressure from Apple perhaps?
  • ToTTenTranz - Monday, November 6, 2017 - link

    I wouldn't put the chances at "perhaps".

    I'd put the chances at apple saying "either you do this for us or we'll go self-made ARM SoCs a lot sooner than you think."
  • LiviuTM - Monday, November 6, 2017 - link

    @Ian Cutress I'm pretty sure you are right about EMIB being used only for the GPU-HBM2 connection. The animation from the official video only shows data transfer from the GPU to the HBM2 stacks and also specifically mentions transfer "over extremely short distances".
    https://youtu.be/gaHs_guCp2o
  • deathBOB - Monday, November 6, 2017 - link

    Yeah this has MacBook/iMac chip all over it.
  • dromoxen - Monday, November 6, 2017 - link

    I don't think Apple cares how he constructs his sentences?
  • baka_toroi - Monday, November 6, 2017 - link

    So does the difference between EMIB and a traditional interposer come down to cost, or is there something else to it?
  • MrSpadge - Monday, November 6, 2017 - link

    Yep, cost mainly. But that's one very important point. Not sure if yield is also considerably different, but that again boils down to cost.
  • ToTTenTranz - Monday, November 6, 2017 - link

    @Ian

    Intel clearly states they're using a H-series processor - which indeed go up to 45W - but they don't mention power consumption numbers for the CPU+GPU+HBM2 MCM.

    The article seems to suggest the whole MCM is limited to 45W (which is why it could face Raven Ridge directly), but there's really no evidence pointing that way.
    In fact, the presence of a single HBM2 stack should give the embedded GPU similar bandwidth to a desktop RX 580 or GTX 1060. If that's an indicator of its performance bracket, then it's way above anything Raven Ridge could do with 11 NCUs, no matter what the clocks are.
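
    As a rough back-of-the-envelope check (assuming the commonly quoted single-stack HBM2 figures of a 1024-bit interface at up to 2.0 Gbps per pin, and the 8 Gbps GDDR5 fitted to those desktop cards), the bandwidth math works out as:

    $$BW_{\mathrm{HBM2,\ 1\ stack}} = \frac{1024 \times 2.0\ \mathrm{Gbps}}{8} = 256\ \mathrm{GB/s}$$

    $$BW_{\mathrm{RX\ 580}} = \frac{256 \times 8\ \mathrm{Gbps}}{8} = 256\ \mathrm{GB/s}, \qquad BW_{\mathrm{GTX\ 1060}} = \frac{192 \times 8\ \mathrm{Gbps}}{8} = 192\ \mathrm{GB/s}$$

    By the same arithmetic, a 128-bit GDDR5 interface at 8 Gbps would top out around 128 GB/s, so a single HBM2 stack does point at a GPU fed more like a desktop mid-range card than a typical iGPU.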
  • BrokenCrayons - Monday, November 6, 2017 - link

    It's reasonable to think we're looking at something much quicker than Raven Ridge. Even if the total package TDP is 45W, that's still three times higher than the recently announced next-gen APU. However, we don't know Raven Ridge's performance capabilities yet and the presence of HBM2 alone isn't a clear indicator of what the AMD GPU on this unreleased product can actually do when compared to a dGPU. We'll need more detailed specifications before we can speculate about its dGPU performance category.
  • ToTTenTranz - Monday, November 6, 2017 - link

    HBM2 isn't a clear indicator, but if 4x GDDR5 chips (for 128bit width) were enough to feed the GPU, then there would be no need to go with HBM2. The difference in area isn't that large IMO.
  • Nagorak - Monday, November 6, 2017 - link

    Don't forget the difference in power consumption. HBM is a lot more power efficient than GDDR5.
  • Marburg U - Monday, November 6, 2017 - link

    Fudzilla knew it and you guys bashed them.
  • MrSpadge - Monday, November 6, 2017 - link

    Fudzilla has already claimed almost every possible event in the tech industry for the next 3 years.
  • IGTrading - Monday, November 6, 2017 - link

    They actually got it right twice and stood by it, and this proves them right again.

    No sense in attacking Fudzilla.com in any way now. They were literally right.
  • ZeDestructor - Tuesday, November 7, 2017 - link

    They also get things wrong a lot of times, and have a pretty heavy-handed anti-Intel bias. So, grain of salt and all that, yaknow?
  • IGTrading - Tuesday, November 7, 2017 - link

    Yes, they do :)

    But the anti-Intel bias I will tolerate since we've all missed so much progress, competition and good products because of Intel's bribes and mafia-like behavior: https://m.youtube.com/watch?v=osSMJRyxG0k
  • Ian Cutress - Monday, November 6, 2017 - link

    We've never commented on the rumors. I know Fuad, he's a fun guy to talk to :)
  • IGTrading - Tuesday, November 7, 2017 - link

    @Ian

    We don't really see Intel keeping its own iGPU to help with power saving.

    It would be nice (I personally like it) but all my colleagues say it takes up too much die real estate and the margins take a hit.

    I think there IS a chance of a small iGPU being kept for all things 2D, but the way my mates put it, I'm inclined to think it won't happen.

    They say it's all about costs and targets:

    Target is high performance, and power doesn't seem to matter as much as in the 15W to 4.5W segment.

    Costs can be serious since the current iGPU can be larger than the x86 cores on some SKUs.

    It may be likely that Intel increases margins by increasing yields with a tiny x86-only die and buys the iGPU from AMD at a cost comparable with what they'd get by using their own iGPU.

    For AMD it's profitable. For Intel the iGPU might cost them the same as their own iGPU wafer real estate + generated margins (hence zero direct profit).

    But the AMD iGPU gives Intel many other profitable advantages :

    a) Every AMD iGPU is fully functional as ordered = no yield hit for Intel like they would have if they would make their own iGPU

    b) Intel doesn't spend too much on the AMD iGPU, but it gets them the performance crown, bragging rights and customer mind share

    c) Intel gets to charge premium prices on these chips

    d) Intel's R&D related to everything iGPU is zero (AMD does it)

    The geek in me thinks they might have a point although I'd like a hybrid solution myself.

    So do you think they're on to something or just ON something ?! :))))
  • haknukman - Tuesday, November 7, 2017 - link

    No, Intel doesn't get the bragging rights; Intel gets the low life for using AMD!
  • djsvetljo - Monday, November 6, 2017 - link

    Wait,whaaaaat? Is it April 1st already?
  • Ratman6161 - Monday, November 6, 2017 - link

    I may be being stupid, but I don't quite get it for anything but a mobile SKU. For the desktop, if you are just doing office sorts of tasks that don't need much in the way of graphics, then Intel's current offerings are more than good enough. And if you are a gamer, or anyone else who values high performance graphics, this still won't be enough for you; you will still be buying a discrete graphics card. An in-between product seems to me to only be good for the mobile world, but then 45W is probably too much. Ultra small form factor desktops perhaps?
  • Threska - Monday, November 6, 2017 - link

    There's one peculiarity when it comes to free virtualization and GPUs. Only the Intel graphics are exposed, unless you pay a lot of money. Then you can use AMD/Nvidia in VDI. Hopefully this development changes that.
  • jwcalla - Monday, November 6, 2017 - link

    It's a marketing gimmick. I could see Apple biting, but Apple hasn't been much interested in actual performance, etc. in their Macs in forever.
  • Valantar - Monday, November 6, 2017 - link

    This is very interesting. My guess: AMD knows it can't compete with Intel on frequency+IPC in the high-power mobile space, given how Ryzen tops out at around 4GHz, and exceeding 3.8 pushes the power curve out of whack, so it signs this deal to rake in cash for premium-priced high-end Intel-based laptops while pushing Ryzen Mobile in the 15-25W space. I seriously doubt AMD would do this for any less than half the selling price of the chip, so they'll make some serious bank as long as this sells in any real volume, and in the ultra-mobile space, they're very competitive (and have a huge GPU advantage), so they'll keep these chips high-end.

    I seriously wonder about that GPU, though. Going from that package mock-up, the die is quite large - 200+ mm2? I doubt they're licensing out the HBCC, so that's something like an RX 580, probably a bit smaller. So either ~26 GCN 1.4 CUs at low clocks for max efficiency, or something like 18-20 NCUs at higher clocks (far lower than desktop Vega, obviously). The whole package including CPU likely won't exceed 65W (good luck cooling more than that in an area that small!), so low clocks are a given. Still, wide-and-slow is great for mobile GPUs.
  • IGTrading - Monday, November 6, 2017 - link

    Ryzen is limited to about 4.2 GHz "now", but in 3 months we'll have 12nm Ryzen and you can bet that's going to have SKUs running at 4.7 GHz.

    Also, there is ZERO IPC improvement in Coffee Lake. It's been debunked a long time ago : https://www.youtube.com/watch?v=O98qP-FsIWo
  • The_Assimilator - Tuesday, November 7, 2017 - link

    If you think that a minor shrink from 14nm to 12nm is going to buy AMD an extra GHz of performance, you should stop smoking your socks.
  • cocochanel - Monday, November 6, 2017 - link

    Great news ! I would love to get my hands on a laptop with GTX1060/RX580 performance and decent battery life.
    Another side benefit and not so obvious to many, is that with many more Vega GPU's around, big game developers may decide to stay away from Nvidia GameWorks shenanigans.
    Here is an interesting video for the curiously minded:
    https://www.youtube.com/watch?v=O7fA_JC_R5s
  • mapesdhs - Tuesday, November 7, 2017 - link

    Genuinely curious, what would you regard as a "decent" battery life in terms of hours/mins? And do you mean just while in general use, or specifically the battery life while gaming? The latter is always going to be a tough nut to crack. As mobile GPUs get faster, games move on and become more complex. Hard to keep up. My laptop has a 970M (equivalent to a desktop GTX 580) which is fine for the older games I play (Oblivion, Crysis, FC2, etc.), but I don't think I'd want to use it with newer games like GTA V, etc., and indeed the battery doesn't last long while gaming.
  • cocochanel - Tuesday, November 7, 2017 - link

    I would say 6-8 hours for gaming. I know it's a lot to expect, but I suspect Intel would not cut a deal with the enemy if their engineers did not think they could pull it off. Sure, there are lots of variables that affect battery consumption. But HBM2 combined with a well designed APU might do the trick.
    Plus, I would not expect such battery life out of an ultrabook.
  • Arbie - Monday, November 6, 2017 - link

    "The use of HBM2 is not unsuprirising"

    You meant "is not surprising"...
  • IGTrading - Monday, November 6, 2017 - link

    HBM and HBM2 were founded, invented and supported by AMD.

    It is despicable that Intel takes credit and doesn't acknowledge that AMD powers the 2 largest chips on the package.
  • xenol - Monday, November 6, 2017 - link

    The rumor was literally a single sentence from a post from a website staff on their forums: https://hardforum.com/threads/from-ati-to-amd-back...

    It's amusing how people are saying websites are being vindicated for reporting this (but not actually starting the rumor). It was a rumor. It doesn't mean anything until it's officially stated. And at best all rumors do for websites is generate traffic. If they weren't correct, they fall back on the notion it was a rumor (i.e., "you're too dumb to believe it's true"). If they are correct, well good for them.
  • IGTrading - Monday, November 6, 2017 - link

    If that had been the only source, it would have made no sense for Fudzilla to report on it in Feb 2017 when the rumor was from Dec 2016.

    They took tons of flak for that, and it would have been late and unnecessary unless they had new info.

    And they came back to it in May 2017, when the chip taped out.

    Render therefore unto Caesar the things which are Caesar's

    Kyle had just that, the rumor or some insider's off-the-record confirmation, and he never touched it again considering how much flak he took for it.

    Fudzilla and SemiAccurate stood up for their research and they were not just posing, like other so-called "journalists". :)
  • CajunArson - Monday, November 6, 2017 - link

    Fudzilla was wrong then and is still just as wrong after this announcement. It said that Intel had licensed AMD's GPU designs when that did not occur and has not occurred.

    The rest of it is backwards revisionist history on their part where they make every single possible prediction and then pretend that the 99% of the wrong predictions they made never happened to prove how "right" they were.

    For somebody who would gladly die for your AMD cause you might want to listen to the words of your commander Lisa Su:

    “It’s about how do we sell more products, or how do we have our IP in markets where we’re not currently selling products. We’re not looking at enabling a competitor to compete against our products.”

    http://fortune.com/2017/05/23/amd-intel-chips/
  • IGTrading - Monday, November 6, 2017 - link

    They said the chip is coming and assumed it will include licensing and some work & effort on Intel's behalf.

    Now we find out Intel only uses "glue" and buys the GPUs ready-made from AMD while using AMD's HBM2, connecting it with AMD's TSVs and transferring data through an interconnect clearly inspired by AMD's Infinity Fabric.

    Then they said the chip is real, taped out and that they've seen technical confirmation, whatever that means.

    They didn't get all the details right, but they were right: the chip was coming and eventually it was announced as a real project which will actually reach the market (unlike Intel's Larrabee failure).

    Thank you for the link! Surprisingly good read from Fortune.

    In our book, they were so right that we're sorry we didn't increase our position in AMD when it was at 9.8 USD.
  • UNCjigga - Monday, November 6, 2017 - link

    I wonder if this package is destined for a future Apple product, i.e. iMac or iMac Pro. That would make sense, as it could be a move to ensure Intel remains the CPU vendor of choice for Apple, while ditching the integrated graphics that have held some Apple products back. Intel is worried Apple may ditch x86 completely so they would bend over backwards to put a rival GPU on-package if Apple demanded it?
  • UNCjigga - Monday, November 6, 2017 - link

    Oops I meant MacBook...not iMac, though I suppose one might see this in a future Mac Mini if Apple ever decided to update it
  • jjj - Monday, November 6, 2017 - link

    Thinking of this as a competitor for Raven Ridge is silly. This GPU is clearly a few times larger than the 45-ish mm2 RR GPU. It would also be silly if this is Polaris, as this is aiming at taking share from Nvidia in mid-range discrete GPUs and they need to give it all they've got.

    If no EMIB between CPU and GPU, this solution isn't reaching its full potential and it's less of an advantage for Intel. It's just a discrete GPU but AMD gets some help reducing HBM costs.
  • jjj - Monday, November 6, 2017 - link

    A bit more.
    Worth noting that AMD can't afford to invest in multiple die sizes for such a volume-limited market.
    It's also possible that they will use the same die for the RX 680 in H1 2018, just with different packaging.
  • psychobriggsy - Monday, November 6, 2017 - link

    Intel will have paid AMD to design this GPU. This GPU is Intel only. That's how the custom design part of AMD works.

    It's only 24CU. HBM2 is there because it's small, it's overkill otherwise. If anything it would be a contender for the RX660 next year. AMD have a habit of designing portions of a design and reusing it, and we have a 24CU GPU here that just needs some AMD specific uncore wrapped around it for AMD's own use.

    If AMD were paid in EMIB rights as well, then I can see them pushing HBM2 downwards as it becomes cost effective next year.
  • IGTrading - Tuesday, November 7, 2017 - link

    Nope. AMD designs, makes and sells the chips to Intel.

    Apparently it's been confirmed.
  • lilmoe - Monday, November 6, 2017 - link

    I agree. This isn't in direct competition with Ryzen Mobile in any way.

    If I were to bet, I'd say this is an AR platform in the works for specific clients. Apple is a very plausible client. In that sense, HBM2 in this weird configuration might make sense.

    I doubt an i5 would go with this configuration as suggested by AnandTech/Ars. I'd say 6 cores. Don't forget that Apple has 8+ Xeon SKUs for the upcoming iMacs.
  • CaedenV - Monday, November 6, 2017 - link

    I.... I just don't understand. Back in the day you would always pair an AMD CPU and an nVidia GPU, and then AMD purchased ATI. Now AMD is working with Intel for iGPU solutions instead of Intel working with nVidia?
    I just don't get it :/
  • jwcalla - Monday, November 6, 2017 - link

    Intel is threatened by nvidia and doesn't want to do anything that would benefit them in any way.
  • HotJob - Monday, November 6, 2017 - link

    Dafuq? I... I... I... I hope AMD didn't just eat their own leg to stave off starvation with this licensing deal. AMD's iGPUs were the only reason to buy an APU, as Intel's mobile chips had twice the IPC. Hopefully these will be solely Apple parts, a market I find it darn near impossible to care less about.
  • Nagorak - Monday, November 6, 2017 - link

    The IPC difference is no longer so bleak since Ryzen's release. Furthermore, in a laptop the difference between Intel and AMD is even smaller since frequency is more limited.
  • Alistair - Monday, November 6, 2017 - link

    A lot of confused people here. AMD is selling a GPU just like before, but the modern demands for thin and light devices require they are on one package. They haven't licensed out their IP and this is not an iGPU.
  • euskalzabe - Monday, November 6, 2017 - link

    Notice this: 1 month before Windows on ARM devices are said to deliver 2 days of battery life and x86 emulation when needed, we've seen Intel 1) double the mobile CPU performance with twice the cores and 2) release products with AMD GPUs (!!!). I see more and more signs that Intel is reacting before ARM releases to ensure it has more quality for more price, because as things stood for the past few years, they would not have been an attractive option VS ARM.
  • IGTrading - Monday, November 6, 2017 - link

    Very good point.

    Interesting if so.
  • hallstein - Monday, November 6, 2017 - link

    If this is intended to compete with medium-end discrete graphics, these could make for seriously awesome tiny gaming PCs.
  • ikeke1 - Monday, November 6, 2017 - link

    So, small die, one HBM2 stack...

    Can i say... 1x Navi module?
  • RaduR - Monday, November 6, 2017 - link

    Why on earth did they not JUST BUY PowerVR? It is dirt cheap and has all the patents they would ever need. Including the defunct MIPS, which with Intel money may create a big headache for ARM.

    WHY ?
  • IGTrading - Monday, November 6, 2017 - link

    Too late for that :)

    Considering Intel collaborated well with Imagination Technologies before.
  • Speedfriend - Tuesday, November 7, 2017 - link

    That is something I have wondered, especially given the history and the fact that Mobileye uses MIPS.

    Maybe PowerVR wasn't able to be developed further and that is why Apple branched off.
  • Pinn - Tuesday, November 7, 2017 - link

    Why didn't the CEO of Intel move away from x86 when I asked him back in 2006? Who knows these things?
  • vladx - Wednesday, November 8, 2017 - link

    Probably cheaper this way; if they went that route they would also have needed to invest a lot more money into designing the GPU themselves instead of paying AMD to do it for much less.
  • MrSpadge - Monday, November 6, 2017 - link

    Whoa, this could be everything I wanted from AMD! I just hope they won't be limiting those chips to esoteric configurations & offers again, as with some previous Iris / Pro chips. Unlocked Desktop CPU with the HBM as L4 cache?
  • gallon_1 - Monday, November 6, 2017 - link

    This is perfect for the MacBook Pro, seems like a move from Intel and AMD for an upcoming revision of the MacBooks
  • lilmoe - Monday, November 6, 2017 - link

    Erm...
    I think you guys are jumping to a lot of conclusions. This isn't in direct competition with AMD's latest APUs, and it won't be in competition with possible future 45W AMD offerings either.
  • wr3zzz - Tuesday, November 7, 2017 - link

    Is this like an AMD APU, but for people willing to pay more to have an Intel CPU and who don't want to use a dGPU?
  • Kwarkon - Tuesday, November 7, 2017 - link

    In the quotes it is clearly stated "discrete", so it is the other way around. This is for those who want a dGPU in smaller devices (as the package is smaller than a conventional separate APU + dGPU).
  • neblogai - Tuesday, November 7, 2017 - link

    I wonder, would Intel be announcing this if it was an Apple-only product? My guess is they will sell them to everyone, just like the Broadwells with Iris graphics.
  • HollyDOL - Tuesday, November 7, 2017 - link

    Even yesterday if you told me about this I'd smile and reply 'keep dreaming'

    :O
  • serendip - Tuesday, November 7, 2017 - link

    What the heck is this for? Is Intel trying to show Nvidia it has other options in the laptop discrete GPU space? I can't help thinking that the market for this chip could be small and AMD are smart to go for ultrabooks with their latest APUs. Driver support could also be an issue if this is a one-off special - nightmares of Atom PowerVR graphics come to mind.
  • Rookierookie - Tuesday, November 7, 2017 - link

    The product's going to be there, whether AMD is in it or not. Intel doesn't HAVE to work with AMD. They could always work with Nvidia even if they're not happy about it, so at the end of the day the level of competition it represents for AMD is the same.

    By working with Intel, AMD gets to develop this on Intel's budget and get paid for it, and if it's something they have to compete against, they'll know exactly what it is, and even exert a certain degree of control over it. No real downside to AMD.
  • HStewart - Tuesday, November 7, 2017 - link

    I think if AMD was really smart - they would drop the CPU line completely and work with Intel to increase connections - this would remove NVidia from the market.

    One interesting thought: this news could be the first step in Intel's purchase of AMD - they could easily pay off their debt, use the graphics technology and remove CPU competition at the same time.
  • Rookierookie - Wednesday, November 8, 2017 - link

    The moment that happens - either AMD dropping CPU or Intel purchasing AMD - Intel gets cut up into half a dozen companies. While the talk of Intel propping up AMD can't be taken too seriously, there's at least a little bit to it.
  • iranterres - Tuesday, November 7, 2017 - link

    This looks surreal to me
  • HStewart - Tuesday, November 7, 2017 - link

    Maybe the word is unreal - there is a lot of fake news on the internet.
  • lashek37 - Tuesday, November 7, 2017 - link

    Intel has most of the laptop market, and they are sharing it with AMD. I believe most of y’all are missing the point? Sharing most of the laptop market with Intel will help AMD's annual revenue, as well as put a strain on Nvidia.
  • SharpEars - Tuesday, November 7, 2017 - link

    Oh Intel, why did you choose AMD instead of nVidia?
  • tipoo - Tuesday, November 7, 2017 - link

    Probably when they called Jen Hsun he continued to tell Intel to pound sand.
  • TEAMSWITCHER - Tuesday, November 7, 2017 - link

    nvidia likely bid on this project too and AMD likely out-bid them. AMD has been giving away technology for minuscule profits to keep nvidia off popular platforms like PS4 and XBox.
  • vladx - Wednesday, November 8, 2017 - link

    Indeed AMD tech is worse but also much cheaper than Nvidia tech.
  • peevee - Tuesday, November 7, 2017 - link

    Why would Intel use a small (like 10 CU) GPU from AMD (especially old-gen) when they have a GPU of similar performance already built in? And HBM2 on a low-latency, high-bandwidth interconnect seems overkill for such a weak configuration.
  • HStewart - Tuesday, November 7, 2017 - link

    It does not seem logical for Intel; the only real thought is that Intel wants to show that their up-and-coming chip technology is flexible - maybe even to the point of plug-and-play components.
  • GreenReaper - Tuesday, November 7, 2017 - link

    AMD's open-source support has had something of a rocky history, in large part because Windows was (and is) their focus and anything else has very limited resources given AMD's woes. In recent years AMD cards have languished with poor power-saving support on Linux.

    They recently landed a huge patch for proper Vega/Raven Ridge (likely to debut in Linux 4.15), which brings them a lot closer to the level of support Intel enjoys, adds HDMI sound, etc:
    https://www.phoronix.com/scan.php?page=news_item&a...

    Still issues to work through - FreeSync will take time because there isn't a kernel standard for it, we're probably talking 4.16+ - but the future's bright.
    https://www.phoronix.com/scan.php?page=news_item&a...

    There are unresolved issues with regards to AMD's low-level firmware, as there is with Intel's. Don't expect those to be resolved; AMD wants to preserve its ability to license HDMI, for one.
  • GreenReaper - Tuesday, November 7, 2017 - link

    I think the desktop Raven Ridge APUs are meant to have HBM2, and they'll be out in Feb/March 2018. They've not been formally announced, though, so check when they are.
  • IGTrading - Tuesday, November 7, 2017 - link

    That would be cool, but that would make them very expensive chips and the APUs are AMD's and Intel's keys to the low end and mid market.

    Therefore I strongly believe there will not be any HBM in AMD's desktop iGPUs in this first wave.

    Later, maybe.
  • GreenReaper - Tuesday, November 7, 2017 - link

    Hmm. Looks like these comments didn't post in the right place. Not working with JS blocking? :-/
  • HStewart - Tuesday, November 7, 2017 - link

    To me this shows how flexible chip design is when it comes to integrating another's GPU onto the same die. I am almost 100% sure AMD can not do that with their Infinity Fabric, and I hope Intel does not trade that technology. I would not doubt Intel could also do that with NVidia.

    But I think Apple is behind this and Intel does not want to lose Mac's to AMD.

    I don't think Intel is dropping their efforts - just to keep Apple happy they need an AMD-based GPU.

    Personally my last Mac had an NVidia GPU in it.
  • HStewart - Tuesday, November 7, 2017 - link

    Maybe there is a lot more behind Intel's die technology than we know - AMD could be desiring to use it in future technology
  • Kwarkon - Wednesday, November 8, 2017 - link

    No, from the presentations it is clear that those are two separate dies (+ separate RAM).
    The other thing is the PCB that is shared - and this is the new thing.
  • HStewart - Wednesday, November 8, 2017 - link

    If so, then what is the big deal - it's just an add-on component. It could be that Intel has developed new technology to connect the CPU to the GPU faster for mobile, and AMD wants to be part of it.

    I do find it very interesting that the following day the top leader of the AMD RTG group left AMD.
  • Kwarkon - Wednesday, November 8, 2017 - link

    smaller package = more space for cooling/battery or just smaller device - this alone is a big deal
    Also naturally there might be some more tech magic that improves overall performance and power.
  • Alexvrb - Tuesday, November 7, 2017 - link

    Where's the Intel marketing guys with their "glued together dies" marketing slides now?
  • Gunbuster - Wednesday, November 15, 2017 - link

    I'll wait to see where the OEMs think they can push this. Will we see a Microsoft Surface-based product bumble onto the scene, to be followed with 35 non-descriptive firmware and driver fix updates and the same terrible statuesque WiFi?
  • JonnyDough - Tuesday, December 26, 2017 - link

    Nooo AMD. They just want a look at your designs! Everyone knows Intel still can't build a capable APU/GPU!
