56 Comments


  • Marlin1975 - Wednesday, June 13, 2018 - link

    I'll believe it when I see it. That, and Intel's history of really awful drivers, will probably also come into play.
    So it will look good on paper, be delayed if it ever comes, and then have awful drivers that never get the full potential out of it.
  • PeachNCream - Wednesday, June 13, 2018 - link

    I haven't had a bad experience with Intel graphics drivers since the GMA 950. Those were truly awful, but from the 4500MHD and up, things have improved quite a bit. Most of the PC games I play run well on an Ivy Bridge HD 4000 so I rarely bother to send them to my laptop's Quadro NVS 5200m. There is a notable performance difference in some cases, but overall, Intel's drivers have been very good to me over the last decade.
  • Kvaern1 - Wednesday, June 13, 2018 - link

    Hmm, a few years ago I put together a Haswell-based PC for a young family member and I had to get a dGPU for it because Minecraft graphics were completely messed up on the Intel GPU.
  • PeachNCream - Thursday, June 14, 2018 - link

    I don't have any experience with Minecraft, so I can't speak for how that'd work on any of Intel's GPUs. But I have used every Intel graphics chip released from the i740 up to the HD 4000, with the exception of the HD 2000 and HD 2500, for gaming. There used to be stupid things you'd have to do to make stuff run properly, but Intel has come a long way since the dark days of the 915 and 950. Chipset or CPU-based graphics have always lagged dGPUs in performance by a significant margin, so setting reasonable expectations is an absolute necessity. I don't doubt there are still issues in other games beyond Minecraft that'd push the need for a different GPU as well, but the point remains that Intel has come a long, long way in driver quality over the last decade or so. I would give AMD a nod for their current iGPU being a better solution for gamers on a shoestring budget since the performance is better, but when you're not allocating a decent sum of money to your computer, you get what you get, and sometimes that's an Intel GPU.
  • BenSkywalker - Wednesday, June 13, 2018 - link

    Two to two and a half years out, 2020..... Can't make the math work.
  • Ryan Smith - Wednesday, June 13, 2018 - link

    Current date: June, 2018
    Current date + 2 years: June, 2020
    Current date + 2.5 years: December, 2020

    (While it could conceivably arrive before June, let's be honest: the dev time required means anything is going to be H2'2020)
  • jrs77 - Wednesday, June 13, 2018 - link

    I don't know if the current integrated graphics is scalable and/or to what degree. But imagine an Iris Pro with double the EUs and a couple GB of GDDR5. If it scales well, then you have a GPU at the level of a GTX 1050 already, without inventing anything new.
  • jeremyshaw - Wednesday, June 13, 2018 - link

    I don't believe Intel has a GDDR5 controller, but they do have some experience with HBM2 and its implementations. While the HBM2 controller wasn't Intel's, the implementation was theirs. HMC was already pioneered by their partner, Micron, before Hynix and AMD made the similar HBM.
  • Eris_Floralia - Wednesday, June 13, 2018 - link

    Remember Larrabee? That thing had a GDDR5 IMC.
  • edzieba - Wednesday, June 13, 2018 - link

    Yep. Knights Corner (the last Knights die to retain on-die texture units) had GDDR5 controllers. Knights Landing switched to on-package HMC plus off-package DDR4, and this appears to be carried over to Knights Mill. I don't think Knights Hill has had its memory layout published publicly.
  • Chaser - Wednesday, June 13, 2018 - link

    Competition is good for consumers. As it stands, Nvidia doesn't have much of it today.
  • manju_rn - Wednesday, June 13, 2018 - link

    How about having a separate socket on the motherboard just for the GPU? That would kick off a completely different motherboard design, and GPUs would be sold as chips rather than bulky boards. And of course a separate set of RAM for the GPU.
  • HStewart - Wednesday, June 13, 2018 - link

    This idea reminds me of the days before the 486, with a separate math coprocessor - old school.

    EMIB is much, much better - with faster access between the CPU, GPU and HBM2 memory.
  • coder543 - Wednesday, June 13, 2018 - link

    so, kinda like nVidia's mezzanine connector or the older MXM standard? Oh, but you want the chip and RAM to be separate... yeah, this idea isn't going to happen any time soon. The kind of RAM that GPUs use isn't okay with being dozens of centimeters from the GPU die.
  • PeachNCream - Wednesday, June 13, 2018 - link

    A socketed GPU is possible with HBM. Intel already has one CPU package with an AMD dGPU using HBM in production, plus the Knights series accelerators use on-package memory. Intel seems to lean toward integration in order to exert control over performance, so the company is unlikely to care much for 3rd party GPU companies and therefore may lean toward HBM. A socketed graphics solution is actually realistically possible nowadays.
  • manju_rn - Wednesday, June 13, 2018 - link

    ^Exactly. There is so much integration possible that much of the current generation of GPU boards is redundant - power caps, for example. It would also establish standards across the different classes so that people could one day swap either of the GPU chips. Currently I believe that, apart from the chip, it is the additional redundant components on GPU boards that hog the cost. Historically, this has always happened with other components: the 1970s had expansion cards for everything, including mice and parallel ports. Look where we are now.
  • foxalopex - Wednesday, June 13, 2018 - link

    I suspect if I were Nvidia I'd be worried. Intel is probably far more likely to work with AMD on open standards than with anything Nvidia proprietary.
  • HStewart - Wednesday, June 13, 2018 - link

    But the advantage of EMIB is that the maker of the GPU does not matter - AMD will be gone from EMIB in 2020.
  • tomatotree - Wednesday, June 13, 2018 - link

    I'm really hoping Intel adopts FreeSync and the Nvidia tax on G-Sync displays goes away. If both AMD and Intel are using FreeSync, plus the next-gen consoles, and then some TV makers start to adopt it for console use as well, then being locked into the extra $100-200 premium for G-Sync starts to make a lot less sense.
  • JackNSally - Wednesday, June 13, 2018 - link

    Intel would be shooting themselves in the foot to come up with their own solution. Easiest for Intel would be FreeSync.
  • HStewart - Wednesday, June 13, 2018 - link

    Intel could support both FreeSync and G-Sync and let AMD and Nvidia fight it out - in any case Intel will work with the winner.
  • Diji1 - Thursday, June 14, 2018 - link

    I don't think that will happen.

    G-Sync is only used on high-end displays that are developed with Nvidia's input, presumably partly because they don't want the Nvidia branding on bad displays. So there's little reason for them to want another entity's GPUs selling G-Sync capable cards, given that it equals a lost sale of their own GPU.
  • peevee - Thursday, June 14, 2018 - link

    "Intel could support both FreeSync and GSync "

    Not without paying NVidia for G-sync licenses, which would be superidiotic (and even incurring extra develeopment/QA/support price on G-sync would be stupid too).
  • The Hardcard - Wednesday, June 13, 2018 - link

    Existing project, or maybe relying on existing technologies. They have already laid out their iGPU on the latest process. If they were satisfied with the new technologies added to that, it wouldn't take long to double or quadruple it and then tune the transistors to handle more power.

    Alternatively, Koduri could have started driving this before he hired on, maybe offering guidance and perspective while Kaby G was being developed.
  • HStewart - Wednesday, June 13, 2018 - link

    Think of it this way: when Koduri was working on Kaby G and its EMIB work, he noticed the real potential of this technology and wanted to be involved with Intel - of course I would not doubt that they talked about the potential while working on the project.

    I believe EMIB has tons of potential in the computer industry - even beyond the GPU use it has already seen - I would not doubt we will see EMIB on desktop computers in the future.

    Kaby G was just a test bed for the technology.
  • coder543 - Wednesday, June 13, 2018 - link

    EMIB is just another name for a multi-chip module (MCM), which AMD is already employing in all of their current processors. We already see it on desktop computers today, so why wait until the future?
  • coder543 - Wednesday, June 13, 2018 - link

    Also worth mentioning that AMD's Navi GPU architecture is rumored to be another one of AMD's highly scalable MCM designs, letting them build enormous GPUs with very high yields. If Koduri were excited about EMIB/MCM, then it wouldn't really matter where he works. I'm sure *that* was not his motivation for working for Intel.
  • HStewart - Wednesday, June 13, 2018 - link

    I am sure EMIB is part of it - because of the Kaby G experience - but EMIB was also developed before Koduri was involved with the Kaby G project - so Intel didn't copy AMD - they made it better and allowed other companies (AMD) to work together.
  • HStewart - Wednesday, June 13, 2018 - link

    I don't believe AMD's MCM can handle dies from different processes (nm / makers), whereas EMIB can.

    AMD wants to believe they are the future - but Koduri knew better and jumped ship.
  • peevee - Wednesday, June 13, 2018 - link

    Just in time for their 10nm to get decent yields.
    Everybody else will be on 5nm.
  • HStewart - Wednesday, June 13, 2018 - link

    It's not the nm that makes the difference, it's the number of transistors.
  • peevee - Wednesday, June 13, 2018 - link

    Of course. These nms are just marketing BS. But the point is the current "7nm" is already better than Intel's still not-working "10 nm", and their "5 nm" will be better yet.
  • HStewart - Wednesday, June 13, 2018 - link

    All I'm saying is there is more to it than just nm. The technology inside makes a big difference, and yes, this is marketing, but it's the opposite of the frequency wars where a higher number was better; in this case a smaller nm is used instead of frequency.
  • peevee - Thursday, June 14, 2018 - link

    "All I saying there is more than just nm"

    Way more, because "nm" BS is just branding. But behind these brands, working "7nm" is already better than Intel's still-not-working "10nm".
  • techconc - Wednesday, June 13, 2018 - link

    Yeah, but the better the process, the more transistors can be used in the same die space. So, yes, the size of the process does matter.
  • peevee - Thursday, June 14, 2018 - link

    These numbers before "nm" do not correspond to any real sizes. Just branding.
  • hammer256 - Wednesday, June 13, 2018 - link

    So, what's going on with their Xeon Phi line of effort? Is this going to be a replacement for that, or is this in parallel?
    Also, what kind of software support will this have? I assume OpenCL; maybe Intel will actually make a good effort at it. Whatever it is, for people well entrenched in the CUDA framework, it will take some enticing for sure...
    Don't underestimate the importance of software support. I would say that's a big part of what made Nvidia so successful in the compute space. I remember reading a few years back that Nvidia actually has more software engineers than hardware engineers...
  • jordanclock - Wednesday, June 13, 2018 - link

    Xeon Phi, while it evolved from a project involving x86-based GPUs, is not in any way related to this. This dGPU would likely be an evolution of the existing iGPUs.
  • ZipSpeed - Wednesday, June 13, 2018 - link

    I, for one, am glad that there will be a 3rd entrant. Nvidia definitely needs to be taken down a notch or two. However, looking at Intel's efforts outside their core and foundry business, well it sucks. I'm sure we all have high hopes that they finally get it right, but this is Intel we're talking about. I don't think they will be targeting the high end, not initially. Expect GPUs in the mainstream arena.
  • HStewart - Wednesday, June 13, 2018 - link

    Intel already has the low-end market with their integrated GPUs - so logically this will be higher - it will have, at minimum, the performance of the AMD GPU in Kaby G.
  • peevee - Wednesday, June 13, 2018 - link

    "looking at Intel's efforts outside their core and foundry business, well it sucks"

    It's core business did not have a significant architectural progress since Sandy Bridge.
    It's foundry botched "10nm" node in a spectacular fail.

    That is what you get with an accountant as your Chairman.
  • Diji1 - Thursday, June 14, 2018 - link

    Sucks how? The main reason for it existing is to avoid the additional cost of a GPU to non-gaming systems. It's done pretty well at that.
  • boeush - Wednesday, June 13, 2018 - link

    Larrabee, the next generation?
  • idealego - Wednesday, June 13, 2018 - link

    Intel released a discrete GPU called the i740 back in 1998, which is why the article title has "(Modern)" in it:
    https://en.wikipedia.org/wiki/Intel740
  • peevee - Wednesday, June 13, 2018 - link

    2.5 years? How hard would it be to scale up the iGPU from Ice Lake and add an HBM2 controller for a first version?

    Looks like Intel's engineering organization sucks. And that is not the first indication, obviously.
  • sheh - Wednesday, June 13, 2018 - link

    You already have some FreeSync TVs:
    https://www.pcworld.com/article/3278593/gaming/amd...

    And Intel spoke of supporting adaptive sync 2 years ago, but I don't know what happened in the end.
  • eastcoast_pete - Wednesday, June 13, 2018 - link

    While I love the idea of NVIDIA and AMD finally getting competition in dedicated graphics, I share the "believe it if I see it" view. Also, I can't help the impression that Intel's dedicated graphics, once/if they materialize, are the leftover table scraps from their AI/machine learning efforts.
  • Machinus - Thursday, June 14, 2018 - link

    Will these be built out of x86 cores? Or does Intel have a different GPU unit that it is going to use to build cards out of?
  • peevee - Thursday, June 14, 2018 - link

    Of course, their iGPU has nothing to do with x86.
    Modern "all-purpose" architectures, be it x86, x64, ARMv8, etc., are really NO-PURPOSE. They are useless for anything but high-level control of the many, many other processors on other architectures which actually do all the work. Not just the GPU, but photo encoding/decoding/processing, video encoding/decoding/processing, sound, wireless and wired connections, display control (not the same as the GPU, BTW), AI, etc. are handled by specialized processors, because the old ARM/Intel architectures are totally inadequate despite their cost. They are 100 times worse in performance per W, and often in performance per area, compared to sane designs.
  • MJDouma - Thursday, June 14, 2018 - link

    Did they say if it would be manufactured on 14nm or 10nm? XD
  • peevee - Thursday, June 14, 2018 - link

    If they don't debug their "10nm" by 2020, they will hopefully be bankrupt.
  • wr3zzz - Thursday, June 14, 2018 - link

    Hopefully this does not end up like another Larrabee.
  • CaedenV - Thursday, June 14, 2018 - link

    Rather than "Will it play Crysis?" I suppose the modern mantra is "Will it mine?"
  • Dug - Thursday, June 14, 2018 - link

    Ha ha ha ha ha ha.....ha ha ha ha ha ha... ha ha ha...
  • xpto - Saturday, June 16, 2018 - link

    New Vulnerability hits Intel processors - Lazy FP State Restore

    https://www.intel.com/content/www/us/en/security-c...
  • hMunster - Saturday, June 16, 2018 - link

    The "first" GPU? Intel is conveniently forgetting the i740. Not that I can blame them.
