34 Comments
yeeeeman - Wednesday, November 11, 2020 - link
Finally Intel has their own Lisa :))
shabby - Wednesday, November 11, 2020 - link
They should make her CEO at Intel; she has more qualifications to run the company than the current boob up top.
JfromImaginstuff - Thursday, November 12, 2020 - link
My thoughts exactly
808Hilo - Wednesday, November 18, 2020 - link
This interview is corporate baloney. 5000 words for "it sucks right now, one person deals with the mess, and it's not on me." Female blame and shame, and she is certainly not on par with Lisa Su. Xe is 14nm; AMD graphics are 7nm or better, as is M1. Intel is not competitive, and she is part of the reason why this is so.
Leeea - Wednesday, November 11, 2020 - link
Xe MAX, designed from the ground up for a stock-price-focused marketing campaign.
jimjamjamie - Thursday, November 12, 2020 - link
MAX stonks
Spunjji - Thursday, November 12, 2020 - link
100%
Smell This - Thursday, November 12, 2020 - link
Hate to be so cynical but, yeah... most likely
Dehjomz - Wednesday, November 11, 2020 - link
This is a good look for Intel. Good to be public-facing and explaining new and upcoming technology. I hope Intel regains its mojo and brings IPC updates similar to AMD's and Apple's.
Smell This - Thursday, November 19, 2020 - link
Lisa uses a new AMD desktop . . .
Oxford Guy - Wednesday, November 11, 2020 - link
"One of the criticisms with current driver stacks from Intel and AMD is that the size of the driver downloads are getting ridiculous – upwards of 300-400+ megabytes due to bundled software and the fact that you’re supporting 7/8/9 generations of GPUs with one download. Is there anything you can do to make this process more streamlined in future?"I don't agree that multigenerational support is the problem. The problem is the bundleware.
Any excuse companies can take to jettison backward compatibility should be watched carefully, which includes citing multigenerational support as a problem when it comes to download bloat.
I know everyone wants to be Apple these days (including MS) — where backward compatibility basically doesn't exist. But, that's not the computing model that's better for people.
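For anyone who wants data rather than impressions here: unpack a driver installer (the AMD and Nvidia packages can be extracted with 7-Zip) and total the file sizes under each top-level folder, which shows whether per-generation driver binaries or bundled extras dominate the download. A minimal C++17 sketch of that tally; the extraction step and any particular folder layout are assumptions, since each vendor packages things differently:

```cpp
// Tally bytes per top-level folder of an unpacked driver package,
// to see what actually dominates the download size.
#include <cstdint>
#include <filesystem>
#include <iostream>
#include <map>
#include <string>

namespace fs = std::filesystem;

int main(int argc, char** argv) {
    const fs::path root = (argc > 1) ? fs::path{argv[1]} : fs::current_path();
    std::map<std::string, std::uintmax_t> totals;

    for (const auto& entry : fs::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file()) continue;
        // Attribute each file's size to its top-level folder under root.
        const fs::path rel = fs::relative(entry.path(), root);
        totals[rel.begin()->string()] += entry.file_size();
    }

    for (const auto& [folder, bytes] : totals) {
        std::cout << folder << "\t"
                  << bytes / (1024.0 * 1024.0) << " MiB\n";
    }
    return 0;
}
```

Point it at the extracted folder and compare the per-folder totals against the headline download size.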
grant3 - Wednesday, November 11, 2020 - link
Do you have any stats on how much of a typical GPU driver download package is multi-generational code, and how much is optional bundled software? For either AMD or Nvidia.
grant3 - Wednesday, November 11, 2020 - link
Also curious: how many years do you think a company should continue to update GPU drivers for a particular chip before you'd be satisfied for them to discontinue?
29a - Thursday, November 12, 2020 - link
At least 5 for dGPU and 10 for iGPU. No sense throwing away a perfectly good computer because it doesn't have display drivers.
sandeep_r_89 - Saturday, November 14, 2020 - link
True: thanks to open-source GPU drivers on Linux, my 10-year-old Acer Iconia Tab W500 is still usable with a modern, up-to-date software stack. (I use Arch, BTW.)
dotjaz - Wednesday, November 11, 2020 - link
What bundleware? I just decompressed the AMD driver and the driver part is over 1 GB (mostly DLLs)! The "bundles" are not even a fraction of the driver in size.
nrencoret - Wednesday, November 11, 2020 - link
@ian: wondering if you set up the interview, or did Intel offer the interview with Lisa? I thought that all gfx-related stuff to press went through Raja.
Cheers & great interview.
Ian Cutress - Thursday, November 12, 2020 - link
Intel and I have been talking about it for a few months. I was asking a good deal about the early PCIe version of DG1 (the software development version) on the software side, so they said it'd be worth talking to Lisa. I decided to use the opportunity to expand and explain some of the key areas of Intel's Xe/OneAPI software direction.
Kurosaki - Thursday, November 12, 2020 - link
👍
Sychonut - Thursday, November 12, 2020 - link
Not sure if I should read this, or if this is yet another corporate pawn regurgitating marketing points and conveying the least amount of substantive information in the largest number of words.
IanCutress - Thursday, November 12, 2020 - link
Why even comment before you've read it? With that attitude, it sounds as if you just don't like industry interviews at all, but still need to voice your displeasure regardless.
Spunjji - Thursday, November 12, 2020 - link
Mostly that last bit, sadly
alufan - Thursday, November 12, 2020 - link
Read it; pointless bit of "look at me" and marketing rehash. When we have actual working products to review, maybe then we can have a deep dive. Until then it's all speculation and testicles.
Stuka87 - Thursday, November 12, 2020 - link
I would have liked to see a question on why they thought the Xe name would work. It's not exactly a name that rolls off the tongue.
bigvlada - Friday, November 13, 2020 - link
Looks similar to the thought process Elon Musk used to name his child.
Arbie - Thursday, November 19, 2020 - link
Or the process his parents used.
Ian Cutress - Friday, November 13, 2020 - link
I've covered that before. https://www.youtube.com/watch?v=3o3-yxUwjSQ
JayNor - Friday, November 13, 2020 - link
Xe-HP status was updated to "Sampling" in the latest presentations. At one of the oneAPI presentations today, Xe-HP performance data was presented for a backend to an open-source math library named Ginkgo.
https://github.com/ginkgo-project/ginkgo
JayNor - Friday, November 13, 2020 - link
This page has some more info on the Ginkgo presentation.
https://www.oneapi.com/events/devcon2020/#session-...
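For readers wondering what a oneAPI backend for a library like Ginkgo involves in practice: DPC++ is Intel's SYCL-based compiler, and a port means expressing the library's kernels in SYCL so one source tree can target Xe GPUs, integrated graphics, or CPUs. This is not Ginkgo's actual code, just a minimal 2020-era DPC++/SYCL sketch of the programming model (the CL/sycl.hpp header and default_selector are the beta-toolchain spellings):

```cpp
// Minimal DPC++/SYCL vector add: the programming model a oneAPI
// library backend is written against. Build (2020 beta): dpcpp vadd.cpp
#include <CL/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    namespace sycl = cl::sycl;
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // default_selector picks a GPU if one is present, else falls back.
    sycl::queue q{sycl::default_selector{}};

    {
        // Buffers lend the host arrays to the runtime for this scope.
        sycl::buffer<float, 1> A{a.data(), sycl::range<1>{N}};
        sycl::buffer<float, 1> B{b.data(), sycl::range<1>{N}};
        sycl::buffer<float, 1> C{c.data(), sycl::range<1>{N}};

        q.submit([&](sycl::handler& h) {
            auto ka = A.get_access<sycl::access::mode::read>(h);
            auto kb = B.get_access<sycl::access::mode::read>(h);
            auto kc = C.get_access<sycl::access::mode::write>(h);
            // One work-item per element, on whatever device q targets.
            h.parallel_for<class vadd>(
                sycl::range<1>{N},
                [=](sycl::id<1> i) { kc[i] = ka[i] + kb[i]; });
        });
    }  // Buffer destructors copy results back into the host vectors.

    std::cout << "c[0] = " << c[0] << " (expected 3)\n";
    return 0;
}
```

The same source runs on whichever device the runtime exposes, which is the portability pitch behind oneAPI.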
Magnus101 - Saturday, November 14, 2020 - link
One thing is absolutely certain: competition and choice are getting greater and greater for most markets with a chip involved. AMD challenging Intel on CPU. AMD challenging Nvidia for GPU. Intel delving deeper into GPU and compute. Apple challenging the whole x86-based architecture, i.e. Intel/AMD, with M1. Arm getting into server space.
It's a good time to be alive right now!
Arbie - Thursday, November 19, 2020 - link
Even more reason to wear a mask.
sandeep_r_89 - Saturday, November 14, 2020 - link
Intel's provided day-zero support for Linux for about a decade now...
Buck Turgidson - Sunday, November 15, 2020 - link
Oh dear, this interview did little to assuage my concerns that Intel just doesn't get the GPU market at all (or its competitive landscape). I'm not a game developer, but I do deal with GPGPU programming (CUDA and OpenCL), and compared to Nvidia (and to a somewhat lesser extent, AMD), Intel just doesn't seem to grasp that GPU isn't about positioning and segmentation: that's a natural aspect of computing that occurs once you've illustrated you have something amazing at the top end. GPU is always, as a starting point, about the top end, whether that's for compute or gaming. Why do I invest in understanding Nvidia's proprietary CUDA? Why does my personal machine have a big honking RTX card? Why does my work machine have a Titan in it? Because Nvidia's stuff actually delivers; the compute gains are real and demonstrable, whether it be for all-up compute or for my own hobby projects using my Jetson Nano. Across that spectrum, it starts with commitment, and that means the top end, and it means gaming and visuals, since that workload illustrates performance (even to me, and I'm not a big gamer).

So if Intel wants to be taken seriously, show some real commitment to this space. I want to see a serious high-end part that illustrates why I should give a crap about your architecture. No more half-assing it by repurposing ancient, early-'90s-derived x86 uarchs, no more starting with low-end "price-performance" ratios. They need to deliver some shock and awe.
BTW, to Intel, re: the CPU market: Apple just delivered some shock and awe on the mobile PC front, and AMD has been embarrassing you for the last 18 months. You folks at Intel had better have something to replace these never-ending (and increasingly uncompetitive) Skylake uarch derivatives. You need a modern-day Nehalem, and you need it fast.
Arbie - Thursday, November 19, 2020 - link
"Bringing a new range of hardware to market is not an easy task, even for Intel."Wordy. More concise is:
"Bringing hardware to market is not easy for Intel".