brucethemoose - Tuesday, March 21, 2023 - link
Of course we don't know what's going on underneath, but on the surface this does not look great.
The fab division is swapping out an engineering lead for a business one, which has historically been a bad sign for Intel.
They have gone all in on the GPU division, a business that takes forever to spin up. The last thing it needs is a sudden direction change, especially in light of all the recent cancellations.
WaltC - Tuesday, March 21, 2023 - link
Koduri gave AMD all he had--he didn't really have anything left to give by the time he jumped to Intel. I always felt he was way overrated at AMD. Ironically, in his parting remarks, Raja advised AMD to do better at execution, IIRC... Then he moved to Intel, where he learned all about what failing to execute looks like... ;) (AMD's GPU tech and production went straight up after Koduri left, I noticed.) What "software startup," I wonder? Vintage Koduri. "In a few weeks" he might actually have a job, and he will tell us all about it, he says. The changes in the fabs indicate a decision to concentrate more heavily on marketing as opposed to technology--Intel has brutal competition these days, so I hope there's more to it than that.
brucethemoose - Tuesday, March 21, 2023 - link
"AMD's GPU tech and production went straight up after Koduri left, I noticed."Was RDNA1 a Raja architecture?
One of AMD's most mixed decisions (IMO) was bifurcating the GPU line into CDNA/RDNA and trading compute performance for gaming performance with the 5000/6000 series.
This may have saved their competitiveness, much like cutting everything for Zen did. But I loathe Nvidia's stranglehold on desktop compute (which is also a huge advantage in server compute). And it's about to bite even more with the generative AI craze.
mode_13h - Wednesday, March 22, 2023 - link
I think RDNA isn't exactly bad at compute. It just didn't get the same Matrix Cores as CDNA, nor will there ever be an fp64-heavy version.
andrewaggb - Sunday, March 26, 2023 - link
The biggest issue IMO, more than speed, is the lack of software support for AMD compute. Everything just works on Nvidia, while lots of things aren't supported or are hackish on AMD. I really wish that weren't the case: AMD cards have more RAM, so even if they were slower at compute, they could run larger models and whatnot and would still be good choices. But so much is CUDA-only or CUDA-first that I don't personally consider AMD to even be competing in this space.
mode_13h - Monday, March 27, 2023 - link
The point I was responding to complained about the CDNA/RDNA split. That's only tangentially related to compute on RDNA, in that it causes AMD to focus on software support for CDNA. Otherwise, I don't really see how it's relevant to the suitability of RDNA for (non-fp64-intensive) compute.
Otritus - Monday, March 27, 2023 - link
Per unit of gaming performance delivered, Vega, Turing, Ampere, and Lovelace are much better architectures for compute than RDNA1-3. Compute performance isn’t just measured by TFLOPS, as there are internal bottlenecks that limit performance. RDNA’s compute performance is probably like Maxwell or Pascal. It’s certainly not bad, but the architecture is clearly not geared towards compute.
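To put a number on what TFLOPS does and doesn't capture, here's a quick sketch (the shader count and clock are assumed example figures, not any specific SKU's specs):

```python
# Peak FP32 TFLOPS is just ALU count x clock x 2 (an FMA counts as 2 ops).
# It's a paper ceiling: delivered compute depends on internal bottlenecks
# (scheduling, cache bandwidth, register pressure) that this number ignores.
def peak_fp32_tflops(shaders: int, clock_ghz: float) -> float:
    return 2 * shaders * clock_ghz / 1000.0

# Assumed figures for illustration: 2560 shaders at 1.9 GHz.
print(peak_fp32_tflops(2560, 1.9))  # ~9.7 TFLOPS on paper
```

Otritus - Monday, March 27, 2023 - link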
Raja left AMD in part because he was mad at AMD deprioritizing Vega for Navi. RDNA is Navi; it came out in 2019, while he left in 2017. RDNA likely wasn’t his project like GCN was, hence his frustration.
grrrgrrr - Tuesday, March 21, 2023 - link
I'm afraid that following the AXG split, AI will be moved to the data center and on future consumer Arc GPUs you'll get none. I think this is hugely problematic, especially for Intel, which is trying to catch up with Nvidia -- if the PhD students who develop AI models from the ground up are not hacking with Intel cards, data centers that run those models will not be buying Intel data center GPUs.
mode_13h - Wednesday, March 22, 2023 - link
> If the PhD students who develop AI models from the ground up are not hacking with
> Intel cards, data centers that run those models will not be buying Intel data center GPUs.
Pretty much. Although Intel maintains PyTorch integration, it seems researchers often need to write custom layers. If they do that using CUDA, then their network won't run on Intel GPUs. This is the secret to Nvidia retaining dominance, even though most people are using standard frameworks.
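To illustrate why that locks people in: a custom layer written purely with stock PyTorch ops runs on whatever backend the tensors live on, while a hand-written CUDA kernel pins it to Nvidia. A minimal sketch (FusedGELUScale is a made-up layer name, purely for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FusedGELUScale(nn.Module):
    """Hypothetical custom layer built only from stock PyTorch ops.
    With no hand-written CUDA, it runs on any backend PyTorch supports
    (CPU, CUDA, or Intel's XPU devices via their PyTorch integration)."""
    def __init__(self, scale: float = 2.0):
        super().__init__()
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.gelu(x) * self.scale

# Works unchanged wherever the input tensor lives:
x = torch.randn(8, 16)  # .to("cuda") or .to("xpu") where available
print(FusedGELUScale()(x).shape)  # torch.Size([8, 16])
```

The moment someone swaps that forward() for a custom CUDA kernel to squeeze out speed, the model stops being portable.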
achinhorn - Thursday, March 23, 2023 - link
I would agree, based on the experience we had after Gelsinger's departure and Intel's subsequent decline -- but for a very long time they were still extremely profitable and still producing competitive products, no?
Anyways, in this case I think it's less bad, as Gelsinger is still (or should I say has now returned to) the helm, and at the end of the day the CEO has a way bigger impact on the general direction of the company. But I also have not been impressed with Intel since he returned. TSMC or go home, apparently.
brucethemoose - Tuesday, March 21, 2023 - link
Also IMO a generative AI software startup is exactly the wrong thing to start now.
He should either have joined another startup like HuggingFace or OpenAI, or jumped into the generative AI efforts at Epic or Adobe.
ikjadoon - Tuesday, March 21, 2023 - link
Seems like Pat Gelsinger still doesn't understand how to actually fix Intel's execution problems? People inside Intel probably know exactly why they have so much executive turnover and, yeah, it's going to lead to execution problems.
IFS is "growing", but at a tiny pace. It's not going to be a relevant part of their business for many years to come. I mean, the original IFS had the same gameplan, so naturally I don't believe their story for IFS 2.0.
lmcd - Wednesday, March 22, 2023 - link
Koduri was a pre-Gelsinger hire and was likely being kept around for the sake of a roadmap that is no longer feasible anyway.
Hulk - Tuesday, March 21, 2023 - link
I thought Intel was finished with rearranging the deck chairs on their Titanic?
Really, more bean counters? This is what got them in this mess in the first place.
They aren't having economic problems, they're having technical problems. And now they put an accountant in charge of their largest technical challenge, fabrication.
I'm sure he'll get a $50 million golden parachute in 18 months when they realize the mistake they made.
IntelUser2000 - Tuesday, March 21, 2023 - link
The guy who lost Intel's decades-long fabrication leadership, Brian Krzanich, had an extensive background in fab management.
PeachNCream - Tuesday, March 21, 2023 - link
As unimpressive as nerds think "bean counters" are at their work, they tend to be successful at sustaining stable, if uninteresting, companies that neither burn bright nor die fast, but deliver consistent results. I know that upsets those among us who aren't really able to see past the glories of overclocked desktop PCs used to play vidja games (which are indeed serious business), but reality is a bit different.
mode_13h - Wednesday, March 22, 2023 - link
Semiconductor fabrication is a cutting-edge research endeavor, as well as a manufacturing operation. You ideally want it led by someone who knows something about that, as well as how to manage R&D.
FunBunny2 - Wednesday, March 22, 2023 - link
> Semiconductor fabrication is a cutting-edge research endeavor
yeah... it would be great if they could find someone who can get deterministic execution out of a Heisenberg node; it's right around the shrink corner.
bji - Wednesday, March 22, 2023 - link
Bean counter spotted.
tipoo - Tuesday, March 21, 2023 - link
Well I actually hope they keep trying in dedicated consumer GPUs. It's for the best of all of us that they become a viable third player.
yeeeeman - Wednesday, March 22, 2023 - link
I don't think he is quite as stellar as people make him out to be. Vega was a broken/bottlenecked architecture in the beginning, and it took a while to get it going. Same for Arc: he even admitted Arc in its first iteration has some big architectural bottlenecks which will be solved in the 2nd gen. That isn't the mark of a great architect.
mode_13h - Wednesday, March 22, 2023 - link
Agreed. Does anyone remember Vega's DSBR? Sounded cool, but was pretty much DoA.
https://www.reddit.com/r/Amd/comments/88b06e/quick...
Oxford Guy - Thursday, March 23, 2023 - link
You mean Fiji.
Vega, after all the delays and Su’s big promises, had IPC basically identical to Fiji’s.
Otritus - Monday, March 27, 2023 - link
I don’t recall Vega being focused on improving IPC over earlier GCN revisions. From my understanding it was essentially Big Polaris. Vega was focused on fixing Polaris’ clockspeed limitations, and Vega 10 came stacked with compute features and HBM2. Pascal was basically 16nm Maxwell, so Vega was supposed to be 14nm Fiji to go head to head with Nvidia. 14nm was inferior to 16nm and the bandwidth advantage was lost, so Vega 64 had to compete with the 1080 instead of the 1080Ti.
mode_13h - Tuesday, March 28, 2023 - link
> I don’t recall Vega being focused on improving IPC over earlier GCN revisions.
It wasn't. He's just kvetching. It's a club he uses to hammer on AMD, every time the subject of their GPUs comes up.
He also likes to complain about Intel Atoms from like 12 years ago, so don't expect him to stop whining about Vega any time soon.
mode_13h - Wednesday, March 22, 2023 - link
I think Intel is giving GPUs one more try. If they can't right the ship this time, then they'll probably walk away from it and just do iGPUs.
lmcd - Wednesday, March 22, 2023 - link
There's absolutely no way they walk away from GPUs completely; Intel desperately wants those sweet government supercomputer contracts, and no one wants to go cross-vendor (which is annoying because it shouldn't matter, but whatever).
That's what Falcon Shores is about - pairing a GPU-like compute die (or dies) with Xeon CPU dies in the same package. You don't need a whole dGPU team or product line just for that.
In fact, isn't the team working on the compute dies in Falcon Shores now separate from the consumer GPU team?
mode_13h - Wednesday, March 22, 2023 - link
As for the fabs, Intel has had a vacancy there since last year and seemed to want one of the Tower execs to fill it. But that acquisition has been stalled by Chinese regulators, so they had to find someone else to fill the role.
Silver5urfer - Wednesday, March 22, 2023 - link
Intel is losing it.
They make all sorts of BS bookkeeper-based decisions. A while back it was Jim Keller, and then the whole engineering-background crowd got replaced by MBAs; add California's controversial C-suite rules on top. Now Raja got fired from AXG. This company is bleeding tons of talent and cash. Pat Gelsinger has worked with Unit 8200-linked companies, the forefront of the NSO spyware scene; no wonder Intel ME is disabled on the workstation my company provided me. This company is toast if R&D does not increase and innovate. They killed the L4 eDRAM while AMD is making waves with a 2023 version of it. They nuked Optane, and they lost their lithography edge with the 10nm failure. And a ton of issues all over.
On the client side, there's the LGA1700 ILM failure (don't forget the self-sabotaging LGA1200 backport) and the massive E-core push due to the aging Core-series uArch, and SPR Xeon just got deleted by Genoa. It's really hard to imagine how they are going to change their fate. The bookkeepers are not doing their jobs properly; they just keep the company barely afloat, killing things off left and right. The greedy investors axed a ton of innovation and sold off the top end for scraps (the Intel modem business, NAND flash, etc.), and those investors are in Apple. At this point I think the mega-big Bilderberg crowd declared Intel's usefulness done, sucked all the life out of it along with the stock profits, and the shorts left it to rot with awful management.
mode_13h - Thursday, March 23, 2023 - link
> They killed the L4 eDRAM while AMD is making waves with a 2023 version of it.
It's not. If you think AMD's 3D cache is akin to eDRAM, then you don't understand it.
Silver5urfer - Thursday, March 23, 2023 - link
I know dude, but what is X3D on AMD if not TSMC CoWoS and an L4? Intel could have optimized it all the way through 14nm++, but they did not, and only chose to do it for Apple's BGA junk.
The thing is, Intel had this chiplet-based cache system eons ago; it's not like this modern AMD version is the only one, though many do not know about this. And look at Ian's own 5775C benchmarks vs. SKL: it totally destroys SKL and approaches the CML 6C part. If Intel had kept it going all the way, it would have been amazing, but their BS bookkeepers thought it was a waste of money to change the sockets and engineering, so they axed it.
Oxford Guy - Thursday, March 23, 2023 - link
5775C was throttled by design, thermally. The lower-tier 5675C beat it or tied it in several games in this site’s tests. Also, lots of armchair analysts have said it was too expensive (not high-enough margin) to have that e-dram.
Skylake apparently had the interface to run an e-dram module but Intel chose not to make a part with one, likely because it had no competition.
A writer at Ars, Peter Bright, called Intel out for refusing to make it.
Oxford Guy - Thursday, March 23, 2023 - link
Throttled by the power cap, I meant to say.
Otritus - Monday, March 27, 2023 - link
While the technologies are inherently different, both function similarly on the CPU. The goal was to enhance the performance of the LLC by increasing capacity, and both increase performance in similar situations. A higher-performance L4$ will likely be all Intel needs to counter V-Cache. Broadwell was competitive in gaming with 10th-gen CPUs that had higher core counts for a reason. A modern variant with more bandwidth and possibly twice the capacity would be sufficiently competitive.
mode_13h - Thursday, March 30, 2023 - link
DRAM has much higher latency than SRAM, and putting it in-package doesn't really change that. The main benefit is increased bandwidth, which indeed benefits highly-scalable workloads. The sweet spot for a DRAM-based cache is going to be significantly different than a larger L3.
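To make the tradeoff concrete, here's a back-of-envelope average memory access time (AMAT) sketch. All of the latencies and hit rates below are assumed round numbers for illustration, not measurements of any real part:

```python
# Average memory access time (AMAT) with an optional DRAM-based L4 sitting
# between the L3 and main memory. Assumed figures, not measured ones.
def amat_ns(l3_ns, l3_hit, l4_ns, l4_hit, dram_ns):
    l3_miss = 1.0 - l3_hit
    return (l3_hit * l3_ns                         # hits served by SRAM L3
            + l3_miss * l4_hit * l4_ns             # L3 misses caught by L4
            + l3_miss * (1.0 - l4_hit) * dram_ns)  # the rest go to DRAM

# Assumed: L3 ~10 ns, DRAM-style L4 ~35 ns, main memory ~80 ns.
print(amat_ns(10, 0.90, 35, 0.60, 80))  # with an L4: 14.3 ns
print(amat_ns(10, 0.90, 35, 0.00, 80))  # no L4 hits: 17.0 ns
```

The average-latency win is modest precisely because the L4 is itself slow DRAM; the bigger practical gain is bandwidth, which is why its sweet spot differs from a larger SRAM L3.
mode_13h - Thursday, March 23, 2023 - link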
> They nuked Optane
It lost them money. Consistently. And it's lagging NAND flash in GB/$ by more and more, year after year, with no signs of a turnaround.
Silver5urfer - Thursday, March 23, 2023 - link
The way innovation works is you have to keep amending the technology, or else we would still be playing with rocks today. And Intel squandered it. Their bean counters do not let them innovate because Intel is bleeding cash every single quarter.
mode_13h - Friday, March 24, 2023 - link
You assume Optane failed to increase density simply for lack of will or investment, but we don't know that. If Intel had a path to match or exceed the density increases of NAND flash, I think it would still be around (if maybe in someone else's hands).
mode_13h - Thursday, March 23, 2023 - link
> I think the mega-big Bilderberg crowd declared Intel's usefulness done, sucked all the life
> out of it along with the stock profits, and the shorts left it to rot with awful management.
Intel's investors have been feasting on its lifeblood for decades, with a very generous dividend and stock buybacks. The chickens finally came home to roost when they had to cut the dividend payment to fund fab expansion.
Oxford Guy - Thursday, March 23, 2023 - link
‘Intel ME is disabled on the Workstation that I was provided by the company’
Ha. Sure it is.
It’s actually possible to stop Windows spyware with its nifty sliders and some 3rd-party tool, too.
Lion's Share - Wednesday, March 22, 2023 - link
I just can't believe there are people who believe Raja was delivering on the GPU promise. My other surprise is that Raja was let go so late. I joined Intel's AXG group some time back, genuinely believing Intel had plans in discrete GPUs and datacenter ones (Ponte Vecchio, which Raja advertised a lot as a revolution).
As soon as I joined, I realized it was mostly marketing and my assumptions were totally wrong. The teams were incompetent, demotivated, and running way beyond published schedules. My boss told me long ago that PVC was dead with some serious yield issues, and that the next best hope was Rialto Bridge. Both are dead now.
But the catalyst for me was one of the quarterly all-hands by Raja and the other guys/gals in his org. The delays were written all over it; they made some bogus paper launch of a dGPU on March 31, 2022, which was Raja's quarterly OKR. And to add to that, I figured any further line of dGPUs was not only late but also not that competitive. And I realized one important thing: Raja is not a technical guy, he is a marketing guy, blabbering about TAM and whatnot for dGPUs while they were horribly lagging in execution. Check out his ridiculously bold claims of a 1000x improvement in PPA by 2027, on none other than Anandtech.
Looking at Raja's lofty OKRs, so detached from reality, I knew he was on his way out. That it took so long is the only surprise I see. And he seems to have fooled Gelsinger for a long time. On one of the visits to overseas offices (with Raja), Pat mentioned he sees Raja as "the guy" in GPUs, instead of that nVIDIA guy in the leather jacket. I almost threw up.
Lastly, I judge people's integrity based on their association with "like"-minded people. Raja has a stake in and deep collaboration with one of the shadiest, most corrupt guys at that VFX graphics company, and is also a board member. How Intel management could not see this for years is a surprise to me, seriously.
Thankfully, I made a quick decision to leave Intel, and in retrospect I could not have been happier. Not bitching about my ex-employer, but it was a real mess. I still wish Intel all the best and believe they still have some of the brightest people, who are not being utilized properly.
arkhamasylum87 - Wednesday, March 22, 2023 - link
100%. Similar thing on the CPU side: Jim Keller was definitely not some savior type, more hype than actual technical mettle. Not sure why these Anandtech journalists gloat over these over-hyped ex-AMD folks. Even Zen was not Jim K's brainchild, as he himself has admitted, though he has taken a fair share of the credit for it. Intel should get back to being a lean, mean, engineering-first company instead of hiring so-called hardware celebrities to somehow magically fix their execution.
GreenReaper - Wednesday, March 22, 2023 - link
It's the Great Man theory. Sure, charismatic leadership can inspire people, but only so far. And taken out of the place they were "great", it usually doesn't work out. A good team is more than a single part.
mode_13h - Thursday, March 23, 2023 - link
Even the best leader can't win with a weak team, but even the best team will founder with a weak leader.
And leaders need to be good at leading. They need to have enough technical chops to know when they're being lied to, or not given the whole truth, and also to know who to hire & fire. If the leader isn't as technically sharp as most of the people they manage, that's okay. If they're bad at managing, that's not.
Oxford Guy - Thursday, March 23, 2023 - link
‘Jim Keller was definitely not some savior type, more hype than actual technical mettle.’
Citation needed.
Sunrise089 - Thursday, March 23, 2023 - link
Agreed. I don’t understand lumping Raja and Keller together at all. Almost everything Keller touched turned to gold. The only gold I associate with Raja is his parachute as he bails from one gig to another.
ABR - Friday, March 24, 2023 - link
Jim Keller is definitely a different matter from Raja. The guy makes things happen. And individual people most certainly make a difference in tech companies.
Obiwanbilly - Saturday, March 25, 2023 - link
Yes, I’m a Jim Keller fan too.
Raja shouldn’t be mentioned in the same breath as Jim, unless you’re making this statement.
Get it? I just did recursion! 👆
Haha … I’ll see myself out.
Bill