I'm surprised that Intel would also be open to RISC-V and ARM IP. It seems that while the ship hasn't yet been turned in the right direction, the planning and execution are getting there.
Well, they might not want to refit all the 22/16/10nm fabs, and they probably have issues getting enough EUV scanners anyway, so this is the best way to make money from those fabs. For controllers, IoT and automotive, those nodes are enough...
They also need to be very open to get any customers at all, as so far that is not a thing. And who knows how much competition x86 gets in the future anyway, so the fab business might be important. Or it is just a move to prepare in case they want to sell the fabs.
I guess the lack of scanners, in an economic situation where people buy anything they can get, is part of the reason 7nm won't be used until 2023. Though I had hoped for a U-series on 7nm in parallel with a whole lineup of Raptor Lake on 10nm by Q4 2022.
That's an interesting statement, that equipment is not limiting. I would have expected it to be quite a factor, especially with the change from DUV to EUV and ASML being the only option. They will get more chips out of one wafer on 7nm/5nm and beyond, but they have far more 14/10nm fabs now and for some time to come.
I would have bet that whoever gets enough EUV scanners wins, as that leads to more customers and shipments, and therefore money to spend on R&D.
Not to mention that it is not just Intel, TSMC and Samsung: SK Hynix and Micron will also increase orders for EUV, boosting DDR5, LPDDR5, HBM3 and GDDR7 production and efficiency/performance.
Is the tuxedo really the limiting factor when you don't have the social status or money to attend the function in the first place? Intel is severely limited by its R&D: their EUV process isn't going to be ready until late 2022, and they already have at least half a dozen EUV machines by now, enough for around 20K WSPM initially. That's about 6 million CPUs per month assuming a die size of 150mm² (enough space to fit 10 Cove cores and a 128EU Xe GPU).
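For what it's worth, that arithmetic roughly checks out. Here's a minimal sketch using the commenter's figures (20K wafer starts per month, 150mm² die, 300mm wafers) plus a guessed 75% yield, which is an assumption, not a quoted number. The edge-loss formula is the standard first-order die-per-wafer approximation:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard first-order estimate: gross wafer area over die area,
    # minus a correction for partial dies lost at the wafer edge.
    r = wafer_diameter_mm / 2
    wafer_area = math.pi * r * r
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

wspm = 20_000                      # wafer starts per month (commenter's figure)
gross = dies_per_wafer(150.0)      # gross dies per 300 mm wafer
print(gross)                       # ~416
print(wspm * gross)                # ~8.3M gross dies/month
print(int(wspm * gross * 0.75))    # ~6.2M good dies at an assumed 75% yield
```

So "6 million CPUs per month" is consistent with 20K WSPM only once you fold in a yield in the 70-80% range.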
The x86 CISC architecture is basically garbage; it was created to address the low RAM capacity of 1980s home computers, not for performance. It has good performance only because Intel could pour money into more advanced manufacturing than other foundries had.
Since the long CISC macro-ops of x86 CPUs started being RISC-ified internally into short μOPs, the distinction between CISC and RISC has become much less clearly defined. Sure, it would be better and more efficient to run RISC instructions from top to bottom without the need and overhead of translating the CISC instructions into μOPs first, but we live in an imperfect world. If we want native RISC we get ARM or (soon) RISC-V :)
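As a toy illustration of that internal RISC-ification (this is not how any real decoder is implemented, and the instruction and μOP names are invented for the example): a single x86-style read-modify-write instruction gets "cracked" into separate load, ALU and store micro-ops.

```python
# Toy model of a decoder "cracking" a memory-operand CISC instruction
# into RISC-like micro-ops. All names here are made up for illustration.
def crack(instr):
    op, dst, src = instr
    if op == "add" and dst.startswith("["):   # e.g. add [rbp+8], rax
        addr = dst.strip("[]")
        return [
            ("load",  "tmp0", addr),   # uop 1: read the memory operand
            ("add",   "tmp0", src),    # uop 2: plain register-register add
            ("store", addr, "tmp0"),   # uop 3: write the result back
        ]
    return [instr]                     # simple reg-reg ops map 1:1

uops = crack(("add", "[rbp+8]", "rax"))
print(len(uops))   # one CISC instruction became three RISC-like uops
```

The back end then schedules those three μOPs like any RISC pipeline would; only the decoder knows they came from one instruction.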
Probably the win-win solution would be a processor that decodes x86 commands into ARM or RISC-V commands, so the operating system could use either x86 or RISC natively.
That's very close to what Apple is doing. Their CPU isn't exactly decoding x86 instructions, because they would need an x86 license for that, but it has special instructions to accelerate the translation done in software. That's why x86 emulation under Rosetta 2 is so fast and macOS can effectively run either x86 or ARM.
ZoZo, while your post is spot on, decoding/emulating x86 instructions does not require a license. Hence Hyper-V and VMware can be given away for free, and commercial products like Parallels are inexpensive. Even open-source products like DOSBox run x86 on ARM.
The reason Rosetta 2 doesn't emulate/decode x86 instructions at runtime isn't licensing; it's because, as you said, that's slow and an old-school way to go about it.
(Repost, because I originally sent it to the wrong comment... and also fixed a missing word.) There is no second step (and especially not a RISC-to-CISC one). CISC instructions are decoded directly into microinstructions that are RISC-like.
Also, x86 is pretty simple compared with other, long-dead CISC sets. (It helps that a lot of mostly unused instructions, like the BCD ones, could be moved to microcode.)
Any modern architecture is SUPER-CISC by 1980s standards.
Any difference in the complexity of the decoders between, say, x64 and ARM V8.2 is trivial (completely insignificant) compared to everything else the chips have to do.
I think the situation, especially financially, is quite bad for their foundry business, so they need these extreme measures to stay alive. Pat might be a good engineer and CEO, but money is money, and the situation, with Intel's fabs being more or less stuck for five years, is desperate. If it were any other fab, it would have been shut down by now.
Or anyone’s IP. Steve Jobs literally brought up this exact Big Problem (tm) when Intel was begging to make the iPhone chips.
2011:
>But Jobs implies in the biography that Intel wasn't keeping up with the times. He explains why Apple didn't select Intel chips for the iPhone.
>"There were two reasons we didn't go with them. One was that they [the company] are just really slow. They're like a steamship, not very flexible. We're used to going pretty fast. Second is that we just didn't want to teach them everything, which they could go and sell to our competitors," Jobs is quoted as saying.
>On one level that last statement is rather remarkable. Jobs, of course, was saying that Apple would have to teach the world's premier chipmaker how to design better chips.
>Reuters is reporting that Intel "wouldn't blink" if given the chance to make custom chips for Apple's devices, like the iPhone and iPad. At an investor event in London on Thursday, Chief Financial Officer Stacy Smith told journalists that "there are certain customers that would be interesting to us and certain customers that wouldn't." Apple, unsurprisingly, is one of the first type of customer.
>Currently the A4 and A5 chips found in iPhones and iPads are manufactured by Samsung, but reports have hinted that Apple may be moving away from Samsung and jumping to Taiwan Semiconductor Manufacturing Co. Ltd (TSMC) on a foundry basis. Given that Apple's A5 chip makes up a large portion of the $7.8 billion components contract Apple has with Samsung, it's no wonder that Intel would want to be a foundry chip maker for the Cupertino company.
>As Smith told reporters, "If Apple or Sony came to us and said 'I want to do a product that involves your IA (Intel architecture) core and put some of my IP around it', I wouldn't blink. That would be fantastic business for us."
In fairness, Jobs claimed everyone was stealing Apple's technology, even when said technology was out years before Apple did it. He was convinced the Commodore PET was a ripoff of the Apple II because they were both 6502-based computers.
Shugart sold Apple faulty floppy controllers because it didn't want Apple to succeed with its revolutionary plan to do floppy control in software, cutting the large expense of a physical controller from the drives. Apple was well-acquainted with unreliable partners early on.
One journalist suggested that the only reason Apple survived as a company, and prospered, is because it was offering much cheaper floppy drives. Remember, at that time a lot of people tried to be satisfied with cassette tape (which is pure garbage) because floppy drives were so very expensive. Apparently, the process of developing a software controller for a floppy was extremely challenging, so the brilliance of Wozniak (at least according to an account I read) was the reason the world got a software-driven floppy drive at that time, rather than later.
And, at that time, the market for computers was saturated with competing incompatible standards — making survival in the low-end business sector (especially once IBM entered the personal computer market) unlikely. The Apple II had some nice qualities but it wasn't so great, particularly for the cost of one. The cheapness of the floppy drive, though, brought a lot of interest Apple's way. All of the slots in the II also differentiated it from cheaper competitors like the C-64. Machines with lots of slots ('open platforms') weren't new but many in business tried to sell closed systems for better planned obsolescence. (That trend included Apple, with the II-to-III transition and with the Lisa-to-Mac one.)
Despite the continued success of the II line, Apple then proceeded to botch things extremely badly by ignoring the Apple II's development in favor of the Apple III, a product that should never have happened. It didn't take much intelligence to realize that a new incompatible 8-bit platform was not going to be enough, by that time, to establish a successful standard. (Shipping a machine that overheated, causing it to pop its RAM chips out of their sockets, and which had a broken clock, also didn't help.)
The Motorola 68000's design, after all, was finished in '79, I think; that's a 24/32-bit CPU. Apple then proceeded to botch the replacement of the III by making the Lisa too slow to impress ordinary people (and giving it bad floppy drives). Ordinary people associated a price tag mainly with perceived speed, not productivity or operational sophistication. Like the III and the II before it, it was also overpriced. Finally, it was created before the Japanese managed to fix RAM prices by taking control of the market. That sent RAM prices sky-high, making the Lisa's 1 MB standard untenable, as that much RAM had cost Apple $2,500 in parts even before the price fixing began.
The 68000 could be purchased in a 10 MHz grade when the Lisa launched, but Apple put 5 MHz chips into it, and slowed the machine further by refusing to add a GPU (which could have helped a lot with its very sluggish scrolling), coding too much of it in Pascal, and adding a sluggish kludge to get protected memory, since Motorola hadn't managed to get that working properly. Early 68000s also contained bugs that prevented them from supporting virtual memory.
The Lisa was a tremendously good system in some ways but was fatally flawed in a few. It was too slow, too expensive, too incompatible with existing standards, and stupidly orphaned (in favor of the toy-quality 128K Mac) rather than improved upon.
As for your claim about the PET, do you have a source?
Anyone who would trust Intel over TSMC is, at absolute best, a complete and total rube. Any advanced technology company that is more concerned with petty nationalism than with the quality of its products and partnerships is a hopelessly pathetic and anachronistic joke, fully deserving of the inevitable grave its competitors will crush it into.
I would think it would be legally much easier for a US company to impose penalties on another US company that violated an NDA; enforcing one country's laws on a company based in another country is not such an easy process. Of course, if a company violates IP rights, it may find it harder to find customers willing to give it access to IP in the future, although proving who leaked the IP might be difficult, so the damage to the leaker's business reputation may not be fatal to the company. The optimal arrangement for engineering productivity would be to share all the details with a partner company under an NDA, knowing that if your IP were stolen you had significant power to seek legal remedies. Taking the example of Apple IP being manufactured by TSMC or Samsung versus being manufactured by Intel, I would imagine the consequences in the courts might be much worse for Intel than for TSMC or Samsung, simply because Apple and Intel are both US-based companies. Any international IP lawyers reading this who might comment from the legal professional's viewpoint? I'm a software engineer, so the accuracy of my legal understanding is limited.
That's all fine and great until you realize that a fab being owned by a foreign company can cause problems with national security and can upset the balance of power between countries.
You know, like TSMC being in an island nation that China wants to gobble up, while much of the world seems indifferent to this.
"so you trust taiwanese company more that your own country company"
I'd say that one needs to be sceptical of any company, though some *are* of higher integrity than others. Intel, though, will likely score particularly low on that scale.
I think it will. Not for flagship products, though. I can see car manufacturers going to them; they are, apparently, in dire need of more production. A shortage of their own making, of course, but it also gives them a chance to diversify their production.
Intel is aware that the dominance of x86 is coming to an end, and ARM is spreading like wildfire. Apple, Microsoft, Nvidia, possibly soon AMD, and many others are jumping to the ARM side, as they do not want to stay locked in Intel's x86 hand. Intel is trying to stem this escape with their new foundry model ("take our x86, manufacture at our fabs"), but it is too late. TSMC and ARM will help crush and dethrone Intel, and Intel can't do anything to stop that.
So unless I'm misreading, none of this sounds like customers can take physical IP for an Intel x86 core to a third-party foundry - just that Intel is likely to do so itself. Is there something I'm missing that says otherwise?
these days the 'compute' area of the cpu is, what, 10% of the real estate? perhaps Dr. Cutress has some exact figures, but I'd wager it's been at least 2 decades since increasing transistor budgets were used for anything other than bringing off-cpu functions onto the chip. I guess that's progress of a sort, but also an implementation of monopoly. will ARM go down some different road? only The Shadow knows.
Console vendors pretty much always go with whatever CPU vendor is offering the best deal that comes remotely close to meeting their requirements. That's why everyone went with IBM in the Wii/360/PS3 generation and the last two have been AMD for MS & Sony and NVidia for Nintendo. Though there were a lot of rumors around MS going with Intel this generation early in the XSX development cycle, and that might well have happened if Intel 10nm hadn't been delayed so much.
I think by "compute cores" they mean their GPU stuff. That's where Intel manufacturing has had far bigger troubles, and it would mean not giving away their x86 core designs. Also, in the consumer space, about half of the CPU die is dedicated to the GPU.
Is this a joke? Last time around, Itanium failed while Gelsinger was at Intel. This guy is really not an Andy Grove who learns from mistakes; he is just actively trying to net more cash. Again.
x86 IP licensing, LMAO. AMD and Intel are able to keep a stranglehold on it because they are the only ones in the game. What makes Intel think that licensing x86 to other fabless corporations will make them big? And on top of that, this is not an ARM-type IP model, where the IP owner has no business in the core products except licensing. So this creates friction no matter how you look at it. How can a company produce and sell the same stuff in its core business and also license out that same stuff? This just seems like desperation, hoping someone comes to Intel and buys their IP plus fab capacity to build chips (think Surface garbage, or a console with the new pathetic big.LITTLE bs).
IFS, what is this bullshit. Intel 10nm still has to prove itself. With GN's 11700K review it's clear that RKL is a DOA product, and their new Cypress Cove bs is all smoke; it's worse than Skylake, damn it. The 10700K beats it thanks to clock boost, and AT also showed there's a clock regression in scaling. So who is even trusting Intel's x86 uArch AND their fabs? 14nm++ is a feat, but it's old. Impossible to compete now with TSMC N7, let alone N7 with EUV; N5 with EUV wreaks havoc.
Next-generation revenue, lmao. Intel needs to get its head straight, which means getting Xeon on track versus EPYC and bringing competition back to HEDT and mainstream. This whole thing smells fake and random.
They are done. Once Zen 4 drops, Intel is going to cough blood. And their Xe GPU is even more of a shame. We saw how EMIB worked out with the AMD Vega-based APU. Now Foveros, as if they are first.
Just watch: AMD, with Xilinx FPGAs plus RDNA and Zen, will destroy this pathetic corp. A shame, really, how far they have fallen. I expected a 7nm design breakthrough, not some merchant business.
They'll be fine as long as AMD can't produce more. They have some important products in 2021 to keep them afloat in some markets: Ice Lake-SP, Tiger Lake-H and Alder Lake-S. With the latter they'll retake leadership in I/O (PCI-E 5.0 and DDR5). By the time AMD becomes a threat in production capacity, Intel might be back on track technologically.
>With the latter they'll retake leadership in I/O (PCI-E 5.0 and DDR5).
Those chips will be capable of it... it's not said when they will actually release with that spec... and what good is having I/O ahead of AMD... knowing that the cores can't follow...
Tiger Lake-H will be competitive with Zen 3 in performance and offers more features. Alder Lake will probably have an advantage over Zen 3+ in every metric in mobile, as the small cores might be a game changer. On desktop AMD might have an advantage with all big cores, but desktop is a tiny share. It all comes down to servers and Sapphire Rapids...
Anyway, it doesn't really matter in 2021/22, as the market is different and you can sell anything you can get shipped. So all that matters is capacity for the next 12-18 months.
That's why [besides the time it takes to implement] they focus resources on 2023 and onwards...
Looks to me like Intel will be producing Sapphire Rapids before Zen 4 chips come out. They are already sampling, and the CEO gave a Q1 2022 production-ramp schedule in the most recent presentation.
The CEO also stated Alder Lake desktop chips will come before the mobile Alder Lake chips... in 2H 2021. These add DDR5 and PCIe 5.0, according to recent leaks. So it looks like Intel will move ahead in desktop chip technology in mid-2021.
This quite frankly sounds like a premature April Fool's joke, particularly the part of core IP licensing for customers to fab x86 parts *not* at Intel. Almost as surprising is that Intel are willing to license x86 IP at all. So they are choosing new revenue streams over control? Wait, you're sure we are talking about Intel right?
This is even more surprising than announcing that they are going to spin off their fabs ala AMD. This might be the next step, but even AMD did not license their cores, not even the old ones. I strongly doubt they are going to license anything newer than Skylake and its multiple variants; not even Comet Lake, I would guess Coffee Lake tops. Perhaps in 2 - 3 years they will licence Sunny Cove. As for iGPUs they clearly are not going to licence Xe, not even its predecessor Gen11. Crappy Gen9.5 tops. I guess we'll wait and see...
Looks like Intel wants to get the most out of its 3D-fabrication head start. Perhaps selling x86 tiles for a customer-specified system-on-package design is their way of fighting off the ARM homebrew projects.
That doesn't give out their x86 processing secrets ... just a new way to sell cores.
I read these statements also as a way to address the obvious question about investing billions of dollars now in fabs that won't be ready for years. Having a much more flexible foundry strategy helps assuage the fears that all that capacity will hit just when demand is sagging. By planning for fabrication flexibility already at the chip design and foundry construction stages, Intel is less hemmed in. And, with foundries, the most expensive ones are those that are mothballed. So, fabbing some chips for IBM or fabless RISC-V or even ARM designs helps spread the risk. Being prepared to have TSMC build some CPUs for them means Intel can sell CPUs even if their manufacturing isn't ready.
This might end up creating opportunities for much more interesting ISAs, like Agner Fog's ForwardCom, the Mill CPU, or Rex Computing.
RISC-V is pretty boring, an academic project with arbitrary academic decision-making. It's not future-focused enough, and without doing something about the programming language problem, a new conservative RISC ISA doesn't accomplish much.
There is that crazy performance they allegedly hit last year with a 1 watt core going 5.0 GHz or something, but I haven't seen any follow-up or explanation. An ISA doesn't just give you that kind of performance by itself, especially not a conventional ISA, so I'm not sure what that core was.
Also, Itanium. Intel should dust it off or something like it. It was good, but they sat around and watched people struggling to build compilers for it. You have to build the compiler with the chip – you can't just introduce the chip and walk away.
I've thought about scenarios where we might want to license Intel's 14nm node, presumably a multi-plus evolved, mature version of it like whatever they're using for Rocket/Comet/Coffee Lake.
That would be interesting, kind of like GlobalFoundries licensing Samsung's 14nm node. Intel's should be the best 14nm node out there, probably better than the "12nm" branded nodes from TSMC and GF.
What about their 10nm? Do you all think it will be a long node, something they could offer as a foundry node? Assume again a multi-plus iteration, the Enhanced SuperFin 10++++ or whatever that they'll use for Sapphire Rapids. What's their cost like compared to TSMC's 7nm? Do they disclose the cost in their corporate reporting as a public company? If TSMC 7nm is supposed to be more expensive than Samsung's 7nm, I wonder where Intel is. It seems like they struggled a lot with 10nm, which might translate into high costs, poor yields, etc.
Do you think 14nm and 10nm will ever be cheap? Well, cheaper? Is there supposed to be a significant drop in the cost of entry at these nodes in the coming years? I wonder if like in 2025 14nm will be super cheap everywhere, at why-not prices that you just use for new SoCs and ASICs by default, instead of 40 or 28, or whether it will still be a major cost barrier.
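To make the cost question concrete, here's the usual back-of-envelope sketch: a classic Poisson defect-yield model plus cost per good die. Every number below (die size, defect density, wafer cost, dies per wafer) is an illustrative assumption, not a disclosed Intel or TSMC figure — neither company publishes per-node wafer costs in its corporate reporting.

```python
import math

def yield_rate(die_area_mm2, defect_density_per_cm2):
    # Classic Poisson defect-yield model: Y = exp(-A * D0),
    # with die area A in cm^2 and defect density D0 in defects/cm^2.
    return math.exp(-(die_area_mm2 / 100.0) * defect_density_per_cm2)

def cost_per_good_die(wafer_cost_usd, gross_dies, y):
    # Wafer cost spread over only the dies that actually work.
    return wafer_cost_usd / (gross_dies * y)

y = yield_rate(100.0, 0.2)                           # 100 mm^2 die, D0 = 0.2/cm^2
print(round(y, 2))                                   # ~0.82
print(round(cost_per_good_die(9000.0, 600, y), 2))   # hypothetical $9000 wafer
```

The point of the sketch: a node that "struggled" (high D0) raises cost per good die even if the wafer price is identical, which is exactly why yield maturity matters as much as the sticker price of the node.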
Intel CEO stated they shipped about 30 million Tiger Lake 10sf chips. While they struggled in 2019 and 2020 with pre 10sf, their reports on TGL yield improvements have been pretty positive.
"In a twist to the norm, Intel is now set to dissolve those walls keeping its x86 cores it itself." "it itself." is incorrect English. IDK what you intended to say here.
It’s official. Intel is Now apples B**CH. No lube, no soap. Microsoft is watching the sordid affair from the cheap seats munching popcorn after being reemed by the Solarwinds and Exchange hacks. No wonder Billy “Boy” Gates bailed early to count his GMO vaccine money.
It’s only been 4 months since M1 Macs, and Greaselinger is on his knees offering to be their foundry. Who was the Intel exec who turned down Steve Jobs for the iPhone chip contract? Surely he must be at the bottom of San Francisco Bay wearing concrete boots.
Intel hasn't moved below the 14nm process because they don't have a market for it. Apple has made their own market. Teach that in Harverd biznes skool.
I presume you're American, since Americans have such a penchant for attacking legitimate sexuality when they're trying to belittle something/someone. Not a good look.
What in GOD's name does gay sex have to do with IT? These forums seriously need moderation. I guess you just cannot expect children to act like adults. Yes, I'm American. Go ahead and take your best shot; and, just so you know, your penchant for gay sex does not make it legitimate. Your denigrating presumptions about Americans is asinine. It does, however, serve to illustrate your low IQ. Have a nice day, mate.
The problem for Intel is the same as it was the last time they offered foundry services, i.e. their mainstream processes are entirely focused on making very high clock speed CPUs. As such, they are not ideal (from a cost/complexity standpoint) for pretty much anything else. They're too specialized, basically.
Ian Cutress - Wednesday, March 24, 2021 - link
Intel has stated that they don't see equipment as a limiting factor, even with the long EUV machine lead times.

fallaha56 - Wednesday, March 24, 2021 - link
Well, they would say that, but ASML's share price spike says otherwise. Many promises here, but in the meantime let's just see how far behind (or not) Ice Lake-SP is versus 3rd-gen Epyc.
Of course, TSMC now has a clear motivation not to be especially supportive of Intel as well.
Klimax - Wednesday, March 24, 2021 - link
Why? ARM and RISC-V are also decoded into microinstructions. You would just add an extra step to the pipeline for no benefit. If you want both of them on silicon and have an x86 licence, just add a decoder or two for the common case, plus microcode for the rest.
Klimax - Wednesday, March 24, 2021 - link
Crap, posted under the wrong comment...

zmatt - Wednesday, March 24, 2021 - link
Funny you claim x86 is a garbage ISA stuck in the 80s when your reasoning is just as, if not more, stuck in the 80s. As others have pointed out, your complaints haven't been relevant or factual for a very long time. Since the Pentium Pro, actually.
Evo01 - Tuesday, March 23, 2021 - link
So how soon do we see AMD silicon using an Intel fab?

TristanSDX - Tuesday, March 23, 2021 - link
Probably never.

DigitalFreak - Tuesday, March 23, 2021 - link
That would be foolish. I wouldn't put it past Intel to steal AMD's IP if given the chance.
>But Jobs implies in the biography that Intel wasn't keeping up with the times. He explains why Apple didn't select Intel chips for the iPhone.
>"There were two reasons we didn't go with them. One was that they [the company] are just really slow. They're like a steamship, not very flexible. We're used to going pretty fast. Second is that we just didn't want to teach them everything, which they could go and sell to our competitors," Jobs is quoted as saying.
>On one level that last statement is rather remarkable. Jobs, of course, was saying that Apple would have to teach the world's premier chipmaker how to design better chips.“
https://www.cnet.com/news/steve-jobs-knocked-intel...
>Reuters is reporting that Intel "wouldn't blink" if given the chance to make custom chips for Apple's devices, like the iPhone and iPad. At an investor event in London on Thursday, Chief Financial Officer Stacy Smith told journalists that "there are certain customers that would be interesting to us and certain customers that wouldn't." Apple, unsurprisingly, is one of the first type of customer.
>Currently the A4 and A5 chips found in iPhones and iPads are manufactured by Samsung, but reports have hinted that Apple may be moving away from Samsung and jumping to Taiwan Semiconductor Manufacturing Co. Ltd (TSMC) on a foundry basis. Given that Apple's A5 chip makes up a large portion of the $7.8 billion components contract Apple has with Samsung, it's no wonder that Intel would want to be a foundry chip maker for the Cupertino company.
>As Smith told reporters, "If Apple or Sony came to us and said 'I want to do a product that involves your IA (Intel architecture) core and put some of my IP around it', I wouldn't blink. That would be fantastic business for us."
https://www.engadget.com/2011-05-26-intel-hints-at...
arashi - Wednesday, March 24, 2021 - link
Coming from Apple, that same company that sent top-secret NDA Qualcomm documents to Intel?
Lord of the Bored - Wednesday, March 24, 2021 - link
In fairness, Jobs claimed everyone was stealing Apple's technology, even when said technology was out years before Apple did it. He was convinced the Commodore PET was a ripoff of the Apple II because they were both 6502-based computers.
Oxford Guy - Saturday, March 27, 2021 - link
Shugart sold Apple faulty floppy controllers because it didn't want Apple to succeed with its revolutionary plan to do floppy control in software, cutting the large expense of a physical controller from the drives. Apple was well-acquainted with unreliable partners early on.
One journalist suggested that the only reason Apple survived as a company, and prospered, is because it was offering much cheaper floppy drives. Remember, at that time a lot of people tried to be satisfied with cassette tape (which is pure garbage) because floppy drives were so very expensive. Apparently, the process of developing a software controller for a floppy was extremely challenging, so the brilliance of Wozniak (at least according to an account I read) was the reason the world got a software-driven floppy drive at that time, rather than later.
And, at that time, the market for computers was saturated with competing incompatible standards — making survival in the low-end business sector (especially once IBM entered the personal computer market) unlikely. The Apple II had some nice qualities but it wasn't so great, particularly for the cost of one. The cheapness of the floppy drive, though, brought a lot of interest Apple's way. All of the slots in the II also differentiated it from cheaper competitors like the C-64. Machines with lots of slots ('open platforms') weren't new but many in business tried to sell closed systems for better planned obsolescence. (That trend included Apple, with the II-to-III transition and with the Lisa-to-Mac one.)
Despite the continued success of the II line, Apple then proceeded to botch things extremely badly by ignoring the Apple II's development in favor of the Apple III — a product that should never have happened. It didn't take much intelligence to realize that a new incompatible 8-bit platform was not going to be enough by that time to establish a successful standard. (Shipping a machine that overheated, causing it to pop out its RAM chips, and which had a broken clock also didn't help.)
The Motorola 68000's design, after all, was finished in '79, I think. That's a 24/32-bit CPU. Apple then proceeded to botch the replacement of the III by making the Lisa too slow to impress ordinary people (and giving it bad floppy drives). Ordinary people associated price tag mainly with perceived speed — not productivity and/or operational sophistication. Like the III and the II before it, it was also overpriced. Finally, it was created before the Japanese managed to price-fix RAM by taking control of the market. That sent RAM prices sky high, making the Lisa's 1 MB standard untenable, as that much RAM cost Apple $2,500 in parts even before the price fixing began.
The 68000 could be purchased in 10 MHz form when it launched but Apple put 5 MHz chips into the Lisa and slowed the machine further by refusing to add a GPU (which could have helped a lot with its very sluggish scrolling), coding too much in Pascal, and adding a sluggish kludge to get protected memory, since Motorola hadn't managed to get that working properly. The 68000 contained bugs that prevented it from working with virtual memory as well.
The Lisa was a tremendously good system in some ways but was fatally flawed in a few. It was too slow, too expensive, too incompatible with existing standards, and stupidly orphaned (in favor of the toy-quality 128K Mac) rather than improved upon.
As for your claim about the PET, do you have a source?
heickelrrx - Tuesday, March 23, 2021 - link
So you trust a Taiwanese company more than your own country's company? Great.
haghands - Wednesday, March 24, 2021 - link
Anyone who would trust Intel over TSMC is, at absolute best, a complete and total rube. Any advanced technology company that's more concerned with petty nationalism than the quality of its products and partnerships is a hopelessly pathetic and anachronistic joke, fully deserving of the inevitable grave its competitors will crush it into.
fallaha56 - Wednesday, March 24, 2021 - link
Have to agree. Meantime, expect to see TSMC's interest in supporting Intel wane fast.
tomatotree - Wednesday, March 24, 2021 - link
Agreed, I feel the same way about the US government.
FunBunny2 - Wednesday, March 24, 2021 - link
"more concerned with petty nationalism"
What would MAGA say?
jcbottorff - Wednesday, March 24, 2021 - link
I would think legally it would be much easier for a US company to impose penalties on another US company if it violated an NDA. Enforcing one country's laws on a company based in another country is not such an easy process. Of course, if a company violates IP rights, it may find it more difficult to find customers willing to give it access to IP in the future; although proving who leaked the IP might be difficult, so the damage to the leaker's business reputation may not be fatal to the company. The engineering-productivity-optimal thing would be to share all the details with a partner company, under an NDA, and know that if your IP was stolen you had significant power for legal remedies. Taking the example of Apple IP being manufactured by TSMC or Samsung vs. being manufactured by Intel, I would imagine the consequences in the courts might be much worse for Intel than for TSMC or Samsung, simply because Apple and Intel are both US-based companies. Any international IP lawyers reading this who might comment from the legal professional's viewpoint? I'm a software engineer, so the accuracy of my legal understanding is limited.
TheinsanegamerN - Saturday, March 27, 2021 - link
That's all fine and great until you realize that a fab being owned by a foreign company can cause problems with national security and can upset the balance of power between countries. You know, like TSMC being in an island nation that China wants to gobble up, while much of the world seems indifferent to this.
peevee - Tuesday, March 30, 2021 - link
Taiwan is going to be occupied by the PRC soon enough, and any responsible Western administration would introduce major sanctions against them.
GeoffreyA - Wednesday, March 24, 2021 - link
"so you trust taiwanese company more that your own country company"
I'd say that one needs to be sceptical of any company, though some *are* of higher integrity than others. Intel, though, will likely score particularly low on that scale.
Klimax - Wednesday, March 24, 2021 - link
Because Intel just loves being sued. One of the more moronic assertions about Intel out of all the BS claims floating out there.
shabby - Wednesday, March 24, 2021 - link
Don't be silly, backporting Zen to 14nm is impossible!
analogandy - Wednesday, March 31, 2021 - link
Intel has had empty fabs for years. Publicly stating they’re entering head first into the foundry bizness won’t make a difference.
Timoo - Monday, April 5, 2021 - link
I think it will. Not for flagship production, though. I can see car manufacturers going to them. They are -apparently- in dire need of more production. A shortage of their own making, of course, but it also gives them a chance to diversify their production.
TristanSDX - Tuesday, March 23, 2021 - link
Intel is aware that the dominance of x86 is coming to an end, and ARM is spreading like wildfire. Apple, Microsoft, NV, possibly soon AMD, and many others are jumping to the ARM side, as they do not want to stay locked in Intel's / x86's hand. Intel is trying to stop this escape with their new foundry model ("take our x86, manufacture at our fabs"), but it is too late. TSMC and ARM will help crush and dethrone Intel, and Intel can't do anything to stop that.
TheinsanegamerN - Saturday, March 27, 2021 - link
2013 called and wants its prediction back.SarahKerrigan - Tuesday, March 23, 2021 - link
So unless I'm misreading, none of this sounds like customers can use physical IP for an Intel x86 core on a third-party foundry - just that Intel is likely to do so itself. Is there something I'm missing that says otherwise?
Ian Cutress - Wednesday, March 24, 2021 - link
We won't know exactly until they disclose the exact licensing model, but you are likely correct.spaceship9876 - Tuesday, March 23, 2021 - link
If only intel did this when the PS4 was being designed, the cpu cores wouldn't have been slow.FunBunny2 - Wednesday, March 24, 2021 - link
"cpu cores wouldn't have been slow."
These days the 'compute' area of the CPU is, what, 10% of the real estate? Perhaps Dr. Cutress has some exact figures, but I'd wager it's been at least 2 decades since increasing transistor budgets were used for anything other than bringing off-CPU functions onto the chip. I guess that's progress of a sort, but also an implementation of monopoly. Will ARM go down some different road? Only The Shadow knows.
drothgery - Friday, March 26, 2021 - link
Console vendors pretty much always go with whatever CPU vendor is offering the best deal that comes remotely close to meeting their requirements. That's why everyone went with IBM in the Wii/360/PS3 generation and the last two have been AMD for MS & Sony and NVidia for Nintendo. Though there were a lot of rumors around MS going with Intel this generation early in the XSX development cycle, and that might well have happened if Intel 10nm hadn't been delayed so much.bernstein - Tuesday, March 23, 2021 - link
I think by "compute cores" they mean their GPU stuff. It's where Intel manufacturing has had far bigger troubles, and it would mean not giving away their x86 core designs. Also, in the consumer space, about half of the CPU die is dedicated to the GPU.
heickelrrx - Tuesday, March 23, 2021 - link
The goal is not only a business motive but also political. This means there will be a new option for cutting-edge process nodes from the US, so that not everything depends on Asian foundries.
Silver5urfer - Wednesday, March 24, 2021 - link
Is this a joke? Last time, Itanium failed when Gelsinger was at Intel. This guy is really not Andy Grove, learning from mistakes; he is actively trying to net more cash. Again.
x86 IP licensing, LMAO. AMD and Intel are able to hold onto it because they are the only ones in the game. What makes Intel think that licensing x86 to other fabless corporations will make them big? And on top of that, this is not ARM-type IP, where the IP owner has no business in the core products except licensing. So this creates friction no matter how you see it. How can a company produce and create the same stuff in its core business and also sell the same shit? This just seems like desperation, hoping someone comes to Intel and buys their IP plus fab capacity to build chips (think Surface garbage, or a console with the new pathetic big.LITTLE BS).
IFS, what is this bullshit? Intel 10nm still has to prove itself. With GN's 11700K review, it's clear that RKL is a DOA product. And their new Cypress Cove BS is all smoke; it's worse than Skylake, damn it. The 10700K beats it due to clock boost, as AT also showed there's a clock regression in scaling. So who is even trusting Intel's x86 uArch AND their fabs? 14nm++ is a feat, but it's old. Impossible to compete now with TSMC 7N, forget 7N with EUV. 5N EUV is havoc.
Next-generation revenue, LMAO. Intel fucking needs to get their head straight, which means getting Xeon on track vs. EPYC and bringing competition back to HEDT and mainstream. This whole thing smells so fake and random.
They are done. Once Zen 4 drops, Intel is going to cough blood. And their Xe GPU is even more of a shame. We saw how EMIB worked out: the AMD Vega-based APU. Now Foveros, as if they are first.
Just watch: AMD with Xilinx FPGAs plus RDNA and Zen will destroy this pathetic corp. A shame, really, how far they have fallen. I expected a 7nm design breakthrough, not some merchant business.
ZoZo - Wednesday, March 24, 2021 - link
They'll be fine as long as AMD can't produce more. They have some important products in 2021 to keep them afloat in some markets: Ice Lake-SP, Tiger Lake-H and Alder Lake-S. With the latter they'll retake leadership in I/O (PCI-E 5.0 and DDR5).
By the time AMD becomes a threat in production capacity, Intel might be back on track technologically.
duploxxx - Wednesday, March 24, 2021 - link
"With the latter they'll retake leadership in I/O (PCI-E 5.0 and DDR5)"
Those chips will be capable of it... it's not said when they will release with that spec... and what good is having I/O on par with AMD, knowing that the core can't follow?
Matthias B V - Wednesday, March 24, 2021 - link
TigerLake-H will be competitive with Zen 3 in performance and offers more features. AlderLake will probably be an advantage in every metric over Zen 3+ in mobile, as the small cores might be a game changer. On desktop, AMD might have an advantage with all big cores, but desktop is a tiny share. It all comes down to servers and Sapphire Rapids...
Anyway, it doesn't really matter in 2021/22, as the market is different and you can sell anything you can get shipped. So all that matters is capacity for the next 12-18 months.
That's why [besides the time it takes to implement] they focus resources on 2023 and onwards...
ET - Thursday, March 25, 2021 - link
Someone's been drinking.
JayNor - Friday, March 26, 2021 - link
Looks to me like Intel will be producing Sapphire Rapids before Zen 4 chips come out. They are already sampling, and the CEO gave a Q1 2022 production ramp schedule in the most recent presentation. The CEO also stated Alder Lake desktop chips will come before the mobile Alder Lake chips... in 2H 2021. These add DDR5 and PCIE5 according to recent leaks. So it looks like Intel will move ahead in desktop chip technology in mid-2021.
wut - Tuesday, March 30, 2021 - link
You're typing like a teen. What a cringe to read...
Santoval - Wednesday, March 24, 2021 - link
This quite frankly sounds like a premature April Fool's joke, particularly the part about core IP licensing for customers to fab x86 parts *not* at Intel. Almost as surprising is that Intel is willing to license x86 IP at all. So they are choosing new revenue streams over control? Wait, are we sure we are talking about Intel?
This is even more surprising than announcing that they are going to spin off their fabs à la AMD. This might be the next step, but even AMD did not license their cores, not even the old ones. I strongly doubt they are going to license anything newer than Skylake and its multiple variants; not even Comet Lake, I would guess Coffee Lake tops. Perhaps in 2-3 years they will license Sunny Cove. As for iGPUs, they clearly are not going to license Xe, not even its predecessor Gen11. Crappy Gen9.5 tops. I guess we'll wait and see...
shadowjk - Wednesday, March 24, 2021 - link
Does the Chinese Zen count as "licensed AMD cores"?
Matthias B V - Wednesday, March 24, 2021 - link
AMD did not license cores? Well, THATIC might be only partially licensed. But what about the Sony PS, Microsoft Xbox and Samsung's future SoC?
They, and especially Sony, design their APUs using AMD's IP with more or less guidance from AMD.
JayNor - Friday, March 26, 2021 - link
Looks like Intel wants to get the most out of its 3D fabrication head start. Perhaps selling x86 tiles for a customer-specified system-on-a-package design is their way of fighting off the ARM homebrew projects. That doesn't give out their x86 processing secrets... just a new way to sell cores.
eastcoast_pete - Wednesday, March 24, 2021 - link
I read these statements also as a way to address the obvious question about investing billions of dollars now in fabs that won't be ready for years. Having a much more flexible foundry strategy helps assuage the fears that all that capacity will hit just when demand is sagging. By planning for fabrication flexibility already at the chip design and foundry construction stages, Intel is less hemmed in. And, with foundries, the most expensive ones are those that are mothballed. So, fabbing some chips for IBM or fabless RISC-V or even ARM designs helps spread the risk. Being prepared to have TSMC build some CPUs for them means Intel can sell CPUs even if their manufacturing isn't ready.
JoeDuarte - Wednesday, March 24, 2021 - link
This might end up creating opportunities for much more interesting ISAs, like Agner Fog's ForwardCom, the Mill CPU, or Rex Computing.
RISC-V is pretty boring, an academic project with arbitrary academic decision-making. It's not future-focused enough, and without doing something about the programming language problem, a new conservative RISC ISA doesn't accomplish much.
There is that crazy performance they allegedly hit last year with a 1 watt core going 5.0 GHz or something, but I haven't seen any follow-up or explanation. An ISA doesn't just give you that kind of performance by itself, especially not a conventional ISA, so I'm not sure what that core was.
Also, Itanium. Intel should dust it off or something like it. It was good, but they sat around and watched people struggling to build compilers for it. You have to build the compiler with the chip – you can't just introduce the chip and walk away.
JoeDuarte - Wednesday, March 24, 2021 - link
I've thought about scenarios where we might want to license Intel's 14nm node, presumably a multi-plus evolved, mature version of it like whatever they're using for Rocket/Comet/Coffee Lake. That would be interesting, kind of like GlobalFoundries licensing Samsung's 14nm node. Intel's should be the best 14nm node out there, probably better than the "12nm"-branded nodes from TSMC and GF.
What about their 10nm? Do you all think it will be a long node, something they could offer as a foundry node? Assume again a multi-plus iteration, the Enhanced SuperFin 10++++ or whatever that they'll use for Sapphire Rapids. What's their cost like compared to TSMC's 7nm? Do they disclose the cost in their corporate reporting as a public company? If TSMC 7nm is supposed to be more expensive than Samsung's 7nm, I wonder where Intel is. It seems like they struggled a lot with 10nm, which might translate into high costs, poor yields, etc.
Do you think 14nm and 10nm will ever be cheap? Well, cheaper? Is there supposed to be a significant drop in the cost of entry at these nodes in the coming years? I wonder whether, by like 2025, 14nm will be super cheap everywhere, at why-not prices that you just use for new SoCs and ASICs by default instead of 40 or 28, or whether it will still be a major cost barrier.
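For a rough sense of the economics in question, here's a back-of-the-envelope dies-per-wafer and cost-per-die sketch. The wafer prices are purely illustrative assumptions (foundries don't disclose them), and the formula is the standard edge-loss approximation, ignoring yield:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Classic dies-per-wafer approximation: gross wafer area divided by die
    area, minus an edge-loss term for partial dies along the circumference."""
    radius = wafer_diameter_mm / 2
    return int(
        math.pi * radius**2 / die_area_mm2
        - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    )

# Assumed wafer prices in USD, for illustration only -- not official figures.
wafer_cost = {"mature 14nm": 4000, "10nm-class": 6000, "leading 7nm": 9000}

die_area = 150.0  # mm^2, roughly a mainstream client CPU die
for node, cost in wafer_cost.items():
    dpw = dies_per_wafer(die_area)
    print(f"{node}: ~{dpw} dies/wafer, ~${cost / dpw:.1f} per die (before yield)")
```

At these assumed prices, even a 2x difference in wafer cost only moves a 150 mm² die by a few dollars, which is why mature nodes stay attractive for SoCs and ASICs that don't need the density.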
JayNor - Friday, March 26, 2021 - link
Intel's CEO stated they shipped about 30 million Tiger Lake 10SF chips. While they struggled in 2019 and 2020 with pre-10SF, their reports on TGL yield improvements have been pretty positive.
https://twitter.com/intelnews/status/1374472976649...
ballsystemlord - Thursday, March 25, 2021 - link
Spelling and grammar errors:
"In a twist to the norm, Intel is now set to dissolve those walls keeping its x86 cores it itself."
"it itself." is incorrect English. IDK what you intended to say here.
sandeep_r_89 - Saturday, March 27, 2021 - link
"migrating the world’s semiconductor reliance away from Asia more into the USA and EU"What, you racist white people don't like the fact that Asians are making chips? Don't trust us I suppose?
Oxford Guy - Saturday, March 27, 2021 - link
Pragmatic = no good options, so here is some sugar coating for the least-worst one.
analogandy - Wednesday, March 31, 2021 - link
It’s official. Intel is now Apple’s B**CH.
No lube, no soap.
Microsoft is watching the sordid affair from the cheap seats, munching popcorn, after being reamed by the SolarWinds and Exchange hacks.
No wonder Billy “Boy” Gates bailed early to count his GMO vaccine money.
It’s only been 4 months since the M1 Macs, and Greaselinger is on his knees offering to be their foundry.
Who was the Intel exec who turned down Steve Jobs for the iPhone chip contract?
Surely he must be at the bottom of San Francisco Bay wearing concrete boots.
Intel hasn’t moved below the 14nm process because they don’t have a market for it.
Apple has made their own market. Teach that in Harverd biznes skool.
Oxford Guy - Wednesday, March 31, 2021 - link
I presume you're American, since Americans have such a penchant for attacking legitimate sexuality when they're trying to belittle something/someone. Not a good look.
JHS28677 - Sunday, April 4, 2021 - link
What in GOD's name does gay sex have to do with IT? These forums seriously need moderation. I guess you just cannot expect children to act like adults. Yes, I'm American. Go ahead and take your best shot; and, just so you know, your penchant for gay sex does not make it legitimate. Your denigrating presumptions about Americans are asinine. They do, however, serve to illustrate your low IQ. Have a nice day, mate.
Oxford Guy - Tuesday, April 6, 2021 - link
None of that rebuts the point I made but the flamboyance is noted.Haawser - Wednesday, April 7, 2021 - link
The problem for Intel is the same as it was the last time they offered foundry services, i.e. their mainstream processes are entirely focused on making very high clock speed CPUs. As such, they are not ideal (from a cost/complexity standpoint) for pretty much anything else. They're too specialized, basically.