kpb321 - Monday, December 9, 2019 - link
Yeah, being a 22nm chip, this looks like a play to avoid using capacity-constrained 14nm production for low-end/inexpensive chips. Being stuck on 14nm for so long, and having to start producing much larger chips because AMD is competitive again and pushing core counts up across the board, has really strained their 14nm capacity, so it makes sense not to use it for low-cost, low-margin chips.
nandnandnand - Monday, December 9, 2019 - link
I can't really fault Intel for this when AMD is still making 28nm Excavator chips. But it is another footnote in Intel's 10nm failure.
Jorgp2 - Monday, December 9, 2019 - link
How is this even related to 10nm?
mode_13h - Monday, December 9, 2019 - link
They had planned for all of their mainstream production to be on 10 nm by now. Had that been the case, they'd have had spare 14 nm capacity to make chips to sell into whatever market is buying those Haswell Pentiums.
So, basically, 10 nm delays and problems backlogged everything else in their product lineup.
Korguz - Monday, December 9, 2019 - link
If Intel still didn't have to make anything other than quad-core CPUs for the mainstream, and kept 6 or more cores for the HEDT and server markets, their 14nm production wouldn't be so constrained. But in order to compete with AMD on core count, they HAD to put more cores into their CPUs or they would be even further behind AMD...
Flunk - Monday, December 9, 2019 - link
The compute cores are only half the CPU on an 8-core Coffee Lake part. If Intel offered a version without the giant space-wasting GPU they'd be just fine. Intel is behind in core count purely because they don't produce a real high-end consumer desktop chip and AMD does.
Korguz - Tuesday, December 10, 2019 - link
Regardless... they are still using wafer space they weren't planning on using to make higher-core-count mainstream parts, back when they didn't think they needed to. Someone here did a nice reply to someone else on this, can't remember which article it was for, but it kind of explained Intel's constraints quite nicely.
edzieba - Tuesday, December 10, 2019 - link
" Had that been the case, they'd have had spare 14 nm capacity"Had Intel had 10nm ready to roll out to production scale, then they would have done exactly the same as they had for every past process node: migrate all but one line to the new process and retain one low-volume line for extended-support products. With current demand levels, Intel would have been on 10nm and supply constrained rather than on 14nm and supply constrained.
lightningz71 - Tuesday, December 10, 2019 - link
Even if they did as you say, you aren't accounting for two things:
The first is that 10nm brings a significant density improvement for Intel, meaning that they will be getting FAR more chips per wafer even if they do nothing else to increase the effective transistor count of their processors. They would have almost no demand for their leading-performance 14nm node in their volume production products, giving them plenty of capacity for the remaining demand.
Second, given the improved competitiveness of AMD, Intel would have likely retained more of their 14nm capacity for low- to mid-tier products without sacrificing their ability to meet demand on the high end. Their 14nm process is so well refined that they would have easily been able to produce dirt-cheap quad-core dies with iGPUs to handily beat AMD's mobile products in any market they chose to compete in. Intel's 14nm, in its many variants, still outclasses GlobalFoundries' 14/12nm process, and even GloFo's latest announcement of an improved 12nm product for next year still won't fully compete on anything but cost. AMD's cheap 28nm A-series APUs are really just providing a price ceiling for the absolute bottom of the market. Intel may not have even wanted any part of that market to begin with.
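To put rough numbers on the "more chips per wafer" point: the sketch below uses the standard dies-per-wafer approximation for a 300 mm wafer. The die areas and the ~2x density shrink are purely illustrative assumptions, not actual Intel die sizes.

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Rough die candidates per wafer, ignoring defects and scribe lines."""
    radius = wafer_diameter_mm / 2.0
    # Classic approximation: gross wafer area / die area, minus an edge-loss term.
    return int(math.pi * radius ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Hypothetical example: the same design on 14nm vs. a ~2x-denser 10nm shrink.
for label, area_mm2 in [("14nm, ~180 mm^2 (assumed)", 180.0),
                        ("10nm, ~90 mm^2 (assumed)", 90.0)]:
    print(f"{label}: ~{dies_per_wafer(area_mm2)} dies per 300 mm wafer")
```

Under those assumed sizes the candidate die count roughly doubles, which is the commenter's argument: even before yield, moving volume parts to a denser node would free up a large share of 14nm wafer starts.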
Korguz - Tuesday, December 10, 2019 - link
lightningz71 " The first is that 10nm brings a significant density improvement for Intel, meaning that they will be getting FAR more chips per wafer" doesn't mean squat when they cant make any chips that has more then 4 cores( apparently ) , does it ?you really have NO idea why intels 14nm process is so constrained do you?? its not the mobile space intel is screwed on. its desktop and server.
peevee - Tuesday, December 10, 2019 - link
"They had planned for their mainstream production all to be on 10 nm, by now"With 2 year cadence, it should have been all on 7nm by now.
eek2121 - Monday, December 9, 2019 - link
Intel likely has a ton of spare 22nm capacity, and thanks to 14nm being at capacity and the focus on 10nm, the only direction they can go is larger. Which isn't necessarily a bad thing. Intel is in an incredibly awkward position, more than most people can even begin to fathom. To build a new 14nm fab when 10/7nm production is so close is foolhardy. Yet they are getting completely locked out of the market at this point. Even Dell, a die-hard Intel-only company, is currently validating AMD designs due to Intel 14nm shortages. If Intel cannot execute 10nm or 7nm soon, they are at risk of a severe downturn in CPU sales.
lightningz71 - Tuesday, December 10, 2019 - link
There is no risk of a downturn in sales for Intel. They can currently, and for the near term will continue to be able to, sell EVERY SINGLE CHIP they produce at a premium. The ONLY thing they risk right now is revenue: from being less able to command a price premium at the top of their stack (as seen by some of the price drops on their 10X-series chips) and from lost opportunity sales in the mid to lower end of the market, as the production that would be supplying those processors is still being used by higher-end products.
Longer term, if they choose not to be price competitive, they risk market lock-out, but we're talking several years before that is even worth debating. Intel is capacity constrained and has the ability to control demand AT WILL through pricing. They choose not to for current products and prefer to be where they are. It will take a lot more than a delay in 10/7nm and capacity constraints to break their stranglehold on the OEM market, which is still their bread and butter. What their constraints ARE doing, though, is allowing their OEMs to see the benefit in risking money on developing and validating products based on AMD processors. We saw some of this before with the Opteron server processors when they were competitive, and we saw some in the pre-construction days.
All Intel has to do is focus on their issues, price competitively for a few quarters to years, and once they get production straightened out, they will be able to regain control of the market easily enough. AMD is riding high on one thing: core count. That is a road of diminishing returns. Eventually, you have too many cores to keep busy, especially in the consumer space, and most will just sit there doing nothing 99% of the time. If that buys AMD enough time to get the rest of their processors on par with the Intel cores (with respect to IPC in all workloads), then AMD will be able to stay in the game. Given their progress across Zen, Zen+ and Zen 2, it looks like AMD may just be able to pull that off. However, don't expect Intel not to continue improving their cores as well. There's a lot that Intel hasn't produced yet that they have had in development for a long time, simply because they have had their core development married to their process technology for so long.
The next few years will be quite interesting in the market.
Korguz - Tuesday, December 10, 2019 - link
lightningz71 " They can currently, and for the near term can continue to sell EVERY SINGLE CHIP they produce at a premium " to bad that cant make very many because they were NOT expecting to have to make more then 4 cores for the mainstream market, and only make 6+ cores for the HEDT and server markets. and if you call selling their top desktop chip for 1k LESS then the top chip it replaces a premium price, then there is something wrong."Intel is capacity constrained and has the ability to control demand AT WILL through pricing" WRONG, for 2 reasons, Intel CAN'T charge too much for their chips, because that would push people to AMD who has a very competitive product now, and 2, they cant charge to little, as their investors and shareholders, would have their heads, so explain how they can " control demand AT WILL through pricing ".. id like to see you try.
" It will take a lot more than a delay in 10/7nm and capacity constraints to break their stranglehold on the OEM market" of course it would, cause they can always go back to bribing and threats to keep OEMs from also offering AMDs products :-)
" AMD is rising high on one thing, core count" wrong again... they are also better then intel in IPC, power usage and performance per dollar. " on par with the Intel cores (with respect to IPC in all workloads) " um, they are already there in regards to IPC, AMDs lower clocked cpus compete VERY well with intels higher clocked cpus, and thats ALL intel has left to compete with, their higher clocked cpus that keep any performance advantage they have left.
" There's a lot that Intel hasn't produced yet that they have had in development for a long time simply because they have had their core development married to their process technology for so long. " um yea ok sure.. then explain why intel has stagnated the market for so long, milking the current architecture for 10 generations now ? why didnt that back port their next gen architecture to 14nm by now ? to regain the performance they once had vs Zen ??
JayNor - Wednesday, December 11, 2019 - link
This chip was already 22nm. This update was just to correct an error in an EOL notice.
JayNor - Wednesday, December 11, 2019 - link
This is the Intel response story:
https://www.tomshardware.com/news/intel-we-arent-s...
kallinteris - Monday, December 9, 2019 - link
Why would OEMs buy this instead of the AMD Athlon 300g?
kallinteris - Monday, December 9, 2019 - link
3000g*
sandtitz - Monday, December 9, 2019 - link
If the OEM has an existing product that fills the needs, then why change?
Changing to an AMD or even a newer Intel design would mean the product's system board needs a redesign, the cooling needs a redesign, tooling and assembly need changes, the software would probably need some programming (Intel graphics -> AMD graphics), the new product would need QA validation, and stocking another model would need more warehouse space since you'd also need to stock spare parts. All this costs money, and if the existing system doesn't benefit from the extra grunt of a newer CPU - why bother?
Of course, if you're about to design something new - perhaps the Athlon would be a better product (along with many other CPUs).
eek2121 - Monday, December 9, 2019 - link
I agree with most of your post except the driver part. AMD/Intel writes the drivers, not the OEM.
xenol - Tuesday, December 10, 2019 - link
OEMs can still add on top of the driver stack. This is why Microsoft started the DCH standard.
lightningz71 - Tuesday, December 10, 2019 - link
You also forget one thing: DDR3 support. Haswell (excluding some of the more expensive mobile products) was the last inexpensive desktop-oriented product that supported DDR3. For OEMs, DDR3 is dirt cheap. Changing to anything newer means moving to DDR4, which is a big change in supply, platform design and cost.
danielfranklin - Monday, December 9, 2019 - link
sandtitz below hits it exactly.
You also have to remember this isn't for 1st-world countries.
Everyone's getting upset on other sites as if Intel is forcing everyone to buy them.
There are billions of people in the world who would love a machine with one of these in it.
It's easy to forget in our bubbles...
Korguz - Monday, December 9, 2019 - link
And living in Intel's bubbles of lies is better??
Retycint - Tuesday, December 10, 2019 - link
He's referring to our bubbles of privilege, affluent enough to be able to choose what parts we want in our PC. It might seem like a given, but there are many poor people in rural environments who don't have this privilege.
Samus - Tuesday, December 10, 2019 - link
Because they've already built reference designs around the H81, Q85 and B85 chipsets that can be found in a million NASes, kiosks and embedded applications. ELO still ships their mainstream NEC production monitors with embedded Haswell PCs... like all those menu screens you see at McDonald's and every other food establishment. They sell a Skylake H110 slice card for $800 more, but most people are probably opting for the cheaper Haswell card because performance is nearly the same and these things don't need to be very powerful for their applications.
5080 - Tuesday, December 10, 2019 - link
Better question: why would any user buy this over an AMD system? Not to mention that this Intel CPU comes with all its unpatched security holes and needs software mitigations that slow it down. It's just a terrible investment all around.
Irata - Tuesday, December 10, 2019 - link
Financial incentives?
Foeketijn - Wednesday, December 11, 2019 - link
They already have the other tools to go along. OEMs don't easily switch supplier, and with it the platform. But on the other hand, I don't see HP/Dell etc. selling Haswell computers, even at the lowest price range. Maybe it's different in India and the like? Or even better, fill China with these chips for their security features (even though most were in HT, I thought).
prisonerX - Monday, December 9, 2019 - link
No one can say Intel isn't innovative.
Just not in processors.
milkywayer - Wednesday, December 11, 2019 - link
Yup. Intel has been pretty innovative when it comes to intentionally stagnating the global computer market at a soft-fixed 4-core CPU mark for a good 5-7 years. Then AMD comes out with Zen/Zen 2 and all of a sudden, Intel, out of the goodness of its heart, breaks the curse and starts releasing 6- and 8-core mainstream CPUs, and starts selling the $2000 CPUs of yesteryear, rebranded a bit, for $1000.
MrSpadge - Monday, December 9, 2019 - link
I always wondered why they didn't use more 22 nm chips when the 14 nm shortage became apparent. At 53 W this thing could easily run at 4.0 GHz, and with HT active it could certainly rival low-end Skylake performance.
mode_13h - Monday, December 9, 2019 - link
How do you know how many 22 nm chips they *did* sell?
The i3-4370 is the fastest Haswell-R dual-core. Its TDP is 54 W and it runs at 3.8 GHz. Maybe, if they enabled turbo, you'd hit 4.0 GHz.
drexnx - Monday, December 9, 2019 - link
I agree with wondering why they hadn't done anything like this up until now. Given the amount of plus-plus-plussing that they've done on 14nm, don't you think they could have done the same with 22? I know 22 wasn't as good a clocker as 32nm was, but Devil's Canyon had decent clocks (4.0/4.4), better than Skylake (4.0/4.2) in fact!
Also, the equipment is all fully depreciated at this point and yields should be near 100%.
drexnx - Monday, December 9, 2019 - link
I also do think there was some hubris of "well, 10nm will be totally fixed in 6-9 months, why waste time respinning a 22nm design?" and then recycling this flawed logic for 2+ years.
MrSpadge - Tuesday, December 10, 2019 - link
> How do you know how many 22 nm chips they *did* sell?
I don't. But I know they did nothing to help the 14 nm shortage by making the most out of the existing 22 nm chips. Instead they introduced 14 nm quad cores without HT in lower price brackets, i.e. bigger dies than dual core with HT and on the capacity-constrained node.
BTW: I'm pretty sure that Haswell does not need those 54 W at 3.8 GHz dual core. It's just in this TDP class because it needs more than 35 W.
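For a sense of why the 54 W figure is more of a bin than a measurement, here's a back-of-envelope sketch using the usual dynamic-power scaling P ∝ f·V². Every number in it (the 30 W reference point and both voltages) is an assumed, illustrative value, not a measurement of any real Haswell part.

```python
def scale_dynamic_power(p_ref_w: float, f_ref_ghz: float, v_ref: float,
                        f_new_ghz: float, v_new: float) -> float:
    """Scale dynamic power with P ~ f * V^2; ignores leakage and the uncore."""
    return p_ref_w * (f_new_ghz / f_ref_ghz) * (v_new / v_ref) ** 2

# Illustrative assumption: a Haswell dual-core drawing ~30 W at 3.2 GHz / 1.00 V.
estimate_w = scale_dynamic_power(30.0, 3.2, 1.00, 3.8, 1.10)
print(f"~{estimate_w:.0f} W at 3.8 GHz / 1.10 V")  # ~43 W under these assumptions
```

Under those assumptions the chip lands between the 35 W and 54 W classes, which is exactly the "needs more than 35 W, so it gets the 54 W label" argument above.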
Spunjji - Wednesday, December 11, 2019 - link
Guaranteed it doesn't need that. With all of the core and process revisions Haswell and 22nm have been through, it's probably closer to the 35W end of that band than 53W.
wow&wow - Monday, December 9, 2019 - link
One more Guinness World Record for Intel!
It already has one for "Volume Down but Revenue Up" - the only company in the world that can do that, amazing!
khanikun - Tuesday, December 10, 2019 - link
Actually Apple already did that with their iPhone X.
uefi - Monday, December 9, 2019 - link
Long live Windows 7!
Flunk - Monday, December 9, 2019 - link
Until January 14th, 2020. Then it dies.
PeachNCream - Tuesday, December 10, 2019 - link
I will miss it, but it really is time to say goodbye, even if the current replacement in Windows 10 is not something I really like. Then again, I don't have to like my operating system in order to use it, do I?
khanikun - Tuesday, December 10, 2019 - link
I'd say it depends how you use your OS and how much annoyance you can withstand from it. My main annoyance with it is Windows Update. No option to turn it off. Then everyone finds a way to turn it off, at which point MS creates a new service to turn it back on and gives you no access to turn that service off. Then when you set up a time for when to get updates, it seems to just ignore it and do whatever the hell it wants.
Been dealing with it on my new laptop while waiting on my household goods to arrive from the movers. Now I can wipe the laptop and install Win 8.1. Then Windows Update conveniently doesn't work, since it's old Windows on new procs.
Kougar - Tuesday, December 10, 2019 - link
I assume there's enough DDR3 supply to pair with it?
Retycint - Tuesday, December 10, 2019 - link
DDR3 is still quite mainstream in the form of LPDDR3 for laptops, so I guess it will not be an issue.
5080 - Tuesday, December 10, 2019 - link
This CPU is a future security nightmare with more holes than Swiss cheese. And the available software mitigations bring it to its knees.
xenol - Tuesday, December 10, 2019 - link
Given that the expected customer is a rando with a budget PC rather than a data center with valuable amounts of information, I think the security issues are a moot point for this processor.
Also, the mitigations only strongly affect certain tasks, most of them data-center- or server-related. And even then, the people using this processor are likely doing basic tasks that won't be impacted for all intents and purposes.
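If you want to see which of the speculative-execution mitigations are actually active on a given machine (and then benchmark the impact yourself), recent Linux kernels expose the status in sysfs. A minimal sketch, assuming a Linux system; on Windows you'd need different tooling:

```python
from pathlib import Path

# Recent Linux kernels report per-vulnerability mitigation status in sysfs.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

if vuln_dir.is_dir():
    for entry in sorted(vuln_dir.iterdir()):
        # Prints lines such as "spectre_v2: Mitigation: ..." for each known issue.
        print(f"{entry.name}: {entry.read_text().strip()}")
else:
    print("No vulnerability reporting here (older kernel or not Linux).")
```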
PeachNCream - Tuesday, December 10, 2019 - link
I suggest checking around to read up on the performance impacts of the mitigations. "Swiss cheese" and "bring it to its knees" are exaggerations to say the least.
mickulty - Tuesday, December 10, 2019 - link
"The 22nm chip is compatible with the widely available LGA1151 infrastructure that supports 53 W CPUs."LGA1150, surely?