66 Comments
eva02langley - Monday, April 8, 2019 - link
Cascade Fail...
Chaitanya - Monday, April 8, 2019 - link
That's what happens after 10 years of rehashing the same architecture.
eva02langley - Monday, April 8, 2019 - link
I have read somewhere that these are not modular. Basically, if the mobo dies, you need to buy a new board and another CPU. I don't remember where I read this.
Ian Cutress - Monday, April 8, 2019 - link
These are BGA. Soldered onto the motherboard. Intel is selling them as barebones systems to OEMs.
Santoval - Monday, April 8, 2019 - link
BGA means soldering. I strongly doubt the CPUs could be unsoldered by the OEMs, but it's possible Intel can do it if either the motherboard or the CPU dies. "Can do it" does not equate with "will do it" though. If this occurs in a tiny subset of the parts, and they are within warranty, Intel might just replace the entire thing rather than bother to have some of their technicians do (very carefully) the unsoldering, unpasting, etc. I am not sure what they would do if the warranty has expired though. It would probably depend on the OEM.
Klimax - Thursday, April 11, 2019 - link
There are companies that can do BGA replacement. But not cheaply, especially for a BGA as massive as this one.
bobhumplick - Monday, April 8, 2019 - link
if you believe that then get a 10 year old cpu and game on it. go ahead and get a 2080 ti. ill wait. if thats the case i wonder how they got away with it? i mean surely they had a competitor during those 10 years right? cause if so how can you blame intel when the competition still lost to them.
reinventing the wheel can cause problems too. like p4 and fx. both relied on programmers to rewrite code to take advantage of parallelism years before the code was written (p4 with new sse and HT and fx with a bunch of slow cores). iterations are what work. intel hasnt been rehashing the same arch for 10 years. theyve been doing it for 20. and winning mostly.
now amd is back and im glad. but they are back with what? essentially a reverse engineered i7 quad core, doubled up in a dual socket config, then shrunk down to fit it all on one package to cut costs and more importantly latency.
so now even AMD are rehashing intels "old" arch.
intel should have gone to 6 cores by the 6th gen at the latest. and they should have decoupled the arch improvements in cannonlake and later icelake from 10nm years ago so it would have worked on 14nm. but they didnt. they stood there like a deer in the headlights for 3 years or so saying "well at least amd still cant compete" until the day that amd could compete with them. they really screwed some things up. but they are taking the right steps to fix it. they are doing things that id never think intel would do. like reaching out to the press and bringing in outsiders (intel never did that before). will it be enough? who knows.
but really intel only started screwing up about 3 years ago. and they kinda bought time with the 8th gen at least. so its more like 2 years maybe. its not insurmountable. but its gonna take some hustle. and amd aint sleeping for sure
FreckledTrout - Monday, April 8, 2019 - link
Going back 10 years may be a bit far but over 8 years ago the 2500K was released. It still holds its own today when overclocked.
Samus - Tuesday, April 9, 2019 - link
While the CPUs hold their own (technically even an i7-920 overclocked to 3.5GHz is competitive), the platforms have huge limitations that make the CPUs irrelevant for most modern applications, especially gaming. The biggest is the ancient PCIe 2.0 bus, which is, per lane, half the speed of 3.0. On paper that might not seem bad: a modern video card at PCIe 3.0 x8 isn't totally saturated, so theoretically a PCIe 2.0 x16 slot should be fine, all things being equal.
But then you get into the platform interconnects. The CPU is connected to the northbridge over QPI - the PCIe bus isn't native to the CPU - and it gets worse. The northbridge and southbridge are connected over DMI, which is about 2GB/sec, and the southbridge hangs ALL other PCIe lanes (like all the x1 lanes) plus the SATA and USB controllers off that link. BTW, those native SATA controllers are 3Gbps, not even SATA 3.0, and USB 3.0 wasn't native on an Intel chipset until Haswell (I think; maybe it was on the 70-series natively?)
Basically the platforms sucked until Haswell, and even then the 80/90-series mostly lacked M.2 NVMe or SATA Express. Some later boards have M.2 NVMe PCIe x2 slots, but as far as I know none had x4, and most were never even made bootable because BIOS support from most vendors sucked.
Architecturally, still, you'd be crazy to game on anything older than Haswell. The power efficiency of the platform improved so much (which is why you suddenly saw ITX go mainstream with H/Z87), not just in the CPU but in the PCH (shrunk from something like 65nm to 32nm, cutting power usage IN HALF). And so many new instructions came around that generation, specifically AVX, which is actually required for many games now, and anything DirectX 12 requires it since it's part of the API.
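To put rough numbers on the bus comparison (assuming the standard encoding overheads - 8b/10b for PCIe 2.0, 128b/130b for 3.0 - and the roughly 2GB/sec DMI figure above), a quick sketch in Python:

    # Usable bandwidth in GB/s for one direction of a PCIe link.
    # PCIe 2.0: 5 GT/s with 8b/10b encoding; PCIe 3.0: 8 GT/s with 128b/130b.
    def pcie_gbps(transfer_rate_gt, payload_bits, encoded_bits, lanes):
        return transfer_rate_gt * (payload_bits / encoded_bits) / 8 * lanes

    gen2_x16 = pcie_gbps(5.0, 8, 10, 16)     # ~8.0 GB/s
    gen3_x8  = pcie_gbps(8.0, 128, 130, 8)   # ~7.9 GB/s
    dmi_gen1 = 2.0                           # ~2 GB/s, shared by everything behind the southbridge

    print(f"PCIe 2.0 x16: {gen2_x16:.1f} GB/s")
    print(f"PCIe 3.0 x8:  {gen3_x8:.1f} GB/s")
    print(f"DMI (gen 1):  {dmi_gen1:.1f} GB/s for SATA, USB and the x1 slots combined")

So a Gen 2 x16 slot is roughly on par with a Gen 3 x8 for the graphics card; the tighter squeeze is everything else sharing that ~2GB/s DMI link.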
Alexvrb - Monday, April 8, 2019 - link
"now amd is back and im glad. but they are back with what? essentially a reverse engineered i7 quad core"Nice try. The lead designer of the Zen architecture was Keller, who was also behind the DEC Alpha 21164/264, K7, and K8, and that's just his desktop work. Zen is the successor to those architectures, AMD lost their way with BD et al. Just as Intel did with Netburst. Both of them went back and modernized older designs (Intel used the P6-derived Pentium M as the basis for Core).
Now he works for Intel... they didn't hire him because he made a reasonable facsimile of their architecture.
Ninjawithagun - Thursday, June 13, 2019 - link
Wow, you are a sore loser. Don't fret and be pissed just because Intel got fat and lazy. AMD has not only caught up, but has surpassed them. There is no reverse engineering whatsoever. The Ryzen CPUs have a completely different architecture. Do you even know what the hell you are talking about? Yup, didn't think so!
3ogdy - Saturday, May 30, 2020 - link
Nice try, but...Ring bus vs Infinity Fabric. The way each operates, from the architecture, the principle and the links to different system components (RAM clock speed in the case of IF) prove you are wrong. Zen is not an Intel architecture that was reverse engineered. It was built from the ground up. No wonder Intel had to get AMD's engineers to try and become competitive again.
StevoLincolnite - Monday, April 8, 2019 - link
10 years ago you could get the Core i7 870: 2.93GHz, quad-core.
~6 months later you could get the Core i7 970: 3.2GHz, hex-core. That chip still stands up really well even today, especially once overclocked... mostly thanks to the extra threads, which translate well to newer titles.
ballsystemlord - Monday, April 8, 2019 - link
Could you please tell me how the Zen core is related to Intel's i7s? I'm at a total loss, unless you conclude that the performance is similar and, therefore, it must be a reverse engineered i7!
Samus - Tuesday, April 9, 2019 - link
Some people draw clueless similarities between Zen and Core because Zen has trivial things like SMT (which Intel markets as Hyper-Threading, originally codenamed Jackson Technology) and a similar implementation of Turbo Boost/SpeedStep that controls voltage and clock speed together.
Architecturally they are entirely different; just look at a pipeline diagram and you'll see they have little in common. The result is that Zen is more efficient per clock, using less energy to do the same arithmetic before specific optimizations. And optimizations are where things have historically benefited Intel.
I just built a massive DVR for a client that records H.264 video streams from a dozen 4K cameras at 15fps. It is incredibly CPU intensive, and AMD seemed the natural choice since rendering often favors AMD. Except AMD CPUs don't have any implementation of QuickSync - an Intel GPU + CPU encoding engine that uses the GPU EUs to lossy-compress video with negligible impact on video quality. The catch is it doesn't support H.265, which is more space-efficient, but the fact that a dozen cameras are EASILY handled by a Core i7-8700 6-core CPU (averaged 30% CPU load) when even the most powerful Ryzen 2800x 10-core CPU struggled with a dozen cameras (averaged 80% CPU load) left the AMD solution with no headroom for growth and a substantially higher power budget.
Goes to show how important optimizations are. If a program DIDN'T take advantage of QuickSync then AMD would likely be the clear victor here, but the lack of extensions in various scenarios hurts them.
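For a sense of scale, a quick sketch of the raw pixel rate that box has to chew through, taking "4K" as 3840x2160 (an assumption):

    # Back-of-the-envelope ingest rate for the DVR described above,
    # taking "4K" as 3840x2160 (an assumption).
    cameras, width, height, fps = 12, 3840, 2160, 15

    pixels_per_second = cameras * width * height * fps
    print(f"{pixels_per_second / 1e9:.2f} gigapixels/s to encode")                 # ~1.49

    # For comparison, that is the same pixel throughput as this many 1080p60 streams:
    print(f"~{pixels_per_second / (1920 * 1080 * 60):.0f} x 1080p60 equivalents")  # ~12

That is a lot of encoding to do in pure software, which is why pushing it onto fixed-function encode hardware makes such a difference in CPU load.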
Alexvrb - Tuesday, April 9, 2019 - link
The first SMT design was the never-released DEC EV8. Tullsen's work later made it into the P4.
Also QuickSync is yet another proprietary and limited tech, and not an actual extension of x86-64. Quality of the resulting encodes is mediocre and it supports limited profiles (and no custom profiles). That doesn't even count the aforementioned lack of codec support. So from my perspective it's not useful at all. However for your client it's good because I'm sure they don't need perfect quality / max bitrate savings, just good enough and lots of video.
With that being said, it may not have as much headroom as you think. How many streams can QuickSync accelerate? What if they want to use a newer codec to save space down the road like HEVC, AV1, VVC? H.264 AVC is really not that efficient at 4K. Also, 2800X, 10 core? Have you been smoking the crack again?
Budburnicus - Wednesday, October 14, 2020 - link
Uhhhh.... You mention a Ryzen 2800X - this does NOT exist. Neither does ANY 10 Core AMD CPU!
You may be right about recording and all, but your knowledge of Ryzen is suspect. There is NO Ryzen, Threadripper, or EPYC CPU with exactly 10 cores...
(I have an 8700K @ 5.1 GHz for my gaming machine m8, but I know Ryzen specs.)
bobhumplick - Monday, April 8, 2019 - link
if youre going to fanboy then at least put some thought into it
ballsystemlord - Monday, April 8, 2019 - link
Good point. Let's call it: Cascade-dozer. :)
Seriously, when it comes to criticism doesn't it ever seem to you that people tend to the lowest common denominator (Fail-dozer [1]) in order to spread the message to the absolute maximum amount of people?
If you really wanted to say something how about quoting Intel? "Inconsistent Performance from 2 Glued-together Desktop Die" [2]
And, in the same vein, if you wanted to say something intelligent how about, "Gee, upgrading or replacing those CPUs is going to be difficult or beyond the average IT guy."?
Or how about something witty, "It does not include the Nuclear reactor that powers the CPU?!",
Or, if you preferred both, "Ah, another CPU that requires only a 4 phase MB!"
Go ahead, throw us a bone instead of equally dumb criticism.
[1] http://en.linuxreviews.org/AMD_Faildozer
[2] https://www.techpowerup.com/img/QhA6gdonrmBT27fr.j...
yeeeeman - Thursday, August 8, 2019 - link
I do agree that current Intel efforts are unlike what they had in the past. I do agree that most of the stuff they have put on the market in the past two years feels like an afterthought. It might very well be, since they probably had some plans with the 10 and 7nm processes and all of them became unfeasible when 10nm turned out to be broken.
Still, I think we should give kudos to Intel because, given all the shitty fab situation, they managed to find ways to improve stuff, ON THE SAME NODE. Sure, they can't defy the laws of physics, so power consumption is high, but then again, remember that AMD was in the same place a few years ago with the 32/28nm processes. They couldn't make anything remotely worthy of competing with Intel. Sure, they optimized power consumption in various ways, but they were still rubbish.
Let's be fully honest. When you're stuck on the same node, that is when you MUST get creative. And sure enough, Intel did bump up the clocks to absurd levels on desktop parts, while keeping things - sort of - in check. They do still compete on performance, so I think that is a testament to how good their uArch was and still is. On the server side, things are more tricky, since cost is a big part of the picture. Because of the internal layout of Intel's chips, they can't really split them easily into many chiplets without losing the low-latency advantage. Also, making huge dies while making money on them (taking yields into consideration) is VERY difficult. AMD has a lot of cards in their hands now with the server parts, and Intel desperately needs a good fab process, Ice Lake with that 20% IPC improvement, and a higher core count die (say 40 cores). That would bring them close to the 64 cores of AMD from a performance pov.
Anyhow, things are quite interesting these days in the world of CPUs, and I am happy that Intel will have to work their asses off to get back up to speed again.
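To sanity-check that "close to 64 cores" reasoning with crude arithmetic - treating throughput as cores times per-core IPC and ignoring clocks, memory and scaling, which is a big simplification - a quick sketch in Python:

    # Crude "core-equivalents" comparison, ignoring clocks, memory bandwidth,
    # power and workload scaling.
    icelake_ipc_gain = 1.20   # the ~20% IPC uplift mentioned above
    intel_cores = 40          # hypothetical higher-core-count die from the comment
    amd_cores = 64            # Rome's top core count

    intel_equiv = intel_cores * icelake_ipc_gain   # ~48 "Skylake-core equivalents"
    print(f"Hypothetical 40-core Ice Lake: ~{intel_equiv:.0f} core-equivalents")
    print(f"64-core EPYC: {amd_cores} core-equivalents (treating per-core perf as comparable)")

Even with the IPC bump that isn't quite parity, but it is a lot closer than 28 cores per socket gets them today.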
bigvlada - Monday, April 8, 2019 - link
That's no moon. It's a CPU.
ballsystemlord - Monday, April 8, 2019 - link
ROFL
I find your lack of faith in Intel disturbing!
oRAirwolf - Monday, April 8, 2019 - link
I'm excited for these to go out of warranty in several years so I can pick up a server using it for cheap and put pfSense on it.
imaheadcase - Monday, April 8, 2019 - link
That is total overkill.
chipped - Monday, April 8, 2019 - link
Lol, I think anyone buying these processors will go for the 4 year warranty, minimum. Maybe just buy an AMD 64 core?
Sahrin - Monday, April 8, 2019 - link
By selling them in prebuilt barebones servers, Intel is basically saying "No thanks" to the highest volume customers. Seems like this is a "yeah, we have one too" part rather than an actual machine customers will want to buy.
FunBunny2 - Monday, April 8, 2019 - link
or... this is Intel making another tilt at the z machine windmill.
Ian Cutress - Monday, April 8, 2019 - link
Intel sells barebones to OEMs. OEMs fill them with memory, storage, support packages. Customers buy those.
FreckledTrout - Monday, April 8, 2019 - link
What I was going to reply as well. They aren't ignoring the high-volume customers. This is how it always works: they supply the big OEMs like Dell, HP, etc.
BigMamaInHouse - Monday, April 8, 2019 - link
How should it perform vs a dual 8280? Any latency improvement between dies?
HStewart - Monday, April 8, 2019 - link
Do you really think desktop or even server machines are the highest volume customers? You can't just go off an Amazon selling list - the larger market has changed significantly over the last decade and has gone mobile.
This chip is for a specialized market, and I would expect buyers to upgrade their systems based on compatibility with their previous systems and a desire for more horsepower and storage - an extremely small percentage would use them for gaming, if any.
HStewart - Monday, April 8, 2019 - link
Any chance for some performance analysis? It would be interesting to compare it against a dual-CPU 28-core environment, or compare the 9222 against a dual-CPU 16-core environment. It would also be nice to see a single 28-core against dual 14-cores - just wanting to see if the environment has an effect.
Combining CPUs in a package is not new - it was attempted with the Pentium D and seems pretty popular nowadays - but I have some concerns that CPUs are evolving into just adding more cores instead of improving performance per core.
Ian Cutress - Monday, April 8, 2019 - link
We're trying to get access to a system for testing.
HStewart - Monday, April 8, 2019 - link
Understood, such a new, expensive system could be hard to get - but it would be nice, especially with this design, to know the overhead of multiple dies in one package vs. multiple CPUs and such. You do such a good job in technical analysis.
mooninite - Monday, April 8, 2019 - link
We can finally encode AV1 in real time!
GreenReaper - Monday, April 8, 2019 - link
I'd rather wait for an AV1 hardware codec - perhaps forthcoming with Navi? (I can dream!)
I don't mind waiting until 2020 when I can get it with hopefully a 7nm APU, since I'd be buying a system which I hope to use for the next decade, like the laptop I'm on now.
AVC/H.264 lasted surprisingly well, but we're reaching the end of the road there - it's just too inefficient for 4K+ - and I don't see HEVC/H.265 getting the same support online, while AV1 looks like the web's next format.
DigitalFreak - Monday, April 8, 2019 - link
Will be interesting to see how this processor stacks up against a 64 core EPYC @ a guesstimated $5 - 7k.
twtech - Monday, April 8, 2019 - link
It seems to me like this only exists because of Epyc 2. It's a stopgap measure to say, "hey, we have very high core count processors too!"
"Nevermind of course the fact that ours cost almost 10x as much and use enormous power."
HStewart - Monday, April 8, 2019 - link
This reminds me so much of the frequency war days, and in the end that didn't matter too much. What matters is the architecture and not the clock speed of the CPU. That was of course the Pentium 4 days, and Intel learned from it with the i series.
Now we see this on two fronts - number of cores and nm size. Both of which, in the long run, will not matter as much.
I know it depends on what you're using it for, but it would be interesting to know the point at which adding more cores adds nothing to the system.
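There is a classic rule of thumb for exactly that: Amdahl's law. A minimal sketch in Python (the 95% parallel fraction is purely illustrative; real workloads vary wildly):

    # Amdahl's law: speedup from N cores when a fraction p of the work is parallel.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    p = 0.95  # assume 95% of the workload parallelizes (illustrative only)
    for cores in (4, 8, 16, 28, 56, 112):
        print(f"{cores:>3} cores: {amdahl_speedup(p, cores):5.2f}x speedup")
    # 4 -> ~3.5x, 28 -> ~11.9x, 56 -> ~14.9x, 112 -> ~17.1x

Past that knee the serial fraction dominates, which is exactly the point where extra cores stop adding anything.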
FreckledTrout - Monday, April 8, 2019 - link
It's very much like that. Recall AMD and Intel were on the same process node with Intel eventually taking the lead. This time around Intel is at a disadvantage.
GreenReaper - Monday, April 8, 2019 - link
Well, they both use a lot of power. Intel does go a little higher with speeds, which costs a little more.
10x is pushing it, but 2-2.5x, yeah. For some people that's not a consideration, but if it is then you can make a significant saving with AMD.
Ian Cutress - Monday, April 8, 2019 - link
Ian Cutress - Monday, April 8, 2019 - link
This is more a Xeon Phi replacement than an EPYC competitor.
HStewart - Monday, April 8, 2019 - link
I don't think Intel is too worried about EPYC - they have 4- and 8-CPU solutions to deal with high core counts.
Alexvrb - Monday, April 8, 2019 - link
Wow I didn't know they started giving away free CPUs. I thought you had to pay for each one!
psychobriggsy - Wednesday, April 10, 2019 - link
AMD would be insane to price it that low, in light of Intel's pricing madness.
Ironic that Intel goes from quad-socket 28-core CPUs to dual-socket 56-core CPUs, whilst AMD goes from 32-core dual-socket CPUs to 64-core quad-socket CPUs.
Anyone requiring massive compute and memory bandwidth will be getting a dual or quad AMD 64-core system late this year if they have any influence over their in-house IT department buying habits.
phoenix_rizzen - Wednesday, April 10, 2019 - link
Where have you seen mention of a quad-socket EPYC motherboard? Far as I've seen, AMD has stated they won't be supporting anything beyond dual-socket.
FATCamaro - Monday, April 8, 2019 - link
Nice package...
CityBlue - Monday, April 8, 2019 - link
I know Intel needed something to compete (on paper) with AMD, but this SKU is a total joke, and will sell in tiny numbers. I'm really not sure why Anandtech are giving this so much coverage, to be honest - is it for the LOLs?
Lord of the Bored - Monday, April 8, 2019 - link
Perhaps so Intel doesn't blacklist them.
bobhumplick - Monday, April 8, 2019 - link
now somebody get Derbauer one so he can delid it quick.
bobhumplick - Monday, April 8, 2019 - link
journalists used to be bigger. that guy holding the 8700k cant be more than 3 feet tall
Alexvrb - Monday, April 8, 2019 - link
You've got it backwards, Ian has a huge melon so that thing is MASSIVE. At least 8" square.
Wingartz - Monday, April 8, 2019 - link
Intel: plz don't nom the cores
GreenReaper - Monday, April 8, 2019 - link
28 Cores a-Slave'd
patel21 - Monday, April 8, 2019 - link
Seems ridiculous. Btw, fond of the way you calculated the weight. You have nailed it down to the gram.
I think the pricing you mentioned doesn't seem accurate though. It should have been Eyes, Kidneys, Liver and Lungs.
phoenix_rizzen - Wednesday, April 10, 2019 - link
But then you can't sing about the pricing ... ;)
kgardas - Monday, April 8, 2019 - link
Besides everything already written in other comments, I really do not understand the ridiculously small L3 for such a core count. Even the generation-old POWER8 still holds 8MB of L3 per core, which is 80MB on a 10-core CPU. What's wrong with you, Intel?
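For the record, the per-core math, assuming the published figures (77 MB of shared L3 on the 56-core part, 8 MB per core on POWER8):

    # L3 per core, using published figures: Xeon Platinum 9282 = 77 MB shared L3
    # across 56 cores; POWER8 = 8 MB of eDRAM L3 per core.
    xeon_l3_mb, xeon_cores = 77, 56
    power8_l3_per_core_mb = 8.0

    print(f"Cascade Lake-AP: {xeon_l3_mb / xeon_cores:.3f} MB L3 per core")  # 1.375
    print(f"POWER8:          {power8_l3_per_core_mb:.3f} MB L3 per core")

In fairness, Skylake-SP and its descendants also carry 1 MB of private L2 per core and turned the L3 into a non-inclusive victim cache, so it isn't a straight apples-to-apples comparison - but the per-core L3 gap is real.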
coburn_c - Monday, April 8, 2019 - link
That's not system on a chip. That's system on a wafer.
littleM - Monday, April 8, 2019 - link
This has about three times as many balls as their first microprocessor had transistors.
The Hardcard - Monday, April 8, 2019 - link
Am I overlooking the info on the interposer technology, or are they not talking about it? EMIB? Foveros?
ballsystemlord - Monday, April 8, 2019 - link
They're not using it here so they're not talking about it.
Ian Cutress - Tuesday, April 9, 2019 - link
This is just on-package UPI
jjj - Tuesday, April 9, 2019 - link
You know the industry is in trouble when package sizes become similar to pizza sizes. And it's not just Intel, this can't be avoided when the world is only giant enterprises that corner all markets and move in slow motion, or slower.
Oxford Guy - Thursday, April 11, 2019 - link
"Inside the package is two XCC enabled silicon die"= Inside the package are two XCC-enabled silicon dies
yukyuk - Tuesday, May 12, 2020 - link
can it run csgo????
Idrisse - Thursday, September 24, 2020 - link
Hello. What motherboard supports the BGA5903 socket?