hukax - Sunday, May 26, 2019 - link
> For all the games in Intel’s test methodology, they scored anywhere from a 6% loss to a 16% gain, with the average somewhere around a 4-5% gain.

With 56% more bandwidth:
Ice Lake-U LPDDR4X-3733
Ryzen DDR4-2400
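The 56% figure above falls straight out of the peak-bandwidth arithmetic. A quick sketch (the channel configurations are the usual ones for these platforms, assumed here rather than taken from Intel's slides: LPDDR4X wired as four 32-bit channels, DDR4 as two 64-bit channels):

```python
def bandwidth_gb_s(transfers_mt_s, bus_bits, channels):
    """Peak theoretical bandwidth = transfer rate x bytes per transfer x channels."""
    return transfers_mt_s * (bus_bits / 8) * channels / 1000

# Ice Lake-U: LPDDR4X-3733, four 32-bit channels (equivalent to 2x64-bit)
ice_lake = bandwidth_gb_s(3733, 32, 4)   # ~59.7 GB/s
# Ryzen 7 3700U: DDR4-2400, two 64-bit channels
ryzen = bandwidth_gb_s(2400, 64, 2)      # ~38.4 GB/s
print(f"{ice_lake:.1f} GB/s vs {ryzen:.1f} GB/s -> {ice_lake / ryzen - 1:.0%} more")
```

Note that per-channel numbers are why the "LPDDR4X is only 32-bit" objection below misleads: the narrower channels come in larger numbers.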
RedGreenBlue - Sunday, May 26, 2019 - link
Good catch ^
Alistair - Sunday, May 26, 2019 - link
That is absolutely hilarious. Benches were the following for the 3 different systems:
1. Ice Lake-U LPDDR4X-3733
2. Intel 8565U 16GB memory DDR4-2400
3. AMD 8GB DDR4-2400
Like that even makes sense. They are comparing the most expensive configs with the cheapest, in Intel's words a "commercially available OEM system with AMD".
Alistair - Sunday, May 26, 2019 - link
I hope for Intel's sake the AMD system at least was dual channel, and not single, or otherwise the Intel fake benchmarks (remember last year lol?) strike again.
Alistair - Sunday, May 26, 2019 - link
Ryan Shrout... great... /s
Santoval - Sunday, May 26, 2019 - link
It was 100% single channel. Intel's deceptive benchmark is two-fold, with both dramatically slower DRAM and single channel DRAM. Either is bad, both is a disaster - or in Intel's case an "it makes us look good marketing opportunity".
Brunnis - Monday, May 27, 2019 - link
Why would you write something like that? "100% single channel". That was never all that plausible to begin with and Intel has now confirmed that the AMD laptop was run in dual channel mode:
https://twitter.com/i/web/status/11329606557309706...
Siats - Sunday, May 26, 2019 - link
It'll do you well to read up on the differences between LPDDR4 and DDR4, a 3733 configuration of the former is low end nowadays.maroon1 - Sunday, May 26, 2019 - link
So?! What is your point?!
It is not Intel's fault that AMD did not update their memory controller to use LPDDR4X. Laptops with Ice Lake will use LPDDR4X while AMD laptops will use DDR4. That is what will be available to consumers.
Alistair - Sunday, May 26, 2019 - link
We are interested in a CPU comparison, not the memory comparison that Intel is providing here. Ryzen can run with faster memory also.
maroon1 - Sunday, May 26, 2019 - link
I have not seen any Ryzen-U laptop that uses more than DDR4-2400.
Even most expensive high-end laptops use something like DDR4-2666.
danielfranklin - Sunday, May 26, 2019 - link
Exactly. If that's what they are shipping with, that's what you compare it against.
If Intel ship their Ice Lake with LPDDR4X then good on them!
Santoval - Sunday, May 26, 2019 - link
On top of that Intel used a single channel Ryzen APU machine, handicapping the AMD system even more. Let me make an uneducated guess about your reply: "They did so because dual channel ones are hard to find".
Brunnis - Monday, May 27, 2019 - link
No, it was run in dual channel, as expected. It's confirmed here:
https://twitter.com/i/web/status/11329606557309706...
Siats - Sunday, May 26, 2019 - link
LPDDR4x-3733 actually has a bandwidth of 29.9GB/s, bus width is 32bits in LPDDR4 and its variants. The AMD chip is the one with the bandwidth advantage.
skavi - Sunday, May 26, 2019 - link
I love how everyone immediately attacked without knowing this.
Santoval - Sunday, May 26, 2019 - link
"The AMD chip is the one with the bandwidth advantage." It is most certainly not. npz below explains why.
tekniknord - Sunday, May 26, 2019 - link
To be fair, AMD Ryzen Raven Ridge only supports DDR4-2400 in laptops, and I haven't read anything indicating that the new 3700U supports higher memory speeds. I haven't seen any RR laptop that supports more than 2400.
My 2666MHz memory only runs at 2400MHz.
danielfranklin - Sunday, May 26, 2019 - link
The AMD comparison is certainly interesting and nowhere near as good as I would have thought.
Yes, AMD aren't going to have a next-gen APU for almost another year, giving 7nm to desktops and Navi first (a decision I don't agree with), but when you look at what is possible, it doesn't look great for Intel's architecture.
They barely beat AMD when they picked the benchmarks, are running 56% more bandwidth, and are a process node ahead.
What will AMD be able to do with 7nm and LPDDR4X?
They are on a good run; they really need to get the next-gen APUs out into laptops!
qlum - Sunday, May 26, 2019 - link
I see seemingly random alternation between low and medium settings.
My guess is that Intel cherry-picked the settings and games chosen, so in reality I expect Ice Lake to be a bit behind AMD.
Still a pretty decent improvement, assuming Intel didn't doctor the results too much.
IntelUser2000 - Sunday, May 26, 2019 - link
Yeah, this is similar to what they did in the GTX 1060 Max-Q vs Kaby Lake-G comparisons.
They said Kaby Lake-G was equal to the GTX 1060 Max-Q, but the former had dual-channel system memory while the latter was on single channel.
The effect is small in games, but that 5-10% is enough that Kaby Lake-G would otherwise be slower. That doesn't sound so good in marketing, so they pushed for that extra edge.
Probably the same here.
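The halved bandwidth behind that claim is easy to quantify; a minimal sketch (assuming standard 64-bit DDR4 channels), which also shows why a 50% cut in peak bandwidth can still translate to only a 5-10% loss in games that aren't purely bandwidth-bound:

```python
# One 64-bit DDR4-2400 channel: 2400 MT/s * 8 bytes per transfer
DDR4_2400_CHANNEL_GB_S = 2400 * 8 / 1000   # 19.2 GB/s

single = DDR4_2400_CHANNEL_GB_S            # single channel: 19.2 GB/s
dual = 2 * DDR4_2400_CHANNEL_GB_S          # dual channel:   38.4 GB/s
print(f"single: {single} GB/s, dual: {dual} GB/s")
```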
HStewart - Sunday, May 26, 2019 - link
I am typing this on a Kaby Lake G; it states that it has the newer Vega engine, but tests show it as the older one. But the CPU is quite impressive compared to older Intel parts.
Santoval - Sunday, May 26, 2019 - link
Of course it's the same. The AMD system has a single 8 GB DIMM, aka it has single channel memory.Brunnis - Monday, May 27, 2019 - link
No, it was run in dual channel, as expected. It's confirmed here:
https://twitter.com/i/web/status/11329606557309706...
johannesburgel - Sunday, May 26, 2019 - link
There were several Broadwell and Skylake parts which already could do 1 TeraFLOP/s at Single Precision, even on OpenCL. The Iris Pro 580 in the Core i7 6770HQ for example benchmarks at > 1 TeraFLOP/s in clpeak.
So I wonder what they mean by "first TeraFLOP GPU".
IntelUser2000 - Sunday, May 26, 2019 - link
IntelUser2000 - Sunday, May 26, 2019 - link
Actually they claimed 1TFlop with Haswell, when you combine the CPU and GPU together. The GPU was quite close.
In this case they mean 1TFLOP for mainstream systems.
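Both claims can be sanity-checked from the EU counts: each Intel Gen EU has two 4-wide FMA-capable FPUs, i.e. 16 single-precision FLOPs per clock. A rough sketch (the boost clocks are approximate ballpark figures assumed here, not sourced from the article):

```python
def igpu_sp_gflops(eu_count, clock_ghz):
    # Per EU per clock: 2 FPUs x SIMD4 x 2 ops (FMA) = 16 SP FLOPs
    return eu_count * 16 * clock_ghz

print(igpu_sp_gflops(24, 1.05))  # HD 620 (i7-7500U), ~403 GFLOP/s
print(igpu_sp_gflops(72, 0.95))  # Iris Pro 580 (i7-6770HQ), ~1094 GFLOP/s: already past 1 TFLOP
print(igpu_sp_gflops(64, 1.1))   # Ice Lake Gen11 G7, 64 EUs, ~1126 GFLOP/s without eDRAM
```

This lines up with the thread: the GT4e Iris Pro 580 already cleared 1 TFLOP/s, so Intel's "first TeraFLOP GPU" line only makes sense restricted to mainstream (non-eDRAM) parts.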
johannesburgel - Sunday, May 26, 2019 - link
I just benchmarked the HD 620 graphics in my Core i7-7500U ultrabook at 430 GFLOP/s in clpeak. It has 24 execution units, so I would expect all more recent parts with 48 execution units or more to get that TeraFLOP/s, especially when they don't have to run inside the power envelope of an ultrabook. Namely that would be Iris Plus Graphics 640/650/655. Those three are labelled "Consumer".
IntelUser2000 - Sunday, May 26, 2019 - link
The Iris Plus 640/650 systems use the expensive eDRAM packaging, while the one in Icelake-U does away with it while delivering better performance. This then becomes comparable to the GT2 part with 24EU and without eDRAM.
isthisavailable - Sunday, May 26, 2019 - link
Nice improvement, but the gap between current Ryzen chips and this is so small that I expect AMD to pull ahead again with next-gen Ryzen mobile chips.
maroon1 - Sunday, May 26, 2019 - link
Ryzen 7 3700U came out recently. You won't see any new APU until 2020 (probably Q2 2020).
RedGreenBlue - Sunday, May 26, 2019 - link
Seems like a decently fair comparison. However, I wonder how much and in which direction those benchmarks would shift if they’d been run at higher resolutions. I would expect an Intel core to beat the Ryzen core in gaming at low resolution even if the graphics were evenly matched. I’d like to have seen a purer graphics test, but I guess if you’re gaming on a 25 W or lower machine you won’t be pushing resolution very much anyway.
RedGreenBlue - Sunday, May 26, 2019 - link
Looking forward to seeing if this is a totally redesigned architecture Raja was involved in.IntelUser2000 - Sunday, May 26, 2019 - link
There's no such thing as a truly redesigned architecture. That would be a waste of time anyway.
Gen 11 is a significant improvement over Gen 9, but the fundamentals are still Intel GPU architecture.
Raja won't be able to have much effect on this considering the timeline. We can expect more input on the next gen, now going by the Xe name. But it'll still be Intel GPU architecture. If Raja had any part in the direction of the design, it'll be at a low level that most of us won't get to see.
RedGreenBlue - Sunday, May 26, 2019 - link
Low level wouldn’t be the best way to describe it; low-level details we’ll never be told, probably, yes, but he’s in charge of that division with 4,500 people under him. And I definitely think his input would have greatly impacted performance, because Intel likely would not have been that close to finishing the design when they hired him. The die shot still looks like the 8-cluster EU groups, though.
Obviously I didn’t mean totally redesigned in a literal sense talking about chip architecture, but rather, not just a tweak to a few aspects. His start at Intel seemed to coincide with the AMD cross-license agreement. And yeah, for GPUs Intel mainly had to do that for patent infringement reasons, but I think it would be stupid to get access to some of AMD’s GPU patent portfolio and not implement parts of it that weren’t available with the previous Nvidia portfolio. I expect they would also HAVE to get rid of some things that were in the Nvidia license but not in the AMD license. Also, the way they’re touting this for AI suggests Raja’s experience came into play; or it could just be from the cross-license and Nvidia wouldn’t give some of that in the previous deal or a new deal; or they could be exaggerating abilities a bit.
R0H1T - Sunday, May 26, 2019 - link
How does it seem like a fair comparison? Did you see the memory speed on ICL, all the different settings in games 🤔RedGreenBlue - Sunday, May 26, 2019 - link
I root for team Red, but in the notebook segment they’re talking about, people won’t be playing on very high settings regardless. I did take note of the high/low settings, but it could also have been done because of a required minimum frame rate for either system. That’s a legitimate reason. It doesn’t mean much if Intel’s system performs better when both are under 30fps; they would have to adjust the settings.
Frame rate numbers would have been better. But seriously, this is one of the least unfair comparisons Intel has touted over the years. The RAM speed is unfair, but at least it’s out there, and someone on a forum can run the same test with different RAM and clarify the frame rates.
VyaDomus - Sunday, May 26, 2019 - link
It took a jump to 10nm and 50% more memory bandwidth just to slightly outperform one of AMD's most underpowered 14nm APUs? This does not bode well for Intel. Ryan sure tried his best though.
maroon1 - Sunday, May 26, 2019 - link
Ryzen 7 3700U is the most powerful AMD U-series part.
LPDDR4X is not supported by current AMD APUs, so you won't see any AMD laptop using it.
neblogai - Sunday, May 26, 2019 - link
It's AMD's own fault that they artificially limit U-series APUs to 2400MHz and even downclock memory in games. On the other hand, I expect Intel laptops with LPDDR4X will be at a different price level than 90% of laptops with the 3700U. Also, the 40% performance jump from Whiskey Lake to Ice Lake that AnandTech cites should still not be enough to catch up to the ~70% faster 3700U.
VyaDomus - Sunday, May 26, 2019 - link
The point is, we know that these things are memory-bandwidth starved most of the time; if Intel wanted to show off their new and shiny architecture, it would have been far more impressive to do the testing with matching memory performance.
AMD limits these APUs to 2400MHz because that's the best they'll ever see from laptop manufacturers at that price point anyway; hell, even high-end mobile CPUs don't see more than 2400MHz most of the time. And you've got to ask yourselves, how many of these Ice Lake chips are we even going to see around with these particular LPDDR4X speeds?
Billy Tallis - Sunday, May 26, 2019 - link
If these Ice Lake chips end up on the market paired with LPDDR4X, that RAM will be running at least this fast. This isn't an exotic speed for LPDDR4X, and the memory manufacturers have mostly moved on to even higher speeds (4266).
ksec - Sunday, May 26, 2019 - link
May I ask Why AnandTech hasn't covered or mention, even in the news pipeline about Zombieload or MDS?
Irish_adam - Sunday, May 26, 2019 - link
Might as well ask how much Intel paid them to put up an article and a pipeline story on a Sunday; I don't think I have ever seen anything published here on a Sunday. Not to mention the fact that there is no way Ian didn't notice the glaring problems with the benchmarks, and so my guess is that this is paid for content by Intel the day before AMD has a reveal event.
darkswordsman17 - Sunday, May 26, 2019 - link
While I tend to give them plenty of leeway and have defended them over some of this behavior (there are certainly some very petulant commenters on some articles), their silence on this is starting to be a problem for anyone that wants to give AnandTech credibility. Since they allegedly were waiting for official comment from Intel before discussing it, they should be telling Intel that until they address this major security issue (which is far bigger than stuff like this article is about or the 9900KS), they won't publish any Intel stories. It's obvious Intel wants to stonewall things and ignore glaring screwups, but that only works if the media helps them. At some point they need to stop waiting for Intel (who is just going to blow smoke up their backsides anyway, as we've seen with the Spectre/Meltdown situations) and actually do their own study into this, or else they're just going to look like they're being paid off by Intel to cover up what is absolutely an outright fiasco.
This coupled with their atrocious coverage of that ridiculous "security research" team trying to trash AMD makes Anandtech look really bad. Considering how they try to appear so concerned about benchmark cheating (on Android at least...) and other security situations, them letting Intel slide on an issue that both enables it to cheat on benchmarks and creates a huge security problem on top of it is becoming absurd.
I'm starting to wonder if Intel's recent hires of other tech media people aren't playing a role as well.
Ryan Smith - Sunday, May 26, 2019 - link
"This coupled with their atrocious coverage of that ridiculous "security research" team trying to trash AMD makes Anandtech look really bad."
Do you mean CTS-Labs? Where Ian basically poked several large holes in their story when he interviewed them?
https://www.anandtech.com/show/12536/our-interesti...
HStewart - Sunday, May 26, 2019 - link
Because it's not real news - who cares about this BS. Show me a real example of it.
HStewart - Sunday, May 26, 2019 - link
Keep in mind this is Ice Lake - it has most if not all mitigations in hardware. End of story.
Korguz - Sunday, May 26, 2019 - link
HStewart, keep in mind... these are Intel-published benchmarks, and as such they should be taken as cherry-picked, taken with loads of salt, and considered just a PR move by Intel.
Phynaz - Sunday, May 26, 2019 - link
And another one, priceless.
Ryan Smith - Sunday, May 26, 2019 - link
"Might as well ask how much Intel paid them to put up an article and a pipeline story on a Sunday"
This is part of our regular Computex coverage. Taiwan is 12 hours ahead of the US east coast, and vendors are already rolling out announcements on Sunday because there's so much going on the rest of the week.
"so my guess is that this is paid for content by Intel the day before AMD has a reveal event"
If it were paid content (and I wouldn't approve of something like this), then it would be made very clear in the article that it was a sponsored news post.
HStewart - Sunday, May 26, 2019 - link
Keep in mind, not only the 12-hour time difference but also that tomorrow is not considered a holiday in Taiwan. So it is business as usual.
Phynaz - Sunday, May 26, 2019 - link
Oh look, the AMD fanboy is all butthurt.
Ryan Smith - Sunday, May 26, 2019 - link
"May I ask Why AnandTech hasn't covered or mention, even in the news pipeline about Zombieload or MDS?"
Backlogged on testing. I'm in the middle of something, but I ran out of time before Computex.
I don't want to put up an article without data; there's too many misconceptions and wishcasting on the subject, which is leading to everyone losing their minds.
ksec - Monday, May 27, 2019 - link
Thanks for the reply. I just thought that since Intel released an actual statement and CVE, it would be great if AnandTech mentioned it in a short post of a few sentences, with further details to come later. Just one or two sentences will do.
I mostly read AnandTech as my only tech website, so when I saw there was discussion on Twitter a few days after Zombieload, I was surprised I hadn't read about it earlier.
AshlayW - Sunday, May 26, 2019 - link
Don't believe a word this scumbag company puts out when comparing products, especially to AMD. Do I need to remind everyone of the "9900K 50% faster than 2700X" 'study' they commissioned?
Intel would have done everything in its power, skirting the boundary of deceit, to make the Intel CPU have an advantage.
Klimax - Monday, May 27, 2019 - link
As if there is any difference between AMD and Intel. (AMD is forced to sort of behave, for now)
CBeddoe - Sunday, May 26, 2019 - link
So... They are benchmarking the Intel system with high-frequency, high-capacity memory
And the AMD system with half the capacity and lower-frequency RAM.
They must have given the marketing department lots of leeway on this one.
Does Intel make allowances for their sieve-like security and the performance losses from patches?
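For what it's worth, the bandwidth gap being argued about here is easy to put numbers on. A quick sketch of theoretical peak DRAM bandwidth, assuming the configurations discussed in this thread (LPDDR4X-3733 across four 32-bit channels for Ice Lake, dual-channel 64-bit DDR4-2400 for the Ryzen 3700U machine - both assumptions, not confirmed specs of the test systems):

```python
# Theoretical peak bandwidth = transfers/s * bus width in bytes * channel count.
# These are paper numbers; real sustained bandwidth is lower on both platforms.
def peak_bandwidth_gbps(mt_per_s, bus_width_bits, channels):
    """Peak bandwidth in GB/s (decimal) for a given DRAM config."""
    return mt_per_s * (bus_width_bits / 8) * channels / 1000

# Assumed Intel Ice Lake config: LPDDR4X-3733, 4 x 32-bit channels
intel = peak_bandwidth_gbps(3733, 32, 4)  # ~59.7 GB/s
# Assumed AMD 3700U config: DDR4-2400, 2 x 64-bit channels
amd = peak_bandwidth_gbps(2400, 64, 2)    # ~38.4 GB/s

print(f"Intel: {intel:.1f} GB/s, AMD: {amd:.1f} GB/s, "
      f"advantage: {(intel / amd - 1) * 100:.0f}%")
```

Run it and you get roughly a 56% advantage for the Intel config, which lines up with the "56% more bandwidth" figure quoted earlier in the thread; with the AMD box in single-channel mode the gap would roughly double.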
PeachNCream - Sunday, May 26, 2019 - link
Benchmarks from the manufacturer of a product are never biased. Never. Not at all. I'll believe the numbers when I see a credible independent third party like notebookcheck post benchmarks.
With that said, that applies to the Ryzen vs Intel information. I doubt they would be as untrustworthy when comparing their own GPUs to one another. Still, grains of salt are being taken until someone gets their hands on retail hardware.
Krayzieka - Sunday, May 26, 2019 - link
Now Intel is resorting to marketing. I suggest people go support AMD all the way
abufrejoval - Monday, May 27, 2019 - link
Looking at the numbers and the discussion, there seems to be some consensus that it puts the new Ice Lake standard iGPU on a similar performance level as the GT3 variants from previous generations.
I have always been fascinated by these chips, because they are oddly priced.
They are extremely hard to get outside a Mac, where their end-user price is obviously insane.
The only other form in which you can get them easily is a NUC, where they conform to a classic Intel rule: don't charge for the iGPU, no matter what size!
So even if common sense would dictate that the extra 64/128MB of eDRAM as well as the double-sized GT3 (or quad GT4) iGPU should cost extra money, end-user pricing on NUCs doesn't reflect that: those are solely priced on Pentium/i3/i5/i7 or CPU power "merits", even if the GPU in these configurations takes up much more die space than the CPU.
But Intel doesn't seem to sell them to anyone but Apple.
There is exactly one other instance where I have seen an Iris Pro/Plus outside a Mac or a NUC, and that was a Medion notebook sold via Aldi in Germany, based on the i5-5257U and sold at €600 - quite an ordinary price for an ordinary (HD520) Skylake i5 at the time, and an obvious bargain with double the GPU power for free. So I grabbed one, especially because the dGPUs at the time were all still 28nm and very clunky.
Alas, while double GPU power turned out to be true and the machine is fine and remains in good shape with great Linux compatibility, it doesn't turn the notebook into a viable gaming device, nor very likely into a viable AI inferencing monster.
At least not when you have desktops with Nvidia dGPUs running next door or somewhere in a cloud close by.
So to all this hot-headed discussion that's been going on in this thread I say: it doesn't really matter if Intel is cheating here or has made radical improvements, because every machine with either generation (or iGPU configuration) essentially remains a 2D device. It still takes GDDR RAM and at least 50 watts of pure GPU power to make most of my games playable at the full resolution of the screen. So an Ultrabook simply isn't going to cut it if it's PC gaming you're after (Android games work, but are rarely attractive).
But what also works just as well in both configurations is Steam streaming. A GTX 1060 is good enough for 1920x1080 on the server side and will give you the performance no APU or beefed up iGPU will give you for a long time to come on a 15 or even 10Watt ultrabook without even running hot or short.
So that's what I do. I show people my ultrabook and impress them with the most demanding games running at full tilt, seemingly without even breaking a sweat, on battery power.
Some actually figure out that I must be cheating, but most people actually believe in both magic and advertisement.
abufrejoval - Monday, May 27, 2019 - link
Sorry, got the wrong CPU: it's an i5-6267U Skylake GT3, not the Broadwell i5-5257U.
just4U - Monday, May 27, 2019 - link
Overall this is good news. Intel needs to boost its integrated graphics performance (in my opinion)