What are you talking about? Look at the iGPU tests in this review. The Xe iGPU beats Vega 8 in 2 out of 8 tests with both having the 4267MHz RAM. While the Xe iGPU is much better than before, it still cannot beat AMD according to these benchmarks.
Both iGPUs suck. A 1660 Ti or better is mandatory for any decent gaming (raising laptop prices a LOT). I also see this as a fail from AMD - they have the technology but decided not to use it.
"1660ti or better is mandatory for any decent gaming"
The iGPUs are a blessing to many people, especially those on a slender budget, like myself. One can actually play games on these and have a lot of fun.
Indeed, I didn't base my statement on the results in this test. I thought I remembered seeing 15W Tiger Lake vs 15W Renoir giving an edge to Tiger Lake.
The only 15W CPU in this iGPU test is that of the 4800U. The 1185G7 is 28-35W and the other Ryzens are 35W. The 4900HS has a RAM speed deficiency compared to the 1185G7 and wins 4 out of 7 tests.
Because they are mostly CPU bound, it's primarily a CPU test! Look at the resolution. AnandTech lowered the resolution to 360p-480p low details in some of them to give Vega a chance to beat Xe. Furthermore, Vega needs to clock at 2100 MHz, a 55% clock speed advantage. No AV1 decoding either, which is another big flaw for a mobile device.
@mikk - Pretty sure those results still aren't CPU bound, otherwise they'd be in the 100+fps range - and you'd expect Intel to win under those circumstances, because of their high single-thread performance. If Intel's iGPU only "wins" when they're both running sub-30 fps then it's a fairly meaningless win.
"Vega needs to clock at 2100 Mhz" - irrelevant. High clocks is only a disadvantage if it leads to worse power consumption, and that's not the case here. If Intel can increase their clocks within the same power envelope then they should.
Lack of AV1 decoding is a downside. It's not clear yet whether it will be a major one, but it shows the age of AMD's solution.
LOL @ ZoZo - he is messin' with you, t.s. You are correct in that Dr Su and AMD have played yet another "Rope-A-Dope" on the competition. I suspect RDNA2/Navi II will raise its pretty head after the "Lexa" cores run their course. It has been a productive run.
There are Radeon Pro CDNA1 cores floating around that will likely evolve into the RX 6500 RDNA2/Navi II discrete replacements for Lexa. These will be the Display Core Next 3.0 / Video Core Next 3.0 arch associated with Big Navi.
And ... I don't think AMD is being lazy. I think the Zen2/Zen3 APU product stack is being developed but has yet to be revealed. Home / Office / Creator? There is a Radeon Pro Mac Navi Mobile with RDNA1 discrete video w/HBM2.
We will see how the 6xxx APUs evolve. Grab your popcorn!
A console APU is not a PC APU - they have completely different design constraints and memory architectures. Vega was used here because it allowed AMD to bring Zen 3 APUs to market faster than they managed with Zen 2 - it's all mentioned in the review that you're commenting on......
The consoles don't use iGPUs. Most likely, the RDNA2 design so far hasn't been tuned for low power usage; it's focused more on high performance. Once they do the work to create a low power version, it can appear in iGPUs, laptop dGPUs, low end desktop dGPUs etc.
A discussion of a company's technological competitiveness is not a discussion of their financial health. Any dolt knows this, why do you pretend we can't see you moving the goalposts in *every single comment section*?
Really no reason for them to move away from Vega for these chips. Do you also complain that Intel has not changed their IGP for years?
The efficiency of Vega is quite good when not OC'ed way past where it should be like in the desktop cards. And it still offers adequate performance for the majority of people looking at a laptop. For anything more you want a discrete card anyway.
Just on the Intel point, it's worth noting that they've had to develop GPU IP specifically for their CPUs. The paradigm has changed with the advent of Xe scalable, but even then the first product released with Xe was a CPU. Obviously AMD is disadvantaged with RTG not being as tightly knit as Intel's GPU group.
Not a question of capacity - it's the fact that TSMC's fragile, long supply chain is broken and limited resources have to be allocated - and AMD is contractually obligated to reach delivery targets for the console SoCs. They have to use the limited resources to provide the ultra high volume, ultra low margin SoCs over their high margin PC GPUs and CPUs.
In this case, it's not AMD's fault - it's an issue with TSMC
It's one thing to develop great processors; it's entirely another thing to effectively ship them. I would have liked to purchase a Zen 3 processor for my new PC, but I had to make do with a 3700X. I would have been interested in an RTX 3070 or an AMD latest gen graphics card, but again, they only seem to exist in the hands of testers, YouTubers and Twitch streamers.
Let's see if AMD can really ship a decent number of Zen 3 mobile CPUs.
lmao, the number of times I see that "just go to a MicroCenter" line. lol, not everyone on this site lives in the USA. Secondly, even if you are in America, not everyone has access to a MicroCenter. So no, all his problems are not solved, next...
My comment was tongue in cheek. Every time I say there is a shortage of AMD product available, people make posts about how there must not be a problem since they just bought a chip from MicroCenter.
See the most recent comments for the article about the Intel chip for an example of my copious frustration with these MicroCenter commenters.
On page 7 I start a comment with: "Ryzen 5 5600X at $299 is a lie right now and has been for months. It's slowly coming down to $399 with general availability. It will be months before it's actually available at $299."
Some of us also pointed to availability in other countries that aren't the USA. If you're sick of the MicroCenter comments, why drag it back out again over here? 🙄
@bji - seriously? 3 comments back: "See the most recent comments for the article about the Intel chip for an example of my copious frustration with these MicroCenter commenters."
What fourth-world country do you live in, mate? Even in India there are MicroCenter alternatives on which the Zen 3 desktop CPUs are available at hardly any premium.
As you can see, everyone and their brother just writes at a tangent to the actual issue. Bring up supply issues in the USA, people start posting about buying the chip in India or Europe. Bring up price markups in the USA, people start mentioning prices in other countries, whose prices are only vaguely related to USA prices. Bring up the fact that AnandTech is supplying misleading information when it puts prices in its articles, people ask about chips in India vs US MSRP and UK/EU prices. The actual issues are constantly ignored by everyone wanting to post some tangentially related information local to them. Whatever.
@bji - The end of your original post: "The simple fact is that no Ryzen 3(sic) processors have had general availability at anywhere near MSRP for months."
So yeah, it's pretty relevant for people who don't live in the USA to respond by pointing out that there is availability at or around MSRP where they live. It's equally relevant to point out you're wrong when you respond by saying "that's not MSRP" as if everyone pays the same equivalent dollar price at retail as you do in the USA.
"The actual issues are constantly ignored..." This is a neat way of saying "let me complain regardless of the facts outside of the USA". These responses aren't "tangential" for the people posting them. If you want to only talk about the USA, make that clear from the start.
I checked six of their stores and not one of them has had a Zen 3 for the week I've been checking. My local store had one chip, the overpriced 5800X, in the last several weeks.
I've got a 5950X and an RTX 3090 and I'm neither a tester nor do I have a YouTube channel. Maybe the supply situation is better here in old Europe than in the Colonies?
Calling it "the Colonies" is pretty stupid but matches the mindset of the rest of your comment. You were just lucky to buy at the right time and now are smug about it.
(before anyone gets themselves in a tizzy here, I said that only in the hopes that Spunji would take offense, and then I could say, "I was joking, but you certainly gave me the reaction I was after". But I kinda wish I hadn't written that now because it's a pretty harsh way to try to make my point, which is, "the colonies" is a belittling way to speak about the USA and joke or not, it is not appreciated)
I didn't have a single issue getting the 3090, 6900XT, 5900X or 5950X - paid MSRP and got them on the day after launch (1 day shipping)... and the Colonies is kinda silly - they ceased being Colonies after we gave King George the universal sign of peace, love and respect - May have heard of it - something about a Revolution - you got Canada as a consolation prize...
We apologise for offending the feelings of the Republic. There appears to be a striking loss of information when jokes cross the Atlantic, even those made in good humour. We promise in the future to use more precise, up-to-date terminology, and not make ourselves look like Mr. Rip Van Winkle waking up after 20 years of sleep.
I wonder what the point is of new chips with old Zen 2. A 15% die size difference is meaningful, but is that sufficient reason to bother (re)designing? As for the 5980HS, the CPU part is pretty great when allowed to run at 35W. "Silent mode" is sometimes great but somewhat weird too - e.g. CB20 MT shows a huge delta between the modes. Now let's just hope AMD/TSMC will manage to actually produce enough of these chips.
I'd be interested in whether any of these differences in Lucienne are physical design alterations, as opposed to VRM / BIOS alterations, along with maybe some enabling of silicon that wasn't functional in Renoir for some reason.
Either way, Lucienne's probably slightly more than 15% cheaper to make - not sure whether that would make up for the costs of extra masks and design work, though.
Ian mentions in the article that he thinks AMD was stockpiling Renoir chips all of last year in order to make a big push with the 5000 series. Is it possible that the stockpiled chips are these Zen2 "Lucienne" variety and once they sell out of them, that's all there will be? I wonder if AMD is having TSMC manufacture new Lucienne chips. I mean, why would you make something that's inferior, if it's on the same exact node as a better product (Cezanne)?
Saving money. The Zen 2 chips offer the power savings that Zen 3 got, with an already established design. That lets AMD sell them cheaper, and, let's face it, 95% of the end users out there would likely be blown away by a 5700U.
On top of that, I would wager the Zen 2 chips are a 100% straight drop-in upgrade for any existing 4000 series mobile designs, possibly even with little to no BIOS update needed (beyond adding the CPU ID). That lets OEMs show off a Ryzen 5000 laptop with zero extra investment needed.
Read the review - there are lots of changes besides the cores that supposedly are also in the non-Zen 3 5000 chips - given they also get the faster Vega, this seems true. I do agree it is weird...
From a personal point of view, I don't like this mixing of Zen 2 and 3, not at all, and certainly won't be glad of their continuing this practice; but it does make good sense. In a way, elegant.
In this case, it helps to look at the cores as hidden, abstracted, a black box. Now, if such and such model fits its notch on the performance scale (5800U > 5700U > 5600U), then it shouldn't make much difference whether it's Zen 2 or 3 behind the doors. Sort of like an implementation detail.
It's great to see AMD kicking Intel's butt in a much larger market (i.e., laptops vastly outsell desktops): AMD really should be alongside, or simply replacing, Intel in most premium notebooks. Gaming notebooks are not my cup of tea, but I'm glad to see what's coming for the upcoming 15W Zen 3 parts.
Will we see actual, high-end Zen3 notebooks? Lenovo, HP, ASUS, Dell: for shame if you keep ramming toasty Tiger Lake down customers' throats. Lenovo's done some great offerings with both AMD & Intel; that means some compromises with notebook design (just go all AMD, man; if/when Intel is on top, switch back!), but beefier cooling for Intel will also help AMD.
Still, overall, I don't see anything convincing me that x86 is really right for notebooks, either. So much waste heat...for what? The M1 has rightly rejiggered expectations: 20 hours on 150 nits should be ordinary, not miraculous. Limited to no fan spin-up and max CPU load should yield a chassis maximum of 40C (slightly warmer than body temperature). And, all the while with class-leading 1T performance.
As this is a gaming laptop, it's not too relevant to compare web benchmarks (what most laptops do), but this is peak Zen3 mobile and it still falls quite short:
You can double / triple x86 wattage and still be miles behind M1. I almost feel silly buying an x86 laptop again: just kilowatts of waste heat over time. Why? Electrons that never get used, just exhausted and thrown out as soon as possible because it'll throttle even worse otherwise.
Because here you are benchmarking the JavaScript engine in the browser, and, not content with that, you are comparing single-threaded performance. You're comparing 1/16 of the 5980HS vs 1/4 of the M1. A 128-core EPYC or a 64-core Threadripper would probably look even worse in this single-threaded benchmark (because those leverage many threads and are less efficient in single-threaded apps). If you like wrong calculations: one core of the 15W version uses less than 1W, for what result, ~100 points? So who is wasting electrons here? (BTW, one core doesn't use 1/16 of the package power because of boosts, but that's still less wrong than your comparison.)
128-core EPYC? Where? His comparison is indeed misleading in terms of energy efficiency, but it's sad that no x86 is able to come even close to that single-threaded performance.
The energy efficiency comparisons are pretty clear: the best x86 (Zen 3) has stunningly lower IPC than the M1, which barely cracks 3 GHz. The only way to make up for such a gulf in IPC is faster clocks. Faster clocks require the 100+W TDPs so common in high-performance desktop CPUs. It's why Zen 3 mobile clocks so much lower than Zen 3 desktop (3-4 GHz instead of 4-5 GHz).
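To make the clocks-for-IPC trade concrete, a rough sketch - the 1.4x IPC gap and 3.2 GHz clock below are assumed round numbers for illustration, not measurements from this thread:

```python
# Illustrative only: single-thread performance ~ IPC x frequency.
m1_ipc_advantage = 1.4   # assumed ratio, for the sake of the example
m1_clock_ghz = 3.2       # "barely cracks 3 GHz"

# To match the same 1T performance, the lower-IPC core must clock
# proportionally higher:
required_x86_clock = m1_clock_ghz * m1_ipc_advantage
print(f"x86 core needs ~{required_x86_clock:.1f} GHz to match")  # ~4.5 GHz
```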
A CPU that needs 3x power to do the same work (and do it slower in most cases) must exhaust an enormous amount of heat, when considering nT or 1T benchmarks (Zen3 requires ~20W for 5 GHz boost on a *single* core). Look at those boost power consumption measurements.
Specifically in desktops (noted in my comparison about tripling TDP...), the CPU *alone* eats up an extra 60 to 90 watts during peak usage. Call it +20W average continuously, so we can do the math.
20W x 8 hours x 7 days a week = +1.1 kWh excess exhaust heat per week. x86 had two corporate giants to do better. It's been severely litigated, but that's Intel's comeuppance. If Intel can't put out high-perf, high-efficiency x86 architectures, then people will start to feel less attached to x86 as an ISA. x86 had billions and billions and billions of R&D.
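For what it's worth, that arithmetic checks out. A quick sketch, assuming the 8 h x 7 day usage pattern above and an illustrative ~$0.13/kWh electricity rate (my assumption, not a figure from the thread):

```python
# Verifying the "+1.1 kWh per week" figure and extending it to a year.
excess_watts = 20                 # extra average CPU draw (from above)
hours_per_week = 8 * 7            # 8 h/day, 7 days/week (from above)
cost_per_kwh = 0.13               # assumed illustrative rate, USD

kwh_per_week = excess_watts * hours_per_week / 1000
kwh_per_year = kwh_per_week * 52

print(f"{kwh_per_week:.2f} kWh/week")                  # ~1.12 kWh
print(f"{kwh_per_year:.0f} kWh/year")                  # ~58 kWh
print(f"${kwh_per_year * cost_per_kwh:.2f} per year")  # ~$7.57
```

Which, incidentally, is in the same ballpark as the "$15 a year" figure another commenter cites below.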
I see no reason for consumers to religiously follow x86 Wintel or Wintel-clones in laptops especially, but desktops, too: where is the efficiency going to be coming from? Even if Apple *had flat 1T* for the next three years, I'd still feel more optimistic about M1-based CPUs in the long-term than x86.
"I see no reason for consumers to religiously follow x86 Wintel or Wintel-clones in laptops especially, but desktops, too: where is the efficiency going to be coming from?"
Software, and getting work done. M1 is great and all, but I just need to convince the boss that Apple or a 3rd party has software available for our company... Nope, oh well. Other negatives: for personal use, people aren't going to spend thousands of dollars to get new software on a new platform. They can't play games (or should I say they can't play the majority), which is probably the largest market. They can't change anything about their software. They can't customize anything. They can't upgrade any piece of their hardware. They don't have options for the same accessories.
So I'll go ahead and spend the extra $15 a year on energy to keep Windows.
"A CPU that needs 3x power to do the same work" It doesn't. It's been demonstrated a few times now that if you scale back Zen 3 cores to similar performance levels to M1, M1's perf/watt advantage drops to about 30%. It's still better than the node advantage alone, but it's not crippling, and M1 is simply not capable of scaling up to the clock speeds required to match x86 on desktop / HPC workloads.
They're different core designs matched to different purposes (ultra-mobile first vs. server first) and show different strengths as a result.
M1 is a significant achievement - no doubt about it - but you're *massively* overstating the case in its favour.
"M1 is simply not capable of scaling up to the clock speeds required to match x86 on desktop / HPC workloads" ...Yet. In a couple of years x86 will be behind ARM across the board.
Fastest HPC in the world is ARM *right now*. Only the fifth fastest is x86.
Sigh, this is just a plain confused commenter: are you confused that 1T benchmarks exist? Why do people get so worried / panicked when the M1 comparisons start? What on Earth does EPYC have to do with a fanless laptop SoC?
Yes, the M1 has faster 1T so it naturally has faster JavaScript performance.
Just buy your M1 Mac and leave us peasants alone. Let us worry about single-threaded performance. Arguing with you Apple fans is really getting old and tiring. Live and let live.
Do you realize that any Apple personal computer is a PC (Personal Computer)? So what you wrote above is pure nonsense. I know that you mean Windows OS and macOS. Just be precise if you want to say something properly.
Sureeeeee, let's bring up the only benchmark that Safari's built-in JavaScript engine can cheat at, and call it a "magnitude" of improvement!
I swear to god, Apple sycophants pretty much focus on the two use-cases in benchmarking software that are extremely friendly for wide-issue CPUs like theirs (JS and Geekbench) and then dishonestly assume that performance elsewhere is bad/broken for x86.
Oh, geez. M1 is notably faster in Kraken, Jetstream, etc. Any web benchmark you look at, yes: M1 has a sizable lead.
Y'all get so emotional when the M1 is brought up and it makes zero sense: it is *no surprise* that one of the fastest 1T CPUs in the world also does well in JavaScript.
Pro-tip: 1T performance and JavaScript performance are quite closely correlated. :)
It's all about the money, and I'm pretty sure Intel is handing out more marketing funds and rebates than AMD. Most people don't care if they have Intel or AMD in their laptop.
Providing designs to OEMs and supplying most of the parts for a laptop - making it super easy for them to come to market with an Intel design... That's called smart business.
That's not the same thing as marketing funds and rebates, which Intel also do - they even do it at the reseller level.
So there's "smart business", then there's "buying marketshare", and then there's "outright bribery". Intel got fined for doing the last one, so now they mostly only do the first two - although it's a toss-up as to whether you think their contra-revenue scheme counted as option 2 or 3.
It's easy to compare them (M1 vs x86) on some metrics, but I think it is more nuanced than that. Do note that the M1 is at 5nm, with a die size of around 120.5mm². The AMD parts are 180mm² at 7nm. The M1 has 16 billion transistors versus 10.7 billion for the Zen 3 APUs. That is 49.5% more transistors in favor of the M1.
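Running the quoted numbers through a quick sketch (the densities are derived from the figures above, not official vendor specs):

```python
m1_transistors, m1_area = 16.0e9, 120.5     # 5 nm, mm^2 (from above)
cez_transistors, cez_area = 10.7e9, 180.0   # 7 nm, mm^2 (from above)

print(f"{m1_transistors / cez_transistors - 1:.1%} more transistors")  # 49.5%
print(f"M1:      {m1_transistors / m1_area / 1e6:.0f} MTr/mm^2")       # ~133
print(f"Cezanne: {cez_transistors / cez_area / 1e6:.0f} MTr/mm^2")     # ~59
# Much of that density gap is the 5 nm vs 7 nm node difference.
```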
I think a huge part of the reason the M1 performs so well in many benchmarks is that it can target specific workloads and offload them to specific hardware cores for accelerated performance at lower power consumption. It becomes easy for Apple to achieve this, I think, because it is all transparent to application developers, as Apple controls the entire hardware AND software stack - much like consoles performing at high-end GPU levels despite having less powerful GPU cores.
This is not a cost-effective approach, although not impossible for AMD and Intel. It is also part of the reason why I think that if the M1 were put into cloud servers, it would not be cost-effective. There would be so many dedicated hardware-accelerated cores that would not be put to use when the M1 is deployed in the cloud.
That said, Apple M1 is a great feat. Hopefully, AMD can also achieve a similar feat (high efficiency accelerated processing) using their Infinity Fabric and glue, allowing them to continue focusing on their Zen cores while also uplifting ancillary workload performance. The big impediment here, would be the OS support, unless it becomes a standard.
An interesting thought and one I'd like to see reviewers looking into. Also, if it were possible to get Windows ARM running on the M1, that would be an insightful experiment, removing Apple's software layers out of the picture.
Intel is in premium laptops because they make it easy for the OEMs to make good designs - not only the "blueprints" but also high efficiency parts other than just the CPU. So an OEM has little to no R&D expense, and can roll out a great laptop.
AMD should do the same - it's good business and would negate the reticence of the OEMs to invest in a smaller segment - not like this would have AMD selling more than Intel - but would improve their market presence in laptops significantly.
"AMD should do the same" I suspect they will once they have the funds to do so. You can't just bully your way into a market by copying the precise strategies of a company that's several times larger than you.
Ok, so the Ryzen 7 5800U is a 16-thread CPU that turbos to 4400 MHz and only uses 15 watts. Oh, and btw, it also has a 2000 MHz GPU for no extra power cost?
There are a few mistakes in your assertion - the 15W number is only guaranteed at the base clock of 1900 MHz, not the 4400 MHz turbo - the CPU & GPU clocks mentioned in the specifications are their respective maximum clocks, not their typical clocks in a mixed workload. So the 2 GHz GPU clock won't happen together with a 4.4 GHz CPU clock, and certainly not in the 15W power envelope.
What Intel uses for TDP is even worse. AMD: we have a 65W TDP chip but max full package draw is 88W. Intel: we have a 125W TDP chip, but we can allow it to go to 250W for 56 seconds in absolutely stock operation. However, motherboard manufacturers can allow for unlimited turbo settings, and that is the SOP for those motherboards. Therefore you actually have a 250W TDP chip, but we will tell you it is 125W.
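A deliberately simplified sketch of the stock behaviour being described - real firmware tracks an exponentially weighted moving average of package power rather than a fixed timer, so this is only the rough shape, not vendor logic:

```python
def allowed_package_power(t_seconds, pl1=125, pl2=250, tau=56):
    """Stock behaviour: boost at PL2 for roughly tau seconds, then fall to PL1."""
    return pl2 if t_seconds < tau else pl1

for t in (0, 30, 56, 120):
    print(f"t={t:>3}s: {allowed_package_power(t)} W")
# On boards shipping "unlimited turbo" defaults, tau is effectively
# infinite, so the chip holds PL2 (250 W) indefinitely - the commenter's point.
```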
"Oh, and btw, it also has a 2000 Mhz GPU for no extra power cost"
What do you mean "no extra power cost"? They covered how it gets the faster GPU clocks in the review: partly improved efficiency in the GPU, but mostly improved efficiency on the CPU side allowing more of the TDP to be used by the GPU.
Because it's slower than a desktop 3070 and the performance won't be great at 4K. Because the vast majority of the laptops have 1440p displays (or lower). Etc.
Even if they are ahead in CPU performance, I doubt that they can beat Intel with their Xe graphics. AMD needed a better GPU to stay competitive in that regard.
Check the benchmarks; it is already beaten. On top of that, Intel suffers texture issues in some games, or just fails to launch others. And one more thing: it seems to suffer far more from using DDR4 than Vega does, again dropping its performance. The only possible benefit is that some games run significantly better on AMD, but some on Intel. So if your favourite game prefers Intel, then Intel can be the better iGPU specifically for you.
Is there any mention of the number and speed of PCIe lanes on Cezanne?
I've been seeing reports of it only having x8 PCIe 3.0 lanes, which could present a problem to AMD's apparent goal of pairing Cezanne with discrete GPUs.
Also, I've read the explanation for the super weird resolutions chosen for the IGP tests, but it still comes off as rather irrelevant. The author first claims the IGP is good for eSports, but then there are no eSports games being tested. eSports is also apparently the reason the author is trying to pull >60FPS out of these SoCs, but I don't see a single title that anyone would want to play at those framerates.
The memory bandwidth limitation is also presented as a fact to be aware of, but then the author chooses very low render resolutions that are less likely to be impacted by memory bandwidth.
The Vega 8 at 2100MHz has a fillrate between the Xbox One and the PS4, a compute performance well above the PS4, and with LPDDR4X it has memory bandwidth similar to that of the Xbox One (without eDRAM). IMO it would be a much more interesting procedure to test 8th-gen multiplatform games at resolutions and settings similar to the 8th-gen consoles than trying to push Borderlands 3 to run at 90FPS at a 360p resolution that no one is ever going to enable. The only useful result I see in there is FFXV at 720p medium.
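The rough numbers behind that comparison, as a hedged sketch - the ROP counts and console clocks are my assumptions from public spec sheets, not figures from the article:

```python
# name: (ROPs, clock GHz, FP32 TFLOPS, memory GB/s) - assumed spec-sheet values
gpus = {
    "Vega 8 @ 2.1 GHz": (8, 2.100, 8 * 64 * 2 * 2.100 / 1000, 68.3),  # LPDDR4X-4267, 128-bit
    "Xbox One":         (16, 0.853, 1.31, 68.3),                      # DDR3, ignoring eSRAM
    "PS4":              (32, 0.800, 1.84, 176.0),                     # GDDR5
}
for name, (rops, clk, tflops, bw) in gpus.items():
    print(f"{name}: {rops * clk:.1f} Gpix/s fill, {tflops:.2f} TFLOPS, {bw} GB/s")
# Vega 8's ~16.8 Gpix/s fill sits between the Xbox One (~13.6) and the
# PS4 (~25.6), and its ~2.15 TFLOPS is above the PS4's 1.84 - matching
# the comparison above.
```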
I'm quite sure there'd be plenty of Zen 3 designs that could run fanless for 10h+ with performance enough for 90% of users on the web/streaming/office etc. And that's before AMD gets access to TSMC 5nm.
For the typical user invested in Windows/x86 software, there's still no compelling reason to switch to Apple silicon. Plus, at the prices MacBooks go for, you can get features unheard of in the Apple world, such as 4K OLED, a touchscreen with stylus, 360° designs, upgradable memory (affordable 32GB RAM and 4TB storage for less than Apple would charge for 8/1), and a discrete GPU with a vast games catalogue...
Not to take away from M1 superiority, but x86 is still simply good enough and would only get better.
A typical user invested in x86 isn't going to change to Apple, no, but they're not the typical user. THE typical user is a lot more software-agnostic, and yes, ARM Apple is going to take marketshare. It's inevitable.
Apple market share explode? That is hilarious. The average cost of a laptop sold last year was $400. Remind me what the cheapest Mac costs? Most purchasers have no idea about relative performance, which is why they are still buying laptops with 8th generation Intel inside. Even battery life has little impact when x86 laptops claim to have up to 15 hours.
Where the Mac will take some share back is with professional designers, where Apple has lost share over the past 5 years. But even then, the lack of multi-monitor support may hamper that.
IIRC, all their current mobile CPUs that support external graphics have 8 PCIe 3.0 lanes. That's more than enough for any dedicated GPU in a laptop right now.
The FP5 package used by AMD mobile processors only allows a PCIe 3.0 x8 connection to the dGPU, but you still have an extra x4/x4 connection for I/O and storage.
When moving to AM4, desktop Renoir still has the same PCIe lanes as Matisse, that is 16+4+4 lanes.
It's not a problem since TB3 eGPUs use PCIe 3.0 x4 anyway.
I was wondering about that as well, it also seems a bit confusing with Tiger Lake (is it 4 or 8 lanes of PCIe 4.0?). The advantage with Tiger Lake is that it has TB4 integrated in the SoC, unfortunately I haven't seen any AMD based laptop with TB so far so I went with Intel to keep my docking station(s). Maybe this time around there will be a model foregoing the dGPU for a TB controller.
"I've been seeing reports of it only having x8 PCIe 3.0 lanes, which could present a problem to AMD's apparent goal of pairing Cezanne with discrete GPUs."
Nope. See all the announced devices with Cezanne and RTX 3070 or 3080 GPUs.
It was never a problem with Renoir, either. People just came up with post-hoc rationalizations for why Intel still dominated gaming laptops despite having an inferior CPU.
The mobile Zen3 CPUs are a great generational update. Glad to see a healthy increase in new design wins and one can only hope that AMD will be able to deliver all these CPUs to the OEMs in sufficient quantities so these notebooks will be available to the consumer.
That being said, the true challenge is Apple Silicon. While AMD can beat the M1 CPU in multi core tasks, Apple will outclass everything x86 once they introduce their second gen silicon with much higher core count and other architectural improvements.
So, I wonder what kind of strategy AMD (and Intel) will follow in the near future. I remember - maybe ~10+ years ago - when AMD had some sort of transient partnership with ARM, and everybody thought AMD would somehow implement ARM designs into some sort of hybrid chip. For some reason that never came to fruition. In order to stay relevant in the mobile (and desktop) CPU market, AMD will have to react to the huge attack from Apple silicon in one way or another. So, what does AMD have up their sleeves?
Intel is apparently going the big.little route in their next generation of mobile CPUs with little Atom-based cores and big performance cores. I am curious what AMD is up to.
Well, about AMD's relationship with ARM: they have an ARM license, and so does Intel for that matter. So if x86 starts going south, AMD will almost certainly abandon it, and Intel most likely will do so as well, especially with their new CEO.
ARM is not some magic silver bullet - MediaTek has vast experience with ARM, but are their Chromebook chips anywhere close to the Apple M1? (Or Zen 3, for that matter?)
And remember AMD is yet to get access to the same TSMC process as Apple - maybe once they're on par, a large part of that efficiency advantage disappears?
AMD has K12, which Jim Keller also worked on, waiting in the wings. Most assuredly they have continued developing it. Whether it will play in the same league with M1 remains to be seen, but they also have the graphics IP to go with it so they could likely come out with a strong offering if it comes to that. Not sure what Intel will do..
M1 is Apple's replacement for ultra-low power, nominal 15W Intel chips. Later this year we will see their replacement for higher powered (35-65W) Intel chips. Nobody knows what those chips will be like yet, but it's pretty obvious they'll have 8 or 16 performance cores instead of just 4, with a similar scale-up of the number of GPU cores. They'll add the ability to handle more than 16GB and two ports, and they will put it in their high end laptops and iMac desktops. Potentially also on the menu would be a faster peak clock rate. That's not an "I'll believe it when I see it," that's a foregone conclusion. Also a foregone conclusion: next year they will have an even faster core with even better IPC to put in their phones, tablets, and computers.
As of last year, Apple's chips had far better IPC and performance per watt than anything Intel or AMD could make, and they only fell short on overall performance due to only having 4 performance cores in their ultra-low power chips.
(For the record, I use Windows. But there's no denying that Apple is utterly dominating in the contest to see who can make the fastest CPUs)
Apple will release faster cores but so will AMD. And now that they've got an idea of what Apple's design is capable of, I'm pretty sure they could overtake it, if they wanted to.
As much as I hate to say it, the M1 could be analogous to Core and K8 in the Netburst era. The return to lower clock speeds, higher IPC, and wider execution. Having Skylake and Sunny C. as their measure, AMD produced so and so (and brilliant stuff too, Zen 3 is). Perhaps the M1 will recalibrate the perf/watt measure, like Conroe did, like the Athlon 64 did.
I've got a feeling, too, that ARM isn't playing the role in the M1 that people are thinking. It's possible the difference in perf/watt between Zen 3 and M1 is due not to x86 vs. ARM but rather the astonishing width of that core, as well as caches. How much juice ARM is adding, I doubt whether we can say, unless the other components were similar. My belief, it isn't adding much.
Very nice comment, and this little thread is a really fascinating read. I've not thought of the comparisons of the P4 -> Core2Duo Mhz regression, but I really think you're on to something here. The thing is, this isn't anything new with M1, Apple has been doing it since the A9 back in 2015, when it finally had IPC parity with the Core M chips. The M1 is just the evolution and scaling up to that of an equivalent TDP laptop chip that Intel has been producing.
So the question, then, is, if it's not the "ARM" architecture giving the huge advantages, why haven't we seen a radical shift in the x86 technology back to ultra wide cores, and caches? Or maybe we are, incrementally, with Ice/Tiger Lake, and Zen 2/3/4?
"Or maybe we are, incrementally, with Ice/Tiger Lake, and Zen 2/3/4?"
I think that sums it up. As to why their scaling is going at a slower rate, there are a few possible explanations. Likely, truth is somewhere in between.
Firstly, AMD and Intel have aimed for high-frequency designs, which is at loggerheads with widening of a core. Then, AMD has been targeting Haswell (and later) perf/watt with Zen. When one's measure is such, one won't go much beyond that (Zen 2 and 3 did, but there's still juice in the tank). Lastly, it could be owing to the main bottleneck in x86: the variable-length instructions, which make parallel decoding difficult. Adding more decoders helps but causes power to go up. So the front end could be limiting how much we can widen our resources down the line.
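A toy illustration of why variable-length decode serialises, as mentioned above - the instruction lengths below are invented for the example (real x86 instructions run 1-15 bytes):

```python
# With variable-length instructions, the start of instruction N+1 is
# unknown until instruction N has been length-decoded - a serial chain.
lengths = [3, 1, 7, 2, 4]   # made-up instruction lengths, in bytes
offset = 0
for n, length in enumerate(lengths):
    print(f"insn {n} starts at byte {offset} ({length} bytes)")
    offset += length
# A fixed-length ISA (e.g. 4-byte AArch64) can decode at offsets
# 0, 4, 8, ... in parallel, which makes very wide decoders cheaper.
```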
Having said that, I still think that AMD's ~15% IPC increase each year has been impressive. "The stuff of legend." Intel, back when it was leading, had us believe such gains were impossible. It's going to be some interesting years ahead, watching the directions Intel, Apple, and AMD take. I'm confident AMD will keep up the good work.
I'm aware of all of the above, but it still doesn't justify the original claims being made - and "Potentially also on the menu would be a faster peak clock rate" is nothing but speculation. Based on what we know about the design and the relatively poor clock scaling in respect of TDP shown between the A14 and M1, I'd say it's extremely unlikely that Apple will be able to push clocks up by more than a couple of hundred megahertz without a significant redesign.
What Apple will most likely have in that TDP range is something that's performance-competitive with Cezanne on the CPU side in native applications, significantly outclasses it on the GPU side, and maintains a perf/watt advantage commensurate with the node advantage that largely disappears when running translated code.
It's still far better than what Intel have, but it's not going to redefine the industry. If that order of advantage were enough to do so, then AMD wouldn't have existed after 2007.
"Apple will outclass everything x86 once they introduce their second gen silicon"
I've noticed the idea circulating is that the M1 is Apple's first-generation CPU. Sure, it may be the first one going into a computer, but as far as I'm aware, the M1 descends from the A14, which goes back to the A6 of 2012. How many iterations is that? Nine? Granted, some might be "ticks," but this certainly isn't a brand-new design. Zen 3, despite borrowing from Athlon, Bulldozer, and Core, is on its 3rd iteration, or 4th if one counts Zen+.
Not only have iGPUs cannibalized the sub-$100 discrete GPU market, but they have also chewed into the cool-and-quiet GPU market. If you have an HTPC or compact mITX system, your options aren't that great. I'd really like an RTX 3060 laptop variant on a low-profile PCIe card, but I won't hold my breath.
Also, I love the return of the 16:10 screen format. I'd kill for a 27" desktop version of the X13's screen with the same resolution and color coverage.
New features tend to come slower to iGPU parts than to discrete GPU parts. As example, it used to be very difficult to build a 4K60p system with a Raven Ridge APU because so few boards supported HDMI 2.0. Likewise, you're often stuck with an older video decoder/encoder than what is available on the discrete GPU market. Luckily the only feature missing from the latest generation of AMD parts is hardware AV1 decoding which will come with the RDNA2 APUs next round.
"But what is perhaps worrying is the temperature, being reported as a sustained 92-94ºC on average. That’s quite high. " --> 94C is fine, the silicon is rated to handle it 24/7. What is strange to me is that it most of the tests, the CPU temperature stays in the 80s, when there is more thermal headroom to go. It could clock higher.
Can someone PLEASE find out if this thing is running in quad channel or dual channel lpddr4x. It’s already at a disadvantage since lpddr4x has half the bus width of standard ddr4. It would be fine if it ran in quad channel because it’s bus width would then be the same size as ddr4 at 128 bits, but no reviews anywhere show what channel configuration it’s running in...
I don't think there were any 4000-series laptops running LPDDR4x just dual channel- I've only seen it to be quad-channel. So this flagship device (and used by AMD to impress about 5000H performance) should be no different.
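The arithmetic behind why the channel count matters, as a small sketch (the 32-bit LPDDR4X / 64-bit DDR4 channel widths are the standard JEDEC figures):

```python
def peak_gb_s(mt_per_s, channels, channel_bits):
    """Theoretical peak memory bandwidth in GB/s."""
    return mt_per_s * channels * channel_bits / 8 / 1000

print(peak_gb_s(4267, 2, 32))   # dual-channel LPDDR4X-4267:  ~34.1 GB/s
print(peak_gb_s(4267, 4, 32))   # quad-channel LPDDR4X-4267:  ~68.3 GB/s
print(peak_gb_s(3200, 2, 64))   # dual-channel DDR4-3200:     ~51.2 GB/s
```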
I feel that with the introduction of Renoir, what blew most away was the fact that AMD managed to squeeze 8 cores into the U series. Not only that, the Zen 2 architecture also resulted in some serious uplift in performance compared to the previous Zen+. This year round, while it is all nice and good to see a decent performance bump, the wow factor is not there. I am not expecting a core increase, especially on the same N7 node, and to be honest, 8 cores is plenty of performance for a mobile PC.
On the point of still using Vega: despite the age, Vega is still very competitive. One may argue that Intel's Xe graphics is better, but reviews out there prove otherwise. Xe is certainly fast, but both the iGPUs from AMD and Intel are likely memory bandwidth limited if one is pushing 1080p. Adding more cores will likely have diminishing returns. And honestly, if you are a gamer, you cannot avoid getting a system with a dedicated GPU no matter how good the iGPU is.
"Shouldn't it be pretty straightforward given that these APU already kind-of exist in the consoles?" Those APUs use a totally different memory subsystem, much larger GPU slices, and they also use Zen 2 cores. AMD were specifically aiming to get Zen 3 out across their range - there's probably a lot of work needed to scale RDNA 2 down to iGPU levels without unbalancing its performance.
AMD should reduce Cezanne's core count to 6 and then use the transistor budget for more GPU cores. That way it would beat all Intel laptop processors in all aspects.
Now they need to sell a version that cuts out the silly integrated graphics and uses a faster dedicated GPU. I don't understand the motivation for having a steroid-pumped 8-core CPU paired with anemic integrated graphics. It seems AMD is more interested in selling the idea of APUs than actually providing a balanced system.
AMD is clear that the integrated GPU is there for the very same chip at 15W. It is a pretty fine GPU there; TDP and bandwidth limit its potential anyway. I wonder what the die area saving from ditching the GPU would be. If it is sizeable, then yeah, it would be great to have a GPU-less variant of the chip, especially with the current wafer supply issues.
That would mean more design effort on AMD's part, having APU and non-APU versions. In all likelihood, not worth it to them. Anyhow, these iGPUs work pretty well for older or simpler games. Also, some people might be interested more in the CPU performance (say, for encoding) but would still like to play the odd game now and then.
Apparently, on Chinese forums there have been reports of abnormally slow L3 cache speeds - 150 GB/s instead of 600GB/s, tested via AIDA64. Not sure about the exact numbers, but the actual speeds were supposedly much lower than the theoretical speeds. Is there any indication of this occurring?
LOL, what a liar. Everyone can check on Google what the stock MBP 2013 was. To inform you: stock was 256 GB. Besides that, if all you think about is big size, you will never find a GF.
Wrong. Check Wikipedia - 2013 MacBook Pros were available from Apple with 1TB SSDs. They're still good even now, as you can replace that 2013 Apple SSD with a modern NVMe SSD for a huge speed-up.
And yes, Apple supported the NVMe standard before it was even a standard. It wasn't finalised by 2013, so these Macs need a $10 hardware adaptor in the M.2 bay to physically take the NVMe drive, but electrically and at the software level NVMe is fully supported.
Sorry, but you are wrong or don't understand what stock means. Apple's own website states clearly that the MBP 2013 had a STOCK 256 GB SSD with the OPTION to upgrade to as high as a 1 TB SSD. So maybe your Apple lies again and the wiki is of course correct. On top of that: bragging about a 1 TB SSD when in the PC world you could get a 2 TB SSD in top machines isn't really something to brag about.
Stock means that they were in stock, available from the manufacturer for order. Which is fair to apply in this case. Most likely they didn't have any SSD in them until they were configured upon sale.
What you're thinking of is base. At the same time, it's fair to call this out as an unfair comparison, because those specs are cited as the standard/base configuration of this model, where the 1TB wasn't for the MBP.
1. Worrying about what was standard 7 years ago, as if it's relevant to what people need today, is silly.
2. TB SSDs were probably about $600-$700 in 2013. If you spent that much to upgrade your MBP, good for you; that doesn't mean it's the best use of funds for everyone.
It is a good review, thank you Dr. Ian. My concern is, and has always been, the fact that CPU manufacturers put beefier iGPUs on higher core count CPUs, which is not right/fair in my view, because higher core count CPUs, and most especially the H series, are most of the time bundled with a dGPU, while lower core count CPUs may or may not be bundled with a dGPU. I think lower core count APUs would sell much better if their iGPUs were made beefier - they have enough die space for this, I suppose - in order to satisfy clients who can only afford lower core count CPUs which are not paired with a dGPU. It's a bit of a waste of resources in my view to give 8 Vega cores to a Ryzen 9 5980HS which is going to be paired with a dGPU, and only 6 Vega cores to a Ryzen 3 5300 whose prospects of being paired with a dGPU are limited. I don't know what you think about this, but if you agree, then it'd be helpful if you managed to get them to reconsider. Thanks.
I get your point here, and I agree that it would be a nice thing to have - a 15W 4-core CPU with fully-enabled iGPU would be lovely. Unfortunately it doesn't make much sense from AMD's perspective - they only have one chip design, and they want to get as much money as possible for the fully-enabled ones. It would also add a lot of complexity to their product lineup to have some models that have more CPU cores and fewer GPU CUs, and some that reversed the balance. It's easier for them just to have one line-up that goes from worst to best. :/
Yes. It could be that they are sticking with their original plan from the time they decided to introduce iGPUs to x86. But I don't see why they can't overhaul their offerings now that they are also on top. They could still offer 8 Vega CUs from the beginning of the series up to the topmost 8-core CPU offering, and those would be the high end offerings. Then the other mid and low end variants would be those without the fully enabled Vega dies. This way nothing would be wasted, and Cezanne would then have a multitude of offerings. I believe people, even at this moment, would like to own a piece of Cezanne, be it 3 cores or 5 cores. I think it's the customer who decides what is valuable and what is not. Black and white thinking won't do (that cores will only sell if they are in even numbers). They should simply offer everything they have, especially since their design allows them to do so, and more so now that there are supply constraints.
The problem is that it's not just about what the end-user might want. AMD's customers are the OEMs, and the OEMs don't want to build a range of laptops with several dozen CPU options in it, because then they have to keep stock of all of those processors and try to guess the right amount of laptops to build with each different option. It's just not efficient for them. Unfortunately, what you're asking for isn't likely to happen.
I might be in the market for a laptop later this year, and it's nice to know that unlike the jump from Zen+ to Zen 2, the newer APUs are better but not *devastatingly so*. I might be able to pick up something using a 4000 series APU on discount and not feel like I'm missing out, but if funds allow I can go for a new device with a 5000 APU and know that I'm getting the absolute best mobile x86 performance per watt/dollar on the market. Either way, it's good to see that the Intel/Nvidia duopoly is finally being broken in a meaningful way.
I do have one request - it would be nice to get a separate article with a little more analysis on Tiger Lake in shipping devices vs. the preview device they sent you. Your preview model appears to absolutely annihilate its own very close retail cousin here, and I'd love to see some informed thoughts on how and why that happens. I really don't like the fact that Intel seeded reviewers with something that, in retrospect, appears to significantly over-represent the performance of actually shipping products. It would be good to know whether that's a fluke or something you can replicate consistently - and, if it's the latter, for that to be called out more prominently.
Regardless, thanks for the efforts. It's good to see AMD maintaining good pace. When they get around to slapping RDNA 2 into a future APU, I might finally go ahead and replace my media centre with something that can game!
I've reviewed the charts. From what I've seen, the MSI Prestige Evo 14 is configured at 35W and happens to match performance with the Intel reference unit configured at 28W. The biggest discrepancy I have seen between the two is in the y-Cruncher benchmark. However, putting this one benchmark aside, Intel's reference unit doesn't seem to differ greatly in performance from the shipping Intel unit in the MSI Prestige Evo 14.
Having gone over things again more carefully, I definitely overstated things when I said "annihilate", but in the tests where they both appear the Intel reference platform at 28W is faster than the MSI Prestige at 35W more often than the other way around (16-13). When the MSI does win, it's often not by much, and there are even 7 examples where the Intel reference platform at 15W beats the MSI at 35W - usually single-thread tests.
My best guess is that the Intel platform might be showing some sort of latency advantage, possibly related to how quickly it can shift between speed states - which would favour it in the shorter and/or more lightly-threaded tests. I'd love to see a detailed analysis, though - ideally with more Tiger Lake platforms, as I think the Prestige may actually be one of the fastest ones shipping.
Yeah, and Tiger Lake is better at power consumption too. So why again has AMD dropped the ball on adding PCIe 4? Last year it was acceptable with Renoir, as PCIe 4 was brand new to desktop and Intel wasn't anywhere close to releasing it, but now it feels like AMD missed the bus on this one, along with not providing the now-free Thunderbolt 4 connection.
It doesn't make sense to include high-end graphics, as that cuts into the power budget and die space budget - plus they want to sell discrete mobile GPUs to laptop OEMs. They'll continue to include good-enough graphics, but there isn't a compelling reason for them to waste die space on a solution that isn't needed for most normal laptop use cases, or that would cannibalize sales of discrete graphics.
It sure does look like it - and would explain the insulating goop surrounding it.
It's probably necessary to get the CPU's expected performance out of a device in this form factor without it constantly sounding like a tiny jet engine.
Do you use the Handbrake presets unmodified? If so, have you considered turning off the de-interlacing filter?
Filters can slow down transcoding speed dramatically. For example, using the same video and the 480p Discord preset, my system (i7-4790K) transcodes the video at 136 fps. Turning off the interlace detection and the filter results in a speed of 221 fps. Enabling the Denoise filter reduces transcode speed from 221 fps to 15 fps.
Would it not be better to test just the transcoding speed, without any filters?
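For anyone wanting to reproduce that comparison, a hedged sketch of the invocation - the flag and preset names are from memory of HandBrakeCLI's --help output and may differ by version, so verify locally before relying on them:

```python
# Hypothetical example: run a stock preset with the interlace-detection
# and deinterlace filters disabled, to isolate pure transcode speed.
import subprocess

subprocess.run([
    "HandBrakeCLI",
    "-i", "input.mkv",            # hypothetical source file
    "-o", "output.mp4",
    "--preset", "Fast 480p30",    # stock preset name; check --preset-list
    "--no-comb-detect",           # skip interlace detection
    "--no-deinterlace",           # skip the deinterlace filter itself
])
```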
I'd like to point out that this new Asus Flow X13 laptop is quite unique, and kinda/sorta the first of its kind of any device out there. How do I mean?
Well, up to now, it's been all but impossible to buy a 360-degree-hinged TOUCHSCREEN device that also has a 120Hz refresh panel. The only other laptops that had this were ones from HP with a first-generation Privacy Screen built in. The privacy screen has a feature you can turn on and off to severely limit the viewing angles when desired. The first-gen versions had a knock-on effect of running at 120Hz, so the touchscreen, 360-degree foldable versions of those are the only other touchscreen laptops with high refresh panels. None of them had particularly great gaming performance, as the best one available was a 4-core Kaby Lake-R powered Ultrabook. The new versions of HP's Privacy Screen no longer run at 120Hz, so it was a limited-time option.
That's why I'm so excited about this ASUS laptop. I wish it didn't have an external GPU and only relied on the iGPU instead. But this is as close to my perfect device as has yet been created.
AMD says it had "100 design wins" for Renoir, and 50% more, "150 design wins" for Cezanne. Whatever a "Design Win" means, for that matter.
All I know is that when I go into Best Buy to peruse the laptops section, I consistently see two things:
1. While a substantially bigger section, with at least 10x as many different products on display, the "Windows" section is nearly always barren of customers, while the Apple section is most certainly dangerously close to violating every COVID best practice known to science.
2. Last I counted, there were about 20-30 Intel Tiger Lake "EVO" branded laptops from all the major OEMs, while I saw fewer than five, yes FIVE IN TOTAL, Ryzen laptops even available for sale. And most of those Ryzen laptops were of the gaming variety. I don't recall seeing a single Ryzen Ultrabook (i.e., with just an iGPU).
So I want to know: what are AMD's plans exactly for changing this? They may have an overall better product than Intel's Tiger Lake (albeit at times only narrowly), but if they can't ship these and get them out in front of the normal Joe customer who doesn't follow the tech scene, they'll never gain significant market share from Intel in the laptop segment.
I wonder how much of this is due to public perception. Enthusiasts know that AMD is good but most people don't know or care, while some have a vague instinct telling them Intel is first-rate and AMD substandard (corroborated by advice of salespeople in the shop). The laptop seems proper if it's got an Intel sticker, otherwise no good. And that's something which will be very hard for AMD to change. Perhaps fewer Ryzen gaming laptops will help. Even a new logo/sticker for their mobile CPUs, with minimal elegant design.
Also, they need to capture the general public's imagination, as silly as that sounds. Just like Samsung did. There was a time when people didn't think much of Samsung (in my country at any rate) but nowadays go into the shop and they've got the best fridges, TVs, and washing machines. Or that's the perception.
I'm a bit disappointed and left scratching my head regarding the GPU review section of this article. 360p, 480p resolutions ... what are we playing, MS-DOS games in VGA 256-color mode?!
How did those games even run at such low resolutions? That's mind boggling.
At any rate, it all led to a confusing non-conclusion of what exactly the iGPU performance is on these Cezanne chips, and how it compares to Tiger Lake. Is it better than Tiger Lake or not? How much better than Renoir is it?
I realize this isn't all Ian's fault, as the laptop given to him has a 35W CPU, and we're asking him to compare it to 15W Ultrabooks, etc. But it was still very confusing to me. Hopefully it will become more clear when (and if) AMD Cezanne Ultrabooks come out with 15W parts.
t.s - Tuesday, January 26, 2021
Good, AMD! Just use Vega GPU for your next SOC, Ryzen 6000 series. Make it '5 years with Vega'.
YB1064 - Tuesday, January 26, 2021
Looking forward to the ASUS laptop (used here) review. Looks like a killer implementation. Intel is now the poor man's AMD!
medi05 - Wednesday, January 27, 2021
I wonder what happens to the green-blue "no AMD laptop with GPU higher than 2060" deal now. (Polish site purepc reported that.)
Unashamed_unoriginal_username_x86 - Wednesday, January 27, 2021
Gone, reduced to atoms. Something like half of the gaming laptops I've seen offer Ryzen with a 3070 or 3080.
ZoZo - Tuesday, January 26, 2021
Yes, they don't know how to make APUs with RDNA2, otherwise the new generation of consoles would've had that. Wait...
t.s - Tuesday, January 26, 2021
They have no reason to use RDNA2, because for iGPUs theirs has no competition. In other words, they're just lazy, or content with what they have now.
ZoZo - Tuesday, January 26, 2021
No competition? If I remember correctly, Xe in Tiger Lake beats the Vega 8 by significantly more than a margin of error.
Samus - Wednesday, January 27, 2021
The 1660Ti has a higher TDP than the entire POWER SUPPLY of these test systems...
"1660ti or better is mandatory for any decent gaming"The iGPUs are a blessing to many people, especially those on a slender budget, like myself. One can actually play games on these and have a lot of fun.
ZoZo - Tuesday, January 26, 2021 - link
Indeed I didn't base my statement on the results in this test. I thought I remembered seeing 15W Tiger Lake vs 15W Renoir giving an edge to Tiger Lake.yeeeeman - Wednesday, January 27, 2021 - link
This test is comparing a 35W part's iGPU to a 15W part's iGPU. The 1185G7 beats the 4800U iGPU.
dotjaz - Friday, February 5, 2021 - link
That's called competition, honey. Don't have to beat it.
Spunjji - Thursday, January 28, 2021 - link
You could have read the review to see that's not true. Xe routinely loses to Vega 8 in actual games, albeit not by a significant margin.
Tams80 - Monday, February 1, 2021 - link
Xe does beat it, no doubt. But it's still pretty close, and for integrated graphics Vega is good enough for another year. As mentioned in the article, they probably used Vega again mainly to ensure a quick release.
Smell This - Tuesday, January 26, 2021 - link
LOL @ ZoZo ___ he is messin' with you, ts
You are correct in that Dr Su and AMD has played yet another "Rope-A-Dope" on the competition. I suspect RDNA2/Navi II will raise its pretty head after the "Lexa" cores run their course. It has been a productive run.
There are Radeon Pro CDNA1 cores floating around that will likely evolve into the RX 6500 RDNA2/Navi II discrete replacements for Lexa. These will be the Display Core Next 3.0 / Video Core Next 3.0 arch associated with Big Navi.
And ... I don't think AMD is being lazy. I think the Zen2/Zen3 APU product stack is being developed, as yet to be revealed. Home / Office / Creator? There is a Radeon Pro Mac Navi Mobile with RDNA1 discrete video w/HBM2.
We will see how the 6xxx APUs evolve. Grab your popcorn!
TelstarTOS - Tuesday, January 26, 2021 - link
lazy, definitely lazy.
vortmax2 - Saturday, January 30, 2021 - link
One sees lazy, another sees a smart business decision.
samal90 - Friday, February 12, 2021 - link
The APU in 2022 will use RDNA 2 finally. Expect a substantial GPU performance lift next year with the new Rembrandt chip.
Spunjji - Thursday, January 28, 2021 - link
A console APU is not a PC APU - they have completely different design constraints and memory architectures. Vega was used here because it allowed AMD to bring Zen 3 APUs to market faster than they managed with Zen 2 - it's all mentioned in the review that you're commenting on...
sandeep_r_89 - Friday, January 29, 2021 - link
The consoles don't use iGPUs... Most likely, the RDNA2 design so far hasn't been tuned for low power usage; it's focused more on high performance. Once they do the work to create a low power version, it can appear in iGPUs, laptop dGPUs, low end desktop dGPUs etc.
Netmsm - Tuesday, January 26, 2021 - link
Any hope for Intel?
Deicidium369 - Wednesday, January 27, 2021 - link
LOL. Any hope for AMD? Releases Zen 3, RDNA2 and consoles - and only grows revenue $240M over Q3... Didn't even gross $10B last year.
Meanwhile Intel posts 5 YEARS of record growth...
Spunjji - Thursday, January 28, 2021 - link
A discussion of a company's technological competitiveness is not a discussion of their financial health. Any dolt knows this, why do you pretend we can't see you moving the goalposts in *every single comment section*?
Spunjji - Thursday, January 28, 2021 - link
This post is even more hilarious in the context of AMD's financial disclosure today 😁
Stuka87 - Wednesday, January 27, 2021 - link
Really no reason for them to move away from Vega for these chips. Do you also complain that Intel has not changed their iGPU for years?
The efficiency of Vega is quite good when not OC'ed way past where it should be, like in the desktop cards. And it still offers adequate performance for the majority of people looking at a laptop. For anything more you want a discrete card anyway.
Unashamed_unoriginal_username_x86 - Thursday, January 28, 2021 - link
Just on the Intel point, it's worth noting that they've had to develop GPU IP specifically for their CPUs. The paradigm has changed with the advent of Xe scalable, but even then the first product released with Xe was a CPU. Obviously AMD is disadvantaged with RTG not being as tightly knit as Intel's GPU group.
IGTrading - Tuesday, January 26, 2021 - link
Amazing execution from AMD. Unfortunately, the only way they could gain some significant market share would be if they manage to source enough capacity from TSMC.
The demand for AMD CPUs in the market is huge.
At Amazon, AMD's chips come with a +50% price premium on a regular basis. SONY & Microsoft are going nuts trying to get more chips from AMD.
If AMD managed to negotiate well their cut from TSMC, we should see an explosion of AMD's revenue in 2021.
The new crypto boom will only propel AMD's ASPs even higher, although it will annoy the IT enthusiasts.
Great piece, Ian! Thanks.
Deicidium369 - Wednesday, January 27, 2021 - link
Not a question of capacity - it's the fact that TSMC's fragile long supply chain is broken and limited resources have to be allocated - and AMD is contractually obligated to reach delivery targets for the console SoCs. They have to use the limited resources to provide the ultra high volume, ultra low margin SoCs over their high margin PC GPUs and CPUs.
In this case, it's not AMD's fault - it's an issue with TSMC.
Spunjji - Thursday, January 28, 2021 - link
"TSMCs fragile long supply chain is broken"---citation needed---
Qasar - Thursday, January 28, 2021 - link
---will never see it---
e36Jeff - Tuesday, January 26, 2021 - link
Minor quibble: the chart at the bottom of the first page lists the Flow X13's memory speed as LPDDR4-3267 rather than 4267.
Silma - Tuesday, January 26, 2021 - link
It's one thing to develop great processors; it's entirely another thing to effectively ship them.
I would have liked to purchase a Zen 3 processor for my new PC, but I had to make do with a 3700X.
I would have been interested in an RTX 3070 or one of AMD's latest gen graphics cards, but again, they only seem to exist in the hands of testers, YouTubers and Twitch streamers.
Let's see if AMD can really ship a decent number of Zen 3 mobile CPUs.
bji - Tuesday, January 26, 2021 - link
All your problems are easily solved. Just go to MicroCenter!
Qasar - Tuesday, January 26, 2021 - link
And if there is no MicroCenter near you, then what?
Makaveli - Tuesday, January 26, 2021 - link
lmao, the amount of times I see that "just go to a MicroCenter". lol, not everyone on this site lives in the USA. Secondly, even if you are in America, not everyone has access to a MicroCenter. So no, all his problems are not solved. Next...
bji - Tuesday, January 26, 2021 - link
My comment was tongue in cheek. Every time I say there is a shortage of AMD product available, people make posts about how there must not be a problem since they just bought a chip from MicroCenter.
See the most recent comments for the article about the Intel chip for an example of my copious frustration with these MicroCenter commenters.
bji - Tuesday, January 26, 2021 - link
https://www.anandtech.com/show/16343/intel-core-i7...
Page 7: I start a comment with: "Ryzen 5 5600x at $299 is a lie right now and has been for months. It's slowly coming down to $399 with general availability. It will be months before it's actually available at $299."
MicroCenter hilarity ensues.
Nottheface - Saturday, January 30, 2021 - link
I do say MicroCenter is the best place on earth.
Spunjji - Thursday, January 28, 2021 - link
Some of us also pointed to availability in other countries that aren't the USA. If you're sick of the MicroCenter comments, why drag it back out again over here? 🙄
bji - Thursday, January 28, 2021 - link
Show me where I said I was sick of it.
Spunjji - Thursday, January 28, 2021 - link
@bji - seriously? 3 comments back: "See the most recent comments for the article about the Intel chip for an example of my copious frustration with these MicroCenter commenters."
🤷♂️
eek2121 - Wednesday, January 27, 2021 - link
I live in the U.S. in a major city and the closest MicroCenter is 4.5 hours away.
Sharma_Ji - Wednesday, January 27, 2021 - link
What fourth world country do you live in, mate? Even in India there are MicroCenter alternatives where the Zen 3 desktop CPUs are available at hardly any premium.
Sharma_Ji - Wednesday, January 27, 2021 - link
For those of you who say I might be a fluke: here's the link, estimated delivery in a week. (India)
https://www.primeabgb.com/buy-online-price-india/c...
All parts are available: 5600, 5700, 5800, 5900, etc.
Deicidium369 - Wednesday, January 27, 2021 - link
Would bet that fourth world country has flush toilets...
GeoffreyA - Thursday, January 28, 2021 - link
Deicidium, none of us is better than the next person.
Spunjji - Thursday, January 28, 2021 - link
Nice racism there, shitlord.
GreenReaper - Saturday, January 30, 2021 - link
Just wait until you hear what the Japanese think about everyone else's toilets. ಠ_ಠ
Spunjji - Monday, February 1, 2021 - link
🤣
bji - Wednesday, January 27, 2021 - link
That price is significantly above US MSRP. It is significantly above the price listed by Anandtech whenever they do chip comparisons.
Spunjji - Thursday, January 28, 2021 - link
Why would chips in India be selling at US MSRP? It's about the same as the UK / EU price.
bji - Thursday, January 28, 2021 - link
As you can see, everyone and their brother just writes in a tangent to the actual issue. Bring up supply issues in the USA, people start posting about buying the chip in India or Europe. Bring up price markups in the USA, people start mentioning prices in other countries whose price is only vaguely related to USA prices. Bring up the fact that Anandtech is supplying misleading information when it puts prices in its articles, people ask about chips in India vs US MSRP and UK/EU price. The actual issues are constantly ignored by everyone wanting to post some tangentially related information local to them. Whatever.
Spunjji - Thursday, January 28, 2021 - link
@bji - The end of your original post:
"The simple fact is that no Ryzen 3 (sic) processors have had general availability at anywhere near MSRP for months."
So yeah, it's pretty relevant for people who don't live in the USA to respond by pointing out that there is availability at or around MSRP where they live. It's equally relevant to point out you're wrong when you respond by saying "that's not MSRP" as if everyone pays the same equivalent dollar price at retail as you do in the USA.
"The actual issues are constantly ignored..."
This is a neat way of saying "let me complain regardless of the facts outside of the USA". These responses aren't "tangential" for the people posting them. If you want to only talk about the USA, make that clear from the start.
Nottheface - Saturday, January 30, 2021 - link
Move, obviously.
Oxford Guy - Wednesday, January 27, 2021 - link
I checked six of their stores and not one of them has had a Zen 3 for the week I've been checking. My local store had one chip, the overpriced 5800X, in the last several weeks.
nils_ - Wednesday, January 27, 2021 - link
I've got a 5950X and an RTX 3090 and I'm neither a tester nor do I have a YouTube channel. Maybe the supply situation is better here in old Europe than in the Colonies?
bji - Wednesday, January 27, 2021 - link
Calling it "the Colonies" is pretty stupid but matches the mindset of the rest of your comment. You were just lucky to buy at the right time and now are smug about it.
Spunjji - Thursday, January 28, 2021 - link
He was almost certainly joking (hence "old Europe"), but you certainly gave him the reaction he was after.
bji - Thursday, January 28, 2021 - link
OK faggot.
bji - Thursday, January 28, 2021 - link
(Before anyone gets themselves in a tizzy here, I said that only in the hopes that Spunjji would take offense, and then I could say, "I was joking, but you certainly gave me the reaction I was after". But I kinda wish I hadn't written that now because it's a pretty harsh way to try to make my point, which is: "the Colonies" is a belittling way to speak about the USA, and joke or not, it is not appreciated.)
Spunjji - Thursday, January 28, 2021 - link
It absolutely is a belittling way to talk about the USA, but I think you'll all live. Unfortunately, that reply really didn't make your point well at all.
Deicidium369 - Wednesday, January 27, 2021 - link
I didn't have a single issue getting the 3090, 6900XT, 5900X or 5950X - paid MSRP and got them on the day after launch (1 day shipping)... and the Colonies is kinda silly - they ceased being Colonies after we gave King George the universal sign of peace, love and respect - may have heard of it - something about a Revolution - you got Canada as a consolation prize...
GeoffreyA - Friday, January 29, 2021 - link
We apologise for offending the feelings of the Republic. There appears to be a striking loss of information when jokes cross the Atlantic, even those made in good humour. We promise in the future to use more precise, up-to-date terminology, and not make ourselves look like Mr. Rip Van Winkle waking up after 20 years of sleep.
Meteor2 - Thursday, February 4, 2021 - link
Well, I thought it was funny.
Zizy - Tuesday, January 26, 2021 - link
I wonder what the point is of new chips with the old Zen 2. A 15% die size difference is meaningful, but is that sufficient reason to bother (re)designing? As for the 5980HS, the CPU part is pretty great when allowed to run at 35W. "Silent mode" is sometimes great but somewhat weird too - e.g. CB20 MT shows a huge delta between the modes. Now let's just hope AMD/TSMC will manage to actually produce enough of these chips.
ToTTenTranz - Tuesday, January 26, 2021 - link
"I wonder what is the point of new chips with old Zen2. "Diversified offer.
Fully operational Renoir is an arguably better performer than a flawed Cezanne with disabled units, and it's cheaper to make.
drothgery - Tuesday, January 26, 2021 - link
But how much cheaper? Zen 3's not that much of a bigger die than Zen 2, and it's fabbed on the same process.
SaturnusDK - Wednesday, January 27, 2021 - link
How much cheaper? "While stocks last" is my guess.
Spunjji - Thursday, January 28, 2021 - link
I'd be interested in whether any of these differences in Lucienne are physical design alterations, as opposed to VRM / BIOS alterations, along with maybe some enabling of silicon that wasn't functional in Renoir for some reason.
Either way, Lucienne's probably slightly more than 15% cheaper to make - not sure whether that would make up for the costs of extra masks and design work, though.
Farfolomew - Thursday, February 4, 2021 - link
Ian mentions in the article that he thinks AMD was stockpiling Renoir chips all of last year in order to make a big push with the 5000 series. Is it possible that the stockpiled chips are this Zen 2 "Lucienne" variety, and once they sell out of them, that's all there will be? I wonder if AMD is having TSMC manufacture new Lucienne chips. I mean, why would you make something that's inferior, if it's on the same exact node as a better product (Cezanne)?
e36Jeff - Tuesday, January 26, 2021 - link
Saving money. The Zen 2 chips offer the power savings that Zen 3 got, with an already established design. That lets AMD sell them cheaper, and, let's face it, 95% of the end users out there would likely be blown away by a 5700U.
On top of that, I would wager the Zen 2 chips are a 100% straight drop-in upgrade for any existing 4000 series mobile designs, possibly even with little to no BIOS update needed (beyond adding the CPU ID). That lets OEMs show off a Ryzen 5000 laptop with zero extra investment needed.
antonkochubey - Tuesday, January 26, 2021 - link
New chips with old Zen 2 aren't really new chips; they're the same silicon and stepping, just running a newer firmware.
jospoortvliet - Wednesday, January 27, 2021 - link
Read the review - there are lots of changes besides the cores that are supposedly also in the non-Zen 3 5000 chips - given they also get the faster Vega, this seems true. I do agree it is weird..
GeoffreyA - Wednesday, January 27, 2021 - link
From a personal point of view, I don't like this mixing of Zen 2 and 3, not at all, and certainly won't be glad of their continuing this practice; but it does make good sense. In a way, elegant.
In this case, it helps to look at the cores as hidden, abstracted, a black box. Now, if such and such model fits its notch on the performance scale (5800U > 5700U > 5600U), then it shouldn't make much difference whether it's Zen 2 or 3 behind the doors. Sort of like an implementation detail.
Meteor2 - Thursday, February 4, 2021 - link
Great point.
ikjadoon - Tuesday, January 26, 2021 - link
It's great to see AMD kicking Intel's butt in a much larger market (i.e., laptops vastly outsell desktops): AMD really should be alongside, or simply replacing, Intel in most premium notebooks. Gaming notebooks are not my cup of tea, but I'm glad to see this for the upcoming 15W Zen3 parts.
Will we see actual, high-end Zen3 notebooks? Lenovo, HP, ASUS, Dell: for shame if you keep ramming toasty Tiger Lake down customers' throats. Lenovo's done some great offerings with both AMD & Intel; that means some compromises with notebook design (just go all AMD, man; if/when Intel is on top, switch back!), but beefier cooling for Intel will also help AMD.
Still, overall, I don't see anything convincing me that x86 is really right for notebooks, either. So much waste heat...for what? The M1 has rightly rejiggered expectations: 20 hours on 150 nits should be ordinary, not miraculous. Limited to no fan spin-up and max CPU load should yield a chassis maximum of 40C (slightly warmer than body temperature). And, all the while with class-leading 1T performance.
As this is a gaming laptop, it's not too relevant to compare web benchmarks (what most laptops do), but this is peak Zen3 mobile and it still falls quite short:
Speedometer 2.0
35W Ryzen 5980HS: 102 points (-57%)
125W i9-10900K: 119 points (-49%)
35W i7-1185G7: 128 points (-46%)
105W Ryzen 5950X: 140 points (-40%)
30W Apple M1: 234 points
You can double / triple x86 wattage and still be miles behind M1. I almost feel silly buying an x86 laptop again: just kilowatts of waste heat over time. Why? Electrons that never get used, just exhausted and thrown out as soon as possible because it'll throttle even worse otherwise.
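Those deltas are easy to sanity-check - a minimal sketch, using only the scores quoted above with the M1 as the baseline (the recomputed percentages land within a point or so of the listed ones):

```python
# Sanity-check of the Speedometer 2.0 deltas quoted above.
# Scores are the commenter's figures; the M1 (234 points) is the baseline.
scores = {
    "35W Ryzen 5980HS": 102,
    "125W i9-10900K": 119,
    "35W i7-1185G7": 128,
    "105W Ryzen 5950X": 140,
    "30W Apple M1": 234,
}

baseline = scores["30W Apple M1"]
for chip, points in scores.items():
    deficit = (points / baseline - 1) * 100  # negative = slower than the M1
    print(f"{chip}: {points} points ({deficit:+.0f}%)")
```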
undervolted_dc - Tuesday, January 26, 2021 - link
Because here you are benchmarking the JavaScript engine in the browser. And as if that weren't enough, you are comparing them in single thread, so you are comparing 1/16 of the 5980HS vs 1/4 of the M1.
A 128-core EPYC or a 64-core Threadripper would probably be even worse in this single threaded benchmark (because those leverage threads and are less efficient in single threaded apps).
If you like wrong calculations: 1 core of the 15W version uses less than 1W, for what result? ~100 points? So who is wasting electrons here?
(BTW, 1 core doesn't use 1/16 of the power because there are boosts, but that's still less wrong than your comparison.)
ZoZo - Tuesday, January 26, 2021 - link
128-core EPYC? Where?
His comparison is indeed misleading in terms of energy efficiency, but it's sad that no x86 is able to come even close to that single-threaded performance.
WaltC - Tuesday, January 26, 2021 - link
Doubly sad for the M1 that we are living in the multicore/multithread era...;)
ikjadoon - Tuesday, January 26, 2021 - link
The energy efficiency comparisons are pretty clear: the best x86 (Zen3) has stunningly lower IPC than the M1, which barely cracks 3 GHz. The only way to make up for such a gulf in IPC is faster clocks. Faster clocks require the 100+W TDPs so common in high-performance desktop CPUs. It's why Zen3 mobile clocks so much lower than Zen3 desktop (3-4 GHz instead of 4-5 GHz).
A CPU that needs 3x power to do the same work (and do it slower in most cases) must exhaust an enormous amount of heat, when considering nT or 1T benchmarks (Zen3 requires ~20W for 5 GHz boost on a *single* core). Look at those boost power consumption measurements.
Specifically in desktops (noted in my comparison about tripling TDP...), the CPU *alone* eats up an extra 60 to 90 watts during peak usage. Call it +20W average continuously, so we can do the math.
20W x 8 hours x 7 days a week = +1.1 kWh excess exhaust heat per week. x86 had two corporate giants to do better. It's been severely litigated, but that's Intel's comeuppance. If Intel can't put out high-perf, high-efficiency x86 architectures, then people will start to feel less attached to x86 as an ISA. x86 had billions and billions and billions of R&D.
I see no reason for consumers to religiously follow x86 Wintel or Wintel-clones in laptops especially, but desktops, too: where is the efficiency going to be coming from? Even if Apple *had flat 1T* for the next three years, I'd still feel more optimistic about M1-based CPUs in the long-term than x86.
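The excess-heat arithmetic above is easy to verify - a quick sketch, taking the assumed +20W average for 8 hours a day, 7 days a week, plus a placeholder electricity rate to put a rough cost on it:

```python
# Back-of-the-envelope check of the excess-heat figure above.
# Assumptions (from the comment): +20W average extra draw, 8h/day, 7 days/week.
extra_watts = 20
hours_per_week = 8 * 7

excess_kwh_per_week = extra_watts * hours_per_week / 1000  # = 1.12 kWh
excess_kwh_per_year = excess_kwh_per_week * 52

price_per_kwh = 0.15  # USD, placeholder rate; adjust for your utility
print(f"{excess_kwh_per_week:.2f} kWh/week, {excess_kwh_per_year:.0f} kWh/year")
print(f"~${excess_kwh_per_year * price_per_kwh:.2f}/year at ${price_per_kwh}/kWh")
```

At that placeholder rate the delta works out to under $10 a year, which is worth keeping in mind for the cost side of the argument.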
Dug - Tuesday, January 26, 2021 - link
"I see no reason for consumers to religiously follow x86 Wintel or Wintel-clones in laptops especially, but desktops, too: where is the efficiency going to be coming from?"Software, and getting work done. M1 is great and all, but just need to convince the boss that Apple or 3rd party has software available for our company....... Nope, oh well.
Other negatives:
For personal use, people aren't going to spend thousands of dollars to get new software on a new platform.
They can't play games (or should I say they can't play a majority), which is probably the largest market.
They can't change anything about their software
They can't customize anything.
They can't upgrade any piece of their hardware.
They don't have options for same accessories.
So I'll go ahead and spend the extra $15 a year on energy to keep Windows.
Spunjji - Thursday, January 28, 2021 - link
"A CPU that needs 3x power to do the same work"It doesn't. It's been demonstrated a few times now that if you scale back Zen 3 cores to similar performance levels to M1, M1's perf/watt advantage drops to about 30%. It's still better than the node advantage alone, but it's not crippling, and M1 is simply not capable of scaling up to the clock speeds required to match x86 on desktop / HPC workloads.
They're different core designs matched to different purposes (ultra-mobile first vs. server first) and show different strengths as a result.
M1 is a significant achievement - no doubt about it - but you're *massively* overstating the case in its favour.
GeoffreyA - Friday, January 29, 2021 - link
Thank you for this.
Meteor2 - Thursday, February 4, 2021 - link
"M1 is simply not capable of scaling up to the clock speeds required to match x86 on desktop / HPC workloads" ...Yet. In a couple of years x86 will be behind ARM across the board.Fastest HPC in the world is ARM *right now*. Only the fifth fastest is x86.
Deicidium369 - Wednesday, January 27, 2021 - link
No time soon... the market already isn't buying the 64C - the 32C outsells the 64C by huge margins.
ikjadoon - Tuesday, January 26, 2021 - link
Sigh, this is just a plain confused commenter: are you confused that 1T benchmarks exist? Why do people get so worried / panicked when the M1 comparisons start? What on Earth does EPYC have to do with a fanless laptop SoC?
Yes, the M1 has faster 1T, so it naturally has faster JavaScript performance.
Not complicated.
lilmoe - Wednesday, January 27, 2021 - link
Just buy your M1 Mac and leave us peasants alone. Let us worry about single threaded performance. Arguing with you Apple fans is really getting old, and tiring. Live and let live.
Deicidium369 - Wednesday, January 27, 2021 - link
I had a guy keep trying to sell me on Apple - like I am new to the computer game... finally had to put him on block...
GeoffreyA - Wednesday, January 27, 2021 - link
As a Windows/x86/AMD fan, I admit it, the M1 is faster.
Deicidium369 - Wednesday, January 27, 2021 - link
Faster on Apple software - and doesn't run PC software.
max - Wednesday, January 27, 2021 - link
Do you realize that any Apple personal computer is a PC (Personal Computer)? So what you wrote above is pure nonsense. I know that you mean Windows OS and macOS. Just be precise if you want to say something properly.
Sailor23M - Sunday, January 31, 2021 - link
Finally bit the bullet and bought my first Mac, an M1 MBA. I must say that it's very well built and everything feels faster than my Ryzen 4800U notebook.
perch101 - Tuesday, January 26, 2021 - link
Sureeeeee, let's bring up the only benchmark that Safari's built-in JavaScript engine can cheat at, and call it a "magnitude" of improvement!
I swear to god, Apple sycophants pretty much focus on the two use-cases in benchmarking software that are extremely friendly for wide-issue CPUs like theirs (JS and Geekbench) and then dishonestly assume that performance elsewhere is bad/broken for x86.
ikjadoon - Tuesday, January 26, 2021 - link
Oh, geez. M1 is notably faster in Kraken, JetStream, etc. Any web benchmark you look at, yes: M1 has a sizable lead.
Y'all get so emotional when the M1 is brought up and it makes zero sense: it is *no surprise* that one of the fastest 1T CPUs in the world also does well in JavaScript.
Pro-tip: 1T performance and JavaScript performance are quite closely correlated. :)
Sources: https://www.notebookcheck.net/Apple-M1-Processor-B...
https://arstechnica.com/gadgets/2020/11/hands-on-w...
Meteor2 - Thursday, February 4, 2021 - link
Web JavaScript benchmarks really don't count for much. They certainly don't reflect the user experience of the web.
DigitalFreak - Tuesday, January 26, 2021 - link
It's all about the money, and I'm pretty sure Intel is handing out more marketing funds and rebates than AMD. Most people don't care if they have Intel or AMD in their laptop.
msroadkill612 - Tuesday, January 26, 2021 - link
Yes, it's about money, and no, Intel's strategies don't seem to be hiding reality so well these days - OEMs are deserting their designs in droves.
There is a deal-breaking cost & power saving at the mainstream mobile sweet spot, where the APU delivers competent modern graphics w/o need of a dGPU.
Intel can only match AMD graphics by adding a dGPU.
Deicidium369 - Wednesday, January 27, 2021 - link
"OEMs are deserting their designs in droves." Really? So now only 10:1 vs AMD designs?Deicidium369 - Wednesday, January 27, 2021 - link
Providing designs to OEMs and supplying most of the parts for a laptop - making it super easy for them to come to market with an Intel design... That's called smart business.
Spunjji - Thursday, January 28, 2021 - link
That's not the same thing as marketing funds and rebates, which Intel also do - they even do it at the reseller level.
So there's "smart business", then there's "buying marketshare", and then there's "outright bribery". Intel got fined for doing the last one, so now they mostly only do the first two - although it's a toss-up as to whether you think their contra-revenue scheme counted as option 2 or 3.
theqnology - Wednesday, January 27, 2021 - link
It's easy to compare them (M1 vs x86) on some metrics, but I think it is more nuanced than that. Do note that the M1 is at 5nm, with a die size of around 120.5mm^2, while the AMD parts are 180mm^2 at 7nm. The M1 has 16 billion transistors versus 10.7 billion for the Zen3 APUs. That is 49.5% more transistors in favor of the M1.
I think a huge part of the reason the M1 performs so well in many benchmarks is that it can target specific workloads and offload them to dedicated hardware for accelerated performance at lower power consumption. It becomes easy for Apple to achieve this, I think, because it is all transparent to application developers, as Apple controls the entire hardware AND software stack - much like consoles performing at high-end GPU levels despite having less powerful GPU cores.
This is not a cost-effective approach, although not impossible for AMD and Intel. It's also part of the reason why I think that if the M1 were put into cloud servers, it would not be cost-effective. There would be so many dedicated hardware acceleration blocks that would not be put to use when the M1 is deployed in the cloud.
That said, the Apple M1 is a great feat. Hopefully, AMD can also achieve a similar feat (high efficiency accelerated processing) using their Infinity Fabric and glue, allowing them to continue focusing on their Zen cores while also uplifting ancillary workload performance. The big impediment here would be OS support, unless it becomes a standard.
GeoffreyA - Sunday, January 31, 2021 - link
An interesting thought, and one I'd like to see reviewers looking into. Also, if it were possible to get Windows on ARM running on the M1, that would be an insightful experiment, removing Apple's software layers from the picture.
Deicidium369 - Wednesday, January 27, 2021 - link
Intel is in premium laptops because they make it easy for the OEMs to make good designs - not only the "blueprints" but also high efficiency parts other than just the CPU. So an OEM has little to no R&D expense, and can roll out a great laptop.
AMD should do the same - it's good business and would negate the reticence of the OEMs to invest in a smaller segment. Not that this would have AMD selling more than Intel, but it would improve their market presence in laptops significantly.
Spunjji - Thursday, January 28, 2021 - link
"AMD should do the same"I suspect they will once they have the funds to do so. You can't just bully your way into a market by copying the precise strategies of a company that's several times larger than you.
andychow - Tuesday, January 26, 2021 - link
Ok, so the Ryzen 7 5800U is a 16-thread CPU that turbos to 4400 MHz, and only uses 15 watts. Oh, and btw, it also has a 2000 MHz GPU for no extra power cost?
Spoelie - Wednesday, January 27, 2021 - link
There are a few mistakes in your assertion:
- the 15W number is only guaranteed at the base clock of 1900MHz, not the 4400MHz turbo
- the CPU & GPU clocks mentioned in the specifications are their respective maximum clocks, not their typical clocks in a mixed workload. So the 2GHz GPU clock won't happen together with a 4.4GHz CPU clock, and certainly not in the 15W power envelope
Deicidium369 - Wednesday, January 27, 2021 - link
Watts are different on AMDs - something something roadhouse!
Spunjji - Thursday, January 28, 2021 - link
Intel have a "15W" CPU that needs ~30W to perform at the advertised levels, but sure, something something AMD.
schujj07 - Friday, January 29, 2021 - link
What Intel uses for TDP is even worse. AMD: we have a 65W TDP chip, but max full package draw is 88W. Intel: we have a 125W TDP chip, but we allow it to go to 250W for 56 seconds in absolutely stock operation. However, motherboard manufacturers can allow unlimited turbo settings, and that is the SOP for those motherboards. Therefore you actually have a 250W TDP chip, but we will tell you it is 125W.
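For anyone wondering how a "125W" chip legitimately draws 250W: the commonly described mechanism is a pair of power limits (PL1/PL2) with a time constant tau - the chip may burst to PL2 until a running average of package power catches up with PL1. A simplified sketch of the idea, using the numbers quoted above (real firmware behaviour varies by board, so treat this as an illustration only):

```python
# Simplified sketch of the PL1/PL2/tau turbo mechanism described above.
# Real implementations differ by firmware/board; this only shows the idea
# that a chip can burst to PL2 until its time-averaged power reaches PL1.
PL1, PL2, TAU = 125.0, 250.0, 56.0  # watts, watts, seconds

def allowed_power(avg_power: float) -> float:
    """Burst to PL2 while the running average is under PL1, else hold PL1."""
    return PL2 if avg_power < PL1 else PL1

avg = 0.0   # exponentially-weighted moving average of package power
dt = 1.0    # 1-second timestep
for t in range(120):
    draw = allowed_power(avg)
    avg += (draw - avg) * (dt / TAU)  # EWMA with time constant tau
    if t % 20 == 0:
        print(f"t={t:3d}s draw={draw:.0f}W avg={avg:.0f}W")
```

Running it shows the burst phase at 250W lasting on the order of tau before the average catches up and draw settles at 125W - and if a board simply sets tau (or PL2) to "unlimited", you get the 250W-forever behaviour the comment describes.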
"Oh, and btw, it also has a 2000 Mhz GPU for no extra power cost"What do you mean "no extra power cost"? They covered how it gets the faster GPU cocks in the review: partly improved efficiency in the GPU, but mostly from improved efficiency on the CPU side allowing more TDP to be used by the GPU.
hanselltc - Tuesday, January 26, 2021 - link
Where efficiency?
R3MF - Tuesday, January 26, 2021 - link
Does this 5xxx mobile APU offer 8x PCIe lanes or 16x? And does it support PCIe3 or PCIe4?
neblogai - Tuesday, January 26, 2021 - link
x8, PCIe 3.0 on laptops. Should only affect performance by 2-3% with a mobile 3080, as long as the resolution is not 4K, or the 3080 is the 16GB version.
vladx - Wednesday, January 27, 2021 - link
Why would someone buy an RTX 3080 laptop and not use it for 4K?
Spunjji - Thursday, January 28, 2021 - link
Because it's slower than a desktop 3070 and the performance won't be great at 4K. Because the vast majority of the laptops have 1440p displays (or lower).
Etc.
Tams80 - Monday, February 1, 2021 - link
Desktop GPUs and mobile GPUs are not equal. Just because they have almost the same product name does not mean that they are the same.
leo_sk - Tuesday, January 26, 2021 - link
Even if they are ahead in CPU performance, I doubt that they can beat Intel with their Xe graphics. AMD needed a better GPU to stay competitive in that regard.
neblogai - Tuesday, January 26, 2021 - link
Check the benchmarks: it is already beaten. On top of that, Intel suffers texture issues in some games, or just fails to launch some others. And one more thing - it seems to suffer way more from using DDR4 than Vega does, again dropping its performance. The only possible upside is that some games run significantly better on AMD, but some on Intel. So if your favourite game prefers Intel, then Intel can be the better iGPU specifically for you.
DigitalFreak - Tuesday, January 26, 2021 - link
LOL, seriously? There were benchmarks in the article. Guess you didn't bother reading it.
olafgarten - Tuesday, January 26, 2021 - link
The laptop reminds me of the old Sony Vaio Z Series, which had a similar dock that provided a Blu-Ray drive, a GPU and more ports.
Tams80 - Monday, February 1, 2021 - link
That was an AMD GPU as well in the dock. They stuck with calling it Light Peak (perhaps due to Apple) and it used USB A 3.0.
JimmyZeng - Tuesday, January 26, 2021 - link
If they could ditch that ridiculous 1650 and fold-over hinge and shave the weight to, like, under 1kg, I'd actually buy one.
Amigo123 - Tuesday, January 26, 2021 - link
Yes to the first two suggestions, but put a larger battery in at the same weight! That would be better for me!
Spunjji - Thursday, January 28, 2021 - link
The hinge is partly there because it's meant to be used in tent mode when connected to the dock - for better cooling, as I understand it.
ToTTenTranz - Tuesday, January 26, 2021 - link
Is there any mention of the number and speed of PCIe lanes on Cezanne? I've been seeing reports of it only having x8 PCIe 3.0 lanes, which could present a problem for AMD's apparent goal of pairing Cezanne with discrete GPUs.
Also, I've read the explanation for the super weird resolutions chosen for the IGP tests, but it still comes off as rather irrelevant.
The author first claims the IGP is good for eSports, but then there are no eSports games being tested.
eSports is also apparently the reason the author is trying to pull >60FPS out of these SoCs, but I don't see a single title that anyone would want to play at those framerates.
The memory bandwidth limitation is also presented as a fact to be aware of, but then the author chooses very low render resolutions that are less likely to be impacted by memory bandwidth.
The Vega 8 at 2100MHz has a fillrate between the Xbox One and the PS4, a compute performance well above the PS4 and with LPDDR4X it has a memory bandwidth similar to that of the XBox One (without eDRAM).
IMO it would be a much more interesting procedure to test 8th-gen multiplatform games at resolutions and settings similar to the 8th-gen consoles, than trying to push Borderlands 3 to run at 90FPS at a 360p resolution that no one is ever going to enable.
The only useful result I see in there is FFXV at 720p medium.
Makaveli - Tuesday, January 26, 2021 - link
Why is Apple silicon the "true challenge"? I've already invested in the x86 ecosystem and all my software is there; why would I even consider an M1, regardless of performance?
And the same could be said for someone invested in the Apple ecosystem: why would they even look at an x86 product?
pSupaNova - Tuesday, January 26, 2021 - link
Because a laptop that can run for hours/days with light use, is performant, and does not need a fan is going to fly off the shelves. Watch Apple's market share explode as x86 users switch.
Ptosio - Tuesday, January 26, 2021 - link
I'm quite sure there'd be plenty of Zen 3 designs that can run fanless for 10h+ with performance enough for 90% of users on the web/streaming/office etc. And that's before AMD gets access to TSMC 5nm.
For the typical user invested in Windows/x86 software, there's still no compelling reason to switch to Apple silicon. Plus, at the prices MacBooks go for, you can get features unheard of in the Apple world, such as 4K OLED, a touchscreen with stylus, a 360 design, upgradable memory (affordable 32GB RAM and 4TB storage for less than Apple would charge for 8/1), and a discrete GPU with a vast games catalogue...
Not to take away from the M1's superiority, but x86 is still simply good enough and will only get better.
Meteor2 - Thursday, February 4, 2021 - link
A typical user invested in x86 isn't going to change to Apple, no, but they're not the typical user. THE typical user is a lot more software-agnostic, and yes, ARM Apple is going to take marketshare. It's inevitable.
Speedfriend - Wednesday, January 27, 2021 - link
Apple market share explode? That is hilarious. The average cost of a laptop sold last year was $400. Remind me what the cheapest Mac costs? Most purchasers have no idea about relative performance, which is why they are still buying laptops with 8th generation Intel inside. Even battery life has little impact when x86 laptops claim to have up to 15 hours.
Where Mac will take some share back is among professional designers, where they had lost share over the past 5 years. But even then, the lack of multi-monitor support may hamper that.
Deicidium369 - Wednesday, January 27, 2021 - link
Apple market share will expand slightly - but not from people moving from PC to Mac. People using a Mac by choice will continue to buy and use Apple.
The hoops you need to jump through for a 2nd monitor are kinda ridiculous - a move designed to sell TB docking stations.
Deicidium369 - Wednesday, January 27, 2021 - link
I get 12 or 13 hours from my Tiger Lake XPS13 - which is about 10 hours longer than I need...
DigitalFreak - Tuesday, January 26, 2021 - link
IIRC, all their current mobile CPUs that support external graphics have 8 PCIe 3.0 lanes. That's more than enough for any dedicated GPU in a laptop right now.
ToTTenTranz - Tuesday, January 26, 2021 - link
All current external GPUs have an immense bottleneck due to Thunderbolt 3 only using 4 PCIe 3.0 lanes. I don't know if Asus' x8 solution is enough, either.
vladx - Wednesday, January 27, 2021 - link
What? Since Thunderbolt 3 has 40Gbps bandwidth, it is absolutely not "only using 4 PCIe 3.0 lanes" on PCs that have Titan Ridge controllers.
Spunjji - Thursday, January 28, 2021 - link
The short answer is: yes, it is.
The long answer is:
https://www.techspot.com/review/2104-pcie4-vs-pcie...
Tams80 - Monday, February 1, 2021 - link
x8 will be enough. It should only be a 2-3% drop in performance. Of course x16 would be nice, but I don't think OCuLink is available as that.
Fulljack - Wednesday, January 27, 2021 - link
The FP5 package used by AMD's mobile processors only allows a PCIe 3.0 x8 connection to a dGPU, but you still have an extra x4/x4 connection for I/O and storage.
When moving to AM4, desktop Renoir still has the same PCIe lanes as Matisse, that is 16+4+4 lanes.
It's not a problem, since TB3 eGPUs use PCIe 3.0 x4 anyway.
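For rough numbers on the x4 / x8 / TB3 comparison in this thread, here's a sketch using standard PCIe 3.0 link math (8 GT/s per lane, 128b/130b encoding); the ~22 Gbps cap for TB3's PCIe tunnel is the commonly cited figure and should be treated as approximate:

```python
# Rough bandwidth comparison for the eGPU discussion above.
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding ≈ 0.985 GB/s per lane,
# per direction. TB3's PCIe tunnel is commonly cited at ~22 Gbps of the
# 40 Gbps link (approximate; controller-dependent).
lane_gbs = 8 * (128 / 130) / 8  # GB/s per PCIe 3.0 lane ≈ 0.985

links = {
    "TB3 eGPU (PCIe tunnel, ~22 Gbps)": 22 / 8,
    "PCIe 3.0 x4": 4 * lane_gbs,
    "PCIe 3.0 x8 (Cezanne to dGPU)": 8 * lane_gbs,
    "PCIe 3.0 x16 (desktop slot)": 16 * lane_gbs,
}
for name, gbs in links.items():
    print(f"{name}: ~{gbs:.1f} GB/s")
```

That puts the x8 link at roughly three times the usable bandwidth of a TB3 eGPU, which is why the 2-3% figure quoted earlier is plausible.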
nils_ - Wednesday, January 27, 2021 - link
I was wondering about that as well; it also seems a bit confusing with Tiger Lake (is it 4 or 8 lanes of PCIe 4.0?). The advantage with Tiger Lake is that it has TB4 integrated in the SoC. Unfortunately I haven't seen any AMD based laptop with TB so far, so I went with Intel to keep my docking station(s). Maybe this time around there will be a model foregoing the dGPU for a TB controller.
Spunjji - Thursday, January 28, 2021 - link
TL has 4 lanes of PCIe 4.0.
Spunjji - Thursday, January 28, 2021 - link
"I've been seeing reports of it only having x8 PCIe 3.0 lanes, which could present a problem to AMD's apparent goal of pairing Cezanne with discrete GPUs."Nope. See all the announced devices with Cezanne and RTX 3070 or 3080 GPUs.
It was never a problem with Renoir, either. People just came up with post-hoc rationalizations for why Intel still dominated gaming laptops despite having an inferior CPU.
ottonis - Tuesday, January 26, 2021 - link
The mobile Zen3 CPUs are a great generational update. Glad to see a healthy increase in new design wins, and one can only hope that AMD will be able to deliver all these CPUs to the OEMs in sufficient quantities so these notebooks will be available to the consumer.
That being said, the true challenge is Apple Silicon. While AMD can beat the M1 CPU in multi core tasks, Apple will outclass everything x86 once they introduce their second gen silicon with much higher core count and other architectural improvements.
So, I wonder what kind of strategy AMD (and Intel) will follow in the near future. I remember - maybe ~10+ years ago - when AMD had some sort of transient partnership with ARM and everybody thought AMD would somehow implement ARM designs into some sort of hybrid chip. For some reason that never came to fruition.
In order to stay relevant in the mobile (and desktop) CPU market, AMD will have to react to the huge attack from Apple silicon in one way or another. So, what does AMD have up their sleeves?
Intel is apparently going the big.little route in their next generation of mobile CPUs with little Atom-based cores and big performance cores. I am curious what AMD is up to.
JfromImaginstuff - Tuesday, January 26, 2021 - link
Well, about AMD's relationship with ARM: they have an ARM license, and so does Intel for that matter. So if x86 starts going south, AMD will almost certainly abandon it, and Intel most likely will do so as well, especially with their new CEO.
Deicidium369 - Wednesday, January 27, 2021 - link
LOL - Gelsinger's great-grandkids will be on Social Security before that happens.
Ptosio - Tuesday, January 26, 2021 - link
ARM is not some magic silver bullet - MediaTek has vast experience with ARM, but are their Chromebook chips anywhere close to the Apple M1 (or Zen3 for that matter)?
And remember, AMD is yet to get access to the same TSMC process as Apple - maybe once they're on par, a large part of that efficiency advantage disappears?
ABR - Wednesday, January 27, 2021 - link
AMD has K12, which Jim Keller also worked on, waiting in the wings. Most assuredly they have continued developing it. Whether it will play in the same league as the M1 remains to be seen, but they also have the graphics IP to go with it, so they could likely come out with a strong offering if it comes to that. Not sure what Intel will do..
Deicidium369 - Wednesday, January 27, 2021 - link
Ancient design, far exceeded by even 10 year old ARM designs.
Spunjji - Thursday, January 28, 2021 - link
You say some really silly things.
Spunjji - Thursday, January 28, 2021 - link
"Apple will outclass everything x86 once they introduce their second gen silicon with much higher core count and other architectural improvements."I'll believe it when I see it. Their first move was far better than expected, but it doesn't come close to justifying the claims you're making here.
Glaurung - Saturday, January 30, 2021 - link
M1 is Apple's replacement for ultra-low power, nominal 15W Intel chips. Later this year we will see their replacement for higher powered (35-65W) Intel chips. Nobody knows what those chips will be like yet, but it's pretty obvious they'll have 8 or 16 performance cores instead of just 4, with a similar scale-up of the number of GPU cores. They'll add the ability to handle more than 16GB and two ports, and they will put it in their high end laptops and iMac desktops. Potentially also on the menu would be a faster peak clock rate. That's not an "I'll believe it when I see it," that's a foregone conclusion. Also a foregone conclusion: next year they will have an even faster core with even better IPC to put in their phones, tablets, and computers.
As of last year, Apple's chips had far better IPC and performance per watt than anything Intel or AMD could make, and they only fell short on overall performance due to only having 4 performance cores in their ultra-low power chips.
(For the record, I use Windows. But there's no denying that Apple is utterly dominating in the contest to see who can make the fastest CPUs)
GeoffreyA - Sunday, January 31, 2021 - link
Apple will release faster cores, but so will AMD. And now that they've got an idea of what Apple's design is capable of, I'm pretty sure they could overtake it, if they wanted to.
GeoffreyA - Sunday, January 31, 2021 - link
As much as I hate to say it, the M1 could be analogous to Core and the K8 in the Netburst era: the return to lower clock speeds, higher IPC, and wider execution. Having Skylake and Sunny Cove as their measure, AMD produced so and so (and brilliant stuff too, Zen 3 is). Perhaps the M1 will recalibrate the perf/watt measure, like Conroe did, like the Athlon 64 did.
I've got a feeling, too, that ARM isn't playing the role in the M1 that people are thinking. It's possible the difference in perf/watt between Zen 3 and the M1 is due not to x86 vs. ARM but rather to the astonishing width of that core, as well as its caches. How much juice ARM is adding, I doubt whether we can say, unless the other components were similar. My belief: it isn't adding much.
Farfolomew - Thursday, February 4, 2021 - link
Very nice comment, and this little thread is a really fascinating read. I hadn't thought of the comparison to the P4 -> Core 2 Duo MHz regression, but I really think you're on to something here. The thing is, this isn't anything new with the M1: Apple has been doing it since the A9 back in 2015, when it finally reached IPC parity with the Core M chips. The M1 is just the evolution and scaling-up of that to a laptop chip at a TDP equivalent to what Intel has been producing.
So the question, then, is: if it's not the "ARM" architecture giving the huge advantages, why haven't we seen a radical shift in x86 technology back to ultra wide cores and caches? Or maybe we are, incrementally, with Ice/Tiger Lake, and Zen 2/3/4?
Very fascinating times!
GeoffreyA - Sunday, February 7, 2021 - link
"Or maybe we are, incrementally, with Ice/Tiger Lake, and Zen 2/3/4?"I think that sums it up. As to why their scaling is going at a slower rate, there are a few possible explanations. Likely, truth is somewhere in between.
Firstly, AMD and Intel have aimed for high-frequency designs, which is at loggerheads with widening of a core. Then, AMD has been targeting Haswell (and later) perf/watt with Zen. When one's measure is such, one won't go much beyond that (Zen 2 and 3 did, but there's still juice in the tank). Lastly, it could be owing to the main bottleneck in x86: the variable-length instructions, which make parallel decoding difficult. Adding more decoders helps but causes power to go up. So the front end could be limiting how much we can widen our resources down the line.
Having said that, I still think that AMD's ~15% IPC increase each year has been impressive. "The stuff of legend." Intel, back when it was leading, had us believe such gains were impossible. It's going to be some interesting years ahead, watching the directions Intel, Apple, and AMD take. I'm confident AMD will keep up the good work.
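To make the decode-width point above concrete, here is a toy illustration - the "ISA" is invented for the example (the first byte encodes the instruction's length), not real x86 encoding. With fixed-length instructions, every decoder knows its start offset up front; with variable lengths, each boundary depends on decoding the previous instruction:

```python
# Toy illustration of why variable-length decode resists parallelism.
# The "ISA" here is made up: the first byte of each instruction encodes
# its total length in bytes, loosely mimicking x86's variable sizes.

def fixed_length_starts(code: bytes, width: int = 4) -> list:
    # Fixed-length ISA: all start offsets are known immediately,
    # so N decoders can work on N instructions in parallel.
    return list(range(0, len(code), width))

def variable_length_starts(code: bytes) -> list:
    # Variable-length ISA: offset i+1 depends on the length decoded at
    # offset i, forcing a serial scan (or speculative length-guessing).
    starts, i = [], 0
    while i < len(code):
        starts.append(i)
        i += code[i]  # must decode instruction i before finding i+1
    return starts

program = bytes([3, 0, 0, 1, 5, 0, 0, 0, 0, 2, 0])
print(variable_length_starts(program))  # [0, 3, 4, 9] - found one at a time
print(fixed_length_starts(bytes(16)))   # [0, 4, 8, 12] - known up front
```

Real wide x86 front ends work around this with predecode bits, uop caches, and speculative boundary guessing, which is part of where the extra power goes.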
Spunjji - Monday, February 1, 2021 - link
I'm aware of all of the above, but it still doesn't justify the original claims being made - and "Potentially also on the menu would be a faster peak clock rate" is nothing but speculation. Based on what we know about the design and the relatively poor clock scaling in respect of TDP shown between the A14 and M1, I'd say it's extremely unlikely that Apple will be able to push clocks up by more than a couple of hundred megahertz without a significant redesign.
What Apple will most likely have in that TDP range is something that's performance-competitive with Cezanne on the CPU side in native applications, significantly outclasses it on the GPU side, and maintains a perf/watt advantage commensurate with the node advantage that largely disappears when running translated code.
It's still far better than what Intel have, but it's not going to redefine the industry. If that order of advantage were enough to do so, then AMD wouldn't have existed after 2007.
GeoffreyA - Sunday, January 31, 2021 - link
"Apple will outclass everything x86 once they introduce their second gen silicon"I've noticed the idea circulating is that the M1 is Apple's first-generation CPU. Sure, it may be the first one going into a computer, but as far as I'm aware, the M1 descends from the A14, which goes back to the A6 of 2012. How many iterations is that? Nine? Granted, some might be "ticks," but this certainly isn't a brand-new design. Zen 3, despite borrowing from Athlon, Bulldozer, and Core, is on its 3rd iteration, or 4th if one counts Zen+.
Lucky Stripes 99 - Tuesday, January 26, 2021 - link
Not only have iGPUs cannibalized the sub-$100 discrete GPU market, but they have also chewed into the cool-and-quiet GPU market. If you have an HTPC or compact mITX system, your options aren't that great. I'd really like an RTX 3060L on a low profile PCIe card, but I won't hold my breath.
Also, I love the return of the 16:10 screen format. I'd kill for a 27" desktop version of the X13's screen with the same resolution and color coverage.
DigitalFreak - Tuesday, January 26, 2021 - link
What's the problem with integrated graphics in an HTPC?
Lucky Stripes 99 - Tuesday, January 26, 2021 - link
New features tend to come slower to iGPU parts than to discrete GPU parts. As an example, it used to be very difficult to build a 4K60p system with a Raven Ridge APU because so few boards supported HDMI 2.0. Likewise, you're often stuck with an older video decoder/encoder than what is available on the discrete GPU market. Luckily, the only feature missing from the latest generation of AMD parts is hardware AV1 decoding, which will come with the RDNA2 APUs next round.
dudedud - Tuesday, January 26, 2021 - link
I thought you would be including the M1 in more benchmarks besides GB and SPEC. :/
danwat1234 - Tuesday, January 26, 2021 - link
"But what is perhaps worrying is the temperature, being reported as a sustained 92-94ºC on average. That’s quite high. " --> 94C is fine, the silicon is rated to handle it 24/7. What is strange to me is that it most of the tests, the CPU temperature stays in the 80s, when there is more thermal headroom to go. It could clock higher.Fulljack - Wednesday, January 27, 2021 - link
80°C means either you have a glaring jet sound on your laptop fans or downclock it enough to keep it quite.
Deicidium369 - Wednesday, January 27, 2021 - link
Quite what?
And pretty sure glaring refers to vision, while blaring refers to SPL.
abufrejoval - Tuesday, January 26, 2021 - link
I wonder why you peg the mobile 8-core against a desktop 6-core instead of the 5800X...?
Having the various 8-cores side-by-side allows a much better understanding of how architecture and power settings compare generally.
BTW, I tried to do the manual comparison via Bench, but it seems the Cezanne results aren't in there yet.
Lemnisc8 - Tuesday, January 26, 2021 - link
Can someone PLEASE find out if this thing is running in quad channel or dual channel LPDDR4X? It's already at a disadvantage, since an LPDDR4X channel has half the bus width of standard DDR4. It would be fine if it ran in quad channel, because its bus width would then be the same size as DDR4 at 128 bits, but no reviews anywhere show what channel configuration it's running in...
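For context, the arithmetic behind the question - a sketch of peak-bandwidth math, where "quad channel" means four 32-bit LPDDR4X channels forming a 128-bit bus:

```python
# Peak-bandwidth arithmetic behind the channel-width question above.
# bandwidth (GB/s) = transfer rate (MT/s) * bus width (bytes) / 1000
def peak_gbs(mt_per_s: int, bus_bits: int) -> float:
    return mt_per_s * (bus_bits / 8) / 1000

configs = {
    "LPDDR4X-4267, 2 x 32-bit (64-bit bus)": (4267, 64),
    "LPDDR4X-4267, 4 x 32-bit (128-bit bus)": (4267, 128),
    "DDR4-3200, dual channel (128-bit bus)": (3200, 128),
}
for name, (rate, bits) in configs.items():
    print(f"{name}: ~{peak_gbs(rate, bits):.1f} GB/s")
```

So the difference at stake is roughly 34 GB/s versus 68 GB/s of peak bandwidth, which is why the channel configuration matters so much for the iGPU.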
neblogai - Tuesday, January 26, 2021 - link
I don't think there were any 4000-series laptops running LPDDR4X in just dual channel - I've only seen it quad-channel. So this flagship device (used by AMD to impress with 5000H performance) should be no different.
xza23 - Tuesday, January 26, 2021 - link
As always, excellent article, thank you!
watzupken - Tuesday, January 26, 2021 - link
I feel that with the introduction of Renoir, what blew most away was the fact that AMD managed to squeeze 8 cores into the U series. Not only that, the Zen 2 architecture also resulted in some serious uplift in performance compared to the previous Zen+. This year, while it is all nice and good to see a decent performance bump, the wow factor is not there. I am not expecting a core increase, especially on the same N7 node, and to be honest, 8 cores is plenty of performance for a mobile PC.
On the point of still using Vega: despite the age, Vega is still very competitive. One may argue that Intel's Xe graphics is better, but reviews out there prove otherwise. Xe is certainly fast, but both the iGPUs from AMD and Intel are likely memory bandwidth limited if one is pushing 1080p. Adding more cores will likely have diminishing returns. And honestly, if you are a gamer, you cannot avoid getting a system with a dedicated GPU no matter how good the iGPU is.
Fulljack - Wednesday, January 27, 2021 - link
I agree, the R&D cost of moving from Vega to RDNA probably isn't worth it in the grand scheme of business.
Rumor has it that in 2022, Rembrandt will still leverage Zen 3 CPUs but will use RDNA2 with DDR5 memory.
Ptosio - Wednesday, January 27, 2021 - link
Shouldn't it be pretty straightforward, given that these APUs already kind-of exist in the consoles? Hopefully Alder Lake will push AMD to offer the best CPU/GPU combination they have!
As I understand it, going to RDNA2 would also mean a smaller core for the same performance? So there should be some savings in it for AMD as well.
Spunjji - Thursday, January 28, 2021 - link
"Shouldn't it be pretty straightforward given that these APU already kind-of exist in the consoles?"Those APUs use a totally different memory subsystem, much larger GPU slices, and they also use Zen 2 cores. AMD were specifically aiming to get Zen 3 out across their range - there's probably a lot of work needed to scale RDNA 2 down to iGPU levels without unbalancing its performance.
zamroni - Tuesday, January 26, 2021 - link
AMD should reduce Cezanne's core count to 6, then use the transistor budget for more GPU cores. That way it will beat all Intel laptop processors in all aspects.
dicobalt - Tuesday, January 26, 2021 - link
Now they need to sell a version that cuts out the silly integrated graphics and uses a faster dedicated GPU. I don't understand the motivation for having a steroid pumped 8 core CPU paired with anemic integrated graphics. It seems AMD is more interested in selling the idea of APUs than actually providing a balanced system.
Zizy - Wednesday, January 27, 2021 - link
AMD is clear that the integrated GPU is for the very same chip at 15W. It is a pretty fine GPU there; TDP and bandwidth limit its potential anyway. I wonder what the die area saving would be from ditching the GPU. If it is sizeable, then yeah, it would be great to have a GPU-less variant of the chip, especially with the current wafer supply issues.
GeoffreyA - Wednesday, January 27, 2021 - link
That would mean more design effort on AMD's part, having APU and non-APU versions. In all likelihood, not worth it to them. Anyhow, these iGPUs work pretty well for older or simpler games. Also, some people might be interested more in the CPU performance (say, for encoding) but would still like to play the odd game now and then.
jakky567 - Wednesday, January 27, 2021 - link
For laptops? Yeah, you kinda want integrated graphics for the power savings. I mean, not everyone needs a dedicated GPU.
Ideally you'd get both, but yeah, AMD hasn't made RDNA2 available in laptops. You can get Ampere in quite a few, though.
oRAirwolf - Tuesday, January 26, 2021 - link
"...as well as a Western Digital SN350 1TB PCIe 3.0 x4 NVMe storage drive."I think you meant an SN530
TheHughMan - Wednesday, January 27, 2021 - link
Using Threadripper model numbers on laptop CPUs can drive unrealistic expectations through the roof.
Qasar - Wednesday, January 27, 2021 - link
How is this using Threadripper model numbers?
yeeeeman - Wednesday, January 27, 2021 - link
Barely faster than last gen... AMD has hit the process limitation wall, it seems; they need 5nm.
ABR - Wednesday, January 27, 2021 - link
Thank you for including a compile benchmark, finally!
Retycint - Wednesday, January 27, 2021 - link
Apparently, on Chinese forums there have been reports of abnormally slow L3 cache speeds - 150 GB/s instead of 600 GB/s, tested via AIDA64. Not sure about the exact numbers, but the actual speeds were supposedly much lower than the theoretical speeds. Is there any indication of this occurring?
Oxford Guy - Wednesday, January 27, 2021 - link
'1 TB NVMe SSD'
Underwhelming, considering my 2013 MacBook Pro has a stock 1 TB SSD.
Kuhar - Wednesday, January 27, 2021 - link
LOL, what a liar. Everyone can check on Google what the stock MBP 2013 was. To inform you - stock was 256 GB. Besides that - if all you think about is big size, you will never find a GF.
Tomatotech - Thursday, January 28, 2021 - link
Wrong. Check Wikipedia - 2013 MacBook Pros were available from Apple with 1TB SSDs. They're still good even now, as you can replace that 2013 Apple SSD with a modern NVMe SSD for a huge speed-up.
And yes, Apple supported the NVMe standard before it was even a standard. It wasn't finalised by 2013, so these Macs need a $10 hardware adaptor in the M.2 bay to physically take the NVMe drive, but electronically and on the software level NVMe is fully supported.
Kuhar - Thursday, January 28, 2021 - link
Sorry, but you are wrong or don't understand what stock means. Apple's own website states clearly that the MBP 2013 had a STOCK 256 GB SSD with the OPTION to upgrade to as high as a 1 TB SSD. So maybe your Apple lies again and the wiki is ofc correct. On top of that: bragging about a 1 TB SSD when in the PC world you could get a 2 TB SSD in top machines isn't really something to brag about.
GreenReaper - Saturday, January 30, 2021 - link
Stock means that they were in stock, available from the manufacturer for order, which is fair to apply in this case. Most likely they didn't have any SSD in them until they were configured upon sale.
What you're thinking of is base. At the same time, it's fair to call this out as an unfair comparison, because the 1 TB is cited as the standard/base configuration of this model, where it wasn't for the MBP.
grant3 - Wednesday, January 27, 2021 - link
1. Worrying about what was standard 7 years ago as if it's relevant to what people need today is silly.
2. 1TB SSDs were probably about $600-$700 in 2013. If you spent that much to upgrade your MBP, good for you; that doesn't mean it's the best use of funds for everyone.
Makste - Wednesday, January 27, 2021 - link
It is a good review, thank you Dr. Ian.
My concern is, and has always been, the fact that CPU manufacturers put beefier iGPUs on higher core-count CPUs, which is not right/fair in my view, because higher core-count CPUs, and most especially the H series, are most of the time bundled with a dGPU, while lower core-count CPUs may or may not be. I think lower core-count APUs would sell much better if their iGPUs were made beefier (they have enough die space for this, I suppose), in order to satisfy clients who can only afford lower core-count CPUs which are not paired with a dGPU. It's a bit of a waste of resources, in my view, to give 8 Vega cores to a Ryzen 9 5980HS which is going to be paired with a dGPU, and only 6 Vega cores to a Ryzen 3 5300 whose prospects of being paired with a dGPU are limited.
I don't know what you think about this, but if you agree, then it'd be helpful if you managed to get them to reconsider. Thanks.
Spunjji - Thursday, January 28, 2021 - link
I get your point here, and I agree that it would be a nice thing to have - a 15W 4-core CPU with a fully-enabled iGPU would be lovely. Unfortunately it doesn't make much sense from AMD's perspective - they only have one chip design, and they want to get as much money as possible for the fully-enabled ones. It would also add a lot of complexity to their product lineup to have some models with more CPU cores and fewer GPU CUs, and some that reversed the balance. It's easier for them just to have one line-up that goes from worst to best. :/
Makste - Thursday, January 28, 2021 - link
Yes. It could be that they are sticking with their original plan from the time they decided to introduce iGPUs to x86. But I don't see why they can't overhaul their offerings now that they are also on top. They could still offer 8 Vega CUs from the beginning of the series up to the topmost 8-core CPU offering, and those would be the high-end parts.
Then the other mid- and low-end variants would be those without the fully-enabled Vega iGPU. This way nothing would be wasted, and Cezanne would then have a multitude of offerings. I believe people, even at this moment, would like to own a piece of Cezanne, be it 3 cores or 5 cores. I think it's for the customer to decide what is valuable and what is not. Black-and-white thinking won't do (that CPUs will only sell if their cores come in even numbers). They should simply offer everything they have, especially since their design allows them to do so, and more so now that there are supply constraints.
Spunjji - Friday, January 29, 2021 - link
The problem is that it's not just about what the end-user might want. AMD's customers are the OEMs, and the OEMs don't want to build a range of laptops with several dozen CPU options in it, because then they have to keep stock of all of those processors and try to guess the right number of laptops to build with each different option. It's just not efficient for them. Unfortunately, what you're asking for isn't likely to happen.
Makste - Friday, January 29, 2021 - link
Sigh... I realise the cold hard truth now that you've put it more bluntly... An OEM has to fill this gap.
Spunjji - Thursday, January 28, 2021 - link
I might be in the market for a laptop later this year, and it's nice to know that, unlike the jump from Zen+ to Zen 2, the newer APUs are better but not *devastatingly so*. I might be able to pick up something using a 4000-series APU at a discount and not feel like I'm missing out, but if funds allow I can go for a new device with a 5000 APU and know that I'm getting the absolute best mobile x86 performance per watt/dollar on the market. Either way, it's good to see that the Intel/Nvidia duopoly is finally being broken in a meaningful way.
I do have one request - it would be nice to get a separate article with a little more analysis of Tiger Lake in shipping devices vs. the preview device they sent you. Your preview model appears to absolutely annihilate its own very close retail cousin here, and I'd love to see some informed thoughts on how and why that happens. I really don't like the fact that Intel seeded reviewers with something that, in retrospect, appears to significantly over-represent the performance of actually shipping products. It would be good to know whether that's a fluke or something you can replicate consistently - and, if it's the latter, for that to be called out more prominently.
Regardless, thanks for the efforts. It's good to see AMD maintaining a good pace. When they get around to slapping RDNA 2 into a future APU, I might finally go ahead and replace my media centre with something that can game!
Makste - Thursday, January 28, 2021 - link
I've reviewed the charts. From what I've seen, the MSI Prestige Evo 14 is configured at 35W and happens to match the performance of the Intel reference configured at 28W. The biggest discrepancy I have seen between the two is in the y-Cruncher benchmark. However, putting this one benchmark aside, Intel's reference unit doesn't seem to differ greatly in performance from the shipping Intel unit in the MSI Prestige Evo 14.
Spunjji - Friday, January 29, 2021 - link
Having gone over things again more carefully, I definitely overstated things when I said "annihilate", but in the tests where they both appear, the Intel reference platform at 28W is faster than the MSI Prestige at 35W more often than the other way around (16-13). When the MSI does win, it's often not by much, and there are even 7 examples where the Intel reference platform at 15W beats the MSI at 35W - usually single-thread tests.
My best guess is that the Intel platform might be showing some sort of latency advantage, possibly related to how quickly it can shift between speed states - which would favour it in the shorter and/or more lightly-threaded tests. I'd love to see a detailed analysis, though - ideally with more Tiger Lake platforms, as I think the Prestige may actually be one of the fastest ones shipping.
gruffi - Thursday, January 28, 2021 - link
And people really thought that Tiger Lake's new iGPU would be superior to "old" Vega. It loses here in 6 out of 8 titles. Nice job, Intel marketing. LOL.
iLloydski - Thursday, January 28, 2021 - link
Why isn't PCIe 4.0 a thing in mobile?
Spunjji - Friday, January 29, 2021 - link
Power consumption and board complexity. Tiger Lake has PCIe 4.0, but only 4 lanes of it.
Farfolomew - Thursday, February 4, 2021 - link
Yeah, and Tiger Lake is better at power consumption too. So why again has AMD dropped the ball on adding PCIe 4.0? Last year it was acceptable, with Renoir, as PCIe 4.0 was brand new to desktop and Intel wasn't anywhere close to releasing it, but now it feels like AMD missed the bus on this one, along with not providing the now royalty-free Thunderbolt 4 connection.
jtd871 - Thursday, January 28, 2021 - link
It doesn't make sense for AMD to put high-end graphics on-die, as that cuts into the power budget and die-space budget - plus they want to sell discrete mobile GPUs to laptop OEMs. They'll continue to include good-enough graphics, but there isn't a compelling reason for them to waste die space on a solution that isn't needed for most normal laptop use cases, or that would cannibalize sales of discrete graphics.
sandeep_r_89 - Friday, January 29, 2021 - link
Was that liquid metal TIM on the CPU in the picture? Did Asus actually use liquid metal TIM for a consumer product?
Spunjji - Monday, February 1, 2021 - link
It sure does look like it - and would explain the insulating goop surrounding it.
It's probably necessary to get the CPU's expected performance out of a device in this form factor without it constantly sounding like a tiny jet engine.
Tams80 - Monday, February 1, 2021 - link
Yes.
Max_Nexor - Tuesday, February 2, 2021 - link
Do you use the Handbrake presets unmodified? If so, have you considered turning off the de-interlacing filter?
Filters can slow down transcoding speed dramatically. For example, using the same video and the 480p Discord preset, my system (i7-4790K) transcodes the video at 136 fps. Turning off the interlace detection and the de-interlacing filter results in a speed of 221 fps. Enabling the Denoise filter reduces transcode speed from 221 fps to 15 fps.
Would it not be better to test just the transcoding speed, without any filters?
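For illustration, a minimal A/B timing harness along those lines - a sketch assuming HandBrakeCLI is on the PATH; the file names and the preset string are placeholders (check "HandBrakeCLI --preset-list" for your build's names), and --no-comb-detect / --no-deinterlace are HandBrake's preset-override switches for disabling those filters:

```python
# Time the same clip twice: once with the stock preset, once with the
# preset's comb-detect and deinterlace filters disabled.
import subprocess
import time

PRESET = "Discord Small 2 Minutes 480p30"  # assumed preset name; adjust

def transcode(out_name, extra_flags):
    t0 = time.perf_counter()
    subprocess.run(
        ["HandBrakeCLI", "-i", "input.mkv", "-o", out_name,
         "--preset", PRESET, *extra_flags],
        check=True, capture_output=True)
    return time.perf_counter() - t0

with_filters = transcode("out_filters.mp4", [])
no_filters = transcode("out_nofilters.mp4",
                       ["--no-comb-detect", "--no-deinterlace"])
print(f"preset as-is: {with_filters:.1f}s | filters off: {no_filters:.1f}s")
```

Timing whole runs with wall-clock keeps the comparison independent of HandBrake's own fps reporting.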
Farfolomew - Thursday, February 4, 2021 - link
I'd like to point out that this new Asus Flow X13 laptop is quite unique, and kinda/sorta the first of its kind of any device out there. How do I mean?
Well, up to now, it's been all but impossible to buy a 360-degree-hinged TOUCHSCREEN device that also has a 120Hz-refresh panel. The only other laptops that had this were ones from HP with a first-generation Privacy Screen built in. The privacy screen has a feature you can turn on and off to severely limit the viewing angles when desired. The first-gen versions of these had a knock-on effect of running at 120Hz, so the touchscreen, 360-degree-foldable versions of those are the only other touchscreen laptops with high-refresh panels. None of them had particularly great gaming performance, as the best one available was a 4-core Kaby Lake-R powered Ultrabook. The new versions of HP's Privacy Screen no longer run at 120Hz, so it was a limited-time option.
That's why I'm so excited about this ASUS laptop. I wish it didn't have an external GPU and only relied on the iGPU instead. But this is as close to my perfect device as has yet been created.
Tams80 - Thursday, February 4, 2021 - link
You do realise that the external GPU dock is optional, right? And that *all* models have an Nvidia GTX 1650 in the laptop itself?
Farfolomew - Thursday, February 4, 2021 - link
AMD says it had "100 design wins" for Renoir, and 50% more, "150 design wins", for Cezanne. Whatever a "design win" means, for that matter.
All I know is that when I go into Best Buy to peruse the laptop section, I consistently see two things:
1. While it is a substantially bigger section, with at least 10x as many different products on display, the "Windows" section is nearly always barren of customers, while the Apple section is dangerously close to violating every COVID best practice known to science.
2. Last I counted, there were about 20-30 Intel Tiger Lake "EVO"-branded laptops from all the major OEMs, while I saw fewer than five, yes FIVE IN TOTAL, Ryzen laptops even available for sale. And most of those Ryzen laptops were of the gaming variety. I don't recall seeing a single Ryzen Ultrabook (i.e., with just an iGPU).
So I want to know: what exactly are AMD's plans for changing this? They may have an overall better product (albeit, at times, only narrowly) than Intel's Tiger Lake, but if they can't ship these and get them out in front of the average Joe customer who doesn't follow the tech scene, they'll never gain significant market share from Intel in the laptop segment.
GeoffreyA - Sunday, February 7, 2021 - link
I wonder how much of this is due to public perception. Enthusiasts know that AMD is good, but most people don't know or care, while some have a vague instinct telling them Intel is first-rate and AMD substandard (corroborated by the advice of salespeople in the shop). The laptop seems proper if it's got an Intel sticker, otherwise no good. And that's something which will be very hard for AMD to change. Perhaps fewer Ryzen gaming laptops will help. Even a new logo/sticker for their mobile CPUs, with a minimal, elegant design.
GeoffreyA - Sunday, February 7, 2021 - link
Also, they need to capture the general public's imagination, as silly as that sounds. Just like Samsung did. There was a time when people didn't think much of Samsung (in my country, at any rate), but nowadays you go into the shop and they've got the best fridges, TVs, and washing machines. Or that's the perception.
Farfolomew - Thursday, February 4, 2021 - link
I'm a bit disappointed and left scratching my head regarding the GPU review section of this article. 360p, 480p resolutions... what, are we playing MS-DOS games in EGA 256-color mode?!
How did those games even run at such low resolutions? That's mind-boggling.
At any rate, it all led to a confusing non-conclusion about what exactly the iGPU performance of these Cezanne chips is, and how it compares to Tiger Lake. Is it better than Tiger Lake or not? How much better than Renoir is it?
I realize this isn't all Ian's fault, as the laptop given to him has a 35W CPU, and we're asking him to compare it to 15W Ultrabooks, etc. But it was still very confusing to me. Hopefully it will become more clear when (and if) AMD Cezanne Ultrabooks come out with 15W parts.