I would prefer it if AMD made a quad-core Zen with at least 1024 GCN cores, plus whatever memory subsystem is needed to feed the chip, including RAM and motherboard, for less than $500 in 2016. Pretty reasonable if you ask me.
The PS4 processor, which AMD makes, is an R9-280 with an 8 core CPU. The problem is that it would be memory limited quite badly on a PC motherboard. In the PS4 it's paired with very fast memory.
Possibly when AMD goes to DDR4 their APUs will start to shine, and they'll have the memory bandwidth to go wider on the GPU.
It's not a 280 -- it's got only 20 GPU clusters for 1280 shader lanes; two clusters are there for yield purposes and 4 more are dedicated to compute tasks. It has only 896 shaders dedicated to graphics and 256 for compute (though some engines leverage compute during rendering, so the line is a bit fuzzy). The PS4 has a 512-bit memory bus, which is really wide for the GPU power, but it's also feeding CPU compute. It's got 8 ACEs like the 290/X.
The 280 fully unlocked has 32 clusters for 2048 shaders. A 280/X has a 256-bit GDDR5 bus, and only 2 ACEs.
What's in the PS4 is also custom-extended beyond any retail GPU, but the closest thing would be something like a Hawaii (290/X) or Tonga (285) cut down to 18 clusters.
The PS4 doesn't have 256 shaders reserved for compute only; it has 1280 shaders total and that is it. How you decide to divide the workload for your engine is up to you. What you are thinking of was taken totally out of context: it was an example of how to use the tools provided to developers.
Actually, the R9 280/280X have a 384-bit bus and their memory is clocked higher. That is why they offer around 288 GB/s for the GPU alone, while the PS4 has 176 GB/s shared across the whole system, including RAM/bus/CPU/GPU operations.
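(For anyone who wants to sanity-check those figures, here's a quick sketch of the arithmetic, assuming the commonly quoted clocks of roughly 6 Gbps per pin for a 384-bit 280X and 5.5 Gbps per pin for the PS4's 256-bit bus; these clocks are an assumption, not a spec sheet.)

```python
# Back-of-the-envelope GDDR5 bandwidth check (assumed per-pin transfer rates)
def peak_bandwidth_gbs(bus_width_bits, transfer_rate_gbps_per_pin):
    # Peak bandwidth (GB/s) = bus width in bytes * effective transfer rate per pin
    return (bus_width_bits / 8) * transfer_rate_gbps_per_pin

print(peak_bandwidth_gbs(384, 6.0))  # R9 280X: 288.0 GB/s
print(peak_bandwidth_gbs(256, 5.5))  # PS4:     176.0 GB/s, shared between CPU and GPU
```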
Not sure if trolling, but anyway. They said "no comments" to express great disappointment. As in "nothing to say about the poor performance; the numbers speak for themselves."
They said "no comments" as in "I have nothing to say about the poor performance of this APU. The number's speak for themselves." It had nothing to do with the actual number of comments.
No, not really. They would have said "no comment" if that was the case, instead of "no comments."
They're saying that AMD is in a sorry state these days because nobody is bothering to comment on a review of their new APU. They probably didn't realise the article had just been published, though.
It only outplays Intel's offerings when it comes to the somewhat irrelevant onboard gaming market. Usually it barely matches Core i3 performance while sucking three times the power. Not really impressed by this piece of silicon.
Maybe irrelevant to you; maybe not as irrelevant to many others. Where the performance of the Core i3 may shine much brighter on paper, that may not be the case for the typical daily use of a typical computer: Facebook, YouTube, Netflix, and some gaming on a tight budget.
Could we get a benchmark or two from the Broadwell NUC line included? Comparing an AMD part released today against Intel's year-plus-old iGPUs is a little disappointing.
Well, Broadwell was supposed to be out in 2013 according to tick-tock. So Intel is at least 1.5 years late to their own party. However, doing smaller nodes is REALLY hard, so it's hard to blame them.
Err, no. Haswell was 2013 and Ivy was 2012. Broadwell would have been summer of 2014. It ended up being a mostly paper launch in late fall 2014, with parts meaningfully showing up winter of 2015 and Intel mentioned up front that Broadwell would be a mostly mobile release.
So they are perhaps 6 months late on Broadwell, but unless something changes, Intel is still claiming summer/fall for Skylake, which puts them right back on the original schedule (and that's probably also why Broadwell is limited: Intel has known about their 14nm node issues for a while, so they limited the launch to get the node out there and gain more experience on it, and then will jump into Skylake with both feet).
I was just thinking it wouldn't be too far off as a total cost comparison - a tall body i5 nuc + win 8 license + ram and scrounged up HDD/ssd is just about $600, which isn't too far above what a simple box with this would run. And my suspicion is that you don't give up too much gfx perf going down to the i3 and saving a hundred. Bandwidth being the bottleneck that it is.
A rebadge from what, exactly? No... it's not a rebadge, it's just a lower-model SKU in a lineup that we have already seen. That is not what a rebadge is. We have not seen this core in this (or a similar) config released with a different SKU before.
Until, by the end of the year, you start to see DX12 benchmarking :) and this more powerful silicon gets a free bump.
25W, by the way, is just the difference of a light bulb near your desktop, or the minimum power consumption of the dedicated GPU you need to make up for the lack of onboard Intel GPU power :)
If this abomination is all about mobile applications, no wonder one has to search with a flashlight for an hour to find a notebook on AMD, and then it's some 15-inch TN crap.
And in daily use it's extremely easy to spot a difference, since systems on AMD will always have wailing cooling.
Ummm. Not quite. For desktop, the APU concept might be mostly irrelevant unless you're on a tight budget. For laptop people (like me), the APU is everything. To get a discrete graphics chip, you are generally looking at north of $1000. If your laptop budget is around $500 or so and you want to play the occasional game, the APU matters. An AMD processor will game circles around an Intel chip if using built-in graphics.
My dream machine right now is a laptop with a high-end Carrizo and a DisplayPort output to drive big monitors.
AMD integrated graphics is better than Intel's... but only if we're talking about desktop offerings with a 95W TDP. AMD's mobile offering (low-power APUs with "good enough" graphics) is pretty much non-existent.
Coming from someone who had an AMD APU notebook: no. AMD's graphics are nowhere near as nice in mobile, where the low TDP hammers them. When it comes to games, Intel's HD 4600 ran circles around the A10-4600M and the A10-5750M. Framerates were not only higher, but much more consistent. AMD's Kaveri chips were 15 watts, and still couldn't match 15-watt Intel chips.
Did you look into why the i3-4130T ended up faster in x265 than the i3-4330? The latter is strictly faster, and there is no turbo that could differentiate things due to individual chip quality. I suspect some of those results must be wrong, which sort of casts a shadow over all of them.
(I hope you didn't mix different x265 versions, because the encoder is continually being optimised and thus newer versions do more work per MHz than older ones. You never say what parameters/data the tests use, so it is hard to guess what went wrong.)
It seems you have a better idea of designing silicon than AMD. Why not make your own silicon so that you will be impressed by your own expectations? The APU is a revolutionary design and no silicon maker can match it for general-purpose use, from office to gaming.
I think this PC setup is a good option. We all shop on budgets; I don't know anyone who does not. If more money comes in, say, 6-12 months later, I would just buy a dedicated GPU (~150 bucks) and that's it...
That's because Intel's efforts are solely focused on laptops/mobile. They dominate the high end, and would only compete with themselves. This at least leaves AMD an opening next year though, as cramming battery life into the Core series has stalled Intel's development of performance per mm^2 other than process shrink.
Ian, I'll grant you it isn't abysmal performance and I doubt most casual users would notice a difference. It doesn't seem honest to say that, "While the APUs aren't necessarily ahead in terms of absolute performance, and in some situations they are behind, but with the right combination of hardware the APU route can offer equivalent performance at a cheaper rate"
Uhhhh, unless I misread the benchmarks, the AMD processors are at least a little behind to a lot behind vaguely similarly priced Intel processors in the vast majority of CPU benchmarks. That doesn't say "in some" to me; that to me says in most or almost all.
The only place I see them is either extreme budget or where size constraints prevent you from getting even a cheap discrete graphics card. Cost- and performance-wise, you'd probably be better off with something like a GTX 750 or 750 Ti combined with an Intel Celeron or Pentium Haswell processor.
I really want Zen to be a turn around.
A quick Amazon check shows that an Intel Haswell Pentium, plus H97 board, plus 2x2GB of DDR3-1600 and a GTX750 would run you in the region of $250. Granted that doesn't include case ($30 for low end), PSU ($40 for a good low power one) or storage ($90 for a 120GB SSD or $50-60 for a 2TB HDD), but it sounds like it was well within that $300 budget considering the bits that could have/were reused...
Definitely to each his own. I just think that, especially once you start getting into "dual graphics" (even low end), you are almost certainly better off with two discrete cards, or just getting a slightly faster discrete card, than relying on iGPU+dGPU to drive things, along with a somewhat better processor that might not be any more expensive (or might be cheaper, e.g. a Haswell Pentium/Celeron).
No matter what people say, AMD is driving itself into an ever tighter corner, be it in the CPU or GPU realms. One really has a hard time trying to justify choosing them over Intel/nVidia, except for some very specific, and sometimes bizarre, circumstances (e.g.: because the only thing I do is compress files in WinRAR, I end up finding the AMD FX and its 8 cores the best cost/benefit ratio!). The A8-7650K is no different. It is sad that things are like that. As a consumer with no intrinsic brand preferences, I would like to see real competition.
Try compressing those files using 7Zip, and you'll see a dramatic improvement on the FX-8350. 7Zip is highly optimized for multi-threading, whereas WinRAR is single-threaded.
I've been getting BSODs lately due to a bad Windows Update. The Microsoftie asked me to upload a complete memory crash dump. There's no way I can upload a 16GB dump file in a reasonable timeframe on a ~800kbps upload connection, especially when my machine BSODs every 24 hours. Compression brought that down to a much more manageable 4GB.
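(As an aside, the reason extra cores help so much here is that an archiver can compress independent blocks in parallel. A minimal Python sketch of that idea follows; the dump file name is hypothetical, and real archivers like 7-Zip do this far more cleverly than this.)

```python
import lzma
from concurrent.futures import ProcessPoolExecutor

CHUNK = 64 * 1024 * 1024  # compress the file in independent 64 MB chunks

def compress_chunk(data):
    # Each worker process compresses its own chunk, so every core stays busy.
    return lzma.compress(data, preset=6)

def parallel_compress(src_path, dst_path, workers=8):
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst, \
         ProcessPoolExecutor(max_workers=workers) as pool:
        chunks = iter(lambda: src.read(CHUNK), b"")
        # Note: map() submits all chunks up front, so this sketch trades memory for simplicity.
        for blob in pool.map(compress_chunk, chunks):
            dst.write(len(blob).to_bytes(8, "little"))  # length prefix so chunks can be split apart later
            dst.write(blob)

if __name__ == "__main__":
    parallel_compress("MEMORY.DMP", "MEMORY.DMP.lzchunks")  # hypothetical dump file name
```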
I use it every day :( rocking an FX-8350 for the last 3 years... I picked it up for $180 with the CPU and(!!) motherboard. I was about to pick up a 3770K too; I saved about $200 but am about 15-20% down on performance. And if you're worried about electrical cost, you're walking over dollars to pick up pennies.
I do it to send pictures of work I do, and a good SSD is key :)
If you look at the WinRAR benchmark, that result strongly suggests that WinRAR is multi-threaded. I mean, the two-core, two-thread Pentium is clearly slower than the two-core but four-thread Core i3, the quad-core i5 is clearly faster than the Core i3, and the Core i7 with its eight threads is clearly faster than the Core i5. Hence galta's comment that the AMD FX with 8 cores is probably even faster, but he says that this is not normal usage.
There has been an actual checkbox in WinRAR for multithreading for ages now. ROFL. 95% of Usenet uses WinRAR, as does most of the web. That doesn't mean I don't have 7-Zip installed; it's just only installed for the once every 6 months I find a file that uses it.
You apparently didn't even read what he said. He clearly states he's using winrar and finds FX is much faster using 8 cores of FX in winrar. You're like, wrong on all fronts. He's using winrar (can't read?), he's using FX (why suggest it? Can't read?) AND there is a freaking check-box to turn on multi-threading in the app. Not sure whether you're shilling for AMD here or 7zip, but...jeez.
The last AMD CPU I had was the old and venerable 386DX@40MHz. Were any of you alive back in the early 90s? Ever since, I've been using Intel. Of course there were some brief moments during this time when AMD had the upper hand, but the last time it happened was some 10 years ago, when the Athlon and its two cores were a revolution and smashed the Pentium Ds. It's just that during that particular moment I wasn't looking for an upgrade, so I've had Intel ever since.

Having said that, I have to add that I don't understand why we are spending so much time discussing file compression. Of course the more cores you have the better, and AMD happens to have the least expensive 8-core processor on the market, BUT most users spend something like 0.15% of their time compressing files, making this particular shiny performance irrelevant for most of us. Because most other software does not scale so well with multithreading (and for games, it has nothing to do with DX12, as someone said elsewhere), we are most likely interested in performance per core, and Intel clearly has the lead here.
Truth is, the average user won't be able to tell the difference between a system with an i3 running on an SSD and an A6-7400K on an SSD, or even an A10-7850K, which would be more direct competition for the i3. I build about 2-4 new Intel and AMD systems a month, and the only time I myself notice is when I'm setting them up; after that they all feel relatively close in speed due to the SSD, which was the largest bottleneck to have been overcome in the last 10 years.
So Intel might feel snappier, but it is still not much faster in day-to-day use of heavy browsing and media consumption, as long as you have enough RAM and a decent SSD.
Ian Cutress wrote: "Being a scaling benchmark, C-Ray prefers threads and seems more designed for Intel."
It was never specifically designed for Intel. John told me it was, "...an extremely small program I did one day to figure out how would the simplest raytracer program look like in the least amount of code lines."
The default simple scene doesn't make use of any main RAM at all (some systems could hold it entirely in L1 cache). The larger test is more useful, but it's still wise to bear in mind to what extent the test is applicable to general performance comparisons. John confirmed this, saying, "This thing only measures 'floating point CPU performance' and nothing more, and it's good that nothing else affects the results. A real rendering program/scene would be still CPU-limited meaning that by far the major part of the time spent would be CPU time in the fpu, but it would have more overhead for disk I/O, shader parsing, more strain for the memory bandwidth, and various other things. So it's a good approximation being a renderer itself, but it's definitely not representative."
As a benchmark though, c-ray's scalability is incredibly useful, in theory only limited by the no. of lines in an image, so testing a system with dozens of CPUs is easy.
Thanks for using the correct link btw! 8)
Ian.
PS. Ian, which c-ray test file/image are you using, and with what settings? i.e. how many threads? Just wondered if it's one of the stated tests on my page, or one of those defined by Phoronix. The Phoronix page says they use 16 threads per core, 8x AA and 1600x1200 output, but not which test file is used (scene or sphfract; probably the latter I expect, as 'scene' is incredibly simple).
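(Not c-ray itself, and the settings above are Phoronix's, but to illustrate the "limited only by the number of lines in an image" point: any renderer whose rows are independent work items scales with however many workers you give it. A rough Python sketch of that structure, with a cheap stand-in for the actual per-pixel ray maths:)

```python
from concurrent.futures import ProcessPoolExecutor
import math, os

WIDTH, HEIGHT = 1600, 1200  # same output size the Phoronix profile quotes

def render_row(y):
    # Stand-in for per-pixel ray tracing: pure FP work, no shared state between rows.
    return [math.sin(x * 0.01) * math.cos(y * 0.01) for x in range(WIDTH)]

def render(workers=os.cpu_count()):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Rows are independent, so they can be farmed out to any number of workers.
        return list(pool.map(render_row, range(HEIGHT), chunksize=16))

if __name__ == "__main__":
    image = render()
    print(f"rendered {len(image)} rows on {os.cpu_count()} logical CPUs")
```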
I guess saying it preferred Intel is a little harsh. Many programs are just written the way people understand how to code, and it ends up sheer luck if they're better on one platform by default than the other, such as with 3DPM.
I don't think it's sheer luck when you're doing one of two things: 1. you write the compiler they're using; 2. you're the chip/platform they are DOING the coding on, and thus the one being optimized for best performance. Granted, these two things might not help in ALL cases, but it's a pretty sure bet that if EVERYONE decided to code their app/game ON Intel/Nvidia, then if you're AMD you're not likely to win many things. You may code how you know how to code, but you OPTIMIZE for whatever is in your hands, and get to the others if financing allows (or someone pays you, like DICE/BF4 netting 8 mil for Frostbite running on Mantle).
If you don't have access to platform X, and your code runs well on it versus platform Y that you program on, THEN that was luck. But when it runs well on what you're programming/compiling on, that probably has much less to do with luck. It's just common sense. I'm not saying that's the case here, but you're making a general statement that would seem to go against simple logic in what I'd guess are MOST cases. E.g., how many ports of console games do you see that are BETTER on PC? In most cases we get "another crappy port" comments all over the place. Consoles are admittedly (generally) a worst-case scenario, but you get the point. Usually the 2nd platform is an afterthought to milk the original cow, not coded with the care of the main platform. Large firms with bigger teams (EA, Blizzard etc.) may depend on the skill of the teams doing said work (but even then it's quite rare), but for smaller firms where financing is a big issue, other-platform optimization may never happen at all.
Why do you think Nvidia bought a company like PGI? To make sure they were on even footing with Intel compilers for HPC. Being the vid card that ~75% of workstations and 76% of gamers (according to peddie) use doesn't hurt either, but compilers/tools are a big help too.
Linux has adapted to some AMD specialities rather quickly, like the module/core division, and further back in time it was discovered you could have an IOMMU on AMD CPUs before they were even released.
Unfortunately, I don't think AMD participates as actively in compiler development.
I love AMD's naming scheme, mimicking Intel's but using higher numbers. I wonder how many would fall for that? Surely a 7850K is much faster than a 4560K? And an A8 or A10 clearly a better CPU than an i5 or i7? Awesome chutzpah.
What happened to the DX12 benchmarks? Do we need to remind you that DX12 hasn't even been released yet, so is completely unsuitable for comparing hardware?
Porting a CURRENT game, designed and CODED to a DX11 MAXSPEC, over to DX12 does not mean that it will automatically look better or play better, unless you consider faster fps the main criterion for quality gameplay. In fact, DX11 game benchmarks will not show ANY increase in performance using Mantle or DX12. And logically, continuing to write to this DX11 MAXSPEC will NOT improve gaming community-wide in general.

Let's be clear: a higher-spec game will cost more money, so the studio must balance cost and projected sales. I would expect incremental increases in game quality over the next few years as studios become more confident about spending more of the gaming budget on a higher-MINSPEC DX12 game. Hey, it is ALL ABOUT THE MONEY. If a game was written to the limitations, or better, the maximums or MAXSPEC of DX11, then that game will in all likelihood not look any better with DX12. You will run it at faster frame rates, but if the polygons, texture details and AI objects aren't there, then the game will only be as detailed as the original programming intent will allow.

However, what DX12 will give you is a game that is highly playable on much less expensive hardware. For instance, the 3DMark API Overhead test reveals that with DX11 an Intel i7-4960 with a GTX 980 can produce 2,000,000 draw calls at 30 fps. Switch to DX12 and a single $100 AMD A6-7400 APU can produce 4,400,000 draw calls at 30 fps. Of course these aren't rendered, but you can't render an object if it hasn't been drawn. If you are happy with the level of performance that $1500 will get you with DX11, then you should be ecstatic to get very close to the same level of play from DX12 and a $100 A6 AMD APU!!!! That was the whole point behind Mantle, er (cough, cough), DX12. Gaming is opened up to more folks without massive amounts of surplus CASH.
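(To put those draw-call figures in per-frame terms: the API Overhead test reports draw calls per second at the point the frame rate drops below 30 fps, so dividing by 30 gives a rough per-frame budget. Quick arithmetic using only the numbers quoted above:)

```python
# Draw calls per frame at 30 fps, using the figures quoted in the comment above.
def calls_per_frame(calls_per_second, fps=30):
    return calls_per_second / fps

print(calls_per_frame(2_000_000))  # ~66,700 per frame: i7 + GTX 980 under DX11
print(calls_per_frame(4_400_000))  # ~146,700 per frame: A6-7400 APU under DX12
```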
Yes, yes, I see your point about AMD's iGPUs benefitting a lot from DirectX 12/Mantle, however I don't think you needed so many posts to make it. Additionally, not benchmarking a specific way doesn't make somebody a liar, it just means they didn't benchmark a specific way.
Draw calls don't necessarily mean better performance, and if you're memory or ROP limited to begin with... what's more, the performance difference between the 384-shader 7600 and the 512-shader 7850K is practically nothing. Based off this, why would I opt for the 7850K when the 7600 performs similarly for less power? The 7400K is only a little behind but is significantly slower in DX11 testing. Does that mean we don't need the 7600 either if we're playing DX12 titles? Has the test highlighted a significant memory bottleneck with the whole Kaveri product stack that DX12 simply cannot solve?
In addition, consider the dGPU results. Intel still smokes AMD on a per-FPU basis. By your own logic, AMD will not gain any ground on Intel at all in this area if we judge performance purely on draw calls.
DirectX 11 is still current. There aren't many Mantle games out there to provide much for this comparison, but I'm sure somebody will have those results on another site for you to make further comparisons.
There is ONLY ONE BENCHMARK that is relevant to gamers.
3dMark API Overhead Test!
If I am considering a GPU purchase I am not buying it because I want to calculate Pi to a BILLION decimal places. I want better gameplay.
When I am trying to decide between an AMD APU and Intel IGP, that decision is NOT based on CineBench but rather on which silicon produces QUALITY GAMEPLAY.
You are DELIBERATELY IGNORING DX12 API Overhead Tests and that makes you a liar.
The 3dMark API Overhead Test measures the draw calls that are produced when the FPS drops below 30. As the following numbers will show the AMD APU will give the BEST GAMING VISUAL EXPERIENCE.
So what happens when this benchmark is run on AMD APUs and Intel IGP? AMD A10-7700K: DX11 = 655,000 draw calls. Mantle = 4,509,000 draw calls. DX12 = 4,470,000 draw calls.
These numbers were gathered from an AnandTech piece written on March 27, 2015. Intel IGP is hopelessly outclassed by AMD APUs using DX12. AMD outperforms Intel by 100%!!!
Whining about no DX12 test? Just take the info that was given, learn from that, and wait for a released DX12 program that can truly be tested. Testing DX12 at this point has very little to offer because it is still a beta product and the code is far from finished; by the time it is done, all the tests you are screaming to have done will not be worth a pinch of raccoon crap.
Back when DX11 was about to be released, AMD fans said the same: nVidia is better at DX10, but with DX11 the Radeons' superior I-don't-know-what will rule. Time passed and nVidia smashed Radeon's new (and rebranded) GPUs. I suspect it will be the same this time.
The AMD APU is a watt-, money-, and time-wasting, bottlenecking, inferior choice that there is next to no market for; AMD's Fusion was and still is a delusion. Intel's world-class IPC performance and node process plus a dGPU are a MUCH BETTER investment.
Intel's APUs' performance advantage makes them a wise choice for the tablet, convertible, or Ultrabook market. I'm looking forward to a Skylake Surface to go mobile with.
The way I like to think about it is that even if software only uses one core, I like to have many things on the go at a time. Chrome tabs are a nice example.
But multithreading is now being taught in some CS undergraduate classes, meaning that at least it's slowly entering the software ecosystem as default knowledge, rather than as an afterthought. In my opinion, that's always been a big barrier to multithreading (as well as having parallelizable code).
Another thought is consider the software you use. Is it made by a big multinational with a strong software development team? If yes, chances are it is multithreaded. If it uses a big commercial engine, it probably is as well. If it's based on a small software team, then it more likely isn't.
Multithreading being taught in CS classes today doesn't matter much.
It's not like multithreading is some unknown new technology we can't take advantage of. Dual/quad core processors have been common for over a decade.
OS X has Grand Central Dispatch. Windows 7/8 can take advantage of multithreading.
The problem is that not all tasks on a computer/in an operating system benefit from multithreading.
And that's not going to change. Otherwise we wouldn't see AMD going back to the drawing board and throwing the module concept in the trash in order to focus on single-thread performance, as in the Zen CPU.
So unless you know you need it today, multithreading performance is a lousy parameter to choose a CPU by, because it won't get better in the future.
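(The usual way to put a number on "not all tasks benefit" is Amdahl's law; a quick sketch, using an assumed workload that is only 60% parallelizable:)

```python
def amdahl_speedup(parallel_fraction, cores):
    # Speedup = 1 / ((1 - p) + p / n): the serial part puts a hard ceiling on gains.
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

for n in (2, 4, 8):
    print(n, round(amdahl_speedup(0.6, n), 2))  # 2 cores -> 1.43x, 4 -> 1.82x, 8 -> 2.11x
```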
They also seriously think that "Mantle is basically DX12 +/- 10%" which is beyond deluded.
Even after AMD knew that Mantle was a one way ticket to nowhere, and pretty much said as much, they still keep bringing it up and treat it as if it's not obsolete. Insanity...
Mantle was developed as an AMD GCN API, so don't go telling us it's optimized for Intel or Nvidia, because it's NOT! Mantle is DOA, dead and buried; stop pumping a zombie API.
You've misread Gigaplex's comment, which was stating that you can run an AMD dGPU on any CPU and still use Mantle. It wasn't about using Mantle on Intel iGPUs or NVIDIA dGPUs, because we know that functionality was never enabled.
Mantle isn't "dead and buried"; sure, it may not appear in many more games, but considering it's at the very core of Vulkan... though that could be just splitting hairs.
Incorrect. The core of Mantle's sales pitch was HLSL. You only think Mantle is Vulkan because you read Mantle/Vulkan articles on Anandtech...LOL. Read PCPer's take on it, and understand how VASTLY different Vulkan (headed by Nvidia's Neil Trevett, who also came up with OpenGL ES, BTW) is from Mantle. At best AMD ends up equal here, and at worst Nvidia has an inside track, with the president of Khronos also being the head of Nvidia's mobile team. That's pretty much like BAPCo being written by Intel software engineers and living across the street from Intel itself...ROFL. See the Van Smith articles on BAPCo/SYSmark etc. and why Tom's Hardware SHAMEFULLY dismissed him and removed his name from his articles ages ago.
Anandtech seems to have followed this same path of favoritism for AMD ever since the 660 Ti article (having an AMD portal but no Nvidia portal, the Mantle lovefest articles, etc.), the same reason I left Tom's years ago, circa 2001 or so. It's not the same team at Tom's Hardware now, but the damage done then is still in many minds today (and shows at times in forum posts etc.). Anandtech would be wise to change course, but Anand isn't running things now, and doesn't even own the site today. I'd guess stock investors in the company that bought Anandtech probably hold massive shares in sinking AMD ;) But that's just a guess.
http://www.pcper.com/reviews/General-Tech/GDC-15-W... Real scoop on Vulkan. A few bits of code don't make Vulkan Mantle...LOL. If it was based on HLSL completely you might be able to have a valid argument but that is far from the case here. It MIGHT be splitting hairs if this was IN, but it's NOT.
http://www.pcper.com/category/tags/glnext The articles on glNext: "Vulkan is obviously different than Mantle in significant ways now, such as its use of SPIR-V for its shading language (rather than HLSL)." CORE? LOL. Core of Vulkan would be HLSL and not all the major changes due to the GROUP effort now.
Trevett: "Being able to start with the Mantle design definitely helped us get rolling quickly – but there has been a lot of design iteration, not the least making sure that Vulkan can run across many different GPU architectures. Vulkan is definitely a working group design now."
Everything that was AMD-specific is basically gone, as is the case with DX12 (Mantle ideas, but not direct usage). Hence NV showing victories in AMD's own Mantle showcase now (Star Swarm)...ROFL. How bad is that? Worse, NV was chosen for the DX12 Forza demo, which is an AMD console game. Why didn't MS choose AMD?
They should have spent the time they wasted on Mantle making DX12/Vulkan driver advances, not to mention DX11 driver improvements, which affect everything on the market now and probably for a while into the future (until Win10 takes over, at least, if ever, if Vulkan lands on billions of everything else first), rather than a few Mantle games. Nvidia addressed the entire market with their R&D while AMD wasted it on Mantle, consoles and APUs. The downfall of AMD started with a really bad ATI price and has been killing them ever since.
Mantle is almost useless for FAST CPUs and is dead now (wasted R&D). It was only meant to help AMD's weak CPUs, which only needed the help because they let guys like Dirk Meyer (who in 2011 said it was a mistake to spend on anything but CORE CPU/GPU, NOT APUs) and Keller go ages ago. Adding Papermaster might make up for missing Meyer, though. If they had NOT made these mistakes, we wouldn't even have needed Mantle, because they'd still be in the CPU race with much higher IPC, as we see with Zen.

You have no pricing power in APUs, as they feed poor people and are being crushed by ARM coming up and Intel coming down to stop them. GAMERS (and power users) will PAY a premium for stuff like Intel and Nvidia, and AMD ignored engineers who tried to explain this to management. It is sad they're now hiring them back to create again what they never should have left to begin with. The last time they made money for the year was with Athlon and high IPC.

Going into consoles instead of spending on CORE products was a mistake too, which is why Nvidia said they ignored that market. We see they were 100% correct, as consoles have made AMD nothing while they lost the CPU and GPU races and dropped R&D on both, screwing the future too. The years spent on this crap caused AMD's current problems: three years of zero pricing power on CPU/GPU, selling off fabs and land, laying off a third of employees, etc. You can't make a profit on low-margin junk without having massive share. Now, if AMD had negotiated 20%+ margins from the get-go on consoles, maybe they'd have made money over the long haul. But as it stands they may not even recover the R&D and time wasted, as mobile kills consoles halfway through their life with die shrinks plus yearly revisions, far cheaper games, and massive numbers sold yearly that are drawing devs away from consoles.
Even now with the 300s coming (and only the top few cards are NOT rebadges, which will probably just confuse users and piss them off), Nvidia just releases a faster rehash of existing tech, waiting to answer and again keep a great product down in pricing. AMD will make nothing from the 300s. If they had ignored consoles/APUs they would have had Zen out already (2 years ago? maybe 3?), and the 300s could have been made on an optimized 28nm process, the way Maxwell squeezed more performance out of the same process 6 months ago. Instead NV has had nearly a year to just pile up profits on an old process, with an answer waiting in the wings (980 Ti) to make sure AMD's new GPU has no pricing power.
Going HBM when the GPU isn't bandwidth-starved is another snafu that will keep costs higher, especially with low yields on that and the new process. But again, because of the lack of R&D (after blowing it on consoles/APUs), they needed HBM to help drop the wattage, instead of having a great low-watt 28nm alternative like Maxwell that can still milk very cheap old GDDR5, which has more than enough bandwidth as speeds keep increasing. HBM is needed at some point, just not today, for a company that needs profits and has no cash to burn on low yields etc. They keep making mistakes and then having to make bad decisions to compensate, which stifles much-needed profits. They also need to follow Nvidia in splitting FP32 from FP64, as that will further cement NV's GPUs if they don't. When you are a professional at both things, instead of a jack-of-all-trades loser at both, you win in performance and can price accordingly, while keeping die size appropriate for each.
Hopefully Intel will be forced back to this by Zen on the CPU side too. Zen will force Intel to respond, because they won't be able to shrink their way to keeping the GPU (not with other fabs catching Intel's) and still beat an AMD die fully dedicated to CPU and IPC. Thank god too; I've been saying for ages that AMD needed to do this, and that without doing it they would never put out another Athlon that would win for 2-3 years. I'm not even sure Zen can do this, but at least it's a step in the right direction for profits. Fortunately for AMD, an opening has been created by Intel massively chasing ARM and ignoring CPU enthusiasts and desktop pros. We have been getting crap on the CPU side since AMD exited, while Intel just piled on the GPU side, which again hurt any shot of AMD making profits here...LOL. They don't seem to understand they make moves that screw themselves longer term. Short-term thinking kills you.
Yes, and the APU being reviewed, the A8-7650K, also happens to be "AMD only", so why not test Mantle? There's a reasonable number of high-profile games that support it:
- Battlefield 4 and Hardline
- Dragon Age: Inquisition
- Civilization: Beyond Earth
- Sniper Elite III
Plus another bunch coming up, like Star Wars Battlefront and Mirror's Edge.
So why would it hurt so much to show at least one of these games running Mantle with a low-specced CPU like this?
What is anandtech so afraid to show, by refusing to test Mantle comparisons with anything other than >$400 CPUs?
There isn't anything to be scared of, but Mantle is only available in a handful of games, and beyond those it's dead and buried.
Anandtech doesn't run Mantle benchmarks for the same reason they don't review AGP graphics cards: It's a dead technology aside from the few people who currently use it...
I seriously considered an A10-7850K Kaveri build last year around this time for a small power-efficient HTPC to stream DVR'd shows from my NAS, but in the end a number of issues steered me away:
1) Need for chassis, PSU, cooler.
2) Lack of good mini-ITX options at launch.
3) Not good enough graphics for gaming (not a primary consideration anyway, but something fast enough might've changed my usage patterns and expectations).
Sadly, this was the closest I've gotten to buying an AMD CPU product in a long, long time but ultimately I went with an Intel NUC that was cheaper to build, smaller form factor, and much less power usage. And all I gave up was GPU performance that wasn't realistically good enough to change my usage patterns or expectations anyways.
This is the problem AMD's APUs face in the marketplace today, though. That's why I think AMD made a big mistake in betting their future on Fusion: people just aren't willing to trade fast, efficient, or top-of-the-line CPUs for a mediocre CPU/GPU combo.
Today, there are even bigger challenges out there for AMD. You have Alienware offering the Alpha with an i3 and a GTX 860M that absolutely destroys these APUs in every metric for $500, $400 on sale, and it takes care of everything from chassis, PSU, and cooling to Windows licensing. That's what AMD is facing now in the low-end PC market, and I just can't see them competing with that kind of performance and value.
I would have opted for the A8-7600 instead of the 7850K, though I do admit it was very difficult to source back then. 65W mode doesn't perform much faster than 45W mode. I suppose it's all about what you want from a machine in the end, and AMD don't make a faster CPU with weaker iGPU which might make more sense.
The one thing stopping AMD from releasing a far superior product, in my eyes, was the requirement to at least try to extract as much performance from a flawed architecture so they could say it wasn't a complete waste of time.
+1. Fusion was not only poor strategy, it was poor implementation. Leaving aside the discussion of the merits of integrated GPUs, if AMD had done it right we would have seen Apple adopting their processors in the MacBook series, given Apple's obsession with slim hardware with no discrete graphics. Have we seen that? No. You see, even though Intel has never said that integrated GPUs were the future, the single most important customer in that market segment was claimed by them.
I heard a rumour that AMD were unable to meet demand and as such failed to secure a contract with Apple. Make of that what you will. As it was, Llano went from being under-produced to the exact opposite.
Nah, Llano would have been way too hot for an Apple laptop. Heck, the CPU/GPU in a MacBook Air has a TDP of 15 watts. Does AMD have anything even close to that that doesn't involve Jaguar cores?
Again, they were not able to deliver their strategy, even if it was a poor one. One says that integrated GPU is the future. That, per se, is questionable. Later, we find out that they can't meet production orders and/or deliver a chip that is too hot for one of its potential markets. This is poor implementation.
Cynically, AMD may consider it better to have *any* product to discount/write-off down the road rather than fork over another wafer agreement penalty to GloFo with nothing to show for it.
I noticed that as well, but the fact that this is a 95 watt processor isn't that much of a concern when you have the power envelope of a desktop chassis at your disposal. The intended niche for these APUs seems more to make a value proposition for budget gaming in a low-complexity system (meaning lacking the additional PCB complexity introduced by using a discrete GPU). Unfortunately, I don't see OEMs really putting any weight behind AMD APUs by selling systems containing them which leaves much of the sales up to the comparatively few people who DIY-build desktop hardware. Even those people are hard-pressed to find a lot of value in picking an APU-based platform over competing Intel products as they tend to have a little more budget flexibility and are targeting greater GPU performance than the A-series has available, putting them into discrete graphics solutions.
Those two much more expensive Intel CPUs on the charts make the APUs look totally pathetic. Yes, you do have the prices next to the charts; yes, they do make the APUs look extremely valuable in the 3D games; but most people probably would not go past the first 4-5 pages of this article, having been totally disappointed by the first results. Also, the long blue lines will imprint on their memories, and they will forget the prices. Next time throw a few Xeon E7s in the charts. PS. PLEASE PLEASE PLEASE, don't turn into Tom's Hardware.
Much more expensive i7s and i5s in the charts, and wrong (higher, older) prices on the AMD APUs. Am I wrong?
Please, I am NOT asking you to make AMD APUs look good, don't make it look like that; just do not make them look awful. You want to add a much more expensive i7? At least change the color of its line, make it black or something. Even the i5 is much more expensive than the APUs, especially considering that AMD changed its prices a few days ago, which means that the AMD prices on the charts are also wrong. The 7850K, which is the most expensive, is $127, not $173.
Of the five Intel processors you have in the charts, only three are in the same price range as the APUs. Some Intel prices are tray prices, not boxed, and most of them are the prices on Intel's site. The AMD prices, on the other hand, are the old, much higher prices. Even in your article you give lower prices than those on the charts. AM I WRONG?
Accept the criticism when it is fair; don't try to make the other guy look like a brainless fanboy who asks you to make AMD APUs look good by putting the GPU tests first.
That's an insult to paid shills. No one being paid to shill for a company would act that obnoxious and incoherent. Looks more like a volunteer effort, or someone who deliberately wants to make vocal AMD supporters look obnoxious and incoherent.
I use the same screenshot in all the games on the other pages where I am testing 1080p. It's just a generic screenshot of the game showing what happens in the benchmark.
Your comments assume that ANAND provided benchmarks using DX12; they did not. ALL of the GRAPHICS benchmarks were either synthetic benchmarks or game benchmarks using DX11.
DX11 cripples the performance of ALL APUs, IGPs and dGPUs. Draw calls ARE a measure of the CPU-to-GPU "bottleneck", or the elimination thereof. You cannot render a polygon until you draw it.
DX12 enables CPU core scaling; basically, increased draw calls are a function of the number of CPU cores available to multithreading. DX11 does not allow multithreaded draw call submission.
DX11 may be current, but why should I base hardware purchases on testing done with obsolete software AND benchmarks?
DX12 will be in widespread use by game developers by Christmas.
Anand has spent quite a bit of time and money testing hardware on obsolete benchmarks TO WHAT END?
Star Swarm and the 3DMark API Overhead Test are available, but ANAND ignores them.
Why?
AMD's APU was designed to FLY using Mantle and DX12. It is not AMD's fault that Intel's IGP is so poorly designed. That is Intel's problem.
Test Intel IGP using the latest API and you will see. Comparatively test AMD and Intel using obsolete benchmarks with DX11, and ANAND is lying to the consumer and cannot be trusted.
An unbiased and well-balanced piece should use legacy benchmarks, but it should also use the very latest available. ANAND did not do this.
"Starswarm and 3dMArk API Overhead Test are available but ANAND ignore them.
Why?"
Because they want to hide the truth. "It is hard to wake a person who is only pretending to be asleep, because in fact he is not asleep. He just wants to fool you because of his stupidity."
The refusal to cover the upcoming DX12 gives a hint that the review is biased and that something fishy is going on behind the scenes. I am not an IT guy and I'm new on this site, but I could easily detect the difference between a biased and an unbiased review.
The reviewer and the AnandTech guys are all intelligent for sure, but they have allowed themselves to succumb to their own personal interests.
Or it's just something as simple as DX12 not being released yet; the performance is likely to change, so it is an invalid test for comparing hardware at this time. The benchmarks you refer to are only valid as a preview of potential gains.
DX12 may not be final. API probably is, runtime is likely close, drivers likely won't be.
And you're delusional if you think all new games released at the end of this year will be DX12. It takes years to develop a AAA game, so they would need to have started before DX12 was available. The market for DX12 will be tiny by Christmas as DX12 will be Windows 10 only. Not everyone will be willing or able to upgrade the OS. Not all hardware even supports DX12. You're completely ignoring the history of previous DirectX roll outs.
Ah, so basically the choice is buy an AMD APU and get shoddy performance now, and great performance in a year, or buy an Intel/Intel-Nvidia solution and get great performance now and great performance in a year!
So there's really no reason to get the AMD, is what you're saying?
I don't think he is nuts, but you seem a bit angry. From a CPU perspective, multithreaded games need not wait for DX12; they could have been written before. Anyway, we have a clear statement from you: DX12 will make AMD shine. We should talk again at Christmas. Just keep in mind that the same was said when DX11 was about to be released, with known results...
Hold on. You suggest that the 770 and the 285 are nearly the same price, but you list the used/refurbished price for the 770 first. That opens up a Pandora's box, doesn't it? If it's too hard to find a card new, pick a different one, like the 970, or 960, which is actually close in price to the 285 (at least a couple go for $200 on Newegg). Even though you say you split the GPUs based on price ranges, rather than similar prices, people are going to compare ATI to NVidia and you have an unfair used-vs-new price comparison.
Ideally the tests are meant to show comparisons within a GPU class, not between GPU classes. Ultimately I work with the cards I have, and on the NV side I have a GTX 980 and a GTX 770, whereas on the AMD side is an R9 290X and R9 285 (the latest Tonga). In comparison to what I have, the 980/290X are high end, and the 770/285 are a class below. The 770 Lightning is also hard to source new, due to its age, but is still a relevant card. If I could have sourced a 960/970, I would have.
That is one of the things that really puts me off AT these days. The attitude of "this is what I have available, this is what I'll test against." If it's not relevant don't use it. Get out the AT credit card and buy some new hardware.
If they want to spend more money, they will have to make more money first.
To make more money they would have to spread a million annoying ads, and this site would quickly turn into something like Tom's super boring "best SSD for the money" articles, with millions of Amazon and Newegg links, ads, and all that crap.
They'd better use what they have at their disposal and try to maintain decent enough articles, reviews, etc.
"Despite the rated memory on the APUs being faster, NPB seems to require more IPC than DRAM speed."
Guys... Intel chips have had better memory controllers for many years. They extract much higher performance and lower latency if you compare them at similar DRAM clocks and timings. Lots of AT benchmarks showed this as well, back when such things were still included (e.g. when a new architecture appeared).
Here we go again; this is another pro-Intel review. The crooked company that paid AMD a billion-dollar settlement over unfair competition practices is still being supported by lots of rotten people, and based on their comments here, that is disturbing. I find these people scum of the earth as they continue to support the scammers. Shame on you guys!
By the way, the review is biased because you benchmark using DX11. This is another manipulative benchmark trying to hold on to the past and not the future. Read my lips: "WE DON'T WANT a DX11-only review!" DX12 is coming in a few months, so why not use the MS Technical Preview version so people can have a glimpse, instead of repeating the same thing to us readers, and then redo the benchmark when MS has released its new OS (Windows 10)? If you don't have the full version of DX12 now (as we all know), then do not benchmark, because it is the same as the past 5 years. You are just wasting your time if your intent is neutral to general consumers; it's as if you are trying to sway us from the truth.
Speak for yourself. What use to me are benchmarks of APIs I don't have and can't get, or of games I don't have and don't play? With this latest review method change, the last game I had disappeared, so now I can't compare the results with my own system anymore. This makes it harder to decide whether this product is a good upgrade over my current system or not.
Am I the only poster who is impressed with the performance of the Kaveri parts in the gaming benchmarks?
For one, the Kaveri parts virtually eliminate the need for a $70 discrete GPU. If you were thinking of that kind of low-end GPU, you might as well buy an APU. Next, there is very little difference, or none at all, in average FPS under many settings if you use a $240 dedicated GPU, which means that a ~$200 GPU is still the bottleneck in a gaming system. Only once the benchmarks are run with very high-end GPUs do we finally see the superiority of the Haswell parts.
Of course, the business, web, compression, and conversion benchmarks are another story. Except for a few special cases, the APUs struggle to catch a Core i3.
Yeah, these APUs are certainly a lot better on the CPU front than what they used to be.
I think the APU's only downfall is the $72 Pentium G3258 and the availability of cheaper (sub-$100) H97 motherboards to overclock it on. A88X FM2+ boards are around the same price as H97 boards, but the $30 saving from the cheaper CPU can go into a decent cooler for overclocking.
I've often wondered if the G3258 is really the better choice in this price range. Sure, there are titles it cannot play, but workarounds exist in one or two of them to allow it to work. Newer titles may indeed render it obsolete, but there's always the argument of buying a better CPU for the platform later on. Additionally, it overclocks like buggery if you feel that way inclined; how long has it been since we had a CPU that could be overclocked by 50% without costing the earth in power?
The concern I have with upgrading just the CPU is that Intel doesn't stick with its sockets for a long time, and if you're buying a CPU that will eventually become as useful as a chocolate fireguard when playing modern titles, it'd make more sense to buy its i3 cousins in the first place. AMD is banking on you considering its quad core APUs for this, however they have their flaws too - FM2+ has a year left (Carrizo is destined for FM3 along with Zen), they don't overclock as well, power usage is higher even during idle, and the GPU-less derivatives don't appear to be any faster. H81 boards aren't expensive, either, for overclocking that Pentium. Still, you really do need a discrete card with the G3258/Athlons, whereas the APUs and i3 have enough iGPU grunt to go into an HTPC if you're not gaming heavily.
Decisions, decisions... and right now, I'm wondering how I could even consider AMD. Has anybody built both Pentium and Athlon/APU systems and can share their thoughts?
Nice review, covers pretty much everything, and says what I guess everyone was expecting.
One thing I wondered, though: why choose the 770 for mid-range when the 960 is a much more logical choice? Price-wise it's £140 here in the UK, so I guess about $200 over the pond, and it's a much more competent card than the 770.
Because I've had 770s in as part of my test bed for 18 months. The rest of the cards (290X, 980, 285) I've sourced for my 2015 testing, and it's really hard to source GPUs for testing these days - I had to personally purchase the 285 for example, because I felt it was extremely relevant. Unfortunately we don't all work in a big office to pass around hardware!
If you do ever get a GTX 960 or 750Ti, it would be nice to see some total system power consumption numbers between overclocked A8-7650K+R7 240 vs. i3-4xxx+750Ti vs. overclocked G3258+750Ti
"Scientific computation" is a somewhat amorphous term. Moreover, I don't know if there exists a benchmark suite for either Matlab or Python. In any case, Matlab and Python or both used in numerics as fast prototyping tools or for computations where the compute time is inconsequential. If you're running in speed issues with Matlab it's time to start coding in something else, although in from my observations, most people who run into performance issues with Matlab don't know how to optimize Matlab code for speed. Most don't know how to code at all.
You really don't know what you're talking about... Matlab is SO much more than fast prototyping software. I have quite a few programs that would be good speed tests, one of which is a full non-linear aircraft dynamics Simulink simulation. A 5-minute simulation could easily take 2 minutes of compute time. Anything that starts getting into serious differential equations takes compute time.
3DPM is a Brownian Motion based benchmark, and Photoscan does interesting 2D to 3D correlation projections. The Linux benchmarks also include NAMD/NPB, both of which are hardcore scientific calculations.
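(For anyone wondering what "Brownian Motion based" looks like in practice, here's a tiny illustrative NumPy sketch of that kind of particle-movement workload. It is not the actual 3DPM code, just the general shape of the floating-point math such a benchmark hammers on.)

```python
import numpy as np

def brownian_steps(n_particles=100_000, n_steps=1_000, step_size=0.1, seed=0):
    # Each particle takes a random 3D step every iteration; positions accumulate over time.
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_particles, 3))
    for _ in range(n_steps):
        # Random directions, normalized to the unit sphere and scaled by the step size.
        step = rng.normal(size=(n_particles, 3))
        step *= step_size / np.linalg.norm(step, axis=1, keepdims=True)
        pos += step
    return pos

if __name__ == "__main__":
    final = brownian_steps()
    print("mean displacement:", np.linalg.norm(final, axis=1).mean())
```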
Author, do you know about the existence of the 'Haswell Refresh' CPU models? They're basically the same 'Haswell' with +100/+200/+300 MHz on the x86 core clock. Why not use them in tests? It's not like it's 2013 right now, when the i3-4330 was released. FYI, the i3-4370 has the same $138 MSRP (tray) as the i3-4330, but is +300 MHz faster.
Same story with the i3-4130 and i3-4170: +300 MHz on the i3-4170, basically for free.
You should put those in the tests rather than the old 'Haswell' Core i3 models. Thanks.
How could a next generation API improve AMD's APU performance if it already has decent if not very good performance in integrated 3D graphics (beating the lowest end discrete)?
AMD still needs better CPU performance as it shows poorer value compared to an Intel of near or similar price (without considering the GPU).
The occasional-gaming niche is pretty much nil too, as that kind of gaming can be done on a notebook, tablet, or smartphone.
This remains valuable for people with regular gaming in mind but an absolutely limited budget. I could see myself getting this to get back into Diablo 3 after a day at work, but by saving a bit more, I might as well get a decent laptop.
First, thank you for the review, which was rich with performance figures and information. That said, something seems missing in the Conclusion. To be precise, the article doesn't really have a clear conclusion or recommendation, which is what many of us come here for.
It's nice to hear about your cousin-in-law's good experiences, but the conclusion doesn't clearly answer the key question I think many readers might have: Where does this product fit in the world of options to consider when buying a new processor? Is it a good value in its price range? Should it be ignored unless you plan to use the integrated graphics for gaming? Or does it offer enough bang-for-the-buck to be a viable alternative to Intel's options for general non-gaming usage, especially if motherboard costs are considered? Should we consider AMD again, if we are in a particular niche of price and desired features?
Basically, after all of your time with this chip and with your broader knowledge of the market offerings, what is your expert interpretation of the merits or demerits of considering this processor or its closely related AMD peers?
" Ultimately AMD likes to promote that for a similarly priced Intel+NVIDIA solution, a user can enable dual graphics with an APU+R7 discrete card for better performance."
I have *long* wondered why Intel and Nvidia don't get together and figure out a way to pair up the on-board graphics power of their CPUs with a discrete Nvidia GPU. It just seems to me such a waste for those of us who build our rigs for discrete video cards and just disable the on-board graphics of the CPU. Game developers could code their games based on this as well for better performance. Right now game developer Slightly Mad Studios claims their Project Cars racing simulation draws PhysX from the CPU and not a dedicated GPU. However, I have yet to find that definitively true based on benchmarks...I see no difference in performance between moving PhysX resources to my GPUs (970 SLI) or CPU (4690K 4.7GHz) in the Nvidia control panel in that game.
Something similar to what you're describing is coming in DX12...
But the main reason they haven't is that, unless you're one of the few people who got an AMD APU because your total CPU+GPU budget is around $100, it doesn't make any sense.
First of all, the performance you get from an Intel iGPU in a desktop system will be minimal compared to even a $200-300 Nvidia card. And secondly, if you crank up the iGPU on an Intel CPU, it may take away some of the CPU's performance/headroom.
If we're talking about a laptop, taking watts away from the CPU and negatively impacting battery life overall will be even bigger drawbacks.
"But the main reason they haven't is because unless youre one of the few people who got an AMD APU because your total CPU+GPU budget is around 100$ it doesn't make any sense."
Did you even read what hardware I have? Further, reading benchmarks of the built-in HD 4600 graphics of i3/i5/i7 CPUs shows me that it is a wasted resource. And regarding the impact on CPU performance: considering that higher resolutions (1440p and 4K) and higher quality/AA settings are more dependent on GPU performance than CPU performance, the theory that utilizing onboard CPU graphics alongside a dedicated GPU would decrease overall performance is debatable. I see little difference in most games between my 4690K highly overclocked to 4.7GHz and running at its stock 3.9GHz turbo frequency.
All we have to go on currently is 1) Intel HD 4600 performance alone in games, and 2) CPU performance demands at higher resolutions on games with dedicated cards.
I am guessing that they didn't get together because dual graphics is very difficult to make work right. AMD is putting effectively the same type of GPU cores on its discrete GPUs and integrated APUs, and it still took them a while to make it work at all.
I guess one thing we all learned today, besides the fact that AMD's APUs still kinda blow, is that there is a handful of people who are devoted enough to their favorite processor manufacturer to seriously believe that:
A: Intel is some kind of evil and corrupt empire ala Star Wars.
B: They're powerful enough to bribe/otherwise silence "da twooth" among all of Anandtech and most of the industry.
C: 95% of the tech press is corrupt enough to gladly do their bidding.
D: Mantle was an API hardcoded by Jesus Christ himself in assembly language. It's so powerful that if it got widespread, no one would need to buy a new CPU or GPU for the rest of this decade.
Which is why "they" forced AMD to cancel Mantle. Then Microsoft totally 110% copied it and renamed it "DX12".
Obviously all of the above is 100% logical, makes total sense and is much more likely than AMD releasing shoddy CPUs the last decade, and the press acknowledging that.
Still really can't see a scenario where the APU would be the best choice. Well, there may be one: for those with a very tight budget and wish for playing games on PC regardless. But this would mean that AMD has designed and reiterated a product that would only find its market in the least interesting group of consumers: those that want everything for nothing... Not really where you want to be.
Well, right now arguably, if one has $500 or less for a gaming PC build, it would be better to buy a PlayStation 4. High-end builds are where the money is in the enthusiast gaming market.
Nah, you can get a really nice gaming PC for even just $500... Sure, it won't be an octa-core CPU, probably not even a quad core, but the performance and graphics will be hard to tell apart from a PS4. Especially once DX12 games become common.
Yeah, you might get a few more FPS or a few more details in some games on a ps4.
But just set aside the 10-30$ you save every time you buy a game for a PC vs. a PS4 and you should be able to upgrade your computer in a year or less.
For those on a very tight budget, who wish for PC games AND who already have a motherboard that uses the same socket as these APUs, I would add.
Zen is going to require a new socket, so you're kinda stuck in regards to upgrades from this.
And if you have to go out and get a new motherboard as well, then it really only makes sense to go for Intel.
Yup, Skylake is also going to need a new socket, but if you go the Intel route, at least there's the possibility of upgrading from a Pentium to a Haswell i3/i5/i7 down the road, so you have the possibility of a lot more performance.
I don't really get the point of this CPU at all. It comes out, now, in May 2015? And it's really nothing new, yet AT bothered to review it? It's a few bucks more than an A8-7600 but it has higher TDP and is otherwise nearly exactly the same. Sure it's unlocked but it doesn't overclock well anyway. Might as well just save the few bucks and the 30W power consumption and get the 7600. OTOH if you want something better, you'd just go for the $135-140 A10 CPUs w/512 SPs. The 7650K seems to be totally pointless, especially at this point in 2015 when Skylake is around the corner.
The Dual Graphics scores look pretty decent (other than GTAV which is clearly not working with it), but there's no mention at all in this review about frametime? I mean have all the frametime issues been solved now (particularly with Dual Graphics which IIRC was the worst for stuttering) that we don't need to even mention it anymore? That's great if that's the case, but the review doesn't even seem to touch on it?
For the love of everything, test APUs with casual games. Someone who wants to play something like GTA V is likely going to have a better system. Meanwhile, games like LoL, Dota 2, Sims 4, etc. have tons of players who don't have great systems and wouldn't like to spend much on them either. Test games that these products are actually geared towards. I appreciate the inclusion of what the system could become with the addition of differing levels of GPU horsepower, but you are still missing the mark here a bit. Everyone seems to miss this with APUs and it drives me nuts.
A little late but it mainly depends on what R7 you're talking about. If you're talking about an R7 240, then yeah it's better to do dual-graphics, 'cause a 240 on its own is not going to do much for gaming. If you're talking about a single R7 260X or 265 then that's a different story (and a much better idea).
For gaming, a quad-core CPU really helps for modern games BUT dual-core with HT (like an i3) is quite good too. Dual-core only isn't the greatest of ideas for gaming, TBH. So, ditch the Pentiums and dual-core APUs.
Out of your choices I'd probably go with the i3 4150 and an R7 260X, R7 265/HD 7850, or GTX 750 Ti. Unless you already have some parts (like the motherboard), this will be the best of your choices for gaming (everything else you listed is no problem for any of those CPUs).
The i3 4150 benefits from newer features in Haswell and has HT. Compared to the X4 860K it may still lose out in some [limited] things which really make use of four physical cores, but not by very much and probably not in anything you'll be doing anyway. The Haswell i3 also uses very little power, so it's good in a small/compact build where you want less heat/noise and can't use a large air cooler easily (or just don't want to spend a lot on a cooler).
If you're talking about an R7 240 though, then go with an A8-7600 and run Dual Graphics. It might be cheaper but it won't be better than the i3 and higher-end R7 card.
It's time for these so-called benchmarks to make the scripts and data processed available to the public. For example, is AgiSoft PhotoScan's OpenCL option activated in the preferences? If it's not, only the CPU will be used, and it makes a huge difference. We all know what AMD is good at with those APUs: not the CPU, but the GPU and multithreading. I find it hard to believe that the Intel i3 had such better results.
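As a quick sanity check on the point above, you can list the OpenCL GPU devices a system actually exposes; if no GPU device shows up (or the option is disabled in the application), the benchmark really will fall back to CPU only. A minimal sketch, assuming the stock OpenCL headers and ICD loader (e.g. link with -lOpenCL); the names printed are whatever the installed drivers report.

// Lists every OpenCL GPU device the drivers expose; purely illustrative.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(0, nullptr, &num_platforms) != CL_SUCCESS || num_platforms == 0) {
        std::puts("No OpenCL platforms found - GPU acceleration is unavailable.");
        return 1;
    }
    std::vector<cl_platform_id> platforms(num_platforms);
    clGetPlatformIDs(num_platforms, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, 0, nullptr, &num_devices) != CL_SUCCESS || num_devices == 0)
            continue;  // this platform exposes no GPU devices
        std::vector<cl_device_id> devices(num_devices);
        clGetDeviceIDs(p, CL_DEVICE_TYPE_GPU, num_devices, devices.data(), nullptr);
        for (cl_device_id d : devices) {
            char name[256] = {};
            clGetDeviceInfo(d, CL_DEVICE_NAME, sizeof(name), name, nullptr);
            std::printf("OpenCL GPU device: %s\n", name);
        }
    }
    return 0;
}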
Sejong - Tuesday, May 12, 2015 - link
No comments. That just states AMD's current position.
bumble12 - Tuesday, May 12, 2015 - link
Well, if you post a minute or two after the review goes live, what might you expect exactly?
YuLeven - Tuesday, May 12, 2015 - link
I think you'll find that he said "no comments" as in "no comments about AMD's poor performance", not as in "there aren't comments on this review" pal.
anandreader106 - Tuesday, May 12, 2015 - link
....not sure if serious....or trolling....
DevilSlayerWizard - Tuesday, May 12, 2015 - link
I would prefer if AMD made a quad core Zen with ''atleast'' 1024 GCN cores, whatever memory subsystem needed to feed the chip including RAM and motherboard for less than 500$ in 2016. Pretty reasonable if you ask me.barleyguy - Tuesday, May 12, 2015 - link
The PS4 processor, which AMD makes, is an R9-280 with an 8 core CPU. The problem is that it would be memory limited quite badly on a PC motherboard. In the PS4 it's paired with very fast memory.Possibly when AMD goes to DDR4 their APUs will start to shine, and they'll have the memory bandwidth to go wider on the GPU.
extide - Tuesday, May 12, 2015 - link
No, it's not. It is roughly between an R7 265 and R9 270 -- not anywhere near an R9 280.
ravyne - Tuesday, May 12, 2015 - link
Its not a 280 -- Its got 20 GPU clusters only for 1280 shader lanes, two clusters are there for yield purposes and 4 more are dedicated to compute tasks. It has only 896 shaders dedicated to graphics, and and 256 for compute (though some engines leverage compute during rendering, so the line is a bit fuzzy). The PS4 has a 512bit memory bus, which is really wide for the GPU power, but its also feeding CPU compute. Its got 8 ACES like the 290/X.The 280 fully unlocked has 32 clusters for 2048 shaders. A 280/X has a 256bit GDDR5 bus, and only 2 ACES.
What's in the PS4 is also custom-extended beyond any retail GPU, but the closest thing would be like a Hawaii (290/X) or Tonga (285), but cut down to 18 clusters.
Revdarian - Wednesday, May 13, 2015 - link
The ps4 doesn't have 256 shaders for compute only, it has 1280 shaders total and that is it, how you decide to divide the workload for your engine is up to you, what you are thinking of was totally taken out of the context of it being an example of how to use the tools provided to the developers.nikaldro - Saturday, May 16, 2015 - link
PS4 has 1152.
evolucion8 - Wednesday, May 13, 2015 - link
Actually the R9 280/280X have a 384-Bit BUS and their memory is clocked higher. That is why they offer around 288GB/s for it alone while the PS4 has 176GB/s shared for the system including RAM/BUS/CPU/GPU operations.Edens_Remorse - Wednesday, May 27, 2015 - link
/slow clap
evolucion8 - Friday, October 16, 2015 - link
280/280X actually has a 384-Bit BUS.
DevilSlayerWizard - Wednesday, May 13, 2015 - link
Every one of those 8 cores is a big joke compared to even Nehalem.
silverblue - Wednesday, May 13, 2015 - link
...and this says you're telling porkies, for the most part: http://anandtech.com/bench/product/697?vs=47
Changed the 47 to a 100 for the 950, and to 45 for the 965 Extreme. Remember that these are 8-thread, triple channel CPUs.
nikaldro - Thursday, May 14, 2015 - link
Mmmm hellooooo? You actually believe that 8 Jaguar cores can even remotely be compared to any decent desktop class cpu?
Kraelic - Thursday, May 14, 2015 - link
Those older generation discontinued Intel at 3 GHz trade blows with a current 4GHz AMD. Do you not get the joke?
redraider89 - Tuesday, May 19, 2015 - link
That's not what he said. Who knows what he means. AMD's comment about what? Poached eggs? Like I said, stupid.
yannigr2 - Tuesday, May 12, 2015 - link
56 comments.
yannigr2 - Wednesday, May 13, 2015 - link
108 comments.
nikaldro - Wednesday, May 13, 2015 - link
And you still haven't got what he meant.
yannigr2 - Wednesday, May 13, 2015 - link
Tell me about it.
nikaldro - Thursday, May 14, 2015 - link
Not sure if trolling,but anyway.They said "no comments" to express great disppointment.
As in " nothing to say about the poor performance. the numbers speak for themselves".
nikaldro - Thursday, May 14, 2015 - link
They said "no comments" as in "I have nothing to say about the poor performance of this APU. The number's speak for themselves."It had nothing to do with the actual number of comments.
nikaldro - Thursday, May 14, 2015 - link
*numbers, not number's. Damn autocorrect
krabboss - Saturday, May 23, 2015 - link
No, not really. They would have said "no comment" if that was the case, instead of "no comments."They're saying that AMD is in a sorry state these days because nobody is bothering to comment on a review of their new APU. They probably didn't realise the article had just been published, though.
redraider89 - Tuesday, May 19, 2015 - link
Stupid comments. That states Sejong's comments.
YuLeven - Tuesday, May 12, 2015 - link
Only outplays Intel's offerings when it comes to the somewhat irrelevant onboard gaming market. Usually barely matches the Core i3 performance sucking thrice the power. Not really impressed by this piece of silicon.nightbringer57 - Tuesday, May 12, 2015 - link
May be irrelevant to you.May not be as irrelevant to many.
Where the performances of the core i3 may shine much brighter on paper, this may not be the case of the typical daily use of the typical daily computer for facebook, youtube, netflix, and some gaming on a tight budget.
takeship - Tuesday, May 12, 2015 - link
Could we get a benchmark or two from the broadwell nuc line included? Comparing a released today Amd against Intel's year+ old igp's is a little disappointing.testbug00 - Tuesday, May 12, 2015 - link
Well, Broadwell is supposed to be out in 2013 according to Tick Tock. So, Intel's at least 1.5 years to their party. However, doing smaller nodes is REALLY hard. So, hard to blame them.azazel1024 - Tuesday, May 12, 2015 - link
Err, no. Haswell was 2013 and Ivy was 2012. Broadwell would have been summer of 2014. It ended up being a mostly paper launch in late fall 2014, with parts meaningfully showing up winter of 2015 and Intel mentioned up front that Broadwell would be a mostly mobile release.So they are perhaps 6 months late on Broadwell, but unless something different happens, Intel is still claiming summer/fall for Skylake, which puts them right back on the original schedule (and probably also why Broadwell is limited, Intel has known about their 14nm node issues for awhile, so they limited it to get the node out there and more experience on it and then jumping in to Skylake with both feet).
Ian Cutress - Tuesday, May 12, 2015 - link
I have the i5-5200U here in a BRIX that I can test, though it's worth noting that the 5200U list price is $281, more than any APU.
takeship - Tuesday, May 12, 2015 - link
I was just thinking it wouldn't be too far off as a total cost comparison - a tall body i5 NUC + Win 8 license + RAM and scrounged up HDD/SSD is just about $600, which isn't too far above what a simple box with this would run. And my suspicion is that you don't give up too much gfx perf going down to the i3 and saving a hundred. Bandwidth being the bottleneck that it is.
Refuge - Tuesday, May 12, 2015 - link
This SKU is new, but the chip is just a re-badge.
extide - Tuesday, May 12, 2015 - link
A rebadge from what, exactly? No... It's not a rebadge, it's just a lower model SKU in a lineup that we have already seen. That is not what a rebadge is. We have not seen this core in this (or a similar) config released with a different SKU before.
Edens_Remorse - Wednesday, May 27, 2015 - link
Quit confusing the ignorants (shut up spell check, it's a word).
duploxxx - Tuesday, May 12, 2015 - link
Until by the end of the year you start to see DX12 benchmarking :) and this more powerful silicon gets a free bump.
25W is, btw, the difference of just a light bulb near your desktop, or the minimal power consumption of the dedicated GPU you need to make up for the lack of onboard Intel GPU power :)
Michael Bay - Tuesday, May 12, 2015 - link
Yes, while all the action is in the mobile segment. Where, you guessed it, AMD has no foothold.
Not even mentioning DX12 being largely irrelevant this AND next year outside of useless synthetics.
duploxxx - Wednesday, May 13, 2015 - link
Carrizo is all about mobile :)
Useless synthetics and benchmarking are where Intel shines; the result bars only show a better score, and in real-life daily use you don't even notice.
Michael Bay - Wednesday, May 13, 2015 - link
If this abomination is all about mobile applications, no wonder one has to search with a flashlight for an hour to find a notebook on AMD, and then it's some 15 inch TN+ crap.
And in daily use it's extremely easy to spot a difference, since systems on AMD will always have wailing CO.
jabber - Wednesday, May 13, 2015 - link
Yep the OEMs just don't want or need AMD anymore.
Pissedoffyouth - Tuesday, May 12, 2015 - link
Works great for extremely micro builds.
harrkev - Tuesday, May 12, 2015 - link
Ummm. Not quite. For desktop, the APU concept might be mostly irrelevant unless on a tight budget. For laptop people (like me), the APU is everything. To get a discrete graphics chip, you are generally looking at north of $1000. If your laptop budget is around $500 or so and you want to play the occasional game, the APU matters. An AMD processor will game circles around an Intel chip if using built-in graphics.
My dream machine right now is a laptop with a high-end Carrizo and a DisplayPort to drive big monitors.
jabber - Tuesday, May 12, 2015 - link
Just a shame you'll never see one in the stores!
LogOver - Tuesday, May 12, 2015 - link
AMD integrated graphics is better than Intel's... but only if we're talking about desktop offerings with 95W TDP. AMD's mobile offering (low power APUs with "good enough" graphics) is pretty much non-existent.
TheinsanegamerN - Wednesday, May 13, 2015 - link
Coming from someone who had an AMD APU notebook, no. AMD's graphics are nowhere near as nice in mobile, where the low TDP hammers them. When it comes to games, Intel's HD 4600 ran circles around the A10-4600M and the A10-5750M. Framerates were not only higher, but much more consistent. AMD's Kaveri chips were 15 watts, and still couldn't match 15 watt Intel chips.
geekfool - Tuesday, May 12, 2015 - link
Did you look into why the i3-4130T ended up faster in x265 than the i3-4330? The latter is strictly faster and there is no turbo that could differentiate things due to individual chip quality. I suspect some of those results must be wrong, which sorta casts a shadow onto all of them. (I hope you didn't mix different x265 versions, because the encoder is continually being optimised and thus newer versions do more work per MHz than older ones? You don't ever say what parameters/data the tests use, so it is hard to guess what went wrong.)
rp1367 - Tuesday, May 12, 2015 - link
It seems you have a better idea of designing silicon than AMD. Why not make your own silicon so that you will be impressed by your own expectation? The APU is a revolutionary design and no silicon maker can match this on general purpose use from office to gaming.
jeffry - Tuesday, May 12, 2015 - link
I think this PC setup is a good option. We all shop on budgets; I don't know anyone who does not. If more money comes in, say, 6-12 months later, I would just buy a dedicated GPU (~150 bucks) and that's it...
r3loaded - Tuesday, May 12, 2015 - link
Now, more than ever, AMD needs Zen. They still have nothing out on the market that can conclusively beat my four year old 2500K.
close - Tuesday, May 12, 2015 - link
Even Intel barely has something that can conclusively beat your four year old 2500K :). Progress isn't what it used to be.
Frenetic Pony - Tuesday, May 12, 2015 - link
That's because Intel's efforts are solely focused on laptops/mobile. They dominate the high end, and would only compete with themselves. This at least leaves AMD an opening next year though, as cramming battery life into the Core series has stalled Intel's development of performance per mm^2 other than process shrink.
mapesdhs - Tuesday, May 12, 2015 - link
Especially once oc'd of course. What clock are you using?
I'm building a 2500K system for a friend atm, easily the best value on a very limited budget.
r3loaded - Tuesday, May 12, 2015 - link
4.5GHz for full-time use on air in my own system. But yeah, even at stock speeds it's still no contest for the Intel chip.
der - Tuesday, May 12, 2015 - link
Awesome testing methodology guys, and definitely a great review.
azazel1024 - Tuesday, May 12, 2015 - link
Ian, I'll grant you it isn't abysmal performance and I doubt most casual users would notice a difference. It doesn't seem honest to say that, "While the APUs aren't necessarily ahead in terms of absolute performance, and in some situations they are behind, but with the right combination of hardware the APU route can offer equivalent performance at a cheaper rate."
Uhhhh, unless I misread the benchmarks, the AMD processors are anywhere from a little behind to a lot behind vaguely similarly priced Intel processors in the vast majority of CPU benchmarks. That doesn't say "in some" to me; to me that says in most or almost all.
The only place I see them making sense is either an extreme budget, or size constraints that prevent you from getting even a cheap discrete graphics card. Cost and performance wise, you'd probably be better off with something like a GTX 750 or 750 Ti combined with an Intel Celeron or Pentium Haswell processor.
I really want Zen to be a turn around.
A quick Amazon check shows that an Intel Haswell Pentium, plus H97 board, plus 2x2GB of DDR3-1600 and a GTX750 would run you in the region of $250. Granted that doesn't include case ($30 for low end), PSU ($40 for a good low power one) or storage ($90 for a 120GB SSD or $50-60 for a 2TB HDD), but it sounds like it was well within that $300 budget considering the bits that could have/were reused...
Definitely to each his own. I just think that especially once you start getting into "dual graphics" (even low end), you are almost certainly better off with two discrete cards, or just a slightly faster single discrete card plus a somewhat better processor, than relying on iGPU+dGPU to drive things, and that might not be any more expensive (or even cheaper, with a Haswell Pentium/Celeron).
galta - Tuesday, May 12, 2015 - link
No matter what people say, AMD is driving itself into an ever tighter corner, be it in the CPU or GPU realms.
One really has a hard time trying to justify choosing them over Intel/nVidia, but for some very specific – and sometimes bizarre - circumstances (eg.: because the only thing I do is compact files on WinRar, I end up finding AMD FX and its 8 cores the best cost/benefit ratio!)
A8-7650K is no different.
It is sad that things are like that. As a consumer with no intrinsic brand preferences, I would like to see real competition.
anubis44 - Tuesday, May 12, 2015 - link
Try compressing those files using 7Zip, and you'll see a dramatic improvement on the FX-8350. 7Zip is highly optimized for multi-threading, whereas WinRAR is single-threaded.
galta - Tuesday, May 12, 2015 - link
No, it's not: http://forums.anandtech.com/showthread.php?t=22533... Even if it were, that's not the point.
How many of us, including the bizarre ones, do only compacting on their PCs?
Exactly.
"Yayyyy I use 7Zip all day long!"
Exactly."Yayyyy I use 7Zip all day long! "
Said no one...ever.
I don't even know why people still compact files? Are they still using floppies? Man, poor devils.
Gigaplex - Tuesday, May 12, 2015 - link
I've been getting BSODs lately due to a bad Windows Update. The Microsoftie asked me to upload a complete memory crash dump. There's no way I can upload a 16GB dump file in a reasonable timeframe on a ~800kbps upload connection, especially when my machine BSODs every 24 hours. Compression brought that down to a much more manageable 4GB.
galta - Tuesday, May 12, 2015 - link
So it makes perfect sense for you to stay with AMD...
NeatOman - Wednesday, May 13, 2015 - link
I use it every day :( rocking a [email protected] for the last 3 years.. I picked it up for $180 with the CPU and!! Motherboard. I was about to pick up a 3770k too, saved about $200 but am about 15-20% down on performance. And if you're worried about electrical cost, you're walking over dollars to pick up pennies.
I do it to send pictures of work I do, and a good SSD is key :)
UtilityMax - Tuesday, May 12, 2015 - link
If you look at the WinRAR benchmark, then that result strongly suggests that WinRAR is multi-threaded. I mean, two core two thread Pentium is clearly slower than the two core but four thread Core i3, and quad-core i5 is clearly faster than Core i3, and Core i7 with its eight threads is clearly faster than Core i5. Hence galta's comment that AMD FX with 8 cores is probably even faster, but he says that this is not normal usage.
TheJian - Thursday, May 14, 2015 - link
There is an actual checkbox in winrar for multithreading for ages now. ROFL. 95% of usenet uses winrar, as does most of the web. That doesn't mean I don't have 7zip installed, just saying it is only installed for the once in 6 months I find a file that uses it.You apparently didn't even read what he said. He clearly states he's using winrar and finds FX is much faster using 8 cores of FX in winrar. You're like, wrong on all fronts. He's using winrar (can't read?), he's using FX (why suggest it? Can't read?) AND there is a freaking check-box to turn on multi-threading in the app. Not sure whether you're shilling for AMD here or 7zip, but...jeez.
galta - Saturday, May 16, 2015 - link
Last AMD CPU I had was the old and venerable 386DX@40Mhz. Were any of you alive back in the early 90s?
Ever since I've been using Intel.
Of course there were some brief moments during this time when AMD had the upper hand, but the last time it happened was some 10 years ago, when Athlon and its two cores were a revolution and smashed Pentium Ds. It's just that during that particular moment I wasn't looking for an upgrade, so I've used Intel ever since.
Having said that, I have to add that I don't understand why we are spending so much time discussing compression of files.
Of course the more cores you have the better, and AMD happens to have the least expensive 8 core processor on the market, BUT most users spend something like 0.15% of their time compressing files, making this particular shiny performance irrelevant for most of us.
Because most of other software does not scale so good in multithreading (and for games, it has nothing to do with DX12 as someone said elsewhere), we are most likely interested in performance per core, and Intel clearly has the lead here.
NeatOman - Wednesday, May 13, 2015 - link
Truth is, the average user won't be able to tell the difference between a system with an i3 running on an SSD and an A6-7400K on an SSD, or even an A10-7850K, which would be more direct competition to the i3. I build about 2-4 new Intel and AMD systems a month and the only time I myself notice is when I'm setting them up; after that they all feel relatively close in speed due to the SSD, which was the largest bottleneck to have been overcome in the last 10 years.
So Intel might feel snappier but is still not much faster in day to day use of heavy browsing and media consumption, as long as you have enough RAM and a decent SSD.
mapesdhs - Tuesday, May 12, 2015 - link
Ian Cutress wrote:
> "Being a scaling benchmark, C-Ray prefers threads and seems more designed for Intel."
It was never specifically designed for Intel. John told me it was, "...an extremely
small program I did one day to figure out how would the simplest raytracer program
look like in the least amount of code lines."
The default simple scene doesn't make use of any main RAM at all (some systems
could hold it entirely in L1 cache). The larger test is more useful, but it's still wise to
bear in mind to what extent the test is applicable to general performance comparisons.
John confirmed this, saying, "This thing only measures 'floating point CPU performance'
and nothing more, and it's good that nothing else affects the results. A real rendering
program/scene would be still CPU-limited meaning that by far the major part of the time
spent would be CPU time in the fpu, but it would have more overhead for disk I/O, shader
parsing, more strain for the memory bandwidth, and various other things. So it's a good
approximation being a renderer itself, but it's definitely not representative."
As a benchmark though, c-ray's scalability is incredibly useful, in theory only limited by
the no. of lines in an image, so testing a system with dozens of CPUs is easy.
Thanks for using the correct link btw! 8)
Ian.
PS. Ian, which c-ray test file/image are you using, and with what settings? ie. how many
threads? Just wondered if it's one of the stated tests on my page, or one of those defined
by Phoronix. The Phoronix page says they use 16 threads per core, 8x AA and 1600x1200
output, but not which test file is used (scene or sphfract; probably the latter I expect, as
'scene's incredibly simple).
Ian Cutress - Tuesday, May 12, 2015 - link
It's the c-ray hard test on Linux-Bench, using:
cat sphfract | ./c-ray-mt -t $threads -s 3840x2160 -r 8 > foo.ppm
I guess saying it preferred Intel is a little harsh. Many programs are just written the way people understand how to code, and it ends up sheer luck if they're better on one platform by default than the other, such as with 3DPM.
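To make the scaling point above concrete, here is a minimal sketch (not c-ray's actual source) of the pattern a ray tracer like this relies on: every image row is independent, so worker threads can pull rows from a shared atomic counter with no locking, and scaling is limited essentially by the number of rows.

// Row-parallel rendering sketch; shade_row() is a stand-in for the real
// per-scanline ray tracing work, not c-ray's actual code.
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

static uint32_t shade_row(int y, int width) {
    uint32_t acc = 0;
    for (int x = 0; x < width; ++x)
        acc += static_cast<uint32_t>(x + y) * 2654435761u;  // dummy per-pixel work
    return acc;
}

int main() {
    const int width = 3840, height = 2160;   // matches the -s 3840x2160 test above
    std::vector<uint32_t> image(height);
    std::atomic<int> next_row{0};

    auto worker = [&]() {
        // Each thread grabs the next unrendered row; rows never overlap.
        for (int y = next_row.fetch_add(1); y < height; y = next_row.fetch_add(1))
            image[y] = shade_row(y, width);
    };

    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 4;                        // fallback if the count is unknown
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
    return 0;
}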
TheJian - Friday, May 15, 2015 - link
I don't think it's sheer luck when you're doing one of two things: 1. you write the compiler they're using. 2. you're the chip/platform etc they are DOING the coding on and thus optimizing for best perf on the platform they're using. Granted, these two things might not help in ALL cases, but it's a pretty sure bet if EVERYONE decided to code their app/game ON Intel/Nvidia, if you're AMD you're not likely to win many things. You may code how you know how to code, but you OPTIMIZE for whatever is in your hands, and get to others if financing allows (or someone pays you, like Dice/B4F netting 8mil for frostbite running on mantle).If you don't have access for platform X, and it runs well on it vs. platform Y that you program on, THEN that was luck. But when it runs well on what you're programming/compiling on, that probably has much less to do with luck. It's just common sense to get that. I'm not saying that's the case here, but you're making a general statement that would seem to go against simple logic in what I'd guess was MOST cases. IE, how many ports of console games do you see that are BETTER on a PC. In most cases we get "another crappy port" comments all over the place. Consoles are admittedly (generally) a worst case scenario, but you get the point. Usually the 2nd platform etc is an afterthought to milk the original cow, not coded with the care of the main platform. Large firms with bigger teams (EA, Blizzard etc) may depend on the skill of the teams doing said work (but even then it's quite rare), but for smaller firms where financing is a big issue, other platform optimization may never happen at all.
Why do you think Nvidia bought a company like PGI? To make sure they were on even footing with Intel compilers for HPC. Being the vid card that ~75% of workstations and 76% of gamers (according to peddie) use doesn't hurt either, but compilers/tools are a big help too.
shadowjk - Friday, May 15, 2015 - link
Linux has adapted to some AMD specialities rather quickly, like the module/core division, and further back in time, discovered you could have IOMMU on AMD CPUs before they were even released.
Unfortunately, I don't think AMD participates as actively in compiler development..
LarsBars - Tuesday, May 12, 2015 - link
Glad to see the IGP benchmarks updated, they are so much more relevant now! No more 1280x1024 ;) Great work!
BrokenCrayons - Wednesday, May 13, 2015 - link
I agree, the new IGP benchmarks are a much-needed realignment to make them more current.
darkfalz - Tuesday, May 12, 2015 - link
I love AMD's naming scheme, mimicking Intel's but using higher numbers. I wonder how many would fall for that? Surely a 7850K is much faster than a 4560K? And an A8 or A10 clearly a better CPU than an i5 or i7? Awesome chutzpah.
akamateau - Tuesday, May 12, 2015 - link
Another piece of JUNK SCIENCE and yellow journalism from the journalistically bankrupt Anand Tech.
What happened to the API Overhead Tests?
What HAPPENED to the DX12 benchmarks?
I am not INTERESTED in OBSOLETE GARBAGE.
There is nothing that you use for benchmarking that is relevant.
ALL gaming is now written to DX11 MAXSPEC. DX12 MINSPEC is 12x broader and allows for far more performance.
When you FAIL to use relevant benchmarks the you are LYING to the consumer.
ANAND TECH is nothing more than a garbage website.
extide - Tuesday, May 12, 2015 - link
This isn't a GPU benchmark article, it is a CPU benchmark article.
Crunchy005 - Tuesday, May 12, 2015 - link
Why are you even here reading the articles or commenting on them if you think they are garbage?
Michael Bay - Tuesday, May 12, 2015 - link
Your post lacks capitalization.
NeatOman - Wednesday, May 13, 2015 - link
This is the only time I've liked what Michael Bay has said or done lol
Gigaplex - Tuesday, May 12, 2015 - link
What happened to the DX12 benchmarks? Do we need to remind you that DX12 hasn't even been released yet, so is completely unsuitable for comparing hardware?
akamateau - Tuesday, May 12, 2015 - link
Porting a CURRENT game designed and CODED to DX11 MAX SPEC to DX12 does not mean that it will automatically look better or play better if you do not consider faster fps as the main criteria for quality game play. In fact DX11 Game benchmarks will not show ANY increase in performance using Mantle or DX12And logically, continuing to write to this DX11 MAXSPEC will NOT improve gaming community-wide in general. Let’s be clear, a higher spec game will cost more money. So the studio must balance cost and projected sales. So I would expect that incremental increases in game quality may occur over the next few years as studios become more confident with spending more of the gaming budget on a higher MINSPEC DX12 game. Hey, it is ALL ABOUT THE MONEY.
If a game was written with the limitations or, better, say the maximums or MAXSPEC of DX11 then that game will in all likelihood not look any better with DX12. You will run it at faster frame rates but if the polygons, texture details and AI objects aren't there then the game will only be as detailed as the original programming intent will allow.
However, what DX12 will give you is a game that is highly playable with much less expensive hardware.
For instance, using the 3DMark API Overhead test, it is revealed that with DX11 an Intel i7-4960 with a GTX 980 can produce 2,000,000 draw calls at 30fps. Switch to DX12 and it is revealed that a single $100 AMD A6-7400 APU can produce 4,400,000 draw calls and get 30 fps. Of course these aren't rendered, but you can't render the object if it hasn't been drawn.
If you are happy with the level of performance that $1500 will get you with DX11 then you should be ecstatic to get very close to the same level of play that DX12 and a $100 A6 AMD APU will get you!!!!
That was the whole point behind Mantle, er (cough, cough) DX12. Gaming is opened up to more folks without massive amounts of surplus CASH.
silverblue - Tuesday, May 12, 2015 - link
Yes, yes, I see your point about AMD's iGPUs benefitting a lot from DirectX 12/Mantle, however I don't think you needed so many posts to make it. Additionally, not benchmarking a specific way doesn't make somebody a liar, it just means they didn't benchmark a specific way.Draw calls don't necessarily mean better performance, and if you're memory or ROP limited to begin with... what's more, the performance difference between the 384-shader 7600 and the 512-shader 7850K is practically nothing. Based off this, why would I opt for the 7850K when the 7600 performs similarly for less power? The 7400K is only a little behind but is significantly slower in DX11 testing. Does that mean we don't need the 7600 either if we're playing DX12 titles? Has the test highlighted a significant memory bottleneck with the whole Kaveri product stack that DX12 simply cannot solve?
In addition, consider the dGPU results. Intel still smokes AMD on a per-FPU basis. By your own logic, AMD will not gain any ground on Intel at all in this area if we judge performance purely on draw calls.
DirectX 11 is still current. There aren't many Mantle games out there to provide much for this comparison, but I'm sure somebody will have those results on another site for you to make further comparisons.
akamateau - Tuesday, May 12, 2015 - link
There is ONLY ONE BENCHMARK that is relevant to gamers: 3DMark API Overhead Test!
If I am considering a GPU purchase I am not buying it because I want to calculate Pi to a BILLION decimal places. I want better gameplay.
When I am trying to decide on an AMD APU or Intel IGP then that decision is NOT based on CineBench but rather on what silicon produces QUALITY GAMEPLAY.
You are DELIBERATELY IGNORING DX12 API Overhead Tests and that makes you a liar.
The 3dMark API Overhead Test measures the draw calls that are produced when the FPS drops below 30. As the following numbers will show the AMD APU will give the BEST GAMING VISUAL EXPERIENCE.
So what happens when this benchmark is run on AMD APU’s and Intel IGP?
AMD A10-7700k
DX11 = 655,000 draw calls.
Mantle = 4,509,000 Draw calls.
DX12 = 4,470,000 draw calls.
AMD A10-7850K
DX11 = 655,000 draw calls
Mantle = 4,700,000 draw calls
DX12 = 4,454,000 draw calls.
AMD A8-7600
DX11 = 629,000 draw calls
Mantle = 4,448,000 draw calls.
DX12 = 4,443,000 draw calls.
AMD A6-7400k
DX11 = 513,000 draw calls
Mantle = 4,047,000 draw calls
DX12 = 4,104,000 draw calls
Intel Core i7-4790
DX11 = 696,000 draw calls.
DX12 = 2,033,000 draw calls
Intel Core i5-4690
DX11 = 671,000 draw calls
DX12 = 1,977,000 draw calls.
Intel Core i3-4360
DX11 = 640,000 draw calls.
DX12 = 1,874,000 draw calls
Intel Core i3-4130T
DX11 = 526,000 draw calls.
DX12 = 1,692,000 draw calls.
Intel Pentium G3258
DX11 = 515,000 draw calls.
DX12 = 1,415,000 draw calls.
These numbers were gathered from AnandTech piece written on March 27, 2015.
Intel IGP is hopelessly outclassed by AMD APU’s using DX12. AMD outperforms Intel by 100%!!!
JumpingJack - Wednesday, May 13, 2015 - link
"There is ONLY ONE BENCHMARK that is relevant to gamers.3dMark API Overhead Test!"
NO, that is a syntethic, it simply states how many draw call can be made. It does not measure the capability of the entire game engine.
There is only ONE benchmark of concern to gamers -- actual performance of the games they play. Period.
Get ready for a major AMD DX12 let down if this is your expectation.
akamateau - Tuesday, May 12, 2015 - link
Legacy Benchmarks?????? I am going to spend money based on OBSOLETE BENCHMARKS???
CineBench 11.5 was released in 2010 and is obsolete. It is JUNK.
TrueCrypt???? TrueCrypt development was ended in MAY 2014. Another piece of JUNK.
Where is 3dMark API Overhead Test? That is brand new.
Where Is STARSWARM?? That is brand new.
akamateau - Tuesday, May 12, 2015 - link
Where are your DX12 BENCHMARKS?
akamateau - Tuesday, May 12, 2015 - link
Where are your DX12 BENCHMARKS?
rocky12345 - Tuesday, May 12, 2015 - link
Whining about no DX12 test? Just take the info that was given, learn from that, and wait for a released DX12 program that can truly be tested. Testing DX12 at this point has very little to offer because it is still a beta product & the code is far from finished, & by the time it is done all the tests you are screaming to have done will not be worth a pinch of racoon crap.
galta - Tuesday, May 12, 2015 - link
Back when DX11 was about to be released, AMD fans said the same: nVidia is better @DX10, but with DX11, Radeon's superior I-don't-know-what will rule.
Time passed and nVidia smashed Radeon's new - and rebranded - GPUs.
I suspect it will be the same this time.
CPUGPUGURU - Tuesday, May 12, 2015 - link
AMD APU is a watt, money, time wasting bottlenecking inferior choice that there is next to no market for, for AMD fusion was and still is a delusion. Intel's world class IPC performance, node process and a dGPU are a MUCH BETTER investment.
Intel's APU's performance advantage makes them a wise choice for the Tablet, Convertible, or Ultrabook market, I'm looking forward to a Surface Skylake to go mobile with.
mayankleoboy1 - Tuesday, May 12, 2015 - link
Ian, this is probably the 3rd or 4th testing methodology/benchmark change that you have seen during your time at AT. My question is:
Do you think that multithreading is *really* more mainstream now? As in, does most general purpose software use more than 2 cores?
Ian Cutress - Tuesday, May 12, 2015 - link
The way I like to think about it is that even if software only uses one core, I like to have many things on the go at a time. Chrome tabs are a nice example.
But multithreading is now being taught in some CS undergraduate classes, meaning that at least it's slowly entering the software ecosystem as default knowledge, rather than as an afterthought. In my opinion, that's always been a big barrier to multithreading (as well as having parallelizable code).
Another thought is consider the software you use. Is it made by a big multinational with a strong software development team? If yes, chances are it is multithreaded. If it uses a big commercial engine, it probably is as well. If it's based on a small software team, then it more likely isn't.
-Ian
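For what it's worth, the pattern that tends to get taught first is exactly the kind of thing a big software team can apply almost mechanically: split an independent loop across however many hardware threads the CPU reports. A rough sketch; the function and data names are made up for illustration.

// Splits an embarrassingly parallel loop across the available hardware threads.
#include <algorithm>
#include <functional>
#include <future>
#include <thread>
#include <vector>

// Stand-in for some per-element work an application might do.
double process_chunk(const std::vector<double>& data, size_t begin, size_t end) {
    double sum = 0.0;
    for (size_t i = begin; i < end; ++i) sum += data[i] * data[i];
    return sum;
}

int main() {
    std::vector<double> data(1 << 22, 0.5);
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    size_t chunk = data.size() / n;

    std::vector<std::future<double>> parts;
    for (unsigned i = 0; i < n; ++i) {
        size_t begin = i * chunk;
        size_t end = (i + 1 == n) ? data.size() : begin + chunk;
        parts.push_back(std::async(std::launch::async, process_chunk,
                                   std::cref(data), begin, end));
    }

    double total = 0.0;
    for (auto& f : parts) total += f.get();  // implicitly waits for every worker
    return total > 0 ? 0 : 1;
}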
V900 - Tuesday, May 12, 2015 - link
Multithreading being taught in CS classes today doesn't matter much.
It's not like multithreading is some unknown new technology we can't take advantage of. Dual/quad core processors have been common for over a decade.
OS X has Grand Central Dispatch. Windows 7/8 can take advantage of multithreading.
The problem is that not all tasks on a computer/in an operating system benefit from multithreading.
And that's not going to change. Otherwise we wouldn't see AMD going back to the drawing board and throwing the module-concept in the trash in order to focus on single thread performance like in the Zen CPU.
So unless you know you need it today, multithreading performance is a lousy parameter to choose a CPU from, cause it won't get better in the future.
ppi - Tuesday, May 12, 2015 - link
But now, how many real tasks, where CPU is the real bottleneck ...... and not GPU, storage, internet connection, or gasp ... the user ...
.. and such task is not multithreaded on reasonably written software?
Oxford Guy - Sunday, May 17, 2015 - link
According to rumor you mean.
ToTTenTranz - Tuesday, May 12, 2015 - link
Why does Anandtech keep refusing to test lower performance CPUs and APUs with Mantle-enabled games?
Those should be a great way to predict the advantages of lower CPU overhead in DX12 and Vulkan.
CPUGPUGURU - Tuesday, May 12, 2015 - link
BECAUSE Mantle is AMD ONLY and DX12/Vulkan will be Intel, NVIDIA and AMD, THAT'S WHY.
ALSO, Win10 DX12 HAS NOT been released, drivers are beta at best, SO WHY waste time testing something that's beta and has NOT been released, WHY?
You AMD pumpoholics are brain dead and clueless.
V900 - Tuesday, May 12, 2015 - link
They also seriously think that "Mantle is basically DX12 +/- 10%" which is beyond deluded.
Even after AMD knew that Mantle was a one way ticket to nowhere, and pretty much said as much, they still keep bringing it up and treat it as if it's not obsolete. Insanity...
ppi - Tuesday, May 12, 2015 - link
Mantle is currently a great way to reduce CPU overhead for AMD APUs and CPUs.
Gigaplex - Tuesday, May 12, 2015 - link
Mantle for AMD discrete GPUs runs on Intel CPUs, so it is a completely valid test for CPU gaming performance.
CPUGPUGURU - Tuesday, May 12, 2015 - link
Mantle is developed as an AMD GCN API, so don't go telling us it's optimized for Intel or Nvidia because it's NOT! Mantle is DOA, dead and buried, stop pumping a Zombie API.
silverblue - Wednesday, May 13, 2015 - link
You've misread Gigaplex's comment, which was stating that you can run an AMD dGPU on any CPU and still use Mantle. It wasn't about using Mantle on Intel iGPUs or NVIDIA dGPUs, because we know that functionality was never enabled.
Mantle isn't "dead and buried"; sure, it may not appear in many more games, but considering it's at the very core of Vulkan... though that could be just splitting hairs.
TheJian - Friday, May 15, 2015 - link
Incorrect. The core of Mantle sales pitches was HLSL. You only think Mantle is Vulkan because you read Mantle/Vulkan articles on Anandtech...LOL. Read PCPER's take on it, and understand how VASTLY different Vulkan (Headed by Nvidia's Neil Trevett, who also came up with OpenGL ES BTW) is from Mantle. At best AMD ends up equal here, and worst Nvidia has an inside track always with the president of Khronus being the head of Nvidia's mobile team too. That's pretty much like Bapco being written by Intel software engineers and living on Intel Land across the street from Intel itself...ROFL. See Van Smith Articles on Bapco/sysmark etc and why tomshardware SHAMEFULLY dismissed him and removed his name from his articles ages agoAnandtech seems to follow this same path of favoritism for AMD these days since 660ti article - having AMD portal etc no Nvidia portal - mantle lovefest articles etc, same reason I left toms years ago circa 2001 or so. It's not the same team at tomshardware now, but the damage done then is still in many minds today (and shown at times in forum posts etc). Anandtech would be wise to change course, but Anand isn't running things now, and doesn't even own them today. I'd guess stock investors in the company that bought anandtech probably hold massive shares in sinking AMD ;) But that's just a guess.
http://www.pcper.com/reviews/General-Tech/GDC-15-W...
Real scoop on Vulkan. A few bits of code don't make Vulkan Mantle...LOL. If it was based on HLSL completely you might be able to have a valid argument but that is far from the case here. It MIGHT be splitting hairs if this was IN, but it's NOT.
http://www.pcper.com/category/tags/glnext
The articles on glNext.:
"Vulkan is obviously different than Mantle in significant ways now, such as its use of SPIR-V for its shading language (rather than HLSL)."
CORE? LOL. Core of Vulkan would be HLSL and not all the major changes due to the GROUP effort now.
Trevett:
"Being able to start with the Mantle design definitely helped us get rolling quickly – but there has been a lot of design iteration, not the least making sure that Vulkan can run across many different GPU architectures. Vulkan is definitely a working group design now."
Everything that was AMD specific is basically gone as is the case with DX12 (mantle ideas, but not direct usage). Hence NV showing victories in AMD's own mantle showcase now (starswarm)...ROFL. How bad is that? Worse NV was chosen for DX12 Forza Demo which is an AMD console game. Why didn't MS chose AMD?
They should have spent the time they wasted on Mantle making DX12/Vulkan driver advances, not to mention DX11 driver improvements which affect everything on the market now and probably for a while into the future (until win10 takes over at least if ever if vulkan is on billions of everything else first), rather than a few mantle games. Nvidia addressed the entire market with their R&D while AMD wasted it on Mantle, consoles & apu. The downfall of AMD started with a really bad ATI price and has been killing them since then.
TheJian - Friday, May 15, 2015 - link
Mantle is almost useless for FAST cpus and is dead now (wasted R&D). It was meant to help AMD weak cpus which only needed to happen because they let guys like Dirk Meyer (who in 2011 said it was a mistake to spend on anything but CORE cpu/gpu, NOT APU), & Keller go ages ago. Adding Papermaster might make up for missing Meyer though. IF they would NOT have made these mistakes, we wouldn't even have needed Mantle because they'd still be in the cpu race with much higher IPC as we see with ZEN. You have no pricing power in APU as it feeds poor people and is being crushed by ARM coming up and Intel going down to stop them. GAMERS (and power users) will PAY a premium for stuff like Intel and Nvidia & AMD ignored engineers who tried to explain this to management. It is sad they're now hiring them back to create again what they never should have left to begin with. The last time they made money for the year was Athlon's and high IPC. Going into consoles instead of spending on CORE products was a mistake too. Which is why Nvidia said they ignored it. We see they were 100% correct as consoles have made amd nothing and lost the CPU & GPU race while dropping R&D on both screwing the future too. The years spent on this crap caused AMD's current problems for 3yrs on cpu/gpu having zero pricing power, selling off fabs, land, laying off 1/3 of employees etc. You can't make a profit on low margin junk without having massive share. Now if AMD had negotiated 20%+ margins from the get-go on consoles, maybe they'd have made money over the long haul. But as it stands now they may not even recover R&D and time wasted as mobile kills consoles at 1/2 through their life with die shrinks+revving yearly, far cheaper games and massive numbers sold yearly that is drawing devs away from consoles.Even now with 300's coming (and only top few cards are NOT rebadges which will just confuse users and piss them off probably), Nvidia just releases a faster rehash of tech waiting to answer and again keep a great product down in pricing. AMD will make nothing from 300's. IF they had ignored consoles/apus they would have ZEN out already (2yrs ago? maybe 3?) and 300's would have been made on 28nm optimized possibly like maxwell squeezed out more perf on the same process 6 months ago. Instead NV has had nearly a year to just pile up profits on an old process and have an answer waiting in the wings (980ti) to make sure AMD's new gpu has no pricing power.
Going HBM when it isn't bandwidth starved is another snafu that will keep costs higher, especially with low yields on that and the new process. But again because of lack of R&D (after blowing it on consoles/apu), they needed HBM to help drop the wattage instead of having a great 28nm low watt alternative like maxwell that can still milk a very cheap old DDR5 product which has more than enough bandwidth as speeds keep increasing. HBM is needed at some point, just not today for a company needing pofits that has no cash to burn on low yields etc. They keep making mistakes and then having to make bad decisions to make up for them that stifle much needed profits. They also need to follow Nvidia in splitting fp32 from fp64 as that will further cement NV gpus if they don't. When you are a professional at both things instead of a jack of all trades loser in both, you win in perf and can price accordingly while keeping die size appropriate for both.
Intel hopefully will be forced back to this due to ZEN also on the cpu side. Zen will cause Intel to have to respond because they won't be able to shrink their way to keeping the gpu (not with fabs catching Intel fabs) and beat AMD with a die fully dedicated to CPU and IPC. Thank god too, I've been saying AMD needed to do this for ages and without doing it would never put out another athlon that would win for 2-3yrs. I'm not even sure Zen can do this but at least it's a step in the right direction for profits. Fortunately for AMD an opening has been created by Intel massively chasing ARM and ignoring cpu enthusiasts and desktop pros. We have been getting crap on cpu side since AMD exited, while Intel just piled on gpu side which again hurt any shot of AMD making profits here...LOL. They don't seem to understand they make moves that screw themselves longer term. Short term thinking kills you.
ToTTenTranz - Wednesday, May 13, 2015 - link
Yes, and the APU being reviewed, the A8-7650K, also happens to be "AMD ONLY", so why not test Mantle? There's a reasonable number of high-profile games that support it:
- Battlefield 4 and Hardline
- Dragon Age: Inquisition
- Civilization: Beyond Earth
- Sniper Elite III
Plus another bunch coming up, like Star Wars Battlefront and Mirror's Edge.
So why would it hurt so much to show at least one of these games running Mantle with a low-specced CPU like this?
What is anandtech so afraid to show, by refusing to test Mantle comparisons with anything other than >$400 CPUs?
V900 - Thursday, May 14, 2015 - link
There isn't anything to be scared of, but Mantle is only available on a handful of games, and beyond those it's dead and buried.
Anandtech doesn't run Mantle benchmarks for the same reason they don't review AGP graphics cards: it's a dead technology aside from the few people who currently use it...
chizow - Tuesday, May 12, 2015 - link
I seriously considered an A10-7850K Kaveri build last year around this time for a small power-efficient HTPC to stream DVR'd shows from my NAS, but in the end a number of issues steered me away:
1) Need for chassis, PSU, cooler.
2) Lack of good mini-ITX options at launch.
3) Not good enough graphics for gaming (not a primary consideration anyways, but something fast enough might've changed my usage patterns and expectations).
Sadly, this was the closest I've gotten to buying an AMD CPU product in a long, long time but ultimately I went with an Intel NUC that was cheaper to build, smaller form factor, and much less power usage. And all I gave up was GPU performance that wasn't realistically good enough to change my usage patterns or expectations anyways.
This is the problem AMD's APUs face in the marketplace today though. That's why I think AMD made a big mistake in betting their future on Fusion, people just aren't willing to trade fast efficient or top-of-the-line CPUs for a mediocre CPU/GPU combo.
Today, there's even bigger challenges out there for AMD. You have Alienware that offers the Alpha with an i3 and GTX 860+M that absolutely destroys these APUs in every metric for $500, $400 on sale, and it takes care of everything from chassis, PSU, cooling, even Windows licensing. That's what AMD is facing now though in the low-end PC market, and I just can't see them competing with that kind of performance and value.
silverblue - Tuesday, May 12, 2015 - link
I would have opted for the A8-7600 instead of the 7850K, though I do admit it was very difficult to source back then. 65W mode doesn't perform much faster than 45W mode. I suppose it's all about what you want from a machine in the end, and AMD don't make a faster CPU with a weaker iGPU which might make more sense.
The one thing stopping AMD from releasing a far superior product, in my eyes, was the requirement to at least try to extract as much performance from a flawed architecture so they could say it wasn't a complete waste of time.
galta - Tuesday, May 12, 2015 - link
+1
Fusion was not only poor strategy, it was poor implementation.
Leaving aside the discussion of the merits integrated GPU, if AMD had done it right we would have seen Apple adopting their processor on their Macbook series, given their obsession with slim hardware, with no discrete graphics.
Have we seen that? No.
You see, even though Intel has never said that integrated GPU was the future, the single most important customer on that market segment was claimed by them.
silverblue - Tuesday, May 12, 2015 - link
I heard a rumour that AMD were unable to meet demand and as such failed to secure a contract with Apple. Make of that what you will. As it was, Llano went from being under-produced to the exact opposite.
galta - Tuesday, May 12, 2015 - link
Exactly: not only have they devised a poor strategy, they were also unable to follow it!
V900 - Tuesday, May 12, 2015 - link
Nah, Llano would have been way too hot for an Apple laptop. Heck, the CPU/GPU in a MacBook Air has a TDP of 15 watts. Does AMD have anything even close to that, that doesn't involve Jaguar cores?
galta - Tuesday, May 12, 2015 - link
Again, they were not able to deliver their strategy, even if it was a poor one.
One says that integrated GPU is the future. That, per se, is questionable.
Later, we find out that they can't meet production orders and/or deliver a chip that is too hot for one of its potential markets. This is poor implementation.
Teknobug - Tuesday, May 12, 2015 - link
Enjoying my i3 4010U NUC as well, all I need for daily use and some occasional light gaming on Steam (I run Debian Linux on it).
nerd1 - Tuesday, May 12, 2015 - link
Yoga 3 Pro with 4.5W Core M got 90 points in the Cinebench R15 Single-Threaded test.
This 95W chip got 85 points in the Cinebench R15 Single-Threaded test.
So who's gonna buy this at all?
takeship - Tuesday, May 12, 2015 - link
Cynically, AMD may consider it better to have *any* product to discount/write-off down the road rather than fork over another wafer agreement penalty to GloFo with nothing to show for it.
BrokenCrayons - Wednesday, May 13, 2015 - link
I noticed that as well, but the fact that this is a 95 watt processor isn't that much of a concern when you have the power envelope of a desktop chassis at your disposal. The intended niche for these APUs seems more to make a value proposition for budget gaming in a low-complexity system (meaning lacking the additional PCB complexity introduced by using a discrete GPU). Unfortunately, I don't see OEMs really putting any weight behind AMD APUs by selling systems containing them which leaves much of the sales up to the comparatively few people who DIY-build desktop hardware. Even those people are hard-pressed to find a lot of value in picking an APU-based platform over competing Intel products as they tend to have a little more budget flexibility and are targeting greater GPU performance than the A-series has available, putting them into discrete graphics solutions.
zodiacfml - Wednesday, May 13, 2015 - link
The APU has more cores than that. In the test, did it tell you how much power it is using?
yannigr2 - Tuesday, May 12, 2015 - link
Those two extra and much more expensive Intel CPUs on the charts make the APUs look totally pathetic. Yes, you do have the prices next to the charts, and yes, they do make the APUs look extremely valuable in the 3D games, but most people probably would not go past the first 4-5 pages of this article, having been totally disappointed by the first results. Also, the long blue lines will imprint in their memories; they will forget the prices.
Next time throw a few Xeon E7s in the charts.
PS. PLEASE PLEASE PLEASE, don't turn to Tom's Hardware.
Ian Cutress - Wednesday, May 13, 2015 - link
Would it be worth putting the gaming tests first? Perhaps for the mid range CPUs, it makes more sense.
yannigr2 - Wednesday, May 13, 2015 - link
Much more expensive i7 and i5 in the charts, and wrong higher, older prices on AMD APUs. Am I wrong?
Please, I am NOT asking you to make AMD APUs look good, don't make it look like that, just do not make them look awful. You want to add a much more expensive i7? At least change the color of the line, make it black or something. Even the i5 is much more expensive than the APUs, especially considering that AMD changed its prices a few days ago, which means that the AMD prices on the charts are also wrong. The 7850K, the most expensive one, is $127, not $173.
Of the five Intel processors you have in the charts, only three of them are in the same price range as the APUs. Some Intel prices are the tray prices, not the box, and most of them are the prices on Intel's site. AMD prices, on the other hand, are the old, much higher prices. Even in your article you give lower prices than those on the charts. AM I WRONG?
Accept the criticism when it is fair; don't try to make the other guy look like a brainless fanboy who asks you to make AMD APUs look good by putting GPU tests first.
akamateau - Tuesday, May 12, 2015 - link
What is being benchmarked are APUs; AMD's integrated graphics processors.
akamateau - Tuesday, May 12, 2015 - link
I comment because they are JUNK. I read them hoping that Anand will write something useful. I am also setting the record straight and I am challenging ANANDTECH to write the truth.
superflex - Tuesday, May 12, 2015 - link
You sound like a paid shill with all your whining. Maybe AMD could hire shills with better English grammar.
eRacer1 - Tuesday, May 12, 2015 - link
That's an insult to paid shills. No one being paid to shill for a company would act that obnoxious and incoherent. It looks more like a volunteer effort, or someone who deliberately wants to make vocal AMD supporters look obnoxious and incoherent.
akamateau - Tuesday, May 12, 2015 - link
I comment because they are JUNK. I read them hoping that Anand will write something useful. I am also setting the record straight and I am challenging ANANDTECH to write the truth.
Raiher - Tuesday, May 12, 2015 - link
The review says that it's a 720p benchmark, but the screenshot is 1080p. Normally I wouldn't care, but the screenshot even shows the FPS number. What is wrong?
Ian Cutress - Tuesday, May 12, 2015 - link
I use the same screenshot in all the games on the other pages where I am testing 1080p. It's just a generic screenshot of the game showing what happens in the benchmark.
lilmoe - Tuesday, May 12, 2015 - link
Too painful to watch. I just hope things get better in 2016-17.
akamateau - Tuesday, May 12, 2015 - link
Your comments assume that ANAND provided benchmarks using DX12, and they did not. ALL of the GRAPHICS benchmarks were either synthetic benchmarks or game benchmarks using DX11. DX11 cripples the performance of ALL APUs, IGPs and dGPUs. Draw calls ARE a measure of the CPU-to-GPU "bottleneck", or the elimination thereof. You cannot render a polygon until you draw it.
DX12 enables CPU core scaling; basically, increased draw calls are a function of the number of multithreaded CPU cores. DX11 does not allow multithreaded gaming.
DX11 may be current, but why should I base hardware purchases on testing that uses obsolete software AND benchmarks?
DX12 will be in widespread use by game developers by Christmas.
Anand has spent quite a bit of time and money testing hardware on obsolete benchmarks. TO WHAT END?
Star Swarm and the 3DMark API Overhead Test are available, but ANAND ignores them.
Why?
AMD's APU was designed to FLY using Mantle and DX12. It is not AMD's fault that Intel's IGP is so poorly designed. That is Intel's problem.
Test Intel's IGP using the latest API and you will see. By comparatively testing AMD and Intel on obsolete benchmarks with DX11, ANAND is lying to the consumer and cannot be trusted.
An unbiased and well-balanced piece should use legacy benchmarks, but it should also use the very latest available. ANAND did not do this.
rp1367 - Tuesday, May 12, 2015 - link
"Starswarm and 3dMArk API Overhead Test are available but ANAND ignore them.Why?"
Because they want to hide the truth. "It is hard for a person to wake if he is asleep because he pretends to be asleep but infact he is not. He just want to fool you because of his stupidity"
The refusal to support the upcoming DX12 give as hint that the review is biased and something fishy going at the backdoor. I am not an IT guy and new on this site but i could easily detect what is the difference between biased and unbiased review.
The reviewer and Anadtech guys for sure are all intellegent guys but they allowed themselves to be succumed by their own personal interest.
Gigaplex - Tuesday, May 12, 2015 - link
Or it's just something as simple as DX12 not being released yet; the performance is likely to change, so it is an invalid test for comparing hardware at this time. The benchmarks you refer to are only valid as a preview of potential gains.
akamateau - Tuesday, May 12, 2015 - link
Windows 10 with DX12 will be released in less than 2 months. Mantle is final and DX12 is final. Anand has it but ignores it. By Christmas ALL new games released will be DX12.
Gigaplex - Tuesday, May 12, 2015 - link
DX12 may not be final. The API probably is, the runtime is likely close, but drivers likely won't be. And you're delusional if you think all new games released at the end of this year will be DX12. It takes years to develop a AAA game, so they would need to have started before DX12 was available. The market for DX12 will be tiny by Christmas, since DX12 will be Windows 10 only. Not everyone will be willing or able to upgrade the OS, and not all hardware even supports DX12. You're completely ignoring the history of previous DirectX rollouts.
akamateau - Tuesday, May 12, 2015 - link
akamateau - Tuesday, May 12, 2015 - link
Windows 10 with DX12 will be released in less than 2 months. Mantle is final and DX12 is final. Anand has it but ignores it. By Christmas ALL new games released will be DX12.
V900 - Tuesday, May 12, 2015 - link
Ah, so basically the choice is: buy an AMD APU and get shoddy performance now and great performance in a year, or buy an Intel/Intel-Nvidia solution and get great performance now and great performance in a year! So there's really no reason to get the AMD APU, is what you're saying?
JumpingJack - Tuesday, May 12, 2015 - link
I will bet you whatever you make in a year that not all games released between DX12's release and Christmas of this year will be DX12 native.
akamateau - Tuesday, May 12, 2015 - link
@Ian Cutress"as well as having parallelizable code"
ARE YOU NUTS?
You really need to cut the crap.
Multithreaded gaming will come as a result of DX12 and Asynchronous Shader Pipelines and Asynchronous Compute Engines.
galta - Tuesday, May 12, 2015 - link
I don't think he is nuts, but you seem a bit angry. From a CPU perspective, multithreaded games need not wait for DX12; they could have been written before.
Anyway, we have a clear statement from you: DX12 will make AMD shine. We should talk again at Christmas.
Just keep in mind that the same was said when DX11 was about to be released, with known results...
xprojected - Tuesday, May 12, 2015 - link
Mid-range:
- MSI GTX 770 Lightning 2GB ($245-$255 on eBay/Amazon, $330 new)
- MSI R9 285 Gaming 2GB ($240)
Hold on. You suggest that the 770 and the 285 are nearly the same price, but you list the used/refurbished price for the 770 first. That opens up a Pandora's box, doesn't it? If it's too hard to find a card new, pick a different one, like the 970, or 960, which is actually close in price to the 285 (at least a couple go for $200 on Newegg). Even though you say you split the GPUs based on price ranges, rather than similar prices, people are going to compare ATI to NVidia and you have an unfair used-vs-new price comparison.
Ian Cutress - Wednesday, May 13, 2015 - link
Ideally the tests are meant to show comparisons within a GPU class, not between GPU classes. Ultimately I work with the cards I have: on the NV side I have a GTX 980 and a GTX 770, whereas on the AMD side there is an R9 290X and an R9 285 (the latest Tonga). Of what I have, the 980/290X are high end, and the 770/285 are a class below. The 770 Lightning is also hard to source new, due to its age, but is still a relevant card. If I could have sourced a 960/970, I would have.
Navvie - Wednesday, May 13, 2015 - link
That is one of the things that really puts me off AT these days. The attitude of "this is what I have available, this is what I'll test against." If it's not relevant, don't use it. Get out the AT credit card and buy some new hardware.
milkod2001 - Wednesday, May 13, 2015 - link
If they want to spend more money they will have to make more money first. To make more money they would have to spread a million annoying ads, and this site would quickly turn into Tom's-style super boring "best SSD for the money" articles, with millions of Amazon and Newegg links, ads, and all that crap.
They had better use what they have at their disposal and try to maintain decent articles, reviews, etc.
MrSpadge - Tuesday, May 12, 2015 - link
"Despite the rated memory on the APUs being faster, NPB seems to require more IPC than DRAM speed."Guys.. the Intel chips have better memory controllers since many years. They extract much higher performance and lower latency if you compare them at similar DRAM clock & timings. Lot's of AT benchmarks showed this as well, back when such things were still included (e.g. when a new architecture appears).
rp1367 - Tuesday, May 12, 2015 - link
Here we go again, this is another pro-Intel review. The crooked company that paid AMD a billion-dollar settlement over unfair competition practices is still being supported by lots of rotten people, judging by the comments here, and that is disturbing. I find these people to be the scum of this earth as they continue to support the scammers. Shame on you guys!
By the way, the review is biased because you benchmark using DX11. This is another manipulative benchmark, holding on to the past rather than the future. Read my lips: "WE DON'T WANT a DX11-only review!" DX12 is coming in a few months, so why not use the MS technical preview version so people can have a glimpse, and then redo the benchmark when MS has released its new OS (Windows 10)? If you don't have the full version of DX12 now (as we all know), then don't benchmark it this way, because it is the same as it has been for the past 5 years. You are just wasting your time if your intent is neutral toward general consumers; it is as if you are trying to sway us from the truth.
shadowjk - Friday, May 15, 2015 - link
Speak for yourself. What use to me are benchmarks of APIs I don't have and can't get, and of games I don't have and don't play? With this latest review method change, the last game I had disappeared, so now I can't compare the results with my own system anymore. This makes it harder to decide whether this product is a good upgrade over my current system or not.
Teknobug - Tuesday, May 12, 2015 - link
Looks like the A8 7600 with an R7 240 in dual graphics is the combination here. I have an A8 7600, so now I just need that card.
UtilityMax - Tuesday, May 12, 2015 - link
Am I the only poster who is impressed with the performance of the Kaveri parts in the gaming benchmarks? For one, the Kaveri parts virtually eliminate the need for a $70 discrete GPU. If you were thinking of that kind of low-end GPU, you might as well buy an APU. Next, there is very little difference, or none at all, in average FPS under many settings if you use a $240 dedicated GPU, which means that a $200 GPU is still the bottleneck in a gaming system. Only once the benchmarks are run with very high-end GPUs do we finally see the superiority of the Haswell parts.
Of course, the business, web, compression, and conversion benchmarks are another story. Except for a few special cases, the APUs struggle to catch a Core i3.
meacupla - Wednesday, May 13, 2015 - link
Yeah, these APUs are certainly a lot better on the CPU front than they used to be.
I think the APU's only downfall is that $72 Pentium G3258 and the availability of cheaper (sub-$100) H97 motherboards to overclock it on.
A88X FM2+ boards are around the same price as H97 boards, but that $30 saving from the cheaper CPU can go into a decent cooler for overclocking.
silverblue - Thursday, May 14, 2015 - link
I've often wondered if the G3258 is really the better choice in this price range. Sure, there are titles it cannot play, but workarounds exist in one or two of them to allow it to work. Newer titles may indeed render it obsolete, but there's always the argument about buying a better CPU for the platform later on. Additionally, it overclocks like buggery if you feel that way inclined; how long has it been since we had a CPU that could be overclocked by 50% without costing the earth in power?
The concern I have with upgrading just the CPU is that Intel doesn't stick with its sockets for long, and if you're buying a CPU that will eventually become as useful as a chocolate fireguard when playing modern titles, it'd make more sense to buy its i3 cousins in the first place. AMD is banking on you considering its quad-core APUs for this; however, they have their flaws too - FM2+ has a year left (Carrizo is destined for FM3 along with Zen), they don't overclock as well, power usage is higher even at idle, and the GPU-less derivatives don't appear to be any faster. H81 boards aren't expensive, either, for overclocking that Pentium. Still, you really do need a discrete card with the G3258/Athlons, whereas the APUs and the i3 have enough iGPU grunt to go into an HTPC if you're not gaming heavily.
Decisions, decisions... and right now, I'm wondering how I could even consider AMD. Has anybody built systems around both the Pentium and the Athlon/APU and can share their thoughts?
Tunnah - Tuesday, May 12, 2015 - link
Nice review, covers pretty much everything, and says what I guess everyone was expecting.
One thing I wondered though: why choose the 770 for mid-range when the 960 is a much more logical choice? Price-wise it's £140 here in the UK, so I guess about $200 over the pond, and it is a much more competent card than the 770.
Ian Cutress - Wednesday, May 13, 2015 - link
Because I've had 770s in as part of my test bed for 18 months. The rest of the cards (290X, 980, 285) I've sourced for my 2015 testing, and it's really hard to source GPUs for testing these days - I had to personally purchase the 285, for example, because I felt it was extremely relevant. Unfortunately we don't all work in a big office to pass around hardware!
meacupla - Wednesday, May 13, 2015 - link
If you ever do get a GTX 960 or 750 Ti, it would be nice to see some total system power consumption numbers for an overclocked A8-7650K + R7 240 vs. an i3-4xxx + 750 Ti vs. an overclocked G3258 + 750 Ti.
Drazick - Wednesday, May 13, 2015 - link
Hello,
Could you please add MATLAB to your performance benchmark?
Or at least Python / Julia.
We need data about scientific computation.
Thank You.
UtilityMax - Wednesday, May 13, 2015 - link
"Scientific computation" is a somewhat amorphous term. Moreover, I don't know if there exists a benchmark suite for either Matlab or Python. In any case, Matlab and Python or both used in numerics as fast prototyping tools or for computations where the compute time is inconsequential. If you're running in speed issues with Matlab it's time to start coding in something else, although in from my observations, most people who run into performance issues with Matlab don't know how to optimize Matlab code for speed. Most don't know how to code at all.freekier93 - Wednesday, May 13, 2015 - link
You really don't know what you're talking about... Matlab is SO much more than fast prototyping software. I have quite a few programs that would be good speed tests, one of which is a full non-linear aircraft dynamics Simulink simulation. A 5-minute simulation could easily take 2 minutes of compute time. Anything that starts getting into serious differential equations takes compute time.
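As a rough illustration of the point that serious differential equations take compute time, the Python sketch below times a fixed-step RK4 integration of a damped pendulum. It is a deliberately tiny toy, nothing like a full nonlinear Simulink aircraft model, but it scales the same way: more states and smaller steps mean more compute.

```python
import time
import numpy as np

def pendulum_rhs(state, g=9.81, length=1.0, damping=0.05):
    """Damped pendulum right-hand side; state = [angle, angular velocity]."""
    theta, omega = state
    return np.array([omega, -damping * omega - (g / length) * np.sin(theta)])

def integrate(steps=100_000, dt=1e-4):
    """Classic fixed-step RK4; the step count is chosen to take measurable CPU time."""
    state = np.array([1.0, 0.0])
    for _ in range(steps):
        k1 = pendulum_rhs(state)
        k2 = pendulum_rhs(state + 0.5 * dt * k1)
        k3 = pendulum_rhs(state + 0.5 * dt * k2)
        k4 = pendulum_rhs(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state

if __name__ == "__main__":
    start = time.perf_counter()
    final_state = integrate()
    elapsed = time.perf_counter() - start
    print(f"Final state: {final_state}, elapsed: {elapsed:.2f} s")
```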
Ian Cutress - Wednesday, May 13, 2015 - link
3DPM is a Brownian motion-based benchmark, and Photoscan does interesting 2D-to-3D correlation projections. The Linux benchmarks also include NAMD/NPB, both of which are hardcore scientific calculations.
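For readers unfamiliar with that kind of workload, the following simplified NumPy sketch shows the general shape of a Brownian motion simulation. It is only an illustration, not the actual 3DPM code: large batches of random displacements are accumulated over many steps.

```python
import numpy as np

def brownian_msd(n_particles=100_000, n_steps=1_000, dt=1e-3, seed=42):
    """Accumulate Gaussian random steps for a cloud of 3D particles and
    return the mean squared displacement (MSD) after n_steps."""
    rng = np.random.default_rng(seed)
    positions = np.zeros((n_particles, 3))
    for _ in range(n_steps):
        # Each step is an independent Gaussian displacement with variance dt per axis
        positions += rng.normal(0.0, np.sqrt(dt), size=(n_particles, 3))
    return np.mean(np.sum(positions**2, axis=1)), 3 * n_steps * dt

if __name__ == "__main__":
    msd, expected = brownian_msd()
    # For free diffusion the MSD grows linearly with time: ~3 * n_steps * dt
    print(f"Measured MSD: {msd:.3f}, theoretical: {expected:.3f}")
```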
Smile286 - Wednesday, May 13, 2015 - link
Author, do you know about the existence of the 'Haswell Refresh' CPU models? They are basically the same 'Haswell' with +100/+200/+300 MHz added to the x86 cores' speed. Why not use them in the tests? It's not like it's 2013 right now, when the i3-4330 was released. FYI, the i3-4370 has the same $138 MSRP (tray) as the i3-4330, but is 300 MHz faster. Same story with the i3-4130 and i3-4170: +300 MHz for the i3-4170, basically for free.
You should put them in the tests rather than the old 'Haswell' Core i3 models. Thanks.
zodiacfml - Wednesday, May 13, 2015 - link
How could a next-generation API improve AMD's APU performance if it already has decent, if not very good, performance in integrated 3D graphics (beating the lowest-end discrete cards)?
AMD still needs better CPU performance, as it shows poorer value than an Intel part of near or similar price (without considering the GPU).
The occasional-gaming niche is pretty much nil too, as that kind of gaming can be done on a notebook, tablet, or smartphone.
This remains valuable for people with regular gaming in mind but an absolutely limited budget. I could see myself getting this for getting back into Diablo 3 after a day at work, but by saving a bit more, I might as well get a decent laptop.
TrackSmart - Wednesday, May 13, 2015 - link
This comment is for Ian Cutress.
First, thank you for the review, which was rich with performance figures and information. That said, something seems missing in the Conclusion. To be precise, the article doesn't really have a clear conclusion or recommendation, which is what many of us come here for.
It's nice to hear about your cousin-in-law's good experiences, but the conclusion doesn't clearly answer the key question I think many readers might have: Where does this product fit in the world of options to consider when buying a new processor? Is it a good value in its price range? Should it be ignored unless you plan to use the integrated graphics for gaming? Or does it offer enough bang-for-the-buck to be a viable alternative to Intel's options for general non-gaming usage, especially if motherboard costs are considered? Should we consider AMD again, if we are in a particular niche of price and desired features?
Basically, after all of your time with this chip and with your broader knowledge of the market offerings, what is your expert interpretation of the merits or demerits of considering this processor or its closely related AMD peers?
Nfarce - Thursday, May 14, 2015 - link
" Ultimately AMD likes to promote that for a similarly priced Intel+NVIDIA solution, a user can enable dual graphics with an APU+R7 discrete card for better performance."I have *long* wondered why Intel and Nvidia don't get together and figure out a way to pair up the on-board graphics power of their CPUs with a discrete Nvidia GPU. It just seems to me such a waste for those of us who build our rigs for discrete video cards and just disable the on-board graphics of the CPU. Game developers could code their games based on this as well for better performance. Right now game developer Slightly Mad Studios claims their Project Cars racing simulation draws PhysX from the CPU and not a dedicated GPU. However, I have yet to find that definitively true based on benchmarks...I see no difference in performance between moving PhysX resources to my GPUs (970 SLI) or CPU (4690K 4.7GHz) in the Nvidia control panel in that game.
V900 - Thursday, May 14, 2015 - link
Something similar to what you're describing is coming in DX12... But the main reason they haven't is that unless you're one of the few people who got an AMD APU because your total CPU+GPU budget is around $100, it doesn't make any sense.
First of all, the performance you get from an Intel iGPU in a desktop system will be minimal compared to even a $200-300 Nvidia card. And secondly, if you crank up the iGPU on an Intel CPU, it may take away some of the CPU's performance/overhead.
If we're talking about a laptop, taking watts away from the CPU and negatively impacting overall battery life are even bigger drawbacks.
Nfarce - Thursday, May 14, 2015 - link
"But the main reason they haven't is because unless youre one of the few people who got an AMD APU because your total CPU+GPU budget is around 100$ it doesn't make any sense."Did you even read the hardware I have? Further, reading benchmarks from the built in 4600 graphics of i3/i5/i7 CPUs shows me that it is a wasted resource. And regarding impact on CPU performance, considering that higher resolutions (1440p and 4K) and higher quality/AA settings are more dependent on GPU performance than CPU performance, the theory that utilizing onboard CPU graphics with a dedicated GPU would decrease overall performance is debatable. I see little gains in my highly overclocked 4690K running at 4.7GHz and running at the stock 3.9GHz turbo frequency in most games.
All we have to go on currently is 1) Intel HD 4600 performance alone in games, and 2) CPU performance demands at higher resolutions on games with dedicated cards.
UtilityMax - Friday, May 15, 2015 - link
I am guessing that they didn't get together because dual graphics is very difficult to make work right. AMD is putting effectively the same type of GPU cores on its discrete GPUs and integrated APUs, and it still took them a while to make it work at all.
V900 - Thursday, May 14, 2015 - link
I guess one thing we all learned today, besides the fact that AMD's APUs still kinda blow, is that there is a handful of people who are devoted enough to their favorite processor manufacturer to seriously believe that:
A: Intel is some kind of evil and corrupt empire a la Star Wars.
B: They're powerful enough to bribe/otherwise silence "da twooth" among all of Anandtech and most of the industry.
C: 95% of the tech press is corrupt enough to gladly do their bidding.
D: Mantle was an API hardcoded by Jesus Christ himself in assembler language. It's so powerful that if it got widespread, no one would need to buy a new CPU or GPU the rest of this decade. Which is why "they" forced
V900 - Thursday, May 14, 2015 - link
Which is why "they" forced AMD to cancel Mantle. Then Microsoft totally 110% copied it and renamed it "DX12".
Obviously all of the above is 100% logical, makes total sense, and is much more likely than AMD releasing shoddy CPUs for the last decade and the press acknowledging that.
wingless - Thursday, May 14, 2015 - link
AMD DOMINATION!!!!! If only the charts looked like that with discrete graphics as well...
Vayra - Friday, May 15, 2015 - link
Still really can't see a scenario where the APU would be the best choice. Well, there may be one: for those with a very tight budget who wish to play games on a PC regardless. But this would mean that AMD has designed and iterated on a product that only finds its market in the least interesting group of consumers: those that want everything for nothing... Not really where you want to be.
UtilityMax - Friday, May 15, 2015 - link
Well, right now, arguably, if one has $500 or less for a gaming PC build, it would be better to buy a PlayStation 4. High-end builds are where the money is in the enthusiast gaming market.
V900 - Friday, May 15, 2015 - link
Nah, you can get a really nice gaming PC for even just $500... Sure, it won't have an octa-core CPU, probably not even a quad-core, but the performance and graphics will be hard to tell apart from a PS4, especially once DX12 games become common. Yeah, you might get a few more FPS or a few more details in some games on a PS4.
But just set aside the $10-30 you save every time you buy a game for PC vs. PS4 and you should be able to upgrade your computer in a year or less.
V900 - Friday, May 15, 2015 - link
For those on a very tight budget, who wish to play PC games, AND who already have a motherboard that uses the same socket as these APUs, I would add.
Zen is going to require a new socket, so you're kinda stuck in regard to upgrades from this.
And if you have to go out and get a new motherboard as well, then it really only makes sense to go for Intel. Yup, Skylake is also going to need a new socket, but if you go the Intel route, at least you can upgrade from a Pentium to a Haswell i3/i5/i7 down the road, so you have the option of a lot more performance.
ES_Revenge - Saturday, May 16, 2015 - link
I don't really get the point of this CPU at all. It comes out now, in May 2015? And it's really nothing new, yet AT bothered to review it? It's a few bucks more than an A8-7600, but it has a higher TDP and is otherwise nearly exactly the same. Sure, it's unlocked, but it doesn't overclock well anyway. You might as well just save the few bucks and the 30W of power consumption and get the 7600. OTOH, if you want something better, you'd just go for the $135-140 A10 CPUs with 512 SPs. The 7650K seems totally pointless, especially at this point in 2015 with Skylake around the corner.
The Dual Graphics scores look pretty decent (other than GTA V, which is clearly not working with it), but there's no mention at all in this review of frame times? I mean, have all the frame time issues been solved now (particularly with Dual Graphics, which IIRC was the worst for stuttering) so that we don't need to even mention it anymore? That's great if that's the case, but the review doesn't even seem to touch on it.
1920.1080p.1280.720p - Sunday, May 17, 2015 - link
For the love of everything, test APUs with casual games. Someone who wants to play something like GTA V is likely going to have a better system. Meanwhile, games like LoL, Dota 2, The Sims 4, etc. have tons of players who don't have great systems and wouldn't like to spend much on them either. Test games that these products are actually geared towards. I appreciate the inclusion of what the system could become with the addition of differing levels of GPU horsepower, but you are still missing the mark here a bit. Every review seems to do this with APUs and it drives me nuts.
johnxxx - Monday, May 18, 2015 - link
Hello, what's the best solution for you? (internet, mail, office, games, listening to music, and watching movies)
APU + R7 for dual graphics
APU + Nvidia card
APU only (overclocked with a big fan)
X4 860K + R7
X4 860K + Nvidia card
Pentium G3xxx + R7
Pentium G3xxx + Nvidia card
i3 4150 + R7
i3 + Nvidia card
Thank you very much.
ES_Revenge - Saturday, May 30, 2015 - link
A little late, but it mainly depends on what R7 you're talking about. If you're talking about an R7 240, then yeah, it's better to do dual graphics, 'cause a 240 on its own is not going to do much for gaming. If you're talking about a single R7 260X or 265, then that's a different story (and a much better idea).
For gaming, a quad-core CPU really helps with modern games, BUT a dual-core with HT (like an i3) is quite good too. Dual-core only isn't the greatest of ideas for gaming, TBH. So ditch the Pentiums and dual-core APUs.
Out of your choices I'd probably go with the i3 4150 and an R7 260X, R7 265/HD 7850, or GTX 750 Ti. Unless you already have some parts (like the motherboard), this will be the best of your choices for gaming (everything else you listed is no problem for any of those CPUs).
The i3 4150 benefits from newer features in Haswell and has HT. Compared to the X4 860K, it may still lose out in some [limited] things which really make use of four physical cores, but not by much, and probably not in anything you'll be doing anyway. The Haswell i3 also uses very little power, so it's good in a small/compact build where you want less heat/noise and can't easily use a large air cooler (or just don't want to spend a lot on a cooler).
If you're talking about an R7 240 though, then go with an A8-7600 and run Dual Graphics. It might be cheaper but it won't be better than the i3 and higher-end R7 card.
CVZalez - Wednesday, May 11, 2016 - link
It's time for these so-called benchmarks to make the scripts and the processed data available to the public. For example, is OpenCL activated in the AgiSoft PhotoScan preferences? If it's not, only the CPU will be used, and that makes a huge difference. We all know what AMD is good at with those APUs: not the CPU, but the GPU and multithreading. I find it hard to believe that an Intel i3 had such better results.
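On the OpenCL point, a quick application-independent sanity check is to enumerate the platforms and devices the runtime exposes, for example with the pyopencl package. This is a minimal sketch assuming pyopencl and a vendor OpenCL runtime are installed; it is not tied to PhotoScan itself, but if no GPU device shows up here, enabling OpenCL in PhotoScan's preferences cannot help either.

```python
import pyopencl as cl

def list_opencl_devices():
    """Print every OpenCL platform and device the runtime can see."""
    for platform in cl.get_platforms():
        print(f"Platform: {platform.name}")
        for device in platform.get_devices():
            kind = cl.device_type.to_string(device.type)
            mem_gb = device.global_mem_size / 1e9
            print(f"  {kind:>4}  {device.name}  ({mem_gb:.1f} GB global memory)")

if __name__ == "__main__":
    list_opencl_devices()
```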