That the competition is perfecting its 14nm production, with second-generation products already out of the gate, while AMD is still stuck at 28nm tells us everything we need to know about these chips.
Where are the GDDR5/DDR4 versions with GPUs fast enough to actually run games? Or innovative mixed GDDR5/DDR3 designs, or versions with an eDRAM/SRAM L4 cache?
Where is my 250W TDP, highly clocked 4-core Steamroller with an integrated 7970-class GPU, soldered to the motherboard with enough eDRAM/SRAM for a framebuffer, 4GB of GDDR5 plus two DDR3 sockets? Shipped with an AIO water cooler? With clever driver programming to benefit from all the bandwidth and the shared memory space via the low-level Mantle API? The ultimate fusion of parallel and serial computing power that became theoretically possible when AMD bought ATI?
Instead we have lousy bottom-to-middle-class CPUs with barely "good enough for The Sims" GPUs that don't spark anyone's interest. No deal, AMD. No deal.
As is usually the case, the low-wattage mobile variants are likely to be the most interesting. To my knowledge, Intel does not have Atom on a 14nm process, which is what those would be competing with. Insisting on comparing these with Intel's Core line, which costs significantly more, is just silly.
Of course, the issue is that no matter how good AMD's chips are for mobile designs, they will continue to be ignored by OEMs and won't appear in anything but bottom-of-the-barrel designs. I wish something could be done about this, as I'd really love to get a tablet with an AMD SoC.
Part of earning trust is being on time, delivering on promises, and providing an enticing option. With respect to CPUs/APUs, AMD has failed on all counts. Kaveri was delayed, and so were Kabini and Temash. Worse yet, Kaveri showed up with a fifth of its performance missing. High-speed dual-channel memory is required for the chips to shine, yet that also increases BoM and limits form factors.
Personally, I can't believe they still haven't been able to produce an APU that can drive 1080p at high settings. Such a chip would sell like hot cakes, yet the company continually falters. What's the point of all of that GPU silicon if you can't utilize it? Do the CPU cores really need that much dark silicon? :P
Yet their ramp for mobile Kaveris has been incredibly slow, and SFF NUC-like Kaveri and Beema/Mullins systems are nearly nonexistent.
I understand the point. Speaking for myself, I'd love to buy an NUC system with an AMD APU inside that can drive 1080p. For that to happen, AMD still has to decrease power consumption, increase both CPU and GPU performance, cure the memory bottleneck, and actually get some of these products made so we're not chatting about hypotheticals. 2400MHz RAM on SO-DIMMs isn't going to happen, and the ~1.6V required to get there is too high. Even at 95W TDP, Kaveri throttles the CPU down to ~3GHz when the GPU is under load.
I really don't think Carrizo is going to fix any of these issues. It might look a little better, but it's clear that nearly all of AMD's resources have been poured into the 2016 Zen and K12 architectures.
SFF NUC-like systems, yes, I've been looking for them too. Surely this could be a segment of the market where APUs would fit. In the end I put together a 45W A8-7600 Kaveri build in an M-ITX form factor, but I'm surprised the kit builders haven't done so. I expect Carrizo to improve 10% on Kaveri, but not much more. The Zen project is, as you say, where it's at.
It doesn't make much sense to bash AMD over 'being on time' and 'delivering on promises' given what a complete dog's dinner Broadwell and Intel's 14nm process have been. Remember when desktop Broadwell was supposed to come out in January 2014?
Intel isn't the one attempting to persuade OEMs to use their chips.
And I actually agree with you. Intel hasn't made an interesting processor since Sandy Bridge. Their Broadwell yields are still poor. But despite Intel being late and offering 5-10% improvements YoY, AMD still hasn't been able to claw back any ground.
AMD did have the opportunity to claw back some performance ground. Intel has been asleep at the helm with terrible yields. That said, Intel is well known for anti-competitive practices. They were also charged and had to pay AMD damages. They paid up more than once. http://www.theverge.com/2014/6/12/5803442/intel-ne...
You keep mentioning 1080p. For games or video? If for video, then current and older APUs can already do that on AMD's side. For games, that's not feasible given the price points and power envelopes that APUs are aimed at.
Nvidia's Maxwell-based 750 Ti drives 1080p at high settings really well, and it does that at a sub-75W TDP. Bear in mind that's for the full card, including the PCB, VRAM, etc. On a single die, redundancies would be eliminated and the realistic TDP would be quite a bit lower.
AMD producing an APU that can do the same is certainly possible if not for a few bottlenecks:
1. Their CPU architecture is inefficient. A new uarch isn't coming until 2016, and I don't expect any significant improvements until then.
2. Their current GPU architecture is lackluster in perf-per-watt, but that should (hopefully) change.
3. AMD still hasn't managed to alleviate the memory bottleneck. Adding any sort of GPU power to an APU is fruitless until that is completely addressed (rough numbers in the sketch below).
To clarify, I wasn't mentioning video. You can buy an ARM SoC for a few bucks that can drive 1080p video just fine. 1080p video should be a given at this point.
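To put rough numbers on the bandwidth point, here's a back-of-the-envelope sketch in C. The figures are the commonly published ones (dual-channel DDR3-2133, the 750 Ti's 5400MT/s GDDR5 on a 128-bit bus, the PS4's 5500MT/s GDDR5 on a 256-bit bus), not anything from AMD's slides:

```c
/* Peak memory bandwidth: GB/s = transfer rate (MT/s) * bus width (bytes). */
#include <stdio.h>

static double peak_gbs(double mts, int bus_bits) {
    return mts * 1e6 * (bus_bits / 8.0) / 1e9; /* bytes/s -> GB/s */
}

int main(void) {
    printf("DDR3-2133 dual channel: %5.1f GB/s\n", peak_gbs(2133, 128));
    printf("750 Ti GDDR5 (128-bit): %5.1f GB/s\n", peak_gbs(5400, 128));
    printf("PS4 GDDR5 (256-bit):    %5.1f GB/s\n", peak_gbs(5500, 256));
    return 0;
}
```

That's roughly 34 GB/s for the APU, shared between the CPU cores and the GPU, against 86 GB/s dedicated to the 750 Ti's GPU alone, which is why piling more CUs onto an APU doesn't pay off until the memory subsystem changes.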
Well, if Intel can't make an APU that runs 1080p on high, why would you expect AMD to at this point? Fact is, they could make one, but not at the cost needed. Furthermore, gamers who want to run all games on Ultra settings at 1080p are going to buy a dedicated GPU, not an APU.
Intel can, now that it has a few years of graphics engineering under its belt. Broadwell is increasing throughput per core by 40% while also including 20% more cores. Intel has never cared about gaming; it's been aiming at increasing compute power to maintain absolute dominance over IBM and Sun in supercomputer CPUs and to encourage even more OEMs to switch. Beyond that, Intel has finally looked at knocking Nvidia clean out of the mobile GPU market, and AMD is waking up a bit more in laptop integrated graphics.
The main problem with Iris Pro 5200 is 10 EUs (cores) per sub-slice (a group connected by a data bus). That created a big throughput bottleneck where the bus couldn't keep the cores fed. With that knocked out, we can see Intel start to cook, especially if they create a good tessellation engine.
Even though AMD's quad-core processors have 800 million more transistors and 2.x million more transistors per sq. mm, Intel still has the performance crown. If you think Intel won't make an APU by the end of 2016 capable of doing 1080p high/ultra at 50+ fps, you're as naive as IBM was when it said Intel would never compete in the server world. Look at how that turned out for IBM; all they have left are mainframes for banks.
At the time, AMD went through a complete management overhaul. Hopefully all gets rectified in 2015. And yes I have no problems driving 1080p on MAX Settings with my Kaveri APU.
Well, we are comparing them to Core because the last Kaveri chips were priced like Core. I think $155 is the cheapest unit; most of the 384- and 512-shader SKUs were $180+. The same price as the latest and greatest Core i5.
Of course AMD's chips are ignored by OEMs, since they are utter crap compared to Intel, Qualcomm, and Nvidia chips. Why would an OEM pick something that has far worse perf/W and idle power characteristics than the competition?
AMD has two product lines that are really competitive: APUs with high-end graphics (the console chips in the Xbox One and PS4) and desktop-class GPUs. Intel can't compete on GPUs, Nvidia doesn't have fast x86 CPUs, and Qualcomm doesn't have high-TDP products.
For some reason AMD has not capitalised on this opportunity; instead it has tried to compete in markets where it doesn't have competitive products. Not to mention that it hasn't had the R&D resources to catch the competition.
They can't get any more consoles since I think they got them all :) As for desktop class GPUs, they're in heated battle with Nvidia as you know. I'm still not sure why I hear they have smallish market share here though.
There are still hundreds of millions of "consoles" AMD could get the drop on, *if* they play their cards right. (Geddit?)
And that's the mobile sector. If AMD can deliver good enough performance at low power and the right price, they may end up being an attractive proposition for Nintendo's and Sony's handheld consoles. (I bet they regret selling their mobile Radeon business, a.k.a. Adreno.) Obviously, it won't be high-profit monolithic chips, but AMD can do with the extra business.
"Where is my 250W TDP highly clocked 4-core Steamroller with integrated 7970 class GPU, soldered to motherboard with enough edram / sram for framebuffer, 4GB of GDDR5 + two DDR3 sockets? Shipped with AIO water cooler? With clever driver programming to benefit from all the bandwidth and shared memory space with low level mantle api? The ultimate fusion of paraller- and serial computing power that became theoretically possible when AMD bought ATI?"
Are you insane 0_0? Honestly, have you been checked?
You are kind of missing the point. Sure, for absolute best performance, an Intel CPU with an Nvidia card is best for most gaming. But do you need anything better than an i7-4770? That is Intel's dilemma.
Skylake, Intel's alleged answer to AMD's APUs that can play games, has been pushed back; Skylake is 2016 now. AMD's problem for years has been its TDP situation. With that little problem resolved, it can go back to simply building on the number of cores it puts on its dies, since they will also be smaller. So Carrizo is 2015 and Skylake is 2016. Do you think Intel will be able to outperform AMD's integrated solution in gaming, with zero help from Nvidia, in 2016? Because that's all the other companies care about. Steam has a slew of publishers who have started stating in their minimum requirements that Intel integrated graphics are not supported.
If, for instance, better than an i7-4770 isn't needed for years, especially because the new consoles don't have massive CPU power and AAA developers for the first time EVER can port things faster because the hardware uses the x86 architecture on all platforms, then what will Intel have in three years? Cheap i7s, sure, but they want top-end product to move.
That's called denial, lol... I guess you could call it "hope" as well. The "brick wall" won't happen for at least 6-10 years, and to be honest, I don't think AMD can last that long.
What you're looking for is a PS4 chip for sale to everyone. Not gonna happen anytime soon. They are very limited by TSMC's massive 20nm delay, just like everyone else. This is a +1 update, designed to improve perf per watt.
You... wow. Literally. What? What in the hell are you talking about? Literally everyone but Intel is stuck on 28nm; Apple snapped up all the early-run 20nm capacity because they're Apple. This is also just a brief APU announcement, not a lineup and detailing of every product they hope to deliver next year.
This is why AnandTech's commenting system sucks. If "crazy" is the first post, it always sucks up a disproportionate amount of the conversation.
The competition is perfecting its 14nm production? Not really. 14nm looks quite disastrous for them at the moment: postponed several quarters, very low yields, lower-than-expected performance. That's why they have launched only Core M so far. And even that one sucks considering the expectations and availability.
And no, 28nm doesn't tell us anything about Carrizo. There are very interesting improvements: Excavator, a new graphics architecture, HDL (high-density libraries), the first APU with full HSA 1.0 support, the first true SoC based on the Bulldozer line, and maybe even HBM as an option. These are very attractive chips for most OEMs and consumers. OTOH, no one wants a 250W TDP APU.
You cannot compare Intel with AMD. Intel is huge, with massive resources, even though AMD has been a lot more innovative over the last 10+ years. AMD was always slower to new process nodes. Intel caught up and surpassed them when AMD fumbled Bulldozer. The very hype around the Bulldozer design pushed Intel to develop what we have today.
Both obviously need each other, to keep each other on their toes.
After the Kaveri-based FX-7600P appeared in exactly zero laptops, I wonder if these chips will ever appear in anything, or if AMD will just completely give up. Or release a laptop of their own, to show OEMs how to do it right (similar to Google's Nexus line)?
AMD made a fatal mistake with Kaveri. It released a desktop variant first, followed two quarters later by a laptop/mobile variant. By that time, Intel had cleaned AMD's mobile clock with subsidies and good products.
So Bay Trail is worse than Beema, but it is smaller and cooler-running, and it comes with good "contra-revenue" incentives that make an OEM/ODM salivate.
Meanwhile, everything Kaveri was designed for, like configurable TDP and good gaming performance within a power envelope, was aimed at the mobile environment rather than the desktop.
Even today, if you go to www.amd.com, go to "Shop AMD Desktops", and choose the big OEMs like HP, Dell, Lenovo, and Acer, you cannot find a single Kaveri desktop. They still sell Richland desktops, but no Kaveri ones.
You have to go to third-tier OEMs like CyberPower and iBuyPower for Kaveri APU-based desktops. When I look for Kaveri notebooks, I see only this: an HP notebook with an average configuration sold for a humongous price from a PCC. http://shop.amd.com/en-us/search/K0L62UPABA
AMD is now in a very bad situation. The OEMs that buy the bulk of its products are not interested in its latest and greatest; if they are interested at all, it's in parts one (Richland) or two (AMD E2) generations behind the current one.
This! I'd love to buy a decent laptop with a recent AMD APU + AMD dGPU, but there is simply nothing on the market. I run Linux, and I don't want to deal with Nvidia's binary drivers. An all-AMD solution has working open-source drivers and decent GPU performance. Intel has good open-source drivers and crappy GPU performance. Intel CPU + AMD dGPU would be another option, but I prefer all-AMD.
The only things I can find are MSI gaming laptops, or some low-end crap from HP with a weird arrow-key layout that I find highly annoying.
The new x86 core "Zen" is supposed to be coming in 2016. This is a ground-up design. They know Bulldozer is a dead end; Rory Read admitted as much, not long before being let go.
I hear ya, man. Bulldozer was a failure from day one. The fake cores suck: they say they have 8-core chips when really it's 4 full cores and then 4 more with only half the parts of a core. They keep stretching it with Piledriver, Steamroller, and Excavator, and from the outside each iteration seems to bring no meaningful performance gains. After Bulldozer flopped they should have immediately started working on a totally new architecture instead of wasting time iterating on a hugely failed one.
If it weren't for AMD scoring the video game consoles, they would be massively bleeding money. Now that both consoles have sold over 10 million units, though, expect console sales to slow dramatically after this holiday, as most people who want one already have one.
As I have said before, the GPU part of the PS4 APU is much more powerful than in any APU they offer for PC building; the 8 Jaguar cores are really weak, though. If AMD just released an APU with the full PS4 GPU side enabled, which has the same number of CUs as a desktop 7870, and instead of crappy Jaguar cores used 4 Excavator cores that can at least do some work, they could have a real winner of a chip.
They could even take it a step further and make an even bigger GPU than the PS4 APU's. They already made a full 7870 inside an APU, so maybe they could make a full or close-to-full R9 285 inside an APU plus the 4 normal-power Excavator cores. Sure, the die would be really big, but considering they have been on 28nm so long, the defect rate should be incredibly low by now, so going with a huge die shouldn't be too risky. Making a bold product like that is exactly what AMD needs to get noticed again. SFF desktops and HTPCs are really hot right now; people would be lining up to put a GPU-powerhouse APU like that in a system like that. Then they could scale it down for laptop use. I know a lot of people who would jump on an R9 285, or close to it, inside an APU (just at lower clocks than the discrete GPU).
If AMD can put the 20-CU count of a full 7870 inside the PS4 APU (just at lower clocks than the desktop GPU), then there is no reason they can't put a full R9 285 (again at lower clocks than the desktop GPU) inside a PC APU. Yes, with a large die there are more chances for defects, but on an incredibly mature 28nm process, like I said, defects would be low, and they could just offer 2 or 3 different chips with the defective cores fused off, sold as lower-end parts. With that powerful a GPU inside an APU, people would pay a much higher price for it, since they aren't buying a discrete GPU anymore.
If AMD continues with these crappy releases over and over, they will start losing money once console sales slow down. They will continue to lose money every quarter until they are bankrupt and gone. They need to take a risk, put a huge GPU inside an APU, and blow everyone's minds.
Which would be incompatible with current DDR3/4 memory architectures on x86. Unless the motherboard offered a special pool of fast memory for the gpu it would be pointless to put that many CUs on the APU as they would be utterly bandwidth starved.
Each Bulldozer module has two integer cores and one floating-point unit. That puts a module somewhere between two actual cores and one Intel core with Hyper-Threading in two-threads-per-core performance. AMD nevertheless markets each module as 2 cores, so what AMD claims is an 8-core chip actually has only 4 full cores. This is somewhat disingenuous, which is why the OP calls them "fake cores".
I don't think it's AMD's biggest hurdle, however. That would be Bulldozer's (and its derivatives') horrible single-thread performance. It's really disappointing after the years AMD spent ruling x86 performance (during the Athlon XP and Athlon 64 generations). All the Fusion chips have been OK budget chips, but they really don't have anything high-end anymore.
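You can actually see the shared FPU with a toy scaling probe like the sketch below: time an integer dependency chain and an FP dependency chain at 4 and then 8 threads. On a 4-module "8-core" chip, the integer run should keep gaining from 4 to 8 threads noticeably better than the FP run. This is only a sketch; the exact numbers depend on the chip, compiler, and thread placement.

```c
/* Toy probe for Bulldozer-style modules: integer work should scale from
 * 4 to 8 threads better than FP work, because each module's two integer
 * cores share one FPU.  Build: gcc -O2 -fopenmp fpu_probe.c */
#include <stdio.h>
#include <omp.h>

#define ITERS 200000000L

static volatile long   sink_i;  /* keep the loops from being optimized away */
static volatile double sink_f;

static double run(int threads, int use_fp) {
    double t0 = omp_get_wtime();
    #pragma omp parallel num_threads(threads)
    {
        long   acc_i = 1;
        double acc_f = 1.0;
        for (long i = 0; i < ITERS; i++) {
            if (use_fp) acc_f = acc_f * 1.0000001 + 0.5; /* FP dependency chain */
            else        acc_i = acc_i * 3 + 1;           /* integer dependency chain */
        }
        sink_i = acc_i;
        sink_f = acc_f;
    }
    return omp_get_wtime() - t0;
}

int main(void) {
    for (int t = 4; t <= 8; t += 4)
        printf("%d threads: int %.2fs  fp %.2fs\n", t, run(t, 0), run(t, 1));
    return 0;
}
```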
Well, I think the term 'fake' is a bit much. It makes it sound like there are cores sitting inside the die doing nothing more than warming the chip.
I think AMD was betting that developers would use the GPU for most floating-point computation. That's why they tried to reduce the space the CPU cores occupy on the die and dedicate a large area to the GPU, I think. That day hasn't come, and may never come; it is still pretty hard to utilize the GPU right now. HSA might improve the situation in the future, but I don't know if it can convince developers to take GPU compute seriously.
It might not be wrong to say that AMD has put too much faith in the GPU. It does pay off for AMD in the end, as APUs can serve the lower-end gaming segment just fine, but this might be just a side effect and not what AMD expected at first.
I agree that Bulldozer suffers in single-thread performance. I hope they can do a better job next time. Having Intel dominate the market would not be good for us users.
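For what it's worth, this is roughly what "using the GPU" asks of a developer today, sketched with OpenMP 4.x target offload as a stand-in (HSA has its own runtime and queues, but the shape of the change is similar). It assumes a compiler built with GPU offload support; otherwise the region just runs on the host, and the array names are purely illustrative:

```c
/* Illustration of the "let the GPU do the FP work" idea via OpenMP 4.x
 * target offload.  On an HSA APU the map clauses would ideally be free,
 * since the CPU and GPU share the same memory. */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)

int main(void) {
    float *a = malloc(N * sizeof *a);
    float *b = malloc(N * sizeof *b);
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f; }

    /* Map the arrays to the device and run the FP loop there. */
    #pragma omp target teams distribute parallel for \
            map(tofrom: a[0:N]) map(to: b[0:N])
    for (int i = 0; i < N; i++)
        a[i] = a[i] * b[i] + 1.0f;

    printf("a[42] = %f\n", a[42]); /* expect 85.0 */
    free(a); free(b);
    return 0;
}
```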
The issue is you have two cores within a module and the work cannot simply be split between the two cores... and a single core is too weak on its own. Also, you have the potential power issue whereby the whole module is active whilst only one of the cores is doing any actual work.
It's only when you throw enough work at the CPU that it starts to perform reasonably well.
I think the idea had merit for some server applications and should have been marketed/designed as a specialized server/professional chip. It's not an especially good design for most consumers and it was a mistake to continue pursuing it for so long.
The yields would be absolutely horrid, man. Think of the binning: you've got 8 CPU cores and 1024 GPU cores. Maybe one CPU core is bad; they can't just disable two cores like Intel does and sell it as a two-core-smaller chip with all that graphics. Yes, 28nm is mature, but there is no escaping yield problems on a chip that size. I would say 1 out of 5 dies would be good.
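For reference, you can sanity-check guesses like that with the classic Poisson yield model, Y = e^(-D*A). The defect density below is an assumption picked as plausible for a mature process, not published data, and the die areas are the commonly cited ballparks:

```c
/* Perfect-die yield under the Poisson model Y = exp(-D * A).
 * D = 0.1 defects/cm^2 is an assumed figure for a mature node.
 * Build: gcc yield.c -lm */
#include <stdio.h>
#include <math.h>

int main(void) {
    double d0 = 0.1;  /* assumed defects per cm^2 */
    double area_cm2[] = { 2.45, 3.50, 5.00 };
    const char *name[] = { "Kaveri-size (~245 mm^2)",
                           "PS4-APU-size (~350 mm^2)",
                           "hypothetical 500 mm^2 APU" };
    for (int i = 0; i < 3; i++)
        printf("%-26s perfect-die yield ~%.0f%%\n",
               name[i], 100.0 * exp(-d0 * area_cm2[i]));
    return 0;
}
```

With those (invented) numbers the model comes out far kinder than 1-in-5: roughly 78%, 70%, and 61% of dies with zero random defects, and most of the "bad" dies carrying only a single defect, which is exactly why fusing off a core or a few CUs into cut-down SKUs usually works.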
It's a long shot, but I think AMD's goal with Carrizo may be to get a design win with the upcoming Retina MacBook Air. Apple has shown quite a bit of willingness to go with AMD GPUs (on the Mac Pro and the Retina iMac), and the MacBook Air has always had a 15W TDP chip. Big-core Carrizo is said to start at 15W according to AMD's slides, and I don't think that is just a coincidence. Intel has better CPU performance, but their GPUs at that power level are pretty terrible, not good enough for a system with a Retina display. (Iris/Iris Pro is adequate, but much more power-hungry, and wouldn't work too well in a MacBook Air.)
While this makes sense, let us remember that Apple will not use a part which the supplier can't deliver on demand. AMD has a much smaller production and supply chain than Intel.
This all sounds good, but only if AMD's CPU side (x86-64) weren't so sluggish in single-threaded performance compared even to Sandy Bridge, never mind Ivy Bridge, Haswell, and what comes next from the blue team. Look: Piledriver- and Steamroller-based APUs are around 70% slower clock-for-clock than Haswell in Cinebench R15, which Mac users like to post as a benchmark result on Mac info resources. So, I suppose, it won't happen.
I meant, specifically, the Cinebench CPU score, especially the single-threaded CPU score, which is a good indicator of CPU snappiness when running simple serial software.
The Cinebench single-threaded score is in no way an indicator of CPU snappiness. Where do you people even come up with this stuff? It just shows what the CPU is capable of in single-threaded workloads (which are starting to fade away anyway; more and more workloads are going multithreaded, where AMD can still keep up with Piledriver, even on 32nm, with slightly higher electricity bills).
I read that the Retina laptops are/were bound by single threaded performance for their animation quality, which was why they dropped frames a lot. If that is still the case, I don't see an AMD future in them.
Right now we have a fully open-source HSA stack on Linux. This further indicates that AMD is serious about getting a piece of the Android pie (and the HPC pie, which is Linux-based).
Of course, Catalyst development on Windows and the userspace component on Linux happen behind closed doors, so we don't know where HSA stands there. (Under Linux, Catalyst will share a kernel component with the open-source driver; the 285 will be the first GPU with such a driver stack.)
At 28nm? Intel is already selling 22nm Atoms with 14nm chips showing up in a few months. On mobile, power consumption is a key value and the only way x86 can compete with ARM is with better processor optimization.
Also, your description of Voltage Adaptive Operation is the inverse of what the slide indicates. The slide indicates that they are compensating for dips, not increases. From the sound of it, historically they've had to clock as if they were operating at the dip voltage to prevent errors, but now that they are able to compensate, they can clock at the nominal voltage.
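If that reading is right, a toy linear model shows what the compensation buys. Assume frequency scales roughly as f = k*(V - Vt) above a threshold voltage; without droop compensation you have to clock for the worst-case dip. Every number below is invented purely for illustration:

```c
/* Toy model of voltage-adaptive operation: guard-banded clocking for the
 * worst-case voltage dip vs. clocking for the nominal voltage.
 * All constants are made up for illustration. */
#include <stdio.h>

int main(void) {
    double k     = 8.0;   /* GHz per volt, invented */
    double vt    = 0.55;  /* threshold-ish voltage, invented */
    double vnom  = 1.10;  /* nominal supply, invented */
    double droop = 0.07;  /* worst-case transient dip, invented */

    double f_guardband = k * (vnom - droop - vt); /* must survive the dip */
    double f_adaptive  = k * (vnom - vt);         /* dips handled in hardware */

    printf("guard-banded clock: %.2f GHz\n", f_guardband);
    printf("adaptive clock:     %.2f GHz (+%.0f%%)\n", f_adaptive,
           100.0 * (f_adaptive / f_guardband - 1.0));
    return 0;
}
```

The same headroom could instead be spent lowering the nominal voltage at a fixed clock, which is the power-saving spin.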
Who comes up with these brand names? They all sound like slow and heavy construction site equipment; and 'Carrizo' sounds like the workers' lunch sausage.
I'm not touching an AMD mobile APU. Though the A4-1250 APU is the slowest one, it shouldn't be slower than a dual-core Intel Atom. I thought the problem was Windows 8.1 and installed W7, but no, internet browsing is still a painful experience. The APU is capable of full-HD video decoding, but the CPU sucks so much that it's not watchable.
Sounds great and all, but I still can't even get an FX-7600P / FX-7500 / A10-7400P mobile APU despite their being "released" six months ago. As much as I like AMD, I haven't been too impressed with their performance lately.
Agreed. I've been dying to give AMD my business (I had an A6-5200 for a week before returning it; not enough power), and they just can't seem to deliver any products to market.
Pretty much a +1 to the rest of the comments. The form factors that OEMs are putting AMD chips into are afterthoughts. AMD either needs a true partner to deliver the highly desirable form factors or needs to do it themselves. Me? I want a Surface Pro 3 clone (with the same screen resolution) and a fanless NUC that can drive 1080p content.
I bought the Sandy Bridge 2500K that resides in my desktop system back in early 2011. Wake me up when AMD comes up with a CPU that can beat this four-year-old 32nm chip in single-thread performance without pulling more than its 95W TDP to do so.
Intel has the money to develop several CPU designs in parallel... AMD does not. That is why they could not abandon the Bulldozer architecture when they finally found out that programmers were not going to embrace parallel programming in a big way. Intel has many times AMD's money and resources; that is hard to compete against. Let's hope the other manufacturers can actually get closer to Intel in production technology, so that AMD can also benefit from a smaller production node and reduced TDP.
What would be interesting is something along the lines of an APU with upgradeable GDDR5 for the GPU side and DDR4 for the CPU side. It would separate the memory bandwidth, and offering upgradability for GPU memory would be unprecedented. Of course, this would require motherboard and GDDR manufacturers to help design, support, and supply the slot and chips. It would be great to be able to increase the GDDR from 1GB to 2GB as needed, instead of having to replace the whole unit.
As for the "2nd gen FX" CPUs being on the market so long, it makes a lot of sense for AMD. If they are abandoning the architecture next year, there was no real point in ramping up production of newer chips that wouldn't really sell that well. Continue the current line until the actual new one is ready and put the production costs toward research for the new arch.
AMD has a CPU branding issue as well. Many of my clients hear "AMD CPU" and think it's an Intel knock-off. The only thing AMD really gets credit for is its GPUs. Intel made itself a household name that implies premium quality. If the new CPUs can compete with Intel again, they will need some serious marketing. But AMD really, really needs to stop over-promising! The hype ahead of Bulldozer's release doomed it; there was essentially no way for those chips to meet expectations. When they didn't knock people's socks off, we got what we have now: a lot of doubt and mistrust of AMD CPUs.
I'm disappointed there is no Excavator + 7850-class graphics + eDRAM, GDDR5, quad-channel DDR4, or anything of that type. Such a product would be unchallenged and would certainly have a small market of enthusiasts, Steam boxes, and other things. Even if it didn't sell a lot, I think it would get people excited about AMD again, and it would play to the only strength they've got at the moment.
Sure, you can get an i5 plus a standalone graphics card, but it's not exactly the same thing.
I think that with HSA, they could just cut out the FPU modules and let the GPU do all FP calculations. It would cut a lot of transistors from the core, decrease pressure on the core front end, and open it up for more integer work. Take this plus the high-density libraries, and a sharp decline in power consumption alongside an increase in performance is not a far-fetched prospect.
The most disappointing thing about Kaveri for me was that the iGPU was supposed to be as fast as a desktop 7750, and it is actually slightly slower than a 7730 GDDR5.
The biggest issue besides performance, is the utter lack of marketing.
They just don't get it. They could take a platform and market it for what it can do per dollar. Could you imagine if they put together a full platform equal to or better than the Xbox One or PS4, but running Windows? Instead we have miscalculated release dates, inscrutable naming conventions, and no way to promote the product. 99% of people don't care about nanometers or memory support; they want to know what the system can do, and they want to know the total cost.
While part of me agrees with the majority of the comments about the couch potato AMD has become, we should all remember that AMD is now run by a highly capable EE PhD rather than some business wonk content to rest on his laurels while the competition went to deep space and beyond. There is certainly no guarantee that the PhD will bring the company back to the AMD glory days, the days to which the competition could respond only with the dismal Core series, but there is a much better probability that the PhD will, or already has, put them back on track to really innovate. Changing direction with the current products would not make much business sense as I see it. We all know just how bad delaying products makes AMD look. Selling something is better than throwing away everything and starting over, at least as I see it.
Personally, I built an HTPC with a 7850K and replaced a Linux server running an Athlon II X3 445 with an Athlon 5350. The HTPC works very well, though I do not use it for gaming, and I was expecting the 5350 to be slower than the 445. Much to my surprise, the 5350 seems to be easily outperforming the 445, even though its TDP is a quarter of the 445's.
As it stands, AMD has a great basis for the future, at least to me anyway. Let's hope the EE PhD realizes this, applies the tweaks needed to make the concept a superior platform, and leads the company back to its glory days.
Right now, I think it makes sense from a business standpoint to stay at the same node as it keeps valuable research dollars in the company coffers. K12 would be an obvious candidate for a significant node shrink, and having been an AMD fan for a long time, I hope to see this happen.
For the safety of the public, AMD needs to sell an APU without the weaponized (ARM) TrustZone. These cores allow remote access to your device that no operating system can detect.
Zepi sounds very technical and very much into everything gaming, and the gaming community is truly the only thing keeping enthusiasts alive and well. What AMD should or shouldn't do I'm not sure, because the Intel Core i series has been stomping them very badly for some years now; AMD definitely shouldn't try to relive the glory days of their prowess as the DIY builder's choice for a gaming system.
In AMD's defense, though, I truly believe they have hung in there very well as the only real threat giving Intel any kind of competition (even if it's incredibly minute), and I say this: their APUs freaking kick Intel's ass graphics-wise FOR THE MONEY, and I hope AMD keeps improving the whole APU paradigm. Not all users want or need to play games at incredibly high frame rates! We have built four AMD APU computers for people running pro-audio apps like Image-Line FL Studio, Cakewalk's SONAR Platinum, Pro Tools, Nuendo, Cubase, and Reason, and you can hook up two or more 4K monitors to the AMD A10-7850 with no problem at all, with 16GB of RAM and some great SSDs for Windows and sample drives. It freaking works; we're talking $1300 systems that kick ass in pro-audio DAW software. Try buying a Mac with two SSDs, 16GB of RAM, and a Corsair 850W PSU. Hallelujah, AMD still rocks in apps other than gaming! God bless AMD! Keep going, baby, if only to be a thorn in Intel's side and stop a potential monopoly!
zepi - Thursday, November 20, 2014 - link
Considering that the competition is perfecting it's 14nm production with second generation products already out of the gates while AMD is still stuck with 28nm tells us everything we need to know about these chips.Where are the GDDR5 / DDR4 versions with fast enough GPU's that could actually run games? Or innovative mixed GDDR5 / DDR3 designs or edram/sram L4 cached versions?
Where is my 250W TDP highly clocked 4-core Steamroller with integrated 7970 class GPU, soldered to motherboard with enough edram / sram for framebuffer, 4GB of GDDR5 + two DDR3 sockets? Shipped with AIO water cooler? With clever driver programming to benefit from all the bandwidth and shared memory space with low level mantle api? The ultimate fusion of paraller- and serial computing power that became theoretically possible when AMD bought ATI?
Instead we have lousy bottom-to-middleclass CPU's with barely "good enough for Sims" GPU's that don't spark anyones interest. No deal AMD, No deal.
kyuu - Thursday, November 20, 2014 - link
As is usually the case, the low-wattage mobile variants are likely to be the most interesting. To my knowledge, Intel does not have Atom on a 14nm process, which is what those would be competing with. Insisting on continuing to compare these with Intel's Core line that cost significantly more is just silly.Of course, the issue is that, no matter how good AMD's chips are for mobile designs, they will continue to be ignored by OEMs and wont appear in anything but bottom-of-the-barrel designs. I wish something could be done about this, as I'd really love to get a table with an AMD SoC.
kyuu - Thursday, November 20, 2014 - link
Damn autocorrect plus lack of edit functionality; tablet, not table.mrdude - Thursday, November 20, 2014 - link
Part of earning trust is being on time, delivering on promises, and providing an enticing option. With respect to CPUs/APUs, AMD has failed on all accounts. Kaveri was delayed and so were Kabini and Temash. Worse yet, Kaveri showed up a fifth of its performance missing. Requiring high-speed dual-channel memory is required for the chips to shine, yet that also increases BoM and limits form factors.Personally, I can't believe they still haven't been able to produce an APU that can drive 1080p at high settings. Such a chip would sell like hot cakes, yet the company continually falters. What's the point of all of that GPU silicon if you can't utilize it? Do the CPU cores really need that much dark silicon? :P
Mikemk - Thursday, November 20, 2014 - link
The point is for smaller designs where a dedicated GPU wouldn't fit or would add too much heatmrdude - Thursday, November 20, 2014 - link
Yet their ramp for mobile Kaveris has been incredibly slow, and SFF NUC-like Kaveri and Beema/Mullins systems are nearly nonexistent.I understand the point. Speaking for myself, I'd love to buy an NUC system with an AMD APU inside that can drive 1080p. For that to happen, AMD still has to decrease power consumption, increase both CPU and GPU performance, cure the memory bottleneck, and actually get some of these products made so we're not chatting about hypotheticals. 2400mhz RAM on SO-DIMMs isn't going to happen, and that ~1.6v required to get there is too high. Even at 95W TDP Kaveri throttles the CPU down to ~3ghz when the GPU is under load.
I really don't think Carrizo is going to fix any of these issues. They might look a little better, but it's clear that nearly all of AMD's resources have been poured into their 2016 Zen and K12 architectures.
Gadgety - Friday, November 21, 2014 - link
SFF NUC-like, yes I've been looking for them, too. Surely this could be a segment of the market where APUs would fit. In the end I put together a 45W A8-7600 based Kaveri based M-ITX form factor but I'm surprised the kitbuilders haven't done so. I expect Carrizo to improve 10% on Kaveri, but much more. The Zen project is, as you say, where it's at.mickulty - Friday, November 21, 2014 - link
It doesn't make much sense to bash AMD over 'being on time' and 'delivering on promises' given what a complete dogs dinner broadwell and Intel's 14nm process has been. Remember when desktop broadwell was supposed to come out January 2014?mrdude - Friday, November 21, 2014 - link
Intel isn't the one attempting to persuade OEMs to use their chips.And I actually agree with you. Intel hasn't made an interesting processor since Sandy Bridge. Their Broadwell yields are still poor. But despite Intel being late and offering 5-10% improvements YoY, AMD still hasn't been able to claw back any ground.
The high end x86 space is an utter snoozefest.
andrewaggb - Friday, November 21, 2014 - link
yepnt300 - Wednesday, January 7, 2015 - link
AMD did have the opportunity to claw back some performance ground. Intel has been sleeping at the helm with terrible yields. That said, Intel is well known for "Anit-Competitive" practices. They were also charged and had to pay AMD for damages. They paid up more than once.http://www.theverge.com/2014/6/12/5803442/intel-ne...
Acreo Aeneas - Sunday, November 23, 2014 - link
You keep mentioning 1080p. For games or video? If for video, then current and pay APUs can already do that on AMD's side. For games, that's not feasible given the price point and power envelopes that APUs are aimed at.Acreo Aeneas - Sunday, November 23, 2014 - link
Pay should be older. Autocorrect follies.mrdude - Sunday, November 23, 2014 - link
nVidia's Maxwell 750Ti drives 1080p at high really well and it does that at sub-75W TDPs. Bear in mind that's for the full GPU, including the PCB, vRAM, etc. If on a single die, redundancies would be eliminated and realistic TDP would be quite a bit lower.AMD producing an APU that can do the same is certainly possible if not for a few bottlenecks: 1 - Their CPU architecture is inefficient. A new uArch isn't coming until 2016, and I don't expect any significant improvements until then 2 - Their current GPU architecture is lackluster in perf-per-watt, but that should (hopefully) change. 3 - AMD still hasn't managed to alleviate the memory bottleneck. Adding any sort of GPU power to an APU is fruitless until that is completely addressed.
To clarify, I wasn't mentioning video. You can buy an ARM SoC for a few bucks that can drive 1080p video just fine. 1080p video should be a given at this point.
Redwoodz - Sunday, December 7, 2014 - link
Well,if Intel can"t make a APU that runs 1080p on high why would you expect AMD to at this point? Fact is they can make one,but not at the cost needed.Furthermore,gamers that want to run all games on Ultra settings on 1080p are going to buy a dedicated GPU,not an APU.patrickjp93 - Friday, December 19, 2014 - link
Intel can now that it has a few years of graphics engineering under its belt. Broadwell is increasing throughput per core by 40% while also including 20% more cores. Intel didn't care about gaming ever. It's been aiming at increasing compute power to maintain absolute dominance over IBM and Sun for supercomputer CPUs in the future and encourage even more OEMs to switch. Beyond that, Intel has finally looked at knocking Nvidia clean out of the mobile GPU market, and AMD is waking up a bit more in laptop integrated graphics.The main problem with Iris Pro 5200 is 10 EUs (cores) per sub-slice (a group connected by a data bus). That created a big throughput bottleneck where the bus couldn't keep the cores fed. With that knocked out, we can see Intel start to cook, especially if they create a good tessellation engine.
Even though AMD's quad-core processors have 800 million more transistors and 2.x million per transistors per sq. mm, Intel still has the performance crown. If you think Intel won't make an APU by the end of 2016 capable of doing 1080p high/ultra at 50+ fps, you're naive as IBM was to say Intel would never compete in the server world. Look at how that turned out for IBM. All they have left are mainframes for banks.
nt300 - Wednesday, January 7, 2015 - link
At the time, AMD went through a complete management overhaul. Hopefully all gets rectified in 2015. And yes I have no problems driving 1080p on MAX Settings with my Kaveri APU.domboy - Friday, November 21, 2014 - link
Agreed! I'd really like to see something like the Asus T100 with an AMD APU in it.Morawka - Friday, November 21, 2014 - link
well we are comparing them to core because the last Kavari chips were priced like Core. I think $155 is the cheapest unit, most of the 384 and 512 Shader Sku's were $180+. Same price as a lastest and greatest Core i5.samlebon2306 - Friday, November 21, 2014 - link
AMD A10 7850K 4.0 GHz Black Edition Boxed processor for $109 at Micro Center:http://www.microcenter.com/product/427565/A10_7850...
zepi - Saturday, November 22, 2014 - link
Ofc AMD's chips are ignored by OEM's since they are utter crap compared to Intel, Qualcomm and Nvidia chips. Why would an OEM pick something that has by far worse perf/w and idle usage characteristics than competition?AMD has two product lines that are really competitive. APU's with highend Graphics (Console chips in Xbox one and PS4) and desktop class GPU's. Intel can't compete with GPU's, Nvidia doesn't have fast x86 CPU's and Qualcomm doesn't have high-tdp products.
For some reason AMD has not capitalised on this opportunity, but instead it has tried to compete in a market where it doesn't have competitive products. Not to mention that it hasn't had R&D resources to catch competition.
mikato - Monday, November 24, 2014 - link
They can't get any more consoles since I think they got them all :) As for desktop class GPUs, they're in heated battle with Nvidia as you know. I'm still not sure why I hear they have smallish market share here though.StevoLincolnite - Wednesday, November 26, 2014 - link
There are still hundreds of millions of "consoles" AMD could get the drop on, *if* they play their cards right. (Geddit?)And that's the mobile sector, if AMD can have good enough performance, with low power and at the right price they may end up being an attractive proposition for Nintendo's and Sony's hand held consoles. (I bet they regret selling it's mobile Radeon business, aka. Adreno.)
Obviously, it won't be high-profit monolithic chips, but AMD can do with the extra business.
Hrel - Friday, November 21, 2014 - link
"Where is my 250W TDP highly clocked 4-core Steamroller with integrated 7970 class GPU, soldered to motherboard with enough edram / sram for framebuffer, 4GB of GDDR5 + two DDR3 sockets? Shipped with AIO water cooler? With clever driver programming to benefit from all the bandwidth and shared memory space with low level mantle api? The ultimate fusion of paraller- and serial computing power that became theoretically possible when AMD bought ATI?"Are you insane 0_0? Honestly, have you been checked?
that tech guy - Friday, November 21, 2014 - link
you are kind of missing the point . sure for absolute best performance a intel cpu with a nvidia card is best for most gaming . but do you need anything better than a i7 4770 ? that is intells dilemma .skylake has been pushed back . intels alleged answer to amds apu's that can play games . skylake is 2016 now . amds problem for years now has been its tdp situation . with that little problem resolved it can then go back to simply building on the number of cores it uses on the dies since they will also be smaller . sooo carizo is 2015 and skylake is 2016 . do you think intel will be able to out preform amd with their integrated solution on gaming with zero help from nvidia in 2016 ? because thats all that other companies care about . steam has a slew of publishers who has started putting up in the minimum requirements that intel integrated graphics are not supported .
if for instance better than a i7 4770 isn't needed for years especially because the new systems dont have massive cpu power and AAA developers for the first time EVER have the ability to port things faster because the hardware is using x86 architecture on all platforms than what will intel have in 3 years ? cheap i7's sure but they want top end product to move
Michael Bay - Friday, November 21, 2014 - link
Tdp situation.Little problem.
You can tell things are going bad when everything AMD has to shill with are the likes of you.
piroroadkill - Friday, November 21, 2014 - link
Agreed. AMD could be making absolutely killer HTPC chips for gaming, but none of their APUs actually go all the way and have enough beef to the GPU.ddriver - Friday, November 21, 2014 - link
Don't you worry, IC process is soon going to hit a size limit brick wall, AMD and the rest will have time to catch up ;)Morawka - Friday, November 21, 2014 - link
that's called denial lol... I guess you could call it "hope" as well. The "brick wall" wont happen for at least 6-10 years, and to be honest, i don't think AMD can last that longtarqsharq - Wednesday, November 26, 2014 - link
I dunno, their chips are in all three of the current generation consoles. That's some business there.asimov1979 - Friday, November 21, 2014 - link
I think Low Power chips will be complemented with 20nm Skybridge SOCs which should be more competitive with Intel's 14nm.Morawka - Friday, November 21, 2014 - link
what your looking for is a PS4 chip for sale to everyone. Not gonna happen anytime soon. They are very limited by TSMC's massive 20nm delay just like everyone else.. This is a +1 update, designed to lower Perf. Per watt.Frenetic Pony - Friday, November 21, 2014 - link
You... wow. Literally. What? What in the hell are you talking about? Literally everyone but Intel is stuck on 28nm, Apple snapped up all early run 20nm stuff because they're Apple. This is also just and a brief APU announcement, not a lineup and detail of every product they hope to deliver next year.This is why Anandtech's commenting system sucks. If "Crazy" is the first post it always sucks up a disproportionate amount of the conversation.
gruffi - Friday, December 5, 2014 - link
The competition is perfecting it's 14nm production? No really. 14nm looks quite disastrous for them at the moment. Postponed several quarters, very low yields, lower than expected performance. That's why they launched only Core M so far. And even this one sux considering expectations and availability.And no, 28nm doesn't tells us anything about Carrizo. There are very interesting improvements. Excavator, new graphics architecture, HDL, first APU with full HSA 1.0 support, first true SoC based on Bulldozer and maybe even HBM as option. These are very attractive chips for most OEMs and consumers. OTOH no one wants a 250W TDP APU.
nt300 - Wednesday, January 7, 2015 - link
You cannot compare Intel with AMD. Intel is huge with mass resources, despite AMD has always been a lot more innovative in the last 10+ years. AMD was always slower with nm production. Intel caught up and surpassed them when AMD fumbled the Bulldozer. The very hype of the Bulldozer design forced Intel to develop what we have today.Both obviously need each other, to keep each other on there toes.
TheinsanegamerN - Thursday, November 20, 2014 - link
after the kaveri based fx-7600p appeared in exactly 0 laptops, i wonder if these chips will every appear in anything, or if amd will just completely give up? or release a laptop of their own, to show OEMs how to do it right (similar to google's nexus line)Gadgety - Friday, November 21, 2014 - link
Looks like they're mainly appearing in cheap desktop kits, using suboptimal 1600MHz memory modules. I fully expected SFFs with Kaveri chips.rocketbuddha - Friday, November 21, 2014 - link
AMD made a fatal mistake with Kaveri. It released a desktop variant first, followed 2 quarters later with a latop/mobile variant. By that time, Intel cleaned its mobile clock with subsidies and good products.So Baytrail is worse than Beema, but smaller and cool running than Beema and comes with good "contra-revenue incentives" that makes a OEM/ODM salivate.
While everything Kaveri was designed for like configurable TDP. Good gaming performance for the power envelope were all designed for mobile environment rather than desktop.
Even today if you go to www.amd.com and go to "Shop AMD Desktops" and chose the big OEMS like HP, DELL, Lenovo, Acer you cannot find a single Kaveri desktop. They still sell Richland desktops but no Kaveri ones.
You have to find III tier OEMs like Cyberpower, iBuyPower for Kaveri APU based desktops.
When I look for Kaveri notebooks I see only this. A HP notebook with average configuration sold for a humoungous price from a PCC.
http://shop.amd.com/en-us/search/K0L62UPABA
AMD is now in a very bad situation. The OEMs that buy bulk of their products are not interested in their latest and greatest instead, if interested 1(Richland) or 2 generations (AMD E2) behind the current one.
Go figure!
coder111 - Wednesday, December 10, 2014 - link
This! I'd love to buy a decent laptop with recent AMD APU + AMD dGPU, but there is simply nothing on the market. I run Linux, and I don't want to deal with Nvidia binary drivers. All-AMD solution has working open-source drivers, and decent GPU performance. Intel has good open-source drivers and crappy GPU performnace. Intel CPU + AMD dGPU would be another option, but I prefer all-AMD.The only things I can find are MSI gaming laptops. Or some low end crap from HP, which has weird arrow key layout which is highly annoying for me.
ruthan - Thursday, November 20, 2014 - link
I dont believe too much that AMD is able deliver something with better performance per watt than Intel or ARM, i think that this time passed.Gadgety - Friday, November 21, 2014 - link
Better graphics for smaller form factor at a cheaper price.sonicmerlin - Thursday, November 20, 2014 - link
Are they ever going to move beyond Bulldozer?JDG1980 - Thursday, November 20, 2014 - link
The new x86 core "Zen" is supposed to be coming in 2016. This is a ground-up design. They know Bulldozer is a dead end; Rory Read admitted as much, not long before being let go.Gadgety - Friday, November 21, 2014 - link
Exactly.Laststop311 - Thursday, November 20, 2014 - link
I hear ya man. Bulldozer was a failure since day 1. The fake cores suck. They say they have 8 core chips when really its 4 full cores and then 4 more cores with only half the parts of a core. They continue to stretch it with steamroller piledriver excavator and from the outside each iteration seems to bring no meaningful performance gains. After bulldozer flopped they should of instantly started working on a totally new architecture instead of wasting time reiterating a hugely failed one.If it wasn't for amd scoring the video game consoles they would be massively bleeding money. Now that both consoles have sold over 10 million units though expect console sales to get dramatically slower after this holiday as most people that want one already have 1.
What I have said before, The gpu part in the ps4 apu is much more powerful than any apu they offer for pc building. The 8 jaguar cores are really weak though. If amd just released an apu with the full ps4 gpu side fully enabled which is the same amount of cores as a desktop 7870 and instead of using crappy jaguar cores use 4 excavator cores that can atleast do some work they could have a real winner of a chip. They could even take it a step further and make an even bigger gpu than the ps4's apu. They already made a full 7870 inside an apu so maybe they could make a full or close to full r9 285 inside an apu + the 4 normal power excavator cores. Sure the die would be really big but considering they have been on 28nm so long the defect rate should be incredibly low right now so going with a huge die shouldn't be too risky. Making a bold product like that is exactly what amd needs to get noticed again. SFF desktops are really hot right now + HTPC's people would be lining up to put a gpu powerhouse apu like that in a system like that. Then they could scale it down for laptop use. I know a lot of people that would jump on a r9 285 or close to r9 285 inside an apu (just lower clocks than the discrete gpu)
Laststop311 - Thursday, November 20, 2014 - link
if amd can put a 20CU count full 7870 inside the ps4 apu (just lower clocks than the desktop gpu) than there is no reason they cant put a full r9 285 (just lower clocks then desktop gpu) inside a PC apu. Yes with a large die there are more chances for defects but with the incredible mature 28nm process like i said defects would be low and they could just offer 2 or 3 different chips with the defect cores fused off and made into lower end chips. With that powerful of a gpu inside an apu people would pay a much higher price for it since they arent buying a discrete gpu anymore.Laststop311 - Thursday, November 20, 2014 - link
if amd continues with these crappy releases over and over they will start losing money now that consoles will start slowing down in sales. They will continue to lose money every quarter until they are bankrupt and gone. They need to take a risk and put a huge GPU inside an apu and blow every 1;s minds.tuxfool - Wednesday, November 26, 2014 - link
Which would be incompatible with current DDR3/4 memory architectures on x86. Unless the motherboard offered a special pool of fast memory for the gpu it would be pointless to put that many CUs on the APU as they would be utterly bandwidth starved.mr_tawan - Friday, November 21, 2014 - link
What is fake core, btw? Is is just some pile of sand sitting in the die doing nothing ?Flunk - Friday, November 21, 2014 - link
Bulldozer cores have two integer units and one floating-point unit. This puts them somewhere between two actual cores and Intel's hyperthreading system for 2-threads per core performance. Because of this AMD markets them as 2 cores each, so what AMD claims is a 8-core chip only actually has 4-cores. This is somewhat disingenuous, which is why the OP calls them "fake cores".I don't think it's AMD's biggest hurdle, however. That would be Bulldozer (and it's derivatives) horrible single-thread performance. It's really disappointing after AMD ruling x86 performance for years (during the Athlon XP and Athlon 64 generations). All the fusion chips have been ok budget chips, but they really don't have anything high-end anymore.
mr_tawan - Friday, November 21, 2014 - link
Well I think the term 'fake' is a little bit too much. It sounds like there are cores sitting inside the die doing nothing more than just warming the chip quite a bit.I think AMD was aiming at the point that developers uses the GPU for the most floating-point computation. That's why they try to reduce the space the CPU core occupy on the die, and dedicate a large space for GPU, I think. That day hasn't came, and may even never come. It is still pretty hard to utilize the GPU right now. HSA might improve the situation in the future, but I don't know if it can convince the developer to start taking serious on using GPU.
It might not be wrong to say that AMD has put too much faith on the GPU. It does pay AMD in the end, as APU can serve the lower-end gaming section just fine, but this might be just a side effect and not what AMD expected at first.
I agree that Bulldozer suffers in single-thread performance. I hope that they can do a better job next time. Having Intel dominate the market would not be good for us users.
silverblue - Saturday, November 22, 2014 - link
The issue is you have two cores within a module and the work cannot simply be split between the two cores... and a single core is too weak on its own. Also, you have the potential power issue whereby the whole module is active whilst only one of the cores is doing any actual work.It's only when you throw enough work at the CPU that it starts to perform reasonably well.
andrewaggb - Saturday, November 22, 2014 - link
I think the idea had merit for some server applications and should have been marketed/designed as a specialized server/professional chip. It's not an especially good design for most consumers and it was a mistake to continue pursuing it for so long.Morawka - Friday, November 21, 2014 - link
The yields would be absolutely horrid man. Think of the binning. You got 8 cores and 1024 Gpu Cores. Maybe 1 core cpu is bad, They cant just disable 2 cores like intel, and sell it as a -2 core chip with all that graphics. yes 28nm is mature, but there is no escaping yield problems. i would say 1 out of 5 dies are good.Morawka - Friday, November 21, 2014 - link
on a chip that size....coder111 - Wednesday, December 10, 2014 - link
They probably started working on totally new architecture instantly. Unfortunately it takes ~5 years to develop a totally new CPU architecture...lilmoe - Thursday, November 20, 2014 - link
All still on 28nm.........................JDG1980 - Thursday, November 20, 2014 - link
It's a long shot, but I think AMD's goal with Carrizo may be to get a design win with the upcoming Retina MacBook Air. Apple has shown quite a bit of willingness to go with AMD GPUs (on the Mac Pro and the Retina iMac), and the MacBook Air has always had a 15W TDP chip. Big-core Carrizo is said to start at 15W according to AMD's slides, and I don't think that is just a coincidence. Intel has better CPU performance, but their GPUs at that power level are pretty terrible, not good enough for a system with a Retina display. (Iris/Iris Pro is adequate, but much more power-hungry, and wouldn't work too well in a MacBook Air.)SeleniumGlow - Friday, November 21, 2014 - link
While this makes sense, let's remember that Apple will not use a part that the supplier can't deliver "on demand". AMD has a smaller production and supply chain than Intel.
TiGr1982 - Friday, November 21, 2014 - link
This would all sound good if AMD's x86-64 CPUs weren't so sluggish in single-threaded performance compared even to Sandy Bridge, never mind Ivy Bridge, Haswell, and whatever comes next from the blue team.
Look: Piledriver- and Steamroller-based APUs are around 70% slower clock-for-clock than Haswell in Cinebench R15, a benchmark even Mac users like to post on Mac info sites. So, I suppose, it won't happen.
TiGr1982 - Friday, November 21, 2014 - link
I meant, specifically, the Cinebench CPU score, especially the single-threaded CPU score, which is a good indicator of how snappy a CPU feels when running simple serial software.
umadrabbit - Sunday, December 14, 2014 - link
The Cinebench single-threaded score is in no way an indicator of CPU snappiness. Where do you people even come up with this stuff? It just shows what the CPU is capable of in single-threaded workloads (which are fading away anyway; more and more workloads keep going multi-threaded, where AMD can still keep up with Piledriver even on 32nm, with slightly higher electricity bills).
Death666Angel - Friday, November 21, 2014 - link
I read that the Retina laptops are/were bound by single-threaded performance for their animation quality, which was why they dropped frames a lot. If that is still the case, I don't see an AMD future in them.
MikeMurphy - Sunday, November 23, 2014 - link
Intel has vastly superior performance-per-watt. You won't find AMD chips in premium ultra-thin mobile devices anytime soon.
przemo_li - Friday, November 21, 2014 - link
Right now we have a fully open-source HSA stack on Linux. This further indicates that AMD is serious about getting a piece of the Android pie (and the HPC pie, which is Linux-based).
Of course, Catalyst development on Windows and the userspace components on Linux happen behind closed doors, so we don't know where HSA stands there.
(Under Linux, Catalyst will share its kernel component with the open-source driver; the R9 285 will be the first GPU with such a driver stack.)
Flunk - Friday, November 21, 2014 - link
At 28nm? Intel is already selling 22nm Atoms, with 14nm chips showing up in a few months. On mobile, power consumption is a key value, and the only way x86 can compete with ARM is with better processor optimization.
Jumangi - Friday, November 21, 2014 - link
Man, looking at the top of that desktop roadmap graphic and seeing the never-ending "2nd Gen FX" bar go on and on makes me sad.
Colin1497 - Friday, November 21, 2014 - link
Shrinking resistors?
Also, your description of Voltage Adaptive Operation is the inverse of what the slide indicates. The slide says they are compensating for dips, not increases. From the sound of it, historically they've had to clock as if they were operating at the dip voltage to prevent errors, but now that they can compensate, they can clock at the nominal voltage.
Colin1497 - Friday, November 21, 2014 - link
They are spinning this as being able to lower the nominal voltage and reduce power consumption, but it should also let the top parts clock higher.
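Rough back-of-the-envelope math on why trimming that droop guardband matters (my voltages are purely illustrative, not AMD's figures): dynamic power scales roughly with V² at a given clock, so even a modest cut to the nominal voltage pays off quadratically.

```c
// Illustrative guardband math: P_dynamic ~ C * V^2 * f.
#include <stdio.h>

int main(void) {
    double v_old = 1.20; // nominal voltage with worst-case droop margin baked in (assumed)
    double v_new = 1.10; // nominal voltage once droops are compensated on the fly (assumed)
    double ratio = (v_new * v_new) / (v_old * v_old);
    printf("dynamic power at the same clock: %.0f%% of before (~%.0f%% saved)\n",
           ratio * 100.0, (1.0 - ratio) * 100.0);
    return 0; // prints ~84% of before, i.e. roughly a 16% saving
}
```

Run the same numbers the other way and you get the point about clocks: at a fixed power budget, that headroom can go into frequency instead.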
ant6n - Friday, November 21, 2014 - link
Who comes up with these brand names? They all sound like slow, heavy construction-site equipment; and 'Carrizo' sounds like the workers' lunch sausage.
DigitalFreak - Friday, November 21, 2014 - link
It's a city in Arizona.
JlHADJOE - Friday, November 21, 2014 - link
For a couple of weeks I actually thought it was "Chorizo".
mikato - Monday, November 24, 2014 - link
Carrizon Plain - https://www.google.com/search?q=carrizo+plain&...
mikato - Monday, November 24, 2014 - link
ugh, wish I could edit. Carrizo Plain
zodiacfml - Friday, November 21, 2014 - link
I'm not touching an AMD mobile APU. Though the A4-1250 APU is the slowest, it shouldn't be slower than a dual-core Intel Atom. I thought the problem was Windows 8.1 and installed W7, but no, internet browsing is still a painful experience. The APU is capable of full HD video decoding, but the CPU sucks so much that it's not watchable.
Masospaghetti - Friday, November 21, 2014 - link
Sounds great and all, but I still can't even get an FX-7600P / FX-7500 / A10-7400P mobile APU, despite their being "released" six months ago. As much as I like AMD, I haven't been too impressed with their performance lately.
drexnx - Friday, November 21, 2014 - link
Agreed. I've been dying to give AMD my business (I had an A6-5200 for a week before returning it; not enough power), and they just can't seem to deliver any products to market.
charliem76 - Friday, November 21, 2014 - link
Pretty much a +1 to the rest of the comments. The form factors that OEMs are putting AMD chips into are afterthoughts. AMD either needs a true partner to deliver the highly desirable form factors or needs to do it themselves. Me? I want a Surface Pro 3 clone (with the same screen resolution) and a fanless NUC that can drive 1080p content.
r3loaded - Friday, November 21, 2014 - link
I bought the Sandy Bridge 2500K that resides in my desktop system back in early 2011. Wake me up when AMD comes up with a CPU that can beat this four-year-old 32nm chip in single-thread performance without pulling more than its 95W TDP to do so.
stmok - Saturday, November 22, 2014 - link
So we need to wait until 2016 before AMD finally abandons the Bulldozer-based architecture?
haukionkannel - Saturday, November 22, 2014 - link
Intel has the money to develop several CPU designs in parallel... AMD does not. That's why they couldn't abandon the Bulldozer architecture when they finally realized that programmers weren't going to embrace parallel programming in a big way.
Intel has many times the money and resources; that's hard to compete against. Let's hope the other manufacturers can actually get closer to Intel in production technology, so that AMD can also benefit from a smaller production node and reduced TDP...
Morg72 - Saturday, November 22, 2014 - link
What would be interesting is something along the lines of an APU with upgradeable GDDR5 for the GPU side and DDR4 for the CPU side. It would separate the memory bandwidth, and offering upgradability for GPU memory would be unprecedented. Of course, this would require motherboard and GDDR manufacturers to help design, support, and supply the slot and chips. It would be great to be able to increase the GDDR from 1GB to 2GB as needed, instead of having to replace the whole unit.
As for the "2nd gen FX" CPUs being on the market so long, that makes a lot of sense for AMD. If they are abandoning the architecture next year, there was no real point in ramping up production of newer chips that wouldn't really sell that well. Continue the current line until the actual new one is ready, and put the production money toward research for the new arch.
AMD has a CPU branding issue as well. Many of my clients hear "AMD CPU" and think it's an Intel knock-off. The only thing AMD really gets credit for is its GPUs. Intel made itself a household name that implies premium quality. If the new CPUs can compete with Intel again, they'll need some serious marketing. But AMD really, really needs to stop over-promising! The hype ahead of Bulldozer's release doomed it; there was essentially no way for those chips to live up to the expectations. When they didn't knock people's socks off, we got what we have now: a lot of doubt and mistrust of AMD CPUs.
andrewaggb - Saturday, November 22, 2014 - link
I'm disappointed there is no Excavator + 7850-class graphics + eDRAM (or GDDR5, or quad-channel DDR4, or anything of the sort) product. It would be unchallenged and would certainly have a small market of enthusiasts, Steam boxes, and the like. Even if it didn't sell a lot, I think it would get people excited about AMD again, and it would play to the only strength they've got at the moment.
Sure, you can get an i5 plus a standalone graphics card, but it's not exactly the same thing.
piroroadkill - Sunday, November 23, 2014 - link
Yeah, AMD pretty much needs to sell the PS4 APU, but get rid of the garbage Jaguar cores and throw in a few of their latest and fastest x86 cores.
J0nDaFr3aK - Sunday, November 23, 2014 - link
Is Carrizo going to use the FM2+ socket? Are there going to be new chipsets?
lord_anselhelm - Monday, November 24, 2014 - link
I've yet to even see a Kaveri chip powering any available UK laptops :(
Hope AMD can actually secure some design wins when these chips hard launch.
joseph97svk - Tuesday, November 25, 2014 - link
I think that with HSA, they could just cut out the FPU modules and let the GPU do all FP calculations. That would cut a lot of transistors from the core, decrease pressure on the core's front end, and open it up for more integer work. Combine this with the high-density libraries, and a sharp decline in power consumption along with an increase in performance is not a far-fetched prospect.
estarkey7 - Tuesday, November 25, 2014 - link
Where is the TrueAudio processor? I see they omitted it from the slides; did they dump it?
Cryio - Tuesday, November 25, 2014 - link
The most disappointing thing about Kaveri for me was that the iGPU was supposed to be as fast as a desktop 7750, yet it's actually slightly slower than a 7730 GDDR5.
Dug - Tuesday, November 25, 2014 - link
The biggest issue, besides performance, is the utter lack of marketing. They just don't get it. They could take a platform and market it for what it can do per dollar.
Could you imagine if they got a full platform together that was as good as or better than the Xbox One or PS4, but ran Windows?
Instead we have miscalculated release dates, confusing naming conventions, and no way of promoting their product.
99% of people don't care about nanometers or memory support; they want to know what the system can do, and they want to know the total cost.
wiyosaya - Wednesday, December 3, 2014 - link
While there is a part of me that agrees with the majority of the comments regarding the couch potato AMD has become, we should all remember that AMD is now run by a highly capable EE PhD rather than some business wonk content to rest on his laurels while the competition went to deep space and beyond. Certainly there is no guarantee that the PhD will bring the company back to the AMD glory days, the days to which the competition could only respond with the dismal Core series, but there is a much better probability that the PhD will, or already has, put them back on track to really innovate. Changing direction with the current products would not make much business sense as I see it. We all know just how bad delaying products makes AMD look, and selling something is better than throwing everything away and starting over.
Personally, I built an HTPC with a 7850K and replaced a Linux server running an Athlon II X3 445 with an Athlon 5350. The HTPC works very well, though I don't use it for gaming, and I was expecting the 5350 to be slower than the 445. Much to my surprise, the 5350 seems to be easily outperforming the 445, even though its TDP is a quarter of the 445's.
As it stands, AMD has a great basis for the future, at least in my view. Let's hope the EE PhD realizes this, applies the tweaks needed to make the concept a superior platform, and leads the company back to its glory days.
Right now, I think it makes sense from a business standpoint to stay at the same node, as it keeps valuable research dollars in the company coffers. K12 would be an obvious candidate for a significant node shrink, and having been an AMD fan for a long time, I hope to see this happen.
Arabian Before Florida Became Florida - Friday, December 19, 2014 - link
For the safety of the public, AMD needs to sell an APU without the weaponized ARM TrustZone. These cores allow remote access to your device that no operating system can detect.
Arabian Before Florida Became Florida - Friday, December 19, 2014 - link
http://www.arm.com/products/security-on-arm/trustz...
Clinton balanced budget. ~$17 trillion debt... $8 trillion is just to attack the public...
timberghost - Friday, April 3, 2015 - link
Zepi sounds like he's very technical and very much into everything gaming, and the gaming community is truly the only thing keeping enthusiasts alive and well. What AMD should or shouldn't do, I'm not sure, because the Intel Core i series has been stomping them badly for some years now; AMD should definitely not try to chase the old glory days when they were the DIY builder's choice for a gaming system. In AMD's defense, though, I truly believe they have hung in there very well as the only real threat that gives Intel any kind of competition (even though it's incredibly minute), and I'll say this: their APUs freaking kick Intel's ass graphics-wise FOR THE MONEY, and I hope AMD keeps improving the whole APU paradigm. Not all users want or need to play games at incredibly high frame rates! We have built four AMD APU computers for people doing pro audio apps like Image-Line FL Studio, Cakewalk's SONAR Platinum, Pro Tools, Nuendo, Cubase, and Reason, and you can hook up two or more 4K monitors to the AMD A10-7850 with no problem at all, with 16GB of RAM and some great SSDs for the Windows and sample drives. It freaking works: we're talking $1300 systems that kick ass in pro audio DAW software. Try buying a Mac with two SSDs, 16GB of RAM, and a Corsair 850W PSU. Hallelujah, AMD still rocks in apps other than gaming. God bless AMD! Keep going, baby, if only to be a thorn in Intel's side and stop a potential monopoly!