Moving the SB on to the CPU at 28nm sounds quite exciting. If it all works out (fingers crossed), this could be the trump card for AMD APUs to be very competitive in the mobile market.
Look at Intel's Bay Trail for example; it's already a single-chip SoC. Actually, AMD already has an integrated PCH/southbridge in its newer G-series embedded APUs.
While 5% IPC is nothing to be amazed at, that 40% power reduction is something. Man. Hope AMD can get a "14/16" FinFET product out before Intel hits its 10nm stride. I don't see AMD overtaking them in terms of performance, but in terms of performance at low power... I see it as a possibility. And, of course, better/more competition is almost always good for us, the consumers.
AMD *must* squeeze the process much more than Intel, as it is (at least) one generation behind. What AMD is doing @ 28nm blows away what Intel did there. Truth is, the process is also much more mature by now, and ... well, AMD has been stuck on 28nm for a while. Given the chance, scaling to 14nm FinFET should be even more effective, even though that process is more ... crude. Still, with these numbers, AMD seems to have a decent shot. Personally, though, I already recommend AMD for all _home_ use that might involve even a little bit of gaming. A decent APU is much more flexible than Intel's, even if not as efficient, and to get a decent iGPU on Intel's platform the price rises very quickly. Carrizo just seems to seal the deal.
Did they say anything about AVX2 support in Excavator? Old roadmaps and bits of information suggested that Excavator would support AVX2, but given the focus on density and power efficiency, it might have been judged not worth it, so they didn't implement it.
And what about Delta Color Compression? Due to the fact that Carrizo most likely is a GCN 1.2 part, I would expect the GPU to support DCC.
AVX2 is playing to Intel's strengths. AMD's answer SHOULD be that code that really exploits AVX2 probably runs even better on the GPU, and that's where you should be running it. Of course their ability to force such an answer on the market depends on:
- (for some code) the double performance of their GPUs
- the quality of their compilers and other tools (maybe THIS, with AMD money, is behind the recent push to get LLVM working well on Windows and in Dev Studio?)
- the extent to which HSA --- now that it's a fully delivered product --- behaves the way we expect such a product should. For example, how high is the overhead now of toggling from CPU to GPU to perform a smallish computation, then reverting to the CPU? In principle, with HSA it should be no more expensive than running the computation on another CPU --- the combination of shared address space and interrupts to force immediate code execution should see to that. In practice it may still be substantially higher if, for example, there are implementation limitations in how fast data travels through the LLC between CPU and GPU, and (worst of all for AMD) there may be a necessity for OS involvement to handle the interrupts optimally for getting kernel code to run ASAP. If so, who knows when MS will ship that? (And will they only ship it for Windows 10?)
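The break-even question raised above can be put into a toy model. All the constants below are invented placeholders, not measured figures; the point is only that a fixed dispatch/sync overhead sets a minimum problem size below which offloading to the GPU loses, which is exactly the term HSA's shared virtual memory is meant to shrink:

```python
# Toy break-even model for CPU-vs-GPU offload (illustrative numbers only).
# A kernel over n elements runs on either device; the GPU is faster per
# element but pays a fixed dispatch/sync overhead per offload.

def cpu_time_us(n, per_elem_us=0.010):
    return n * per_elem_us

def gpu_time_us(n, per_elem_us=0.001, dispatch_us=50.0):
    # dispatch_us models queueing, cache traffic and any OS involvement.
    return dispatch_us + n * per_elem_us

def break_even_n(cpu_us=0.010, gpu_us=0.001, dispatch_us=50.0):
    # Solve dispatch + n*gpu_us == n*cpu_us for n.
    return dispatch_us / (cpu_us - gpu_us)

if __name__ == "__main__":
    print(f"offload pays off above ~{break_even_n():.0f} elements")
    for size in (1_000, 10_000):
        print(size, cpu_time_us(size), gpu_time_us(size))
```

Halving the assumed dispatch overhead halves the break-even size, which is why the interrupt/shared-address-space machinery matters so much for "smallish" computations.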
They could go really "crazy" and implement AVX2 (and future extensions) on the GPU side of the APU directly (or via some kind of decoder/translator) and really push their HSA stuff that way.
The most recently discovered benchmark of AMD's Carrizo FX mobile, scoring P2645 in 3DMark 11 Performance, indicates that it has more graphics power than the very expensive Iris Pro 5200!
Combined with other enhancements and capabilities such as next-gen GCN with delta color compression, hardware H.265 decode, HSA readiness, along with DX12 and FreeSync support, you have the makings of a mobile and AIO/ITX bestseller.
However, the CPU-centric Physics score there shows the CPU side is still weak (when compared to 15W dual-core Broadwell-U processors like the i7-5500U). And given this Carrizo FX may have a 28W TDP (for the top SKU), it may just end up like the 35W FX-7600P or 20W FX-7500, which are found in very few laptops. Graphically it is still much weaker than most modern mobile discrete GPUs found in gaming laptops. Also, it isn't much faster than a desktop APU like the A10-7850K. And having an H.265 decoder or being HSA-ready does very little in practice (especially when Broadwell and even Haswell now have H.265 decoding too; additionally, too few applications use OpenCL or HSA)...
One thing to consider is the price these will be sold at.
Comparing to an i7-5500U is probably not going to be an apples-to-apples comparison, because that chip costs nearly $400 (according to http://www.notebookcheck.net/Intel-Core-i7-5500U-N... ). Carrizo chips are probably going to be priced competitively with i3 Broadwells, which have a lot of features disabled.
Additionally, full hardware H.265 decoding is going to be more power efficient than doing core aspects of it on the GPU shaders, as with Intel's hybrid solution.
The real barrier to adoption is going to be the OEMs.
Due to its pricing, that's typically where most APUs end up today: in budget laptops. Premium laptops nowadays are mostly ultrabook-style (very thin and stylish), which requires low-TDP processors (which is why Intel has had such a good run here). That is something AMD is working on with Carrizo (with its 10W lower range for Carrizo-L and 15W lower range for Carrizo), although they still need to match Intel on the CPU side of things as well (if AMD wants to get better pricing).
Fixed-function decoders have one problem: lack of flexibility. Hybrid and software-based decoders have better flexibility. The new Intel drivers for Broadwell (and some Haswell models) can support H.265, including 10-bit H.265, and Google's VP9 (though I wish they also handled 10-bit H.264, a.k.a. "Hi10p"). So far AMD has not mentioned whether 10-bit H.265 and VP9 are supported (I would assume it's standard 8-bit H.265 only).
For OEM and ODM adoption, the product has to be "attractive" enough to command good volume. On-schedule product delivery timelines also have to be up to par.
The CPU is not a bottleneck at that level of GPU performance in most, if not all, games. Now, if you were pairing it with a mid-level or faster dGPU: the A10-5750 drove a 7970M at 50 to near 100% of the frame rates of a 45W i7. So, cut that GPU in half to 640 GCN cores, and in theory the A10-5750 should never be the bottleneck at "playable" (45-60 fps, IMHO) settings. The 8-CU GCN iGPU is only 512 cores, so it should perform similarly to an i7 with the same GPU setup (which doesn't exist, of course) when gaming.
The issue with all of these APUs has always been one thing - memory bandwidth. AMD's memory controller, even in dual channel mode, even with 2400/2133 memory, is terrible compared to Intel's. Something like barely 70% of the throughput that Intel achieves with 1600 memory. You're right that the CPU is not the bottleneck - the memory bandwidth and latency are. Terrible, just terrible, on every revision of the BD cores thus far. Which is funny, because memory access was clearly a problem all the way back with the original BD AM3+ launch, and yet despite preaching APUs are the future for going on 3 revisions now, AMD still hasn't figured out how to fix the biggest bottleneck on these chips. To see this in action - look at benchmarks of the A8-7600 vs A8-7800 - fps are within 5 percent, despite the fact that the 7800 clearly has a bigger, better GPU, and faster CPU.
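The theoretical peaks behind this bandwidth comparison are easy to compute from the interface width alone (achieved throughput, as the comment notes, can fall well short of these):

```python
# Peak DRAM bandwidth = channels * bus width (bytes) * transfer rate (MT/s).
# DDR3 uses a 64-bit (8-byte) data bus per channel.

def peak_gb_s(mt_per_s, channels=2, bus_bytes=8):
    return channels * bus_bytes * mt_per_s / 1000  # decimal GB/s

for speed in (1600, 2133, 2400):
    print(f"DDR3-{speed}, dual channel: {peak_gb_s(speed):.1f} GB/s peak")
# DDR3-1600 -> 25.6 GB/s, DDR3-2133 -> 34.1 GB/s, DDR3-2400 -> 38.4 GB/s
```

So even at DDR3-2400 the APU only has about 50% more raw bandwidth than a DDR3-1600 setup, and any controller inefficiency eats directly into that margin.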
Yes: full hardware H.265 support with 18Gbps HDMI 2.0, as previously reported, plus all these low-power improvements; this could be the holy grail for a silent HTPC (ITX / small form factor) for the next generation of 4K / Ultra HD displays. An AMD NUC that people actually want!
Nowadays most laptops use very low-power processors, especially those in the 15W-and-lower range, because that allows the manufacturer to use cheaper and smaller batteries. It also makes them thinner (like the recent Dell XPS 13 reviewed here), smaller, and often fanless (especially those using Bay Trail processors). Additionally, many laptops nowadays are in the lower budget range (like the HP Stream series), thus cost is an issue. The majority of premium laptops and gaming laptops use Intel processors, although you can find a handful using AMD APUs (like the HP EliteBook series)...
Yes, that one had an AMD APU. It does have some problems with 1080p MKV video playback, example https://www.youtube.com/watch?v=aAzw3H8ww3M possibly due to either driver issue or its fixed function hardware decoder.
Umm... is it alright if I hope for an AMD tablet in the future? Is it okay? I won't be disappointed anymore? Or, at the very least, will there be a day when I can find a good AMD hybrid without crippled hardware?
Joking (half-serious) aside, I don't know if this can go fanless. The figures (the lowest was 2.5W) suggest this is still not a strong challenger in the mobile space against Bay Trail and the soon-to-be-released Cherry Trail. It won't touch Core M either, unless they cripple the clock speed. So everything depends on AMD's pricing and (sigh) SDP marketing.
I believe 28W is the highest for Carrizo, as their previous PowerPoint slides had shown. Also Carrizo is still on 28nm, as the foundries (TSMC and Global Foundries) still do not have production ready 14nm for their clients yet.
No matter how good you are with 28nm, you can do better with a process shrink or two (22nm to 14nm). We've been waiting too long for AMD to move to smaller nodes. If they could do 14nm at the same time as Intel (or the smaller nodes when they come around), then they could dominate Intel and get 30% market share. Just competing on price and a small number of benchmarks against cheaper Intel parts won't save them. Too many compromises and not enough process shrinks.
AMD is almost on its last legs. So far only embedded and semi-custom are working, thanks to the consoles. Ultra-low-power client? Never heard of it; 90%+ of that market (the NUC and its lookalikes) is Intel. ARM-based servers? No major news there either. Pro graphics? I don't know anything about this except the Mac Pro.
I'm curious about their financial report this year. If the 1st half is bad and Zen (or whatever they're putting out next year) isn't successful, then get ready for major changes: restructuring, a merger, a full buyout, a GPU-department sell-off, you name it.
Carrizo's 250mm2 is still much larger than even the desktop Haswell chip at 177mm2. Most mobile SoCs are much smaller. For example, Broadwell-Y is 82mm2 (without the PCH) and Bay Trail is 102mm2 (single chip). ARM SoCs are also smaller (typically under 100mm2); for example, Tegra 3 is 81.9mm2.
Bigger die sizes mean higher costs as well (fewer dies per wafer, a higher number of defective dies, etc.). Thus, being cheaper means AMD cannot earn much profit from them at all. And most of the pricing is based mainly on CPU performance and wattage rather than integrated GPU performance.
Actually, AMD should compromise in some areas, especially by shrinking that integrated GPU, since it takes up too much die space (given that Intel's mobile processors get by with less GPU performance). Then they should concentrate on increasing CPU performance as well as bringing down power consumption. Then perhaps AMD could sell them as cheaply, or at much higher prices than before (while earning more profit at the same time).
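The die-cost argument above can be made concrete with the standard first-order gross-dies-per-wafer estimate (edge loss only; defect yield, scribe lines and reticle limits are ignored, so these are optimistic ballpark counts, not foundry figures):

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Common first-order estimate: wafer area over die area,
    # minus a term for partial dies lost at the wafer edge.
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

for name, area in (("Carrizo", 250), ("Haswell 4+2", 177), ("Broadwell-Y", 82)):
    print(f"{name} ({area}mm2): ~{dies_per_wafer(area)} gross dies per 300mm wafer")
```

At a fixed wafer cost, the ~82mm2 die yields over three times as many candidates per wafer as the ~250mm2 one, before defect yield widens the gap further.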
On the other hand, 28nm is extremely mature with very high yields, even for large dies; the wafer cost is lower than 14nm/20nm, and there's lots of capacity. Most importantly, it's available to AMD, and they have to use what they've got, especially since they pay for those wafers whether they use them or not under their wafer agreement with GlobalFoundries.
There is also going to be Carrizo-L, which is pin compatible (=> happy OEMs), which will be a far smaller die. Not as small as Bay Trail of course, but it will have more performance, and hopefully it has the same power saving features as Carrizo.
The pricing of these mobile processors is primarily not based on integrated graphics performance. This is why you find most APUs in low-budget laptops, and Intel in the majority of premium laptops. And that's the reason for the 2nd paragraph (on the compromise). And it's not "decent" performance either, as mobile GPUs are typically much weaker than their desktop counterparts. If you want gaming, then get a real gaming laptop with a discrete mobile GPU. The integrated GPU inside the APU just will not cut it...
Read up on binning. Note that AMD has to burn through a certain amount of wafers or pay a fine, which means they can produce relatively large-die SoCs at a reasonable cost. Also, Carrizo-L is more competitive in perf/mm^2 and perf/W...
It would be more profitable to get more dies per wafer while still complying with that wafer supply agreement. "Reasonable" cost just will not cut it, since AMD has been continuously losing money with the same APU strategy every past quarter. Kaveri's die size, incidentally, is nearly the same at 245mm2. Carrizo's lowest wattage is 12W (according to http://techreport.com/review/27853/amd-previews-ca... ), thus the lower-end Carrizo-L is not going to displace Broadwell-Y (a.k.a. "Core M") in this respect. And performance-per-watt wise, it will not displace Broadwell-U either. Typically, pricing is based primarily on CPU performance rather than integrated GPU performance.
Intel has the fabs, but do they have the yields? Not yet. Decent 14nm production is still pretty much utopian until Q3-Q4. Only a handful of Broadwell-U processors are on the streets, Broadwell-K is still 3 months away, and Skylake has been postponed again.
This time around TSMC and GloFo will be catching up with Intel for real. Maybe Intel will pull ahead again by 2017 with 10nm, but at the very least all of 2016 will be 14/16nm for all of the major fabs.
Also, by holding back the tech, they can really pounce on AMD as soon as Zen is released, by releasing yet another iteration in short order due to all the delaying.
Hahahaha. You think jumping nodes is still easy? Ever heard of Rock's law? It is very closely related to Moore's law. Intel has been holding at about 10-11 billion dollars of capital spending per year for a few years now, if I recall correctly. Samsung's spending appears to be slowing down to about match Intel's (once more, iirc). However, TSMC and GloFo both appear to have a continued rise in spending.
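Rock's law, mentioned above, says the capital cost of a leading-edge fab doubles roughly every four years, which compounds brutally. A two-line sketch (the $10B starting point is illustrative, echoing the ballpark annual figure quoted in the comment, not an exact fab price):

```python
def fab_cost_billion(base=10.0, years=0, doubling_period=4):
    # Rock's law: fab capital cost doubles about every four years.
    return base * 2 ** (years / doubling_period)

for y in (0, 4, 8, 12):
    print(f"year {y:2d}: ~${fab_cost_billion(years=y):.0f}B")
# year 0: ~$10B, year 4: ~$20B, year 8: ~$40B, year 12: ~$80B
```

That exponential is why fewer and fewer companies can afford to keep "jumping nodes" at all.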
Nice and impressive, but I feel bad because this chip will battle Intel's 14nm chips, which is rather a long shot from 28nm. And Intel can afford a price war with the state-of-the-art Carrizo.
Right. That's with Bay Trail devices. For bigger devices such as notebooks with Carrizo, where I expect it to be used more, it might end up more expensive than an Intel Celeron with the same performance and slightly higher power consumption.
Why are they even bothering? None of these chips will ever appear in a decent laptop worth more than $300. Where are all the value-added, feature-packed AMD APU laptops with 1080p IPS screens, 256GB SSD, 8GB RAM, Blu-ray, etc.? Yes, that's right... they never existed, and never will!
Yeah, that's true. I would never buy or recommend a notebook/laptop with 1366x768 these days, to anyone, ever. No matter the price; it's just so 2006 (as they say).
So they have significantly better energy efficiency and die area savings on top of any savings from moving from 32nm to 28nm. Yet they don't use this to produce some many-module chips for their desktop & server platforms. With this technology they could probably offer 6 modules cheaper than their current 4-module chips. Single-threaded performance still won't be great, but for some markets that's OK. That they're not doing this shows just how bad their financial situation is: they focus heavily on whatever they can still sell in half-decent quantities and try nothing else. If this makes them survive, well, so be it. It's just sad to see them develop these optimizations, yet not use them across most of their product range.
I agree. How about they release these at 65W and 95W for desktop APUs, and as 125W AM3+ drop-ins? With 30% more frequency at the same power, they could release 4.5GHz 8-cores or something.
There's a graph that shows that the power savings due to high density design effectively end at 20W and beyond... so, energy-efficient Carrizo chips are pretty much geared towards lower TDP form factors.
They are hoping to build greater enthusiasm for Zen by having a much more massive jump in improvement. Of course, Intel has been delaying Skylake forever so it will be able to drop 10nm or whatever on AMD as soon as Zen is released.
I didn't even read the OP. The new CPU will have around the same single-threaded performance as a Nehalem Intel CPU from 2008. I have lost faith that AMD will ever produce something even remotely competitive with Intel's Sandy Bridge architecture from 2011, to say nothing of producing something that will give Haswell a run for its money. AMD will be on par with Haswell-level tech around 2018-2019.
Well, there's Zen coming up with a brand-new architecture, with Jim Keller (of Athlon fame) back on board as chief architect... who knows, AMD might catch up in 2016.
Kaveri is already competitive in terms of performance per watt compared to Sandy Bridge. With Carrizo it appears that they've passed Haswell and (almost) reached Broadwell. Pure performance is still behind, but it's good to see some competition again. Plus, CPU performance is hardly an issue at these levels: unless you have extremely CPU-intensive tasks, you'll be just fine with anything available on the market today. The good news is that the GPU now appears to seamlessly boost (some) CPU workloads. Things are looking interesting!
Is it just me, or does this seem really exciting? At these power levels you get a fanless HTPC suitable for casual gaming in HD. Not bad, not bad at all. Now, if we could get 6 cores (3 modules) and 2x the GPU at 50-60W on the desktop, that would be great.
Maybe you wouldn't buy such a laptop? Well, strange as it may seem, AMD is not in business to match your precise needs: you are not the star of anyone's movie except your own.
AMD have to make money where they can. Where they can is in providing this type of CPU to low-end laptops.
That's where AMD is in the laptop market. The manufacturers throw out a few crappy AMD-based laptops purely to give the illusion of competition. Wouldn't surprise me if Intel pays for them.
AMD continues to chase stuff that doesn't make money. Intel figured this out and goes high-end first, then serves the low end if desired, if there's fab space left, or, at this point, to stop ARM's advance up the chain (again, this will squish AMD's margins). You go broke doing the opposite. NV figured it out with SoCs (avoiding consoles, which crap on your CORE product R&D) and went high-end/auto until they can afford to do cheap volume for people who can't afford the high-end models. NV also figured out they'll have a better mobile market once gaming gets amped up (so go auto until gaming is king and the hardware is used to the max by those games); then their GPU is required, and they can easily get high-margin customers who want GREAT gaming that replaces consoles, etc. Why AMD is chasing poor people is beyond me. CHASE MONEY (the rich) and you get profits and margins in the 55-64% range (check Apple, not chasing the poor). Chase the poor and you get ~35% margins. If you have debt, chasing the poor just means you can afford the interest on your debt this year, but not much else. That is exactly what we see happening in their quarterly reports.
This is a loser. All of these revs of this junk do nothing. This would be a good chip if Intel didn't exist and the ARM armada (all the ARM vendors) weren't coming up the chain. AMD should be announcing a 14nm GF CPU (with no GPU; I say GF because they still have to use those wafers AFAIK, or go Samsung if TSMC can't deliver) that is a total IPC monster, to beat Intel and be paired with a top discrete chip for gamers' delight. People I know who used to be staunch AMD supporters now don't even mention them when discussing their next purchase. But we would switch instantly if they had a CPU monster sans GPU that even matched Intel for the same price. Most of us would be willing to pay even a tidbit more to support AMD, but would only do that for the same or better perf. We no longer talk AMD because perf just sucks on the CPU side, and we ALL disable the GPU on any of these.
Intel isn't selling a chip without a GPU to the enthusiast mainstream today, so all that GPU room is space AMD could be using for an IPC monster that could be priced in Intel's $350 range, or at least at equal pricing while beating their CPU perf. Most of the people who pay that have ZERO interest in the wasted GPU space Intel foists on us. You could sell a lot of HIGH-MARGIN stuff with a chip that beats Intel handily in CPU and comes with no wasted GPU. The last time AMD made real MONEY was with a MONSTER CPU that had no GPU ;) Intel's HIGH-end stuff is what allows them to make $13 billion, which in turn allows stupid stuff like throwing away $4.1B+ on giving away mobile chips (instead of just buying NV and putting out a real SoC to compete with ARM, but on a better process). A few more years losing $4B+ a year on mobile and Intel could have had NV for FREE... LOL. Management doesn't seem to get that point, or can't get NV to sell.
Either way, AMD needs to chase MONEY, not broke people (meaning people who can't afford a laptop with a discrete GPU, etc.). Their current road just guards what they have, which loses money. They need to attack something ELSE that MAKES money. 28nm barely better than last year's, using less power, does nothing against Intel, which has better R&D by MILES and a die shrink on top. Are you trying to stay the same NOTHING company, or finally make $1B+ profits again?
AMD has some great CPU architects back now, so why are they chasing parts that will be squeezed as ARM and Intel race toward each other, instead of chasing Intel's top end without a GPU, so you can BEAT their CPU and charge accordingly, which in turn means finally having some pricing power and a PROFIT for the whole YEAR? They are chasing a market that will be eaten by the ARM-Intel war. The perfect move is jumping ABOVE Intel in a shocker while they're distracted by the race down to ARM, all the while making enthusiasts blab about you at the water cooler again. In a DOWN PC sales market NV has thrived, while being able to throw away money on 5 failed SoCs until they could get discrete GPUs into them and gaming catches up to use them (hence the auto detour, where nobody is king yet). AMD should do PURE CPU IPC now and THEN come for the cheaper stuff after milking the enthusiast cow (you know, Titan buyers, 980/970, i7s, etc. - these people PAY).
I'm all for good deals and such as a consumer, but let some other sucker make those if they can afford low-margin junk (Intel, the ARM side, etc.). AMD needs to give me a reason to buy their chips AND their stock again. This is NOT how you do either. All this APU stuff is stealing from core GPU tech too (obliterated by Maxwell). Go back to being a straight CPU/GPU company. I WANT to buy those, but I can't. I'm forced to go NV for GPU and Intel for CPU unless I'm broke (which I'm not). I'm an AMD fan (their workers, older products, etc.) but a management HATER for years. That group doesn't get it. Chase the rich so you can afford to do some low-end stuff at some point, which with high enough volume maybe makes you some change (but they won't win volume from Intel in APUs, since Intel can just price those to death, especially with ARM coming up from the bottom end). Chase the poor first, however, and you just go broke with no margins. There is a reason NV launches the 980/970 first (same with AMD, in this case the 290/290X) and then deals lower-end parts to budget consumers later. There is a reason NV said they won't chase commodity $200 phones for now and will concentrate on high-end phones, tablets, and autos. NV said they wouldn't do consoles due to margins (how's that working out for AMD?). Learn, AMD, LEARN! And quickly!
shing3232 - Friday, February 27, 2015 - link
They are not BIG cores.
MrBungle123 - Friday, February 27, 2015 - link
"What AMD is doing @ 28nm blows away what Intel did." Blows away what Intel did when? What AMD has is barely competitive with Intel's long-retired 45nm Nehalem parts.
vred - Friday, February 27, 2015 - link
AMD is nowhere even remotely close to being able to run AVX code on the GPU.
Veritex - Monday, February 23, 2015 - link
http://www.3dmark.com/3dm11/9453670
BlueBlazer - Tuesday, February 24, 2015 - link
However the CPU centric physics score there shows the CPU side is still weak (when compared to the 15W dual core Broadwell-U processors like i7-5500U). And given this Carrizo FX may have 28W TDP (for top SKU), it may just end up like 35W FX-7600P or 20W FX-7500 where they are found in very few laptops only. Graphically still much weaker than most modern and recent mobile discrete GPUs found on gaming laptops. Also it isn't much faster than desktop APU like the A10-7850K. And having H.265 decoder or being HSA ready does very little actually (especially when Broadwell and even Haswell now has H.265 decoding also, additionally too few applications are using OpenCL or HSA)...psychobriggsy - Tuesday, February 24, 2015 - link
One thing to consider is the price these will be sold at.Comparing to an i7-5500U is probably not going to be an apples to apples comparison, because that chip costs nearly $400 (according to http://www.notebookcheck.net/Intel-Core-i7-5500U-N... Carrizo chips are probably going to be priced competitively with i3 Broadwells, which have a lot of features disabled.
Additionally full hardware H.265 decoding is going to be more power efficient than doing core aspects on the GPU, as with Intel's solution.
The real barrier to adoption is going to be the OEMs.
BlueBlazer - Tuesday, February 24, 2015 - link
Due to its pricing that's typically where most APUs end up today, with budget laptops. Premium laptops nowadays are mostly ultrabook style (very thin and stylish) which requires low TDP processors (which is why Intel had such a good run here). That's is something that AMD is working on with Carrizo (with its 10W lower range for Carrizo-L and 15W lower range for Carrizo), although they still need to match Intel at the CPU side of things also (if AMD wants to get better pricing).Fixed function decoders have one problem, the lack of flexibility. Hybrid and software based decoders have better flexibility. The new Intel drivers for Broadwell (and some Haswell models) can support H.265 including 10-bit H.265 and Google's VP9 (though wished it also had 10-bit H.264 a.k.a "Hi10p"). So far AMD has not mentioned whether 10-bit H.265 and VP9 is supported yet (I would assume its the standard 8-bit H.265).
For OEM and ODM adoption, the product has to be "attractive" enough to demand a good volume. On schedule product delivery timelines also has to be up to par.
testbug00 - Thursday, February 26, 2015 - link
The CPU is not a bottleneck at that level of GPU performance in most, if not all, games. Now, if you were getting a mid level or faster dGPU, the A10-5750 drove a 7970m at 50 to near 100% of an 45W i7. So, let's cut that GPU in half, 640 GCN cores, in theory, the A10-5750 should never be the bottleneck at "playable" (45-60fps IMHO) settings. The 8 GCN core GPU is only 512 cores, so, it should performance similarly to an i7 with the same GPU setup (Doesn't exist, of course) when gaming.takeship - Friday, February 27, 2015 - link
The issue with all of these APUs has always been one thing - memory bandwidth. AMD's memory controller, even in dual channel mode, even with 2400/2133 memory, is terrible compared to Intel's. Something like barely 70% of the throughput that Intel achieves with 1600 memory. You're right that the CPU is not the bottleneck - the memory bandwidth and latency are. Terrible, just terrible, on every revision of the BD cores thus far. Which is funny, because memory access was clearly a problem all the way back with the original BD AM3+ launch, and yet despite preaching APUs are the future for going on 3 revisions now, AMD still hasn't figured out how to fix the biggest bottleneck on these chips. To see this in action - look at benchmarks of the A8-7600 vs A8-7800 - fps are within 5 percent, despite the fact that the 7800 clearly has a bigger, better GPU, and faster CPU.maglito - Tuesday, February 24, 2015 - link
Yes: full H.265 hardware support with 18Gbps HDMI 2.0 as previously reported, plus all these low-power improvements. This could be the holy grail of silent HTPCs (ITX / small form factor) for the next generation of 4K / Ultra HD displays. An AMD NUC that people actually want!

bleh0 - Monday, February 23, 2015 - link
Maybe this time around there will be more than 5 laptops with the new APUs in them.

BlueBlazer - Tuesday, February 24, 2015 - link
Nowadays most laptops use very low-power processors, especially those in the 15W-and-lower range, because that allows the manufacturer to use cheaper and smaller batteries. It also makes them thinner (like the recent Dell XPS 13 reviewed here), smaller, and often fanless (especially those using Bay Trail processors). Additionally, many laptops nowadays are in the lower budget range (like the HP Stream series), so cost is an issue. The majority of premium laptops and gaming laptops use Intel processors, although you can find a handful using AMD APUs (like the HP EliteBook series)...

monstercameron - Wednesday, February 25, 2015 - link
hp stream 14?BlueBlazer - Wednesday, February 25, 2015 - link
Yes, that one had an AMD APU. It does have some problems with 1080p MKV video playback (example: https://www.youtube.com/watch?v=aAzw3H8ww3M), possibly due to either a driver issue or its fixed-function hardware decoder.

WorldWithoutMadness - Monday, February 23, 2015 - link
Umm... is it alright if I hope for an AMD tablet in the future? Is it okay? I won't be disappointed anymore? Or, at the very least, will there be a day when I can find a good AMD hybrid without crippled hardware?

Joking (half-serious) aside, I don't know if this can go fanless. The figures (the lowest was 2.5W) suggest this is still not a good challenger for the mobile platform against Bay Trail and the soon-to-release Cherry Trail. It won't touch Core M either, unless they cripple the clock speed. So everything depends on AMD's pricing and (sigh) SDP marketing.
Novacius - Monday, February 23, 2015 - link
Carrizo will very likely come with TDPs of 35W, 15W, and 10W, the latter only for a single-module configuration. But it should be possible at 14nm.

BlueBlazer - Tuesday, February 24, 2015 - link
I believe 28W is the highest for Carrizo, as their previous PowerPoint slides had shown. Also, Carrizo is still on 28nm, as the foundries (TSMC and GlobalFoundries) still do not have production-ready 14nm for their clients yet.

BlueBlazer - Wednesday, February 25, 2015 - link
Oops! Forget about my earlier reply. Looks like it is 35W, according to http://techreport.com/review/27853/amd-previews-ca...

tygrus - Monday, February 23, 2015 - link
No matter how good you are at 28nm, you can do better with a process shrink or more (22nm to 14nm). We've been waiting too long for AMD to move to smaller nodes. If they could do 14nm at the same time as Intel (or the smaller nodes when they come around), then they could dominate Intel and get 30% market share. Just competing on price and a small number of benchmarks against cheaper Intel parts won't save them. Too many compromises and not enough process shrinks.

Oxford Guy - Monday, February 23, 2015 - link
"If they could do 14nm at the same time as Intel..." If only AMD could have Intel's money and manpower.

WorldWithoutMadness - Tuesday, February 24, 2015 - link
According to this: http://www.anandtech.com/show/8913/amd-reports-q4-...
AMD is almost on its last legs. So far only embedded and semi-custom (from the consoles) are working.
Ultra-low-power client? Never heard of it. 90%+ of that market (the NUC and its lookalikes) is Intel.
ARM-based servers? No major news there either.
Pro graphics? I don't know anything about this except the Mac Pro.
I'm curious about their financial report this year. If the first half is bad and Zen (or whatever they're putting out next year) isn't successful, then get ready for major changes: restructuring, a merger, a full buyout, a GPU-department buyout, you name it.
BlueBlazer - Tuesday, February 24, 2015 - link
Carrizo's 250mm2 is still much larger than even the desktop Haswell chip at 177mm2. Most mobile SoCs are much smaller. For example, Broadwell-Y is 82mm2 (without the PCH) and Bay Trail is 102mm2 (single chip). ARM SoCs are also smaller (typically under 100mm2); for example, Tegra 3 is 81.9mm2.

Bigger die sizes mean higher costs as well (fewer dies per wafer, a higher number of defective dies, etc.). Thus, being cheaper means AMD cannot earn much profit from them at all. And most of the pricing is based mainly on CPU performance and wattage rather than integrated GPU performance.
Actually, AMD should compromise in some areas, especially by shrinking that integrated GPU, since it takes up too much die space (given that Intel's mobile processors get by with less GPU performance). Then they should concentrate on increasing CPU performance as well as bringing down power consumption. Then perhaps AMD can sell them just as cheap, or at much higher prices than before (while earning more profit at the same time).
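The dies-per-wafer argument above can be put in rough numbers. This is a back-of-the-envelope sketch using a standard edge-loss formula and a Poisson yield model; the defect density used here is an illustrative guess, since real 28nm numbers are not public.

```python
import math

def gross_dies(wafer_d_mm: float, die_area_mm2: float) -> int:
    """Approximate gross dies per wafer, correcting for edge loss."""
    s = math.sqrt(die_area_mm2)  # treat the die as square for the estimate
    return int(math.pi * (wafer_d_mm / 2) ** 2 / die_area_mm2
               - math.pi * wafer_d_mm / (math.sqrt(2) * s))

def poisson_yield(die_area_mm2: float, d0_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-d0_per_cm2 * die_area_mm2 / 100.0)

def good_dies(wafer_d_mm: float, die_area_mm2: float, d0: float) -> float:
    return gross_dies(wafer_d_mm, die_area_mm2) * poisson_yield(die_area_mm2, d0)

# Compare the die sizes mentioned above on a 300mm wafer,
# with an assumed defect density of 0.1 defects/cm^2.
for area in (250, 177, 82):  # Carrizo, desktop Haswell, Broadwell-Y
    print(f"{area} mm2: ~{good_dies(300, area, 0.1):.0f} good dies/wafer")
```

The small die comes out several times ahead on good dies per wafer, and the gap widens as defect density rises, which is the whole cost argument in a nutshell.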
psychobriggsy - Tuesday, February 24, 2015 - link
On the other hand, 28nm is extremely mature with very high yields, even for large dies; the wafer cost is lower than 14nm/20nm, and there's lots of capacity. Most importantly, it's available to AMD, and they have to use what they've got, especially since they pay for the wafers whether they use them or not under their agreement with GlobalFoundries.

There is also going to be Carrizo-L, which is pin-compatible (=> happy OEMs) and will be a far smaller die. Not as small as Bay Trail, of course, but it will have more performance, and hopefully it has the same power-saving features as Carrizo.
Oxford Guy - Tuesday, February 24, 2015 - link
Having decent GPU performance is the one reason to get an AMD part over an Intel.

BlueBlazer - Wednesday, February 25, 2015 - link
The pricing of these mobile processors is primarily not based on integrated graphics performance. This is why you find most APUs in low-budget laptops, and Intel in the majority of premium laptops. And that's the reason for the second paragraph (on the compromise). It's not "decent" performance either, as mobile GPUs are typically much weaker than their desktop counterparts. If you want gaming, then get a real gaming laptop with a discrete mobile GPU. The integrated GPU inside the APU just will not cut it...

monstercameron - Wednesday, February 25, 2015 - link
Read up on binning. Note that AMD has to burn through a certain number of wafers or pay a fine, which means they can produce relatively large-die SoCs at a reasonable cost. Also, Carrizo-L is more competitive in perf/mm^2 and perf/W...

BlueBlazer - Wednesday, February 25, 2015 - link
It would be more profitable to get more dies per wafer while still complying with that wafer supply agreement. "Reasonable" cost just will not cut it, since AMD has been continuously losing money on the same APU strategy every past quarter. Kaveri's die size, incidentally, is nearly the same at 245mm2. Carrizo's lowest wattage is 12W (according to http://techreport.com/review/27853/amd-previews-ca... ), thus the lower-end Carrizo-L is not going to displace Broadwell-Y (a.k.a. "Core M") in this respect. And performance-per-watt-wise, it will not displace Broadwell-U either. Typically, pricing is based primarily on CPU performance rather than integrated GPU performance.

testbug00 - Thursday, February 26, 2015 - link
Actually, an optimized 28nm node can be better in terms of performance, and perhaps even power, than a bleeding-edge 20nm or "fake" 14/16nm node.

colinw - Monday, February 23, 2015 - link
Someone call me when AMD actually releases a new CPU core that comes anywhere near competitive.

Real men have fabs.
Oxford Guy - Monday, February 23, 2015 - link
Having fabs is one thing. Keeping those fabs competitive is another matter.

Soulkeeper - Tuesday, February 24, 2015 - link
I can't help but think that AMD doesn't want my money.

gonchuki - Tuesday, February 24, 2015 - link
Intel has the fabs, but do they have the yields? Not yet.

Decent 14nm production is still pretty much utopian until Q3-Q4. Only a handful of Broadwell-U processors are on the streets, Broadwell-K is still 3 months away, and Skylake has been postponed again.
This time around TSMC and GloFo will be catching up with Intel for real. Maybe Intel will pull ahead again by 2017 with 10nm, but at the very least all of 2016 will be 14/16nm for all of the major fabs.
Oxford Guy - Tuesday, February 24, 2015 - link
"Broadwell-K is still 3 months away and Skylake has been postponed again."

That's because Intel has no competition and wants to clear existing inventory at the highest possible prices.
Oxford Guy - Tuesday, February 24, 2015 - link
Also, by holding back the tech, they can really pounce on AMD as soon as Zen is released, by releasing yet another iteration in short order thanks to all the delays.

Refuge - Tuesday, February 24, 2015 - link
Intel is way too interested in Moore's law to hold anything back against AMD.

testbug00 - Thursday, February 26, 2015 - link
They clearly aren't; otherwise their spending on fabs would currently be 18-20 billion per year. It isn't.

testbug00 - Thursday, February 26, 2015 - link
Hahahaha. You think jumping nodes is still easy? Ever heard of Rock's law? It is very closely related to Moore's law. Intel has been holding at about 10-11 billion dollars per year for a few years now, if I recall correctly. Samsung's spending appears to be slowing down to about match Intel's (once more, IIRC). However, TSMC and GloFo both appear to have continued rises in spending.

ToTTenTranz - Tuesday, February 24, 2015 - link
With Kaveri, the elephant in the room was the lack of memory bandwidth for the integrated GPU. They don't even mention it with Carrizo.
psychobriggsy - Tuesday, February 24, 2015 - link
GCN 1.2 has bandwidth-saving features that should fix some of the problems; even so, running a 512-SP GPU on DDR3 is always going to be limiting.

yannigr2 - Tuesday, February 24, 2015 - link
3DMark 11: http://www.3dmark.com/3dm11/9453670
P2645
with Generic VGA (1x) and AMD FX-8800P Radeon R7, 12 Compute Cores
25% faster compared to a 7600P, and faster than a 740M, I think.
BlueBlazer - Tuesday, February 24, 2015 - link
GT 740M can score P2820 in 3DMark11: http://www.3dmark.com/3dm11/9190589

yannigr2 - Tuesday, February 24, 2015 - link
It's losing on the graphics score. Also, the GPU and memory seem to have been overclocked in that result: http://www.notebookcheck.net/NVIDIA-GeForce-GT-740...
So, no. 740m is slower.
zodiacfml - Tuesday, February 24, 2015 - link
Nice and impressive, but I feel bad because this chip will battle Intel's 14nm chips, which is rather a long shot from 28nm. Intel can win a price war against the state-of-the-art Carrizo.

yannigr2 - Tuesday, February 24, 2015 - link
The bad thing is that this chip will battle against Intel's contra revenue, and there AMD has almost zero chance.

zodiacfml - Tuesday, February 24, 2015 - link
Right. That's with Bay Trail devices. For bigger devices such as notebooks, where I expect Carrizo to be used more, it might end up more expensive than an Intel Celeron with the same performance and slightly higher power consumption.

jabber - Tuesday, February 24, 2015 - link
Why are they even bothering? None of these chips will ever appear in a decent laptop worth more than $300. Where are all the value-added, feature-packed AMD APU laptops with 1080p IPS screens, 256GB SSDs, 8GB RAM, Blu-ray, etc.? Yes, that's right... they never existed, and never will!

Scandal, really.
Valis - Tuesday, February 24, 2015 - link
Yeah, that's true. I would never buy or recommend a notebook/laptop with 1366x768 these days, to anyone, ever. No matter the price, it's just so 2006 (or so they say).

MrSpadge - Tuesday, February 24, 2015 - link
So they have significantly better energy efficiency and die-area savings on top of any savings from moving from 32nm to 28nm, yet they don't use this to produce some many-module chips for their desktop & server platforms. With this technology they could probably offer 6-module chips cheaper than their current 4-module chips. Single-threaded performance still won't be great, but for some markets that's OK. That they're not doing this shows just how bad their financial situation is: they focus heavily on whatever they can still sell in half-decent quantities and try nothing else. If this makes them survive, well, so be it. It's just sad to see them develop these optimizations, yet not use them across most of their product range.

Pissedoffyouth - Tuesday, February 24, 2015 - link
I agree. How about they release these at 65W and 95W for desktop APUs, and as 125W AM3 drop-ins? With 30% more frequency at the same power, they could release 4.5GHz 8-cores or something.

firex123 - Tuesday, February 24, 2015 - link
There's a graph that shows that the power savings due to the high-density design effectively end at 20W and beyond... so the energy-efficient Carrizo chips are pretty much geared towards lower-TDP form factors.

testbug00 - Thursday, February 26, 2015 - link
20W PER MODULE. So, in theory, a 65W SKU is perfectly reasonable: 40W for the CPU and 25W for the GPU + MISC.

Urizane - Wednesday, February 25, 2015 - link
Making something wide (greater than 4 modules) that can run highly multithreaded software is in direct opposition to AMD's stance on HSA.

Oxford Guy - Tuesday, February 24, 2015 - link
They are hoping to build greater enthusiasm for Zen by having a much more massive jump in improvement. Of course, Intel has been delaying Skylake forever, so it will be able to drop 10nm or whatever on AMD as soon as Zen is released.

Achaios - Tuesday, February 24, 2015 - link
I didn't even read the OP. The new CPU will have around the same single-threaded performance as a Nehalem Intel CPU from 2008. I have lost faith that AMD will ever produce something even remotely antagonistic to Intel's Sandy Bridge architecture from 2011, to say nothing of producing something that will give Haswell a run for its money. AMD will be on par with Haswell tech around 2018-2019.

firex123 - Tuesday, February 24, 2015 - link
Well, there's Zen coming up, a brand-new architecture, with Jim Keller (of Athlon fame) back on board as chief architect... who knows, AMD may have something to catch up with Intel in 2016.

yankeeDDL - Tuesday, February 24, 2015 - link
Kaveri is already competitive in terms of performance per watt compared to Sandy Bridge. With Carrizo, it appears that they have passed Haswell and (almost) reached Broadwell. Pure performance is still behind, but it's good to see some competition again.
Plus, CPU performance is hardly an issue at these levels: unless you have extremely CPU-intensive tasks, you'll be just fine with anything available on the market today. The good news is that the GPU now appears to seamlessly boost (some) CPU performance. Things are looking interesting!
yankeeDDL - Tuesday, February 24, 2015 - link
Is it me, or does this seem really exciting? At these power consumption levels you get a fanless HTPC suitable for casual gaming in HD. Not bad, not bad at all. Now, if we can get 6 cores (3 modules) and 2x the GPU at 50~60W on the desktop, that would be great.
pidgin - Tuesday, February 24, 2015 - link
Why do they keep making these? I've never seen AMD in laptops ever.

name99 - Tuesday, February 24, 2015 - link
Oh, for crying out loud. Go to Best Buy and check out the sub-$400 laptops. Half of them have an AMD chip in them: http://www.bestbuy.com/site/searchpage.jsp?st=AMD+...
Maybe you wouldn't buy such a laptop? Well, strange as it may seem, AMD is not in business to match your precise needs: you are not the star of anyone's movie except your own.
AMD have to make money where they can. Where they can is in providing this type of CPU to low-end laptops.
jabber - Wednesday, February 25, 2015 - link
AMD + laptop = junk. That's where AMD is in the laptop market. The manufacturers throw out a few crappy AMD-based laptops purely to give the illusion of competition. Wouldn't surprise me if Intel pays for them.
TheJian - Tuesday, February 24, 2015 - link
AMD continues to chase crap that doesn't make money. Intel figured this out and goes high-end, then serves the low end if desired, or if it has some fab space left, or, at this point, to stop ARM's advance up the chain (which, again, will squish AMD's margins). You go broke doing the opposite. NV figured it out with SoCs (avoiding consoles, which crap on your CORE product R&D) and went high-end/auto until they can afford to do cheap volume for people who can't afford the high-end models. NV also figured out they'll have a better mobile market once gaming gets amped up (so go auto until gaming is king and the hardware is used to the max by those games); then their GPU is required, and they can easily get high-margin customers who want GREAT gaming that replaces consoles, etc. Why AMD is chasing poor people is beyond me. CHASE MONEY (the rich) and you get profits and margins in the 55-64% range (check Apple, not chasing the poor). Chase the poor and you get ~35% margins. If you have debt, chasing the poor just means you can afford the interest on your debt this year, but not much else. That is exactly what we see happening in their quarterly reports.

This is a loser. All of these revs of this junk do nothing. This would be a good chip if Intel didn't exist and the ARM armada (all ARM vendors) weren't coming up the chain. AMD should be announcing a 14nm GF CPU (with no GPU; I say GF because they still have to use the wafers AFAIK, or go Samsung if TSMC can't fix things) that is a total IPC monster, to beat Intel and be paired with a top discrete chip for gamers' delight. People I know who used to be staunch AMD supporters now don't even mention them when discussing their next purchase. But we would switch instantly if they had a CPU monster sans GPU that even matched Intel for the same price. Most of us would be willing to pay even a tidbit more to support AMD, but would only do that for the same or better perf. We no longer talk AMD because the CPU perf just sucks, and we ALL disable the GPU on any of these.
Intel isn't selling a chip without a GPU to the enthusiast mainstream today, so AMD could use all that GPU room for an IPC monster priced above Intel's $350 range, or at least at equal pricing while beating their CPU perf. Most of the people who pay that have ZERO interest in the wasted GPU space Intel foists on us. You could sell a lot of HIGH-MARGIN stuff with a chip that beats Intel handily in CPU and comes with no wasted GPU. The last time AMD made real MONEY was with a MONSTER CPU that had no GPU ;) Intel's HIGH-end stuff is what allows them to make 13 billion, which in turn allows stupid stuff like throwing away 4.1B+ on giving away mobile chips (instead of just buying NV and putting out a real SoC to compete with ARM, but on a better process). A few more years losing 4B+ a year on mobile and Intel could have had NV for FREE... LOL. Management doesn't seem to get that point, or can't get NV to sell.
Either way, AMD needs to chase MONEY, not broke people (meaning people who can't afford a laptop with a discrete GPU, etc.). Their current road just guards what they have, which loses money. They need to take something ELSE that MAKES money. 28nm that is barely better than last year's and uses less power does nothing against Intel, who has better R&D by MILES and a die shrink on top. Are you trying to be the same NOTHING company, or to finally make 1B+ in profits again?
AMD has some great CPU architects back now, so why are they chasing parts that will be squeezed by ARM and Intel racing toward each other, instead of chasing Intel's top end without a GPU so they can BEAT Intel's CPU and charge accordingly, which in turn means finally having some pricing power and a PROFIT for the whole YEAR? They are chasing a market that will be eaten by the ARM-Intel war. The perfect move is jumping ABOVE Intel in a shocker while they're distracted by the race down to ARM, all the while making enthusiasts blab about you at the water cooler again. In a DOWN PC sales market, NV has thrived, while being able to throw away money on 5 fake SoCs until they could get discrete GPUs into them and gaming catches up to use them (hence the auto detour, where nobody is king yet). AMD should do PURE CPU IPC now and THEN come for the cheaper stuff after milking the enthusiast cow (you know, Titan buyers, 980/970, i7s, etc.; these people PAY).
I'm all for good deals and such as a consumer, but let some other sucker make those if they can afford low-margin junk (Intel, the ARM side, etc.). AMD needs to give me a reason to buy their chips AND their stock again. This is NOT how you do either. All this APU crap is stealing from core GPU tech too (obliterated by Maxwell). Go back to being a straight CPU/GPU company. I WANT to buy those, but I can't. I'm forced to go NV for GPU and Intel for CPU unless I'm broke (which I'm not). I'm an AMD fan (their workers, older products, etc.) but a management HATER for years. That group doesn't get it. Chase the rich so you can afford to do some cheap stuff at some point, which, with high enough volume, maybe makes you some change (but they won't win volume from Intel in APUs, who can just price those to death currently, especially with ARM coming up from the bottom end). Chase the poor first, however, and you just go broke with no margins. There is a reason NV launches the 980/970 first (same with AMD, in this case the 290/290X), then delivers the lower end for poorer consumers later. There is a reason NV said they won't chase commodity $200 phones for now and will concentrate on high-end phones and tablets, or autos. NV said they wouldn't do consoles due to margins (how's that working out for AMD?). Learn, AMD, LEARN! And quickly!

jabber - Wednesday, February 25, 2015 - link
jabber - Wednesday, February 25, 2015 - link
If AMD wants to see quality AMD-based products on shelves, then they have only one option: make them themselves.
Just drop all the crappy E1 CPU junk! Real CPU/GPU/APU/RAM/SSD/IPS quality gear please.
sascha - Wednesday, February 25, 2015 - link
Nice, but I hope FM2+ will see a proper update as well before 2016: http://www.kitguru.net/components/cpu/anton-shilov...