They have so much headroom if they ever want to increase their power budget from 5-10 W and build a laptop (45 W) or even desktop/server (180 W) chip. Just up the clock frequency by 50% and they'd already be faster than any Intel chip available.
Combine this with high-bandwidth memory using through-chip vias, or just adding tons of cache & cores, and there are amazing possibilities here. Apple really could compete hard against Intel in their core market.
Firstly, we don't know if it would scale that well; as a low-power-optimized architecture, it may not be able to clock much higher than it does without lengthening the pipeline, and thus lowering IPC.
The guy is using TabletMark as a supposed antidote to Geekbench. The problem is that a Core i5 is scoring LESS than a weak-sauce Core M in photo/video editing work. Anyone who believes that deserves to be laughed at.
Do you think that article has any validity at all? I'm fine with A9X skepticism vis-a-vis Intel but if you're going to do a hatchet job, at least do it correctly.
Didn't read your link, but the rest of your post hits the nail on the head. And to push even further: if Apple somehow designed the A9(X) as a low-power architecture which could still take 50% higher clock speeds, they would have designed it badly, because by designing for fixed low frequencies you can gain a lot of power efficiency. That's why CPUs like Dothan or Brazos don't clock all that high, no matter the voltage & power you give them.
This. Apple is designing their SoC to look good on Geekbench and other popular benchmarks, with victim cache and an optimized layout. They are gaming the results to make their chips look as powerful as Intel's Core architecture. I don't blame them; everyone is doing it nowadays.
But... when you throw pure math at it, the big boys separate from the small boys.
Uhm, so the benchmarks are what inform CPU design decisions, not actual, real world use cases in mobile devices? Because, to me, servers/laptops and mobile devices have _hugely_ different use cases; servers/desktops need to perform at a high load continuously, while mobile devices usually need bursts of (decent) performance and mainly good GPU capabilities.
An iOS device’s main compute use case is drawing the UI, no? It stands to reason Apple would optimise for that and games. I’m sure they could also design a decent desktop CPU, but that wouldn’t be used in their mobile devices and, let’s face it, couldn’t easily beat Intel’s CPU designs (since Intel is not a bunch of amateurs).
Don't use logic on the internet when you can just yell "OMG THEY ARE GAMING THE BENCHMARKS" instead.
I'm not sure why people are in such denial over the beastliness of the A9X. I don't think it's more powerful than Intel Core processors, but it's very obvious that even if Apple's SoC progress slows down and we see 30-50% improvements over each year instead of 70-90%, Intel will inevitably face a situation in which even the most diehard Intel fanboy has to concede that Chipzilla is now Slowzilla.
The stupidity of people when they try to find a way to diminish what Apple has accomplished.
Apple is designing their SoC to look good on Geekbench? That's your premise? Then what about the fact it kills other ARM processors in countless other benchmarks? Did Apple also "cheat" for those as well?
No, probably not. It's Samsung and numerous Android vendors that have to cheat on benchmarks to make their inferior processors seem better than they actually are.
It's also dumb to say Apple is now smoking Intel chips, or soon will start to. They do impressive work, but people make them out to be gods of design. Apple and hyperbole go hand in hand on the Internet.
Considering the guys working for Apple are ex-DEC guys, I would say some of them would be considered gods of chip design. Remember, Intel won originally because they were cheap and compatible with x86, which doesn't apply today.
Yeah, just like how Intel's chips look very good on Super Pi. Those scores you mentioned are moot. How about some real-world performance between Intel and Apple? As you can see here, Apple does hold its own.
According to the results on NotebookCheck, the A9 processor is worse than the A8, which makes no sense, especially considering that the A9 has a much higher clock rate than the A8. So there is definitely something wrong with the benchmark used.
The correct way to measure performance is performance per TDP. This way you try to get as close to architectural efficiency as possible by removing the electrical portion, where designers purposefully trade off performance for power savings.
So by your numbers:
14 GF / 17 W ≈ 0.82 GF/W
A9 (3.5 W part): 1.2 GF / 3.5 W ≈ 0.34 GF/W
A9X (4 W part): 2 GF / 4 W = 0.5 GF/W
These are just using your numbers.
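(A quick sketch of that arithmetic, for anyone who wants to redo it with different inputs; the GFLOPS and TDP figures are just the rough estimates quoted in this thread, not measurements:)

```python
# Rough performance-per-TDP arithmetic using the estimates quoted above.
chips = {
    "Ivy Bridge 3317U": (14.0, 17.0),  # (estimated GFLOPS, TDP in watts)
    "Apple A9":         (1.2,  3.5),
    "Apple A9X":        (2.0,  4.0),
}

for name, (gflops, tdp_w) in chips.items():
    print(f"{name}: {gflops / tdp_w:.2f} GFLOPS per watt")
```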
The fact that you said the A8X gets 2.5 GF while the A9X gets 2 GF, less than the previous architecture, suggests something is probably incorrect with the estimation. There may be an issue with the compiler or whatever.
A really old benchmark like Linpack tends to be optimized for older architectures such as x86, so there are old link libraries built with the F77 (Fortran) compiler which are tuned particularly for it. So really you can only take things like this with a grain of salt. I don't think anyone has built a tuned numerical analysis library for the ARM architecture yet, so the compiler might do the least efficient things.
All benchmarks should be taken with a grain of salt. Any single benchmark can be taken out of context; what you should do is look at an entire collection of benchmarks and get an idea of relative performance.
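(If you'd rather not depend on a decades-old Fortran build at all, you can get a rough, like-for-like floating-point number by timing a big double-precision matrix multiply yourself; Linpack is dominated by exactly that kind of kernel. A minimal sketch, assuming NumPy is available and linked against whatever BLAS the platform ships, which is itself part of the point: library tuning matters as much as the silicon.)

```python
import time
import numpy as np

# Time a large double-precision matrix multiply and report achieved GFLOPS.
n = 2048
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

flops = 2.0 * n ** 3  # one multiply + one add per inner-loop step
print(f"{flops / elapsed / 1e9:.1f} GFLOPS (n={n}, {elapsed:.3f} s)")
```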
@astroboy888: "The correct way to measure performance is performance per TDP. This way you try to get as close to architectural efficiency as possible by removing the electrical portion, where designers purposefully trade off performance for power savings."
The rest of the post is actually pretty well thought out, so I'm going to assume this statement is within the bounds of mobile devices where architectural inefficiencies indirectly impact performance via thermals and power draw considerations. Outside of mobile this statement does not always hold. While I do agree that maintaining architectural efficiency is desirable, elegant, and often times beneficial to performance (think size or thermal constrained designs), sometimes people just don't care about the efficiency. Architectural inefficiencies in supercomputers bring massive power bills. Yet, I still can't find any ARM based supercomputers in the top 500. I see Intel, AMD, nVidia, and even Sparc based processors. Also, if performance per TDP were the only metric of merit, then nobody would overclock (not much anyways). After all, these architectures are usually running pretty close to peak efficiency. Overclocking in general sacrifices a pretty significant amount of power efficiency to get what isn't always a very meaningful gain in clock frequency. How much voltage did you add to get that last 100MHz? Had to go water cooling just to keep thermals under wraps you say?
I can't remember which thread, but I recently read a discussion on RWT confirming that Geekbench on desktop and mobile are completely different benchmarks. Until the next version is released, they won't be comparable.
So the level of confusion we have now isn't surprising. I imagine the A9 is a lot slower than our limited benches have led us to believe. But it is still a very fast core, so I'd love to see a more thorough analysis using big-boy benchmarks.
The reviewer makes some fundamental errors. He doesn't account for any optimisations in the software (hence why he is so confused by the Haswell i5 vs Broadwell Core M result) and he absolutely fails to understand that iMovie on iOS isn't doing anything like the same amount of work as it is on OSX.
Basically your link is useless to this discussion, but cheers for sharing.
Linpack isn't a relevant benchmark for a consumer product.
If you want benchmarks that matter, look at the Javascript ones. Something like SunSpider shows how close they are. You can hand-confirm this just by comparing responsiveness when you use iOS devices vs. Intel products; the iOS device is basically just as fast.
@vFunct: "If you want benchmarks that matter, look at the Javascript ones. Something like SunSpider shows how close they are."
Javascript performance is too reliant on the software that runs it to make a good CPU comparison. Same processor in two different browsers gives different results. Same browser across two different operating systems gives different results. That said, Javascript benchmarks do a pretty good job of capturing the overall web browsing experience of the end product.
You mean a benchmark that exercises AVX2. Well duh.
No-one is denying that AVX2 provides a whole lot of FLOPs. But WHY do the FLOPs on the CPU? Apple's solution would be to run them on the GPU --- and the people who used to care about Linpack (and now care about HPCG) generally think the same way...
Oh, you mean 3DMark Physics, which essentially gives you frequency*numcores? Don't believe me? Run that equation against the performance that you see, or you can just look at the source. (And BTW, this benchmark was so incompetently coded that it hardwired iOS as always using two cores, even for the A8X...)
Good luck finding anything in the real world that actually scales that way.
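(The claim is easy to check yourself. A small sketch using the publicly reported core counts and clocks; whether Physics scores really track this product is the poster's claim above, not an established fact:)

```python
# Claim above: 3DMark Physics scores roughly as clock_speed * core_count,
# and the iOS build reportedly hardwires two cores regardless of the SoC.
socs = {
    "A8X (iPad Air 2)": (3, 1.50),  # (physical cores, GHz)
    "A9 (iPhone 6s)":   (2, 1.85),
    "A9X (iPad Pro)":   (2, 2.26),
}

for name, (cores, ghz) in socs.items():
    used_cores = min(cores, 2)  # the alleged two-core cap on iOS
    print(f"{name}: freq*cores = {used_cores * ghz:.2f}")
# If the claim holds, observed Physics scores should line up with these products.
```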
Biased how? That's a nicely balanced review and his conclusion looks accurate: A9X is a phenomenal tablet SoC but it is NOT yet directly competing with Intel's higher-end offerings.
Yeah, just like how Intel's chips look very good on Super Pi. Those numbers you mentioned are moot. How about some real-world performance? As you can see here, Apple does hold its own.
It's hard to find meaningful comparable benchmarks. The Kraken Javascript benchmark is an interesting tool for comparing disparate platforms, because Javascript runs most everywhere, and because on systems that allow multiple browsers, modern browsers all perform at least roughly comparably.
Based on Kraken, it looks to me like a Skylake 6700K is still likely more than 2 times faster per core than the A9X (which means the A9X is extremely impressive). Of course workload will matter tremendously; there may be workloads where it beats Skylake as-is today.
In any case, scaling clock speed by 50% is not a trivial matter, and will almost certainly take at least twice as much power; and at that power level, there's not much difference between a Skylake core and the A9X.
All in all: it would be really cool to see A9X or derivative chips in larger machines, but don't expect magic. It's something of a minor miracle that it's as fast as it is already.
Your assumptions are flawed.
1. Power consumption does not increase linearly.
2. If the CPU is clocked higher, it will stall on other components, so they'll also need a boost.
3. There aren't any decent benchmarks out yet (MadeUpBenchmark 2000 et al. excluded).
1 is especially worth noting, since if power did scale linearly with clock speed, the Pentium 4s would have been amazing processors. Exponential scaling sucks, as do the spiraling costs you can get into when chasing clocks for more performance ends up costing you IPC.
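(To put rough numbers on "not linear": dynamic power scales roughly with frequency times voltage squared, and higher clocks usually demand more voltage. A back-of-the-envelope sketch; the 15% voltage bump is an illustrative assumption, not a measured figure:)

```python
# Rule of thumb: dynamic power P ~ C * V^2 * f.
freq_scale = 1.50  # the proposed +50% clock
volt_scale = 1.15  # assumed voltage increase needed to reach it (illustrative)

power_scale = freq_scale * volt_scale ** 2
print(f"~{power_scale:.1f}x the dynamic power for 1.5x the clock")  # ~2.0x
```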
Not to mention that increasing the power envelope AND stacking memory with TSVs on top of the die would utterly choke the chip, resulting in thermal throttling 100% of the time. There's a reason Apple has off-die memory on all their tablets (which I believe to be a large part of the reason for their huge lead in performance consistency compared to Android tablets, all of which use stacked memory). HBM or other high-bandwidth memory on an interposer? Sure. But don't put anything on top of your most power-hungry chip unless you really, really have to.
Also, Intel's Core chips cost ~10x more. It's a shame Windows RT had to die; it would've helped ARM provide the competition Intel direly needs in the PC market. I mean, for crying out loud, Intel is selling $160 Atom-based "Pentium" chips to OEMs now...
Windows 10M is still ARM. It runs Universal apps just like x86 Win10. As the library of Universal apps grows, this benefits not only Win10 x86 and Win10M, but also future Windows builds on any architecture. Have you seen Continuum in action? It's basically a full port already. They could eventually release another complete Windows ARM build. It would be able to run all Universal apps just fine, even the Native ones - they're cloud-compiled. Realistically they could port to another architecture (such as MIPS) if they had a reason to, and it would ALSO run all the Universal apps. They were careful not to box themselves in this time around.
No, at 147 mm² the actual production cost is very similar. Yields aren't going to be great either, due to the defect density involved, and wafer prices for TSMC 16nm aren't cheap. Hard cost for Apple on the A9X is estimated to be between $100-$120, which explains why it is so expensive. Plus you have the design cost for the A9X, so it's not that much cheaper for a fraction of the functionality, and it's a massive die compared to Core M at 89 mm². I see it as turbos and NOS bolted onto Civics to make them go fast; not as elegant or sexy as a nice V8.
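(For anyone who wants to sanity-check the die-size argument, the usual back-of-the-envelope is dies per wafer times a yield model; cost per good die is then roughly the wafer price divided by that. A sketch below; the die areas are the ones quoted in this thread, while the defect density is an openly made-up illustrative number, not a known TSMC or Intel figure:)

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Classic estimate: wafer area / die area, minus an edge-loss term.
    r = wafer_diameter_mm / 2
    return (math.pi * r ** 2 / die_area_mm2
            - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_cm2):
    # Simple Poisson yield model; real foundry models are more involved.
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

d0 = 0.2  # defects per cm^2, assumed for illustration only
for name, area in [("~147 mm^2 die (A9X)", 147), ("~89 mm^2 die (Core M)", 89)]:
    y = poisson_yield(area, d0)
    good = gross_dies_per_wafer(area) * y
    print(f"{name}: ~{good:.0f} good dies per 300 mm wafer (yield ~{y:.0%})")
```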
"Just upping the clock frequency by 50% and they'll already be faster than any Intel chip available" LOL. Now that is funny. It is a very impressive mobile chip and it scores very well on a mobile OS, but that isn't in the same ballpark with Intel Core and Xeon x86 chips. Its like you are comparing engines in a Ferrari and an 18 wheel diesel truck. Different purpose entirely. If you are in a Ferrari (a mobile OS) the Ferrari engine runs extremely swift. Put it in the 18 wheeler and try to haul a load of freight (a real full fledged OS) and it's an entirely different story. Its a great mobile ship, but lets not pretend it can do what x86 does. Maybe someday in a decade or two who knows.
I think you mean TSV "through silicon vias" instead of through-chip vias...and yes, in theory it would add a ton of bandwidth, but you'd run into thermal issues as well as yield issues. TSMC does not have mature TSV yields yet.
In their delusional pipe dreams, maybe. Heat output in chips doesn't scale in linear fashion, doubly so for ARM designs. You'd get something terribly hot and overpriced to hell.
No way; Intel is on a release cycle to increase profits. If they wanted to, they'd drop a monster bomb of a CPU and blow Apple out of the water. Intel can do better currently, they just choose not to because they're basically a monopoly.
The A9X GPU is already faster in many respects than the nVidia and AMD GPUs on current MacBook Pros. Apple can simply create its own GPU to team with Intel's CPU to both control its own destiny and to ride past the bottlenecks that nVidia and AMD are becoming. Ideally, Apple gets a license to create its own custom Intel processors. Certainly Apple can create them faster and more powerful than Intel can.
"Faster in many respects" - are you talking about GFXBench? That seems an outlier, with the A9X being faster than the Iris Pro in that, but that alone, with other tests being in the Iris Pros favor by 2-4X. Which you would expect, given the 47W power envelope and eDRAM.
Just think about it, the fabrication processes aren't wildly different in efficiency, and people are talking about a sub 10W SoC beating 40+ watt dedicated GPUs, or even 47W integrated ones?
To be fair, the GPU in this does also have an absolutely enormous tech lead over those NV/AMD GPUs. Of course not precisely an accident that Apple can get to higher tech stuff first :)
This is probably an unusually large gap though - the power limited 16nm GPUs are of course going to be hugely faster than the current ones.
Can you explain the "enormous tech lead" to me? When people think a sub-10 W SoC bests 47 W CPU-integrated GPUs or 40+ W dedicated laptop GPUs, I think something is fishy. There's no magic pixie dust in the semi industry anymore. Pretty sure GFXBench uses FP16 on mobile and FP32 on desktop, if that's your source. Other benchmarks put chips like the Iris Pro at 3-4x faster.
TSMC and Samsung 14/16nm are not comparable to Intels fabs directly. Intel is usually more efficient to the point where the previous ones can give current gen on other fabs a good run for the money, see here.
@tipoo: "TSMC and Samsung 14/16nm are not comparable to Intels fabs directly."
Agreed. It is more likely that the "enormous tech lead" is actually in favor of the nVidia/ATi camp rather than the PowerVR IP. It's like ATi's 6000 series and nVidia's Maxwell series, which made sacrifices to FP64 resources that weren't useful in games in order to spend more die area on resources that were. Did that make these chips more advanced? Granted there were other enhancements to both architectures, but this reallocation of resources was responsible in large part for the improvements. Pixel and texel fill rates look pretty good for PowerVR, but I'd sure like to see how it handles some more advanced features. It would be great if Unigine would port their Heaven and/or Valley benchmark to iOS. Then we could see how the A9X handles Extreme tessellation settings.
I think that is more about the fabs. AMD/nVidia are still rocking 28nm fab access. They both hoped to be on 20nm but that never panned out as planned. Now they are waiting their turn in line for 16nm.
That still doesn't account for what some people would have you believe is the A9X being 5X more efficient :P I think GFXBench is just throwing a lot of people off; it's using half precision on mobile. Realistically there's no way it's beating 40+ W dedicated GPUs on this fab generation; a few generations back, maybe.
5x is a stretch but it really could be close to that for some benchmarks vs the R9 370x in the MBPros. For starters, the pure fab difference is about 2x.
There's also a lot of room for the architecture to be more efficient at low power levels. The R9 M370x is still basically 77xx stuff and wasn't ever really optimised for low power operation, while the A9x very obviously is. Look at Maxwell say.
If that's only 50% more efficient you're up around 3x already.
If you then hit a benchmark that happened to favour the architectural differences in the A9x? 5x actually becomes fairly plausible.
Says as much about the current gpu in the MBPro's as anything else perhaps :)
'Reality distortion field'... only if you're the one(s) arguing AGAINST the tremendous power, speed and graphics prowess shown in the iPad Pro/A9X. The 'realities' are definitely there in black and white if you read the article, or many MANY others, on the performance of the iPad Pro.
Thanks for yet another awesome article! While there's no reason I NEED to know how Apple or anyone else's hardware works, I've been obsessed with hardware designs for decades, so I love knowing what's going on :)
I wonder if PowerVR's claims are true, that the 6 core 7th gen series PowerVR parts actually have the same performance as a Playstation 3/Xbox 360?
If it's actually true, that's pretty amazing, even if it is a decade later, even still!
Yeah, and I suspect phones will encroach on this generation's consoles faster, for a few reasons: even at launch they didn't start off as tanks, more like... hybrid cars. Mid-range, years-old GPU architecture even on launch day, and Jaguar cores for the CPUs.
If the A series SoC keeps increasing at this pace, say 90% gains every second year, 50% gains every first year, it would be breathing down their necks very fast.
I think the 6S is around ~200 GFLOPS on the GPU, IIRC, which bests the PS3's 190 and is probably around the 360's 240, but with a more efficient architecture that is probably doing more with a lower number (just using those for ballpark performance of course; they don't denote actual performance).
That would put the 12-cluster GPU in the iPad Pro at maybe around 400 GFLOPS?
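(For context on where ballpark figures like "~200 GFLOPS" come from: peak GPU throughput is usually estimated as clusters x ALU lanes x 2 FLOPs per FMA x clock. A sketch using the cluster counts from the article; the 32 FP32 lanes per cluster is the commonly cited Series7XT figure, the clock is an assumption rather than a confirmed spec, and FP16 rates on this IP are roughly double the FP32 numbers:)

```python
def peak_gflops(clusters, clock_ghz, fp32_lanes_per_cluster=32, flops_per_fma=2):
    # clusters * lanes * (FMA = 2 FLOPs) * clock, in GFLOPS when clock is in GHz
    return clusters * fp32_lanes_per_cluster * flops_per_fma * clock_ghz

assumed_clock_ghz = 0.533  # assumption, not a confirmed spec
print(f"A9 GPU  (6 clusters):  ~{peak_gflops(6, assumed_clock_ghz):.0f} GFLOPS FP32")
print(f"A9X GPU (12 clusters): ~{peak_gflops(12, assumed_clock_ghz):.0f} GFLOPS FP32")
```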
Are you using Imagination's numbers? I mean, nVidia also misled us with their Tegra. Correct me if I'm wrong, but FP32 ALUs work mutually exclusively with FP16 ALUs in PowerVR.
But I still think that Apple is using the Series 6XT as a template for their designs, and just went with a much wider one this time. I think if it were a Series 7 design, power consumption would be lower. Is there any way to find out for certain?
At this point it is very safe to assume it's Series 7. Series 6XT only scaled up to 8 GPU cores. So although A9 could in theory be using a 6XT design, A9X is without a doubt using a Series 7 design (and therefore A9 would be using a S7 design as well).
Anandtech, I sure hope when you do the detailed iPad Pro review that you dive deeper into the performance than just benchmarks.
Too many haters complaining that things must be rigged to make the A9X appear faster than it is, or that benchmarks aren't comparable between ARM and x86.
I think some real-world tests are in order. For example, what if you rendered the identical 4K video on an iPad Pro and several Intel mobile processors to see how long they take? Or use filters on photos to see how quickly they are able to complete. Though it might be tough finding equivalent software on both platforms to do such a test.
So apparently it does not edit raw 4K, not even one stream, so their claim of 3 streams of 4K editing seems to be for iPhone-shot video, which... kind of takes the "Pro" out of it, but would be great for home users.
Even if he is the most evil, deceitful man on the face of the planet, the different benchmarks he ran don't lie. Geekbench, which Anandtech does not use for cross-platform comparisons, is the only bench to favor the A9X. Oh, and GFXBench, but that one runs different workloads on iOS/Android vs OSX/Windows.
No, not "worth a read". What a self-embarrassing moron!
In order to somehow "prove" that the iPad Pro "can't edit 4K", Ung doesn't even attempt to actually edit 4K (which has already been confirmed to work well); instead he just digs until he finds a file format that isn't compatible with iMovie, then crows that a banal and deliberate file-format incompatibility somehow "proved" that "4K does not work", which is utter nonsense and has zero relation to the question he claimed to be asking.
There's clearly no integrity there.
Ung is nothing but a low-rent click-baiting troll, and plenty of people lap it up without questioning it.
Bingo. Then there's his test of the MacBook Pro vs the Surface to check Microsoft's claim that it's 2x faster. He titles the article "It's not 2x faster, it's 3x faster." Then he proceeds to check the Surface with the discrete GPU against the MacBook with integrated graphics and cherry-picks certain benchmarks where the Surface GPU is actually 3x faster. Oh, and ignores the CPU tests, where the MacBook edges the Surface.
Yeah, in all fairness, once you match up the storage size and RAM, the Surface Book with the dGPU ends up costing about the same as the base 15" MBP, not the 13", though in size it's comparable to the 13". So I could see it argued both ways which comparison is more fair, but cost-wise, the Iris Pro would end up just a bit behind the 940M with GDDR5 in there, and the 15" MBP has a true quad-core CPU.
Compared to the Iris 6100 in the 13" Pro, sure it may be 3x faster on GPU tasks alone. Not CPU.
Awesome article. It raises two questions for me: how does OS X run on an ARM CPU (competitively?), and by extension, would an ARM-powered Macbrick or iMacbox be something Apple could or would sell, perhaps as a budget product tier? (I know budget and Apple are mutually exclusive :) )
OS X can't run on ARM; it's x86. An ARM version of OS X and an ARM MacBook are likely in the future, though. The biggest issue is developer support: everyone would have to rewrite their applications for ARM.
If you want performance, you'd have to do at least a partial rewrite, since not all extensions translate directly. Might not be a problem for something like a word processor, but something computationally intensive surely will be.
Apple has been using the same approach on ARM as on Intel for some years now, even with the same frameworks and the same or very similar APIs in many cases.
So what else is on the chip? I see a mirrored area that looks like a mini dual-core CPU, and an area with 3 IP blocks that are duplicated right below, as if they just doubled up whatever the circuitry is.
I think those are cache tags, not "mini CPUs". Just a guess from spending way too long analysing die shots :P
If it is a mini CPU, it could be security related like AMD Trustzone and the Wii/Wii U security chip or whatever, but that doesn't seem quite right as Apple has dedicated blocks for that.
Not surprised at all if you follow their strategy over the last several years. The ARM camp is trying to increase performance while staying within the power envelope; Intel is trying to reduce its power consumption while keeping performance flat. I think they have both succeeded today, although ARM has a lot more market momentum. At the moment I am typing on a Lenovo Yoga 2 10" tablet with a detachable keyboard (like the Surface). I just bought this device for less than $200, and its battery lasts all day. Beat that, Apple.
Now, for Apple to switch the CPU in all of their Macs, you have to put on a different hat. What is the cost of switching, and how would the customer benefit from it? Sure, Apple may save some money on the chip, but not a lot; Mac volume is not that big compared to the whole PC market. But the cost of porting the whole software ecosystem is huge. What about Windows compatibility? Also, Apple has been using a lot of PC hardware standards such as mSATA, USB, Thunderbolt, M.2, and PCIe for free over the years; those won't come free when they switch CPUs. And last but not least, what do buyers get from this? Cheaper Apple hardware? Yeah, right.
Apple has no interest whatsoever in making their products cheaper – they are pretty much the only computer manufacturer not participating in the downward spiral their competitors are caught in for lack of differentiation (they all have to use Windows, they all have to use Intel (or at most Intel-compatible) CPUs, so something's gotta give: and the sticker price is pretty much the only thing left in the PC market nowadays).
Apple is all about not getting into that race, and their whole game is working their butts off to offer unique selling points instead, something no PC manufacturer really can.
And those primarily revolve around OS X and its sibling iOS, and hardware-wise special points in build quality, performance and battery life. With Intel CPUs differentiation of Macs against Windows PCs has been difficult and limited. Switching to their own CPUs will give them a world of options which they've never had. They "just" have to actually reach performance/power points which are attractive to their customers so they're motivated for paying the prices Apple wants to ask for their unique products.
The cost of the CPUs is a factor, but given that Apple is already paying for all the heavy lifting in their CPU development from the iPhone profits and that even their more expensive contract manufacturing easily beats Intel's prices that's actually just another motivation for getting away from Intel.
They're the only computer manufacturer in the market with their own OS, with their own CPUs and with almost complete freedom to develop all of that in any direction they like and with much fewer legacy pressures.
The more unique their products are, the less pressure on the prices, at least from the customer segment actually interested in those unique selling points.
That is what Apple is all about, not simply making cheaper versions of their existing products as the PC manufacturers are pretty much forced to.
They're not the only one. High-end products that justify high margins exist from other manufacturers: Surface Book/Surface Pro, the S6 edge and Note series, and high-end consumer and pro-level hardware from Dell/HP/Lenovo/Asus.
There will be nothing cooler than seeing Apple go it alone in the PC space with ARM CPUs.
Apple is increasingly taking over the high-margin segment of the computer market just like they already own the high-margin segment in smartphones.
PC manufacturers are increasingly relegated to bit player status there – their bigger sales numbers are not happening vis-a-vis Apple's products but in the low-margin segments where Apple doesn't even want to compete.
Nope. Apple is by now still one of the biggest computer manufacturers, even though they only sell high-margin products – and they are one of the few with growing volume at the same time!
They are growing, but it's nothing explosive; it's bucking the trend, that's for sure. Then there's the element of image/status symbol that belongs exclusively to a MacBook. It's unfair to take a Lenovo ThinkPad Carbon, a high-end device, compare it to a 12" MacBook and say they are competing in the same market. They aren't.
Double-digit (high double-digit) percentages YoY certainly seem 'explosive' to me, especially considering EVERY other OEM is down YoY each and every quarter over the last three or four years.
Apple has not been caught in the downward price spiral? Really? Maybe in phones but in PCs they have followed the competition down. MacBook Air launched at $1800 and now is half that as PC ultra books have dropped in price.
The first Macbook Air wasn’t launched to replace any product from Apple, and its strategy seemed very much like the one used for the current MacBook Retina or the first MacBook Pro Retina, in order to test the waters and get experience building something new. Eventually a new version came out that replaced another product line at another price point, without changing the average selling price for Apple. It is the average selling price that you have to look at.
" It is the average selling price that you have to look at." When the MacBook Air was launched, the ASP of Macs was $1539, it is now $1205, that is a 22% decrease in ASP and Apple has scrapped its cheapest line of notebook products.
The idea that Apple is immune from competition in the PC world is a fantasy. Yes, people are prepared to pay more for a Mac, which, unlike in the smartphone market, is actually a significantly better-quality product than its competitors, but that premium is limited and Apple needs to respond to the market.
According to Apple: 2008 average revenue per Mac was $1440; in 2015 it was $1240 (a 14% decrease), with 110% unit sales growth while keeping basically the same entry price points. Do you think that is comparable with what happened in the PC industry, which grew around 5% and saw its average selling price roughly cut in half, to almost a third of the Mac ASP?
Apple didn't get caught in the negative aspects of the downward price spiral, while also helping to kickstart it.
The panic in the rest of the industry when Apple launched the iMac was interesting to witness. I was working at one of the larger consumer PC companies at the time.
Apple avoided the negative effects by approaching the issue differently. The iMac was a new product that responded to some consumer frustrations of the time. The products in most every competitor's lineup at that price were cheaper variants of the top-tier workstations they were making tons of money selling to businesses. The corners cut to scale those products down often added to the consumer frustrations of the time.
Timing also helped Apple greatly here. They lucked out on needing a great product to save the company right around the same time consumers were interested in getting on the internet. They delivered solid hardware that worked, and an OS that at the time wasn't that far behind the consumer Windows (9x) in reliability.
Meanwhile the PC industry was shipping "WinModems" that were more software than hardware and were a nightmare to deal with, while being caught up in Microsoft's fight against the open internet. Microsoft came to be a force that didn't allow the computer makers to differentiate themselves well enough to carve out their own markets, while also being a hindrance to lowering costs in the same areas Apple managed to. Microsoft's (later judged illegal) stick to get the OEMs in line was varying the price of Windows and Office licenses in some nasty ways. This ultimately became a major factor in why my former employer no longer exists. They wanted to differentiate and attempt to compete with Apple on that exclusive focus on consumers.
Apple has always been great when they launch a new revolutionary product. When they have to make small improvements for each annual refresh, they are giving their competitors time to catch up. I have yet to see any revolutionary product since the iPod, iPhone, and iPad; all have been yearly refreshes. The iWatch is an utter failure that hardly excites anyone.
I don't see that replacing the Intel chip with the A9X or even an A10X would give the Mac a rebirth. On the other hand, Intel won't stand still, and they would instantly become a formidable competitor upon divorce.
Comparing A9X die size to Skylake is apples and oranges. Apple has decided to incorporate the GPU and CPU on a single die, whereas Skylake separates the CPU and the GPU into separate dies in a single SoC package. So the sizes are not equivalent. For a modern GPU, because of the repetitive nature of the cores, die size is going to be much larger than that of the CPU.
Wait, I thought we were comparing Core M to the A9X? The Core M SoC has a GPU and a dual-core CPU. I bought the Cube i7 Stylus (Broadwell Core M) for $350. Pretty kick-ass running Windows 10; I haven't picked up the iPad as much now that I have a full Windows machine on my lap, with very snappy performance from the Broadwell Core M, which is an 82 mm² die. It has the build quality of an Asus T100 but much better performance.
No L3 cache!! That's the last thing I expected in the A9X; I would have guessed 6 MB of L3 and 12 MB of L4 :) But that's probably still miles ahead of my aged i5-3210M. BTW, why don't we call it GTA7850, as opposed to last year's GXA6850? Sounds terrific, heh.
One would think that with these changing configurations of cache and CPU/GPU cores, engineers would get frustrated supporting previous models. Seems like a good way to promote upgrading devices every one or two years. Imagine a more linear transition allowing five years of use out of a device, just as a PC CPU/mobo combo can survive five years. $1000 portable throw-aways, I guess.
vFunct - Monday, November 30, 2015 - link
They have such a huge headroom if they ever want to increase their power budget from 5-10Watts and build a laptop (45 Watt) or even desktop/server (180 Watt) chip. Just upping the clock frequency by 50% and they'll already be faster than any Intel chip available.Combine this with high-bandwidth memory using through-chip Vias, or just adding tons of cache & cores, and there are amazing possibilities here. Apple really could compete hard against Intel in their core market.
nathanddrews - Monday, November 30, 2015 - link
I don't disagree, but I'll believe it when I see it.tipoo - Monday, November 30, 2015 - link
Firstly we don't know if it would scale that well, as a low power optimized architecture it may not be able to clock much higher than it is without lengthening the pipeline, and thus lowering IPC.And for your other statement, I recommend reading every page of this
http://www.pcworld.com/article/3006268/tablets/tes...
Mondozai - Monday, November 30, 2015 - link
The guy is using Tabletmark as a supposed cure to Geekbench. The problem is that a core i5 is scoring LESS than a weak-sauce core-M in photo/video editing work. Anyone who believes that deserves to be laughed at.Do you think that article has any validity at all? I'm fine with A9X skepticism vis-a-vis Intel but if you're going to do a hatchet job, at least do it correctly.
MrSpadge - Monday, November 30, 2015 - link
Didn't read your link, but the rest of your post hit's the nail on its head. And to push even further: if Apple somehow designed the A9(X) as a low power architecture which could still take 50% higher clock speeds, they would have designed it badly - because by designing for fixed low frequencies you can gain a lot of power efficiency. That's why CPUs like Dothan or Brazos don't clock all that high, no matter the voltage & power you give them.kaspar737 - Monday, November 30, 2015 - link
If you run a real benchmark like Linpack the results aren't even funny. The A9X does like 1/10th of a 5W Broadwell.melgross - Monday, November 30, 2015 - link
Show us some numbers then. Besides its not being used much anymore, as it isn't useful these days.kaspar737 - Monday, November 30, 2015 - link
http://www.techpowerup.com/forums/threads/processo...Couldn't find the 5W Broadwell results I once found, but here the 3317u (17W Ivy Bridge) is doing about 28 GFLOPS, even if you take off half the score because of multithreading and slightly higher clock, it still leaves you with 14 GFLOPS.
http://www.notebookcheck.net/Apple-A9-Smartphone-S...
The Apple A9 gets about 1.2 GFLOPS.
http://www.notebookcheck.net/Apple-A8X-iPad-SoC.12...
A8X gets 2.5 GFLOPS.
I couldn't find the A9X results yet but I believe they will be somewhere around 2GFLOPS.
jasonelmore - Monday, November 30, 2015 - link
this, apple is designing their soc to look good on geekbench, and other popular benchmarks with victim cache and optimized layout. They are gaming the results to make their chips look as powerful as Intel's Core Architecture. I dont blame them, everyone is doing it nowadays.But....When you throw pure math at it, the big boys separate from the small boys.
xype - Monday, November 30, 2015 - link
Uhm, so the benchmarks are what inform CPU design decisions, not actual, real world use cases in mobile devices? Because, to me, servers/laptops and mobile devices have _hugely_ different use cases; servers/desktops need to perform at a high load continuously, while mobile devices usually need bursts of (decent) performance and mainly good GPU capabilities.An iOS device’s main compute use case is drawing the UI, no? It stands to reason Apple would optimise for that and games. I’m sure they could also design a decent desktop CPU, but that wouldn’t be used in their mobile devices and, let’s face it, couldn’t easily beat Intel’s CPU designs (since Intel is not a bunch of amateurs).
Mondozai - Monday, November 30, 2015 - link
Don't use logic on the internet when you can just yell "OMG THEY ARE GAMING THE BENCHMARKS" instead.I'm not sure why people are in such denial over the beastliness of the A9X. I don't think it's more powerful than Intel Core processors, but it's very obvious that even if Apple's SoC progress slows down and we see 30-50% improvements over each year instead of 70-90%, Intel will inevitably face a situation in which even the most diehard Intel fanboy has to concede that Chipzilla is now Slowzilla.
ciderrules - Monday, November 30, 2015 - link
The stupidity of people when they try to find a way to diminish what Apple has accomplished.Apple is designing their SoC to look good on Geekbench? That's your premise? Then what about the fact it kills other ARM processors in countless other benchmarks? Did Apple also "cheat" for those as well?
No, probably not. It's Samsung and numerous Android vendors that have to cheat on benchmarks to make their inferior processors seems better than they actually are.
Jumangi - Monday, November 30, 2015 - link
It's also dumb to say Apple is now smoking or will start to against Intel chips. They do impressive work but people make them out to be gods of design. Apple and hyperbole go hand in hand with the Internet.joelypolly - Monday, November 30, 2015 - link
Considering the guys working for Apple are exDEC guys I would say some of them would be considered gods of chip design. Remember Intel won originally because they were cheap and compatible with x86 which doesn't apply today.xenocea - Tuesday, December 1, 2015 - link
Yeah just like how intel's looks very good on super p1. Those numbers scores you mentioned are moot. How about real some really world performance between Intel and apple as you can see here, Apple does hold it's own.http://youtu.be/Kq5pruqwI7I
vFunct - Monday, November 30, 2015 - link
None of these benchmarks matter. They are measuring GFLOPS, which is irrelevant here.Ppietra - Monday, November 30, 2015 - link
according to the results in notebookcheck the A9 processor is worse than the A8, which makes no sense, specially considering that the A9 has a much higher clock rate than the A8. So there is something definitely wrong with the benchmark usedtoyotabedzrock - Monday, November 30, 2015 - link
Something is wrong when then newer chip is suddenly half the speed of the older one.astroboy888 - Monday, November 30, 2015 - link
The correct way to measure performance is performance per TDP. This way you try to get as close to architectural efficiency as possible by removing the electrical portion where designers purposefully trades off performance for power saving.So by your number:
14GF / 17 W = 0.782 GF/watt
A9 (3.5W part) = 1.2GF/ 3.5 = 0.343 GF / Watt
A9X (4W part) = 2GF / 4GF = 0.5 GF /Watt
These are just using your numbers.
The fact you said A8X gets 2.5GF and A9X is 2GF which is less than previous architecture, means something is probably incorrect with the estimation. There maybe an issue with the compiler or whatever.
A really old benchmark like Linpack tends to be optimized for older architectures such as X86, so there are old link libraries made with F77 compiler (fortran), which are tuned particularly for it. So really you can only take things like this with a grain of salt. I don't think anyone built a numerical analysis library the ARM architecture yet. So the compiler will might do the least efficient things.
All benchmark, you take it with a grain of salt. Any one single benchmark can be taken out of context, what you should do is to look at an entire collection benchmarks and get an idea of its relative performance.
BurntMyBacon - Tuesday, December 1, 2015 - link
@astroboy888: "The correct way to measure performance is performance per TDP. This way you try to get as close to architectural efficiency as possible by removing the electrical portion where designers purposefully trades off performance for power saving."The rest of the post is actually pretty well thought out, so I'm going to assume this statement is within the bounds of mobile devices where architectural inefficiencies indirectly impact performance via thermals and power draw considerations. Outside of mobile this statement does not always hold. While I do agree that maintaining architectural efficiency is desirable, elegant, and often times beneficial to performance (think size or thermal constrained designs), sometimes people just don't care about the efficiency. Architectural inefficiencies in supercomputers bring massive power bills. Yet, I still can't find any ARM based supercomputers in the top 500. I see Intel, AMD, nVidia, and even Sparc based processors. Also, if performance per TDP were the only metric of merit, then nobody would overclock (not much anyways). After all, these architectures are usually running pretty close to peak efficiency. Overclocking in general sacrifices a pretty significant amount of power efficiency to get what isn't always a very meaningful gain in clock frequency. How much voltage did you add to get that last 100MHz? Had to go water cooling just to keep thermals under wraps you say?
defaultluser - Monday, November 30, 2015 - link
This doesn't surprised me.I can't remember which thread, but I recently read a discussion on RWT confirming that Geekbench on desktop and mobile are completely different benchmarks. Until the next version is released, they won't be comparable.
So it's not surprising the level of confusion we have now. I imagine the A9 is a lot slower than our limited benches have lead us to believe. But it is still a very fast core, so I'd love to see a more thorough analysis using big boy benchmarks.
xenocea - Tuesday, December 1, 2015 - link
A real world test between the two.http://youtu.be/Kq5pruqwI7I
Spunjji - Wednesday, December 2, 2015 - link
The reviewer makes some fundamental errors. He doesn't account for any optimisations in the software (hence why he is so confused by the Haswell i5 vs Broadwell Core M result) and he absolutely fails to understand that iMovie on iOS isn't doing anything like the same amount of work as it is on OSX.Basically your link is useless to this discussion, but cheers for sharing.
vFunct - Monday, November 30, 2015 - link
Linpack isn't a relevant benchmark for a consumer product.If you want benchmarks that matter, look at the Javascript ones. Something like SunSpider shows how close they are. You can hand confirm this just be comparing responsiveness when you use iOS devices vs. Intel products - the iOS device is basically just as fast.
BurntMyBacon - Tuesday, December 1, 2015 - link
@vFunct: "If you want benchmarks that matter, look at the Javascript ones. Something like SunSpider shows how close they are."Javascript performance is too reliant on the software that runs it to make a good CPU comparison. Same processor in two different browsers gives different results. Same browser across two different operating systems gives different results. That said, Javascript benchmarks do a pretty good job of capturing the overall web browsing experience of the end product.
Spunjji - Wednesday, December 2, 2015 - link
Correct. vFunct's statement is basically propaganda.name99 - Monday, November 30, 2015 - link
You mean a benchmark that exercises AVX2. Well duh.No-one is denying that AVX2 provides a whole lot of FLOPs. But WHY do the FLOPs on the CPU?
Apple's solution would be to run them on the GPU --- and the people who used to care about Linpack (and now care about HPCG) generally think the same way...
id4andrei - Monday, November 30, 2015 - link
3dmark also trounces A9X in detriment of Intel.name99 - Monday, November 30, 2015 - link
You mean this:http://www.anandtech.com/show/9686/the-apple-iphon...
where Apple is at the top of the charts?
Oh, you mean 3DMark Physics, which essentially gives you frequency*numcores?
Don't believe me --- run that equation against the performance that you see, or you cam just look at the source.
(And BTW this benchmark which was so incompetently coded that it hardwired iOS as always using two cores, even for the A8X...)
Good luck finding anything in the real world that actually scales that way.
id4andrei - Tuesday, December 1, 2015 - link
You're showing me ARM benchmarks. I was referring to Intel vs A9x. Look here http://www.pcworld.com/article/3006268/tablets/tes...WasHopingForAnHonestReview - Wednesday, December 2, 2015 - link
Interesting review. Seemed biased but the facts are there.Spunjji - Wednesday, December 2, 2015 - link
Biased how? That's a nicely balanced review and his conclusion looks accurate: A9X is a phenomenal tablet SoC but it is NOT yet directly competing with Intel's higher-end offerings.Guest8 - Tuesday, December 1, 2015 - link
FP16 vs FP32 half precision ARM vs Full precision Intel. Google itxenocea - Tuesday, December 1, 2015 - link
Yeah just like how intel's looks very good on super p1. Those numbers you mentioned are moot. How about real some world performance as you can see here, Apple does hold it's own.http://youtu.be/Kq5pruqwI7I
emn13 - Monday, November 30, 2015 - link
It's hard to find meaningful comparable benchmarks. The kraken javascript benchmark is a an interesting tool to compare disparate platforms because javascript runs most everywhere, and because on systems that allow multiple browsers, modern browsers all do at least roughly comparably.Based on kraken, it looks to me like a skylake 6700k is still likely more than 2 times faster per core than the a9x (which means the a9x is extremely impressive). Of course workload will matter tremendously, there may be workloads where it beats skylake as is today.
In any case - scaling clockspeed by 50% is not a trivial matter, and will almost certainly take at least twice as much power - and at that power level, there's not much difference between a skylake core and a9x.
All in all: it would be really cool to see a9x or derivative chips larger machines - but don't expect magic. It's something of a minor miracle that it's as fast as it is already.
djgandy - Monday, November 30, 2015 - link
Your assumptions are flawed.1. Power consumption does not increase linearly.
2. If CPU is clocked higher, it will stall on other components, so they'll also need a boost.
3. There aren't any decent benchmarks out yet (MadeUpBenchmark 2000 et al, excluded)
xthetenth - Monday, November 30, 2015 - link
1 is especially worth noting, since if it held true, the Pentium 4s would have been amazing processors. Exponential scaling sucks, as do the sort of spiraling costs you can get into where getting the clocks to get more performance costs you IPC.Valantar - Monday, November 30, 2015 - link
Not to mention that increasing the power envelope AND stacking memory with TSVs on top of the die would utterly choke the chip, resulting in thermal throttling 100% of the time. There's a reason that Apple has off-die memory on all their tablets (which I believe to be a large part of the reason for their huge lead in performance consistency compared to Android tablets, all of which use stacked memory). HBM/other high bandwith memory on an interposer? Sure. But don't put anything on top of your most power hungry chip unless you really, really have to.extide - Monday, November 30, 2015 - link
BTW, the memory on top doesnt use TSV's.Valantar - Monday, November 30, 2015 - link
I know, I was only referring to vFunct's mention of TSVs.Krysto - Monday, November 30, 2015 - link
Also, Intel's Core chips cost ~10x more. It's a shame Windows RT had to die. It would've helped ARM provide the competition Intel direly needs in the PC market. I mean for crying out loud, Intel is selling $160 Atom-based "Pentium" chips to OEMs now..............jasonelmore - Monday, November 30, 2015 - link
thats a relative argument..Intel and Apples bill of materials is probably very close, with intel leading slightly
You cant go out and buy a A9X so you cant attach a price to it.. you have to buy a $700 worth of other stuff to get it.
I would say the profit margins and cost come out about the same when you factor in apple and intel's Cost and what you pay.
Alexvrb - Tuesday, December 1, 2015 - link
Windows 10M is still ARM. It runs Universal apps just like x86 Win10. As the library of Universal apps grows, this benefits not only Win10 x86 and Win10M, but also future Windows builds on any architecture. Have you seen Continuum in action? It's basically a full port already. They could eventually release another complete Windows ARM build. It would be able to run all Universal apps just fine, even the Native ones - they're cloud-compiled. Realistically they could port to another architecture (such as MIPS) if they had a reason to, and it would ALSO run all the Universal apps. They were careful not to box themselves in this time around.Guest8 - Tuesday, December 1, 2015 - link
No at 147mm the actual production cost is very similar. Yields aren't going to be great either due to defect density involved. Wafer prices for tsm 16nm aren't cheap. Hard cost is estimated to be between $100-$120 for Apple on a9x explains why it is so expensive. Plus you have design for A9x so not that much cheaper for a fraction of the functionality and massive die size compared to core m at 89mm. I see turbos and NoS bolted onto civics to get it to go fast not as elegant or sexy as a nice V8retrospooty - Monday, November 30, 2015 - link
"Just upping the clock frequency by 50% and they'll already be faster than any Intel chip available" LOL. Now that is funny. It is a very impressive mobile chip and it scores very well on a mobile OS, but that isn't in the same ballpark with Intel Core and Xeon x86 chips. Its like you are comparing engines in a Ferrari and an 18 wheel diesel truck. Different purpose entirely. If you are in a Ferrari (a mobile OS) the Ferrari engine runs extremely swift. Put it in the 18 wheeler and try to haul a load of freight (a real full fledged OS) and it's an entirely different story. Its a great mobile ship, but lets not pretend it can do what x86 does. Maybe someday in a decade or two who knows.menting - Monday, November 30, 2015 - link
I think you mean TSV "through silicon vias" instead of through-chip vias...and yes, in theory it would add a ton of bandwidth, but you'd run into thermal issues as well as yield issues. TSMC does not have mature TSV yields yet.nimish - Tuesday, December 1, 2015 - link
How can you say just increasing clock by 50 % ? If you can scale that much on any chip even atom core would become competitive and throw away client.Michael Bay - Tuesday, December 1, 2015 - link
In their delusional pipedreams, maybe.Heat output in chips doesn`t scale in linear fashion, doubly so for ARM designs. You`d get something terribly hot and overpriced to hell.
Samus - Tuesday, December 1, 2015 - link
That's true, this could make a hell of a netbook SoC.WasHopingForAnHonestReview - Wednesday, December 2, 2015 - link
No way, intel is on a release cycle to increase profits. If they wanted to, they'll drop a monster bomb of a cpu and blow apple out of the water. Intel can do better currently, they just choose not to because theyre basically a monopoly.Constructor - Wednesday, December 2, 2015 - link
So it's great that we'll see that hypothesis being put to a test pretty soon.tipoo - Monday, November 30, 2015 - link
So what are we talking in terms of Gflops for this 12 cluster series 7 GPU?That large of a GPU gain without the L3 cache, which I expected to be helping it along, it's almost more impressive.
jameskatt - Monday, November 30, 2015 - link
The A9X GPU is already faster in many respects than the nVidia and AMD GPUs on current MacBook Pros. Apple can simply create its own GPU to team with Intel's CPU to both control its own destiny and to ride past the bottlenecks that nVidia and AMD are becoming. Ideally, Apple gets a license to create its own custom Intel processors. Certainly Apple can create them faster and more powerful than Intel can.tipoo - Monday, November 30, 2015 - link
"Faster in many respects" - are you talking about GFXBench? That seems an outlier, with the A9X being faster than the Iris Pro in that, but that alone, with other tests being in the Iris Pros favor by 2-4X. Which you would expect, given the 47W power envelope and eDRAM.Just think about it, the fabrication processes aren't wildly different in efficiency, and people are talking about a sub 10W SoC beating 40+ watt dedicated GPUs, or even 47W integrated ones?
Qwertilot - Monday, November 30, 2015 - link
To be fair, the GPU in this does also have an absolutely enormous tech lead over those NV/AMD GPUs. Of course not precisely an accident that Apple can get to higher tech stuff first :)This is probably an unusually large gap though - the power limited 16nm GPUs are of course going to be hugely faster than the current ones.
tipoo - Monday, November 30, 2015 - link
Can you explain the "enormous tech lead" to me? People thinking a sub 10W SoC bests 47W CPU GPUs or 40+Watt dedicated laptop GPUs, I think something is fishy. There's no magic pixie dust in the semi industry anymore. Pretty sure GFXbench uses FP16 on mobile and FP32 on desktop, if that's your source. Other benchmarks put chips like the Iris Pro at 3-4x faster.Qwertilot - Monday, November 30, 2015 - link
Well probably a touch less than nothing tech wise vs iris pro of course!The mobile GPUs in the mac book pros though are of course still on 28nm. That really is a handicap vs 16 :)
tipoo - Monday, November 30, 2015 - link
TSMC and Samsung 14/16nm are not comparable to Intel's fabs directly. Intel is usually more efficient, to the point where its previous node can give the current gen on other fabs a good run for the money; see here:
http://www.extremetech.com/wp-content/uploads/2014...
Different fabs advertise in different ways, and not all 14nm is comparable.
Besides, even if the fab gives a good, say, 50% advantage - you're still talking a 5-10x efficiency advantage for the A9X!
I think it's just incomparable benchmarks instead.
alex3run - Monday, November 30, 2015 - link
If you look at fill rates, you will see 7.5 - 8.5 Gpixels/s and 7.5 - 8.5 Gtexels/s for the PowerVR GT7600. That is Xbox 360 level.
alex3run - Monday, November 30, 2015 - link
Sorry, I meant to reply to the post below.
BurntMyBacon - Tuesday, December 1, 2015 - link
@tipoo: "TSMC and Samsung 14/16nm are not comparable to Intels fabs directly."Agreed. It is more likely that the "enormous tech lead" is actually in favor of the nVidia/ATi camp rather than the PowerVR IP. It's like ATi's 6000 series and nVidia's Maxwell series that made sacrifices to FP64 resources that weren't useful in games in order to spend more die area on resources that were. Did that make these chips more advanced? Granted there were other enhancements to both architectures, but this reallocation of resources was responsible in large for the improvements. Pixel and texel fill rates look pretty good for PowerVR, but I'd sure like to see how it handles some more advanced features. It would be great if Unigen would port their Heaven and/or Valley benchmark to iOS. Then we could see how the A9X handles Extreme tessellation settings.
tipoo - Monday, November 30, 2015 - link
And the MBPs use Intel 22nm, not 28nm, which is a GloFo/TSMC process.
id4andrei - Monday, November 30, 2015 - link
I think the 15" MBP is the only product with 14nm Broadwell. I haven't heard of other notebooks with Broadwell in that TDP.MykeM - Tuesday, December 1, 2015 - link
The 15" is Haswell but both the 13" rMBP and the MacBook Airs are on Broadwell (14nm). The new 12" rMB is using the new Core-M and therefore Skylake.ciparis - Tuesday, December 1, 2015 - link
The 12" rMB is using the previous-generation Core-M. Skylake was not available in time.alex3run - Monday, November 30, 2015 - link
Maybe GFXBench loads more than just the ALUs, unlike 3DMark. But I still don't believe either of these benches. GameBench rules.
Rampart19 - Monday, November 30, 2015 - link
I think that is more about the fabs. AMD/nVidia are still rocking 28nm fab access. They both hoped to be on 20nm but that never panned out as planned. Now they are waiting their turn in line for 16nm.
tipoo - Monday, November 30, 2015 - link
That still doesn't account for what some people would believe is the A9X being 5X more efficient :P
I think GFXBench is just throwing a lot of people off; it's using half precision on mobile. Realistically there's no way it's beating 40+W dedicated GPUs on this fab generation - a few generations back, maybe.
Qwertilot - Tuesday, December 1, 2015 - link
5x is a stretch, but it really could be close to that for some benchmarks vs the R9 M370X in the MacBook Pros. For starters, the pure fab difference is about 2x.
There's also a lot of room for the architecture to be more efficient at low power levels. The R9 M370X is still basically 77xx stuff and wasn't ever really optimised for low-power operation, while the A9X very obviously is. Look at Maxwell, say.
If that architecture is only 50% more efficient, you're up around 3x already.
If you then hit a benchmark that happened to favour the architectural differences in the A9X? 5x actually becomes fairly plausible.
Says as much about the current GPU in the MBPros as anything else, perhaps :)
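To make the compounding argument above concrete, here is a minimal sketch in C. All three factors are illustrative assumptions for the sake of the argument, not measured values for the A9X or the R9 M370X:

    #include <stdio.h>

    /* Illustrative only: independent perf/W multipliers compound by
     * multiplication. The values below are assumptions, not measurements. */
    int main(void) {
        double process_factor      = 2.0;  /* assumed 28nm -> 16nm FinFET gain   */
        double architecture_factor = 1.5;  /* assumed low-power design advantage */
        double benchmark_factor    = 1.7;  /* assumed workload favouring the A9X */

        double combined = process_factor * architecture_factor * benchmark_factor;
        printf("Combined perf/W advantage: %.1fx\n", combined);  /* ~5.1x */
        return 0;
    }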
jasonelmore - Monday, November 30, 2015 - link
The reality distortion field is in full force with this one.
akdj - Thursday, December 3, 2015 - link
'Reality distortion field'... only if you're the one(s) arguing AGAINST the tremendous power, speed and graphics prowess shown by the iPad Pro/A9X. The 'realities' are there in black and white if you read the article, or the many, MANY others on the iPad Pro's performance.
Spunjji - Wednesday, December 2, 2015 - link
Funniest nonsense post of the year :'D
Wolfpup - Monday, November 30, 2015 - link
Thanks for yet another awesome article! While there's no reason I NEED to know how Apple or anyone else's hardware works, I've been obsessed with hardware designs for decades, so I love knowing what's going on :)
I wonder if PowerVR's claims are true, that the 6-cluster 7th-generation PowerVR parts actually have the same performance as a PlayStation 3/Xbox 360?
If it's actually true, that's pretty amazing - even if it did take a decade.
tipoo - Monday, November 30, 2015 - link
Yeah, and I suspect phones will encroach on this generation's consoles faster, for a few reasons. For one, even on launch day they didn't start off as tanks - more like... hybrid cars. Mid-range, years-old architecture even at launch for the GPU, and Jaguar cores for the CPUs.
If the A-series SoC keeps improving at this pace, say 90% gains every second year and 50% gains every first year, it would be breathing down their necks very fast.
I think the 6S is around ~200 GFLOPS on the GPU, iirc, which bests the PS3's 190 and is probably around the 360's 240, but with a more efficient architecture that is probably doing more with a lower number (just using those for ballpark performance of course - they don't denote actual performance).
That would put the 12-cluster iPad Pro at maybe around 400 GFLOPS?
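As a rough illustration of how those alternating yearly gains would compound, a short sketch; the ~400 GFLOPS starting point and the 50%/90% growth figures are the estimates from the discussion above, not measurements:

    #include <stdio.h>

    /* Sketch: project GPU throughput under alternating yearly gains.
     * Starting point and growth rates are rough guesses from the thread. */
    int main(void) {
        double gflops = 400.0;  /* assumed iPad Pro class starting point */
        for (int year = 1; year <= 4; year++) {
            double gain = (year % 2 == 1) ? 1.5 : 1.9;  /* 50%, then 90%, alternating */
            gflops *= gain;
            printf("Year %d: ~%.0f GFLOPS\n", year, gflops);
        }
        /* After three to four years this lands in the roughly 1.7-3.2 TFLOPS range,
         * i.e. console-class, which is the "breathing down their necks" argument. */
        return 0;
    }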
id4andrei - Monday, November 30, 2015 - link
Flops are different though. FP16 and FP32.
tipoo - Monday, November 30, 2015 - link
I know, pretty sure we talked about this in relation to GFXBench :P
But these are FP32; it would be nearing 800 if using half precision/FP16.
id4andrei - Tuesday, December 1, 2015 - link
Are you using Imagination's numbers? I mean, nVidia also misled us with their Tegra. Correct me if I'm wrong, but the FP32 ALUs work mutually exclusively with the FP16 ALUs in PowerVR.
tipoo - Tuesday, December 1, 2015 - link
No, combining the FLOPS per SIMD like AnandTech, and the clock speed derived from TechReport:
http://www.anandtech.com/show/9686/the-apple-iphon...
http://techreport.com/review/27376/apple-iphone-6-...
My memory was off though, 6S is closer to 180, so iPad Pro would be closer to 300 than 400
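For what it's worth, the arithmetic behind estimates like these is simple: clusters × FLOPs per cluster per clock × clock. The sketch below uses assumed values (32 FP32 lanes per cluster, 2 FLOPs per lane per clock for a fused multiply-add, and a guessed clock, since Apple doesn't publish GPU clocks), and it models FP16 as simply double the FP32 rate - treat the output as a ballpark, not a spec:

    #include <stdio.h>

    /* Ballpark GFLOPS estimate for a PowerVR-style GPU. All inputs are
     * assumptions taken from the discussion above. */
    static double gflops_fp32(int clusters, double clock_ghz) {
        const double flops_per_cluster_per_clock = 32.0 * 2.0;  /* 32 lanes, FMA */
        return clusters * flops_per_cluster_per_clock * clock_ghz;
    }

    int main(void) {
        double a9_clock  = 0.45;  /* ~450 MHz, assumed */
        double a9x_clock = 0.45;  /* unknown; assumed the same as A9 here */

        printf("A9  (6 clusters):  ~%.0f GFLOPS FP32\n", gflops_fp32(6, a9_clock));
        printf("A9X (12 clusters): ~%.0f GFLOPS FP32, ~%.0f GFLOPS FP16\n",
               gflops_fp32(12, a9x_clock), 2.0 * gflops_fp32(12, a9x_clock));
        return 0;
    }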
tipoo - Tuesday, December 1, 2015 - link
Actually, that's not taking into account whether the Pro is higher clocked, which I assume it is; it may end up closer to 400 FP32.
GC2:CS - Monday, November 30, 2015 - link
That thing is just a monster.
But I still think that Apple is using Series 6XT as a template for their designs, and just went with a much wider one this time. I think if it were Series 7, power consumption would be lower. Is there any way to find out for certain?
Ryan Smith - Monday, November 30, 2015 - link
At this point it is very safe to assume it's Series 7. Series 6XT only scaled up to 8 GPU cores. So although A9 could in theory be using a 6XT design, A9X is without a doubt using a Series 7 design (and therefore A9 would be using a Series 7 design as well).
ciderrules - Monday, November 30, 2015 - link
AnandTech, I sure hope when you do the detailed iPad Pro review that you dive deeper into the performance than just benchmarks.
Too many haters complaining that things must be rigged to make the A9X appear faster than it is, or that benchmarks aren't comparable between ARM and x86.
I think some real-world tests are in order. For example, what if you rendered the identical 4K video on an iPad Pro and several Intel mobile processors to see how long they take? Or use filters on photos to see how quickly they are able to complete. Though it might be tough finding equivalent software on both platforms to do such a test.
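A like-for-like test mostly comes down to running the same fixed workload on each device and timing wall-clock seconds. A minimal sketch of such a harness in C (the workload here is a placeholder compute loop, not a real video encode; for the comparison proposed above you would call the same filter or encoder library on both platforms instead):

    #include <stdio.h>
    #include <time.h>

    /* Placeholder deterministic workload; stands in for the real task. */
    static double workload(void) {
        double acc = 0.0;
        for (long i = 1; i <= 50 * 1000 * 1000L; i++)
            acc += 1.0 / (double)i;
        return acc;
    }

    int main(void) {
        struct timespec start, end;
        clock_gettime(CLOCK_MONOTONIC, &start);
        double result = workload();
        clock_gettime(CLOCK_MONOTONIC, &end);

        double seconds = (end.tv_sec - start.tv_sec)
                       + (end.tv_nsec - start.tv_nsec) / 1e9;
        printf("workload result %.6f in %.3f s\n", result, seconds);
        return 0;
    }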
tipoo - Monday, November 30, 2015 - link
Someone tried, worth a read:
http://www.pcworld.com/article/3006268/tablets/tes...
So apparently it does not edit raw 4K, not even one stream, so their claim of editing 3 streams of 4K seems to have been for iPhone-shot video, which... kind of takes the "Pro" out of it, but would be great for home users.
ciderrules - Monday, November 30, 2015 - link
I'll wait for AnandTech to test it. Gordon Mah Ung is not someone I'd trust to be unbiased.
id4andrei - Tuesday, December 1, 2015 - link
Even if he is the most evil, deceitful man on the face of the planet, the different benchmarks he ran don't lie. Geekbench - which AnandTech does not use for cross-platform comparisons - is the only bench to favor the A9X. Oh, and GFXBench, but that one runs different workloads on iOS/Android vs OS X/Windows.
Constructor - Tuesday, December 1, 2015 - link
No, not "worth a read". What a self-embarrassing moron!In Order to somehow "prove" that the iPad Pro "can't edit 4k" Ung doesn't even attempt to actually edit 4k (which has already been confirmed to work well) but instead he's just digging until he finds a file format that isn't compatible with iMove and then crows that a banal and deliberate file format incompatibility somehow "proved" that "4k does not work", which is utter nonsense and which has zero relation to the question he had claimed to be asking.
There's clearly no integrity there.
Ung is nothing but a low-rent click-baiting troll, yet some people lap it up without questioning it.
ciderrules - Tuesday, December 1, 2015 - link
Bingo. Then there's the test of the MacBook Pro vs the Surface to check Microsoft's claim that it's 2x faster. He titles the article "It's not 2x faster, it's 3x faster". Then he proceeds to check the Surface with a discrete GPU against the MacBook with an integrated GPU, and cherry-picks certain benchmarks where the Surface GPU is actually 3x faster. Oh, and ignores the CPU, where the MacBook edges the Surface.
The guy is a hack, and clearly has a bias.
Constructor - Tuesday, December 1, 2015 - link
Ah, that notorious piece of junk was him, too? Figures.
tipoo - Tuesday, December 1, 2015 - link
Yeah, in all fairness, once you match up the storage size and RAM, the Surface Book with the dGPU ends up costing about the same as the base 15" MBP, not the 13" - though in size it's comparable to the 13". So I could see it argued both ways which one the comparison is more fair to; but cost-wise, the Iris Pro would end up just a bit behind the 940M with GDDR5 in there, and the 15" MBP has a true quad-core CPU.
Compared to the Iris 6100 in the 13" Pro, sure, it may be 3x faster on GPU tasks alone. Not CPU.
WatcherCK - Monday, November 30, 2015 - link
Awesome article. It raises two questions for me: how does OS X run on an ARM CPU (competitively?), and by extension, would an ARM-powered Macbrick or Imacbox be something Apple could or would sell, perhaps as a budget product tier? (I know budget and Apple are mutually exclusive :) )
freeskier93 - Monday, November 30, 2015 - link
OS X can't run on ARM, it's x86. An ARM version of OS X and an ARM MacBook are likely in the future though. The biggest issue is developer support: everyone would have to rewrite their applications for ARM.
Marc GP - Tuesday, December 1, 2015 - link
Apple has been running OS X on their AX SoCs in their labs for years now.
Applications don't need to be rewritten, just recompiled.
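A hedged illustration of where "just recompile" holds and where it doesn't: portable C recompiles unchanged for either architecture, but code written against x86 vector extensions (SSE intrinsics) has to be rewritten against ARM NEON or fall back to a portable path. This is a generic sketch, not a statement about Apple's actual toolchain:

    #include <stdio.h>
    #include <stddef.h>

    #if defined(__SSE__)
      #include <xmmintrin.h>   /* x86 SSE intrinsics */
    #elif defined(__ARM_NEON)
      #include <arm_neon.h>    /* ARM NEON intrinsics */
    #endif

    /* The plain C tail loop recompiles for any target; each vectorised path
     * is tied to one instruction set and must be redone for a new one. */
    void add_arrays(float *dst, const float *a, const float *b, size_t n) {
        size_t i = 0;
    #if defined(__SSE__)
        for (; i + 4 <= n; i += 4)
            _mm_storeu_ps(dst + i, _mm_add_ps(_mm_loadu_ps(a + i),
                                              _mm_loadu_ps(b + i)));
    #elif defined(__ARM_NEON)
        for (; i + 4 <= n; i += 4)
            vst1q_f32(dst + i, vaddq_f32(vld1q_f32(a + i), vld1q_f32(b + i)));
    #endif
        for (; i < n; i++)      /* portable fallback / remainder */
            dst[i] = a[i] + b[i];
    }

    int main(void) {
        float a[8] = {1,2,3,4,5,6,7,8}, b[8] = {8,7,6,5,4,3,2,1}, out[8];
        add_arrays(out, a, b, 8);
        printf("%.0f %.0f\n", out[0], out[7]);  /* prints: 9 9 */
        return 0;
    }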
Michael Bay - Tuesday, December 1, 2015 - link
If you want performance, you'd have to do at least a partial rewrite, since not all extensions translate directly. Might not be a problem for something like a word processor, but it surely will be for something computationally intensive.
Constructor - Tuesday, December 1, 2015 - link
Apple has been using the same approach on ARM as on Intel for some years now, even with the same frameworks and the same or very similar APIs in many cases.
bushgreen - Monday, November 30, 2015 - link
So the A9X is just 16nm FinFET, not FinFET+?
Ryan Smith - Monday, November 30, 2015 - link
It's likely FinFET+, but we don't know for sure.
zeeBomb - Monday, November 30, 2015 - link
L3 cache? What do you mean? There is none!
tipoo - Tuesday, December 1, 2015 - link
Yes, that's what it says?
toyotabedzrock - Monday, November 30, 2015 - link
So what else is on the chip? I see a mirrored area that looks like a mini dual-core CPU, and an area with 3 IP blocks that are duplicated right below, as if they just doubled up whatever the circuitry is.
tipoo - Tuesday, December 1, 2015 - link
I think those are cache tags, not "mini CPUs". Just a guess from spending way too long analysing die shots :P
If it is a mini CPU, it could be security-related, like AMD's TrustZone implementation or the Wii/Wii U security chip or whatever, but that doesn't seem quite right as Apple has dedicated blocks for that.
nofumble62 - Monday, November 30, 2015 - link
No surprise at all if you follow their strategy over the last several years. The ARM camp is trying to increase performance while staying within the power envelope. Intel is trying to reduce its power consumption while keeping performance flat. I think they have both succeeded today, although ARM has a lot more market momentum. At the moment I am typing on a Lenovo Yoga 2 10" tablet with a detachable keyboard (like the Surface). I just bought this device for less than $200, and its battery lasts all day. Beat that, Apple.
Now, for Apple to switch the CPU in all of their Macs, you have to put on a different hat. What is the cost of switching, and how would the customer benefit from it? Sure, Apple may save some money on the chip, but not a lot. The Mac volume is not that big compared to the whole PC market. But the cost of porting the whole software ecosystem is huge. What about Windows compatibility? Also, Apple has been using a lot of PC hardware standards such as mSATA, USB, Thunderbolt, M.2 and PCIe for free over the years. They won't be free when Apple switches CPUs. And last but not least, what do buyers get out of this? Cheaper Apple hardware? Yeah, right.
Constructor - Monday, November 30, 2015 - link
You misunderstand Apple's motivations.
Apple has no interest whatsoever in making their products cheaper – they are pretty much the only computer manufacturer not participating in the downward spiral their competitors are caught in for lack of differentiation (they all have to use Windows, they all have to use Intel or at most Intel-compatible CPUs, so something's gotta give: and the sticker price is pretty much the only thing left in the PC market nowadays).
Apple is all about not getting into that race; their whole game is working their butts off to offer unique selling points instead – something no PC manufacturer really can do.
And those primarily revolve around OS X and its sibling iOS, and on the hardware side around build quality, performance and battery life. With Intel CPUs, differentiating Macs against Windows PCs has been difficult and limited. Switching to their own CPUs would give them a world of options they've never had. They "just" have to actually reach performance/power points attractive enough that customers stay motivated to pay the prices Apple wants to ask for their unique products.
The cost of the CPUs is a factor, but given that Apple is already paying for all the heavy lifting in their CPU development from the iPhone profits, and that even their more expensive contract manufacturing easily beats Intel's prices, that's actually just another motivation for getting away from Intel.
They're the only computer manufacturer in the market with their own OS, with their own CPUs and with almost complete freedom to develop all of that in any direction they like and with much fewer legacy pressures.
The more unique their products are, the less pressure on the prices, at least from the customer segment actually interested in those unique selling points.
That is what Apple is all about, not simply making cheaper versions of their existing products as the PC manufacturers are pretty much forced to.
id4andrei - Tuesday, December 1, 2015 - link
They're not the only one. High-end products that justify high margins exist from other manufacturers: Surface Book/Surface Pro, the S6 edge and Note series, and high-end consumer and pro-level hardware from Dell/HP/Lenovo/Asus.
There will be nothing cooler than seeing Apple go it alone in the PC space with ARM CPUs.
Guest8 - Tuesday, December 1, 2015 - link
I would love for Apple to go the way of OS X R/T.
Constructor - Tuesday, December 1, 2015 - link
Apple is increasingly taking over the high-margin segment of the computer market, just like they already own the high-margin segment in smartphones.
PC manufacturers are increasingly relegated to bit-player status there – their bigger sales numbers are not happening vis-à-vis Apple's products but in the low-margin segments where Apple doesn't even want to compete.
Michael Bay - Tuesday, December 1, 2015 - link
High-margin and razor-thin numbers-wise. Apple is literally a phone company now - a one-trick-pony status Jobs desperately worked to avoid.
Constructor - Tuesday, December 1, 2015 - link
Nope. Apple is by now one of the biggest computer manufacturers, even though they only sell high-margin products – and they are one of the few with growing volume at the same time!
So much for conventional "wisdom".
id4andrei - Wednesday, December 2, 2015 - link
They are growing, but it's nothing explosive. It's bucking the trend, that's for sure. Then there's the element of image/status symbol that belongs exclusively to a MacBook. It's unfair to take a Lenovo ThinkPad Carbon, a high-end device, compare it to a 12" MacBook and say they are competing in the same market. They don't.
akdj - Thursday, December 3, 2015 - link
Double-digit (high double-digit) percentages YoY certainly seem 'explosive' to me. Especially considering EVERY other OEM is down YoY, each and every quarter, over the last three or four years.
Speedfriend - Tuesday, December 1, 2015 - link
Apple has not been caught in the downward price spiral? Really? Maybe in phones, but in PCs they have followed the competition down. The MacBook Air launched at $1800 and is now half that, as PC ultrabooks have dropped in price.
Ppietra - Tuesday, December 1, 2015 - link
The first MacBook Air wasn't launched to replace any product from Apple, and its strategy seemed very much like the one used for the current MacBook Retina or the first MacBook Pro Retina: test the waters and get experience building something new. Eventually a new version came out that replaced another product line at another price point, without changing the average selling price for Apple. It is the average selling price that you have to look at.
Speedfriend - Wednesday, December 2, 2015 - link
" It is the average selling price that you have to look at."When the MacBook Air was launched, the ASP of Macs was $1539, it is now $1205, that is a 22% decrease in ASP and Apple has scrapped its cheapest line of notebook products.
The idea that Apple is immune from competition in the PC world is a fantasy. Yes, people are prepared to pay more for a Mac, which, unlike in the smartphone market, is actually a significantly better-quality product than its competitors, but that premium is limited and Apple needs to respond to the market.
Ppietra - Wednesday, December 2, 2015 - link
According to Apple: 2008 average revenue per Mac was $1440; in 2015 it was $1240 (a 14% decrease), with 110% unit sales growth while keeping basically the same entry price points.
Do you think that is comparable with what happened in the PC industry, which grew around 5% and saw its average selling price probably cut in half, to almost a third of the Mac ASP?
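A small sketch of the arithmetic above; the figures are the ones quoted in this thread, not independently verified:

    #include <stdio.h>

    /* ASP change, quoted unit growth, and the implied change in total Mac
     * revenue relative to 2008. Inputs are the numbers from the comment. */
    int main(void) {
        double asp_2008 = 1440.0, asp_2015 = 1240.0;  /* quoted revenue per Mac */
        double unit_growth = 1.0 + 1.10;              /* quoted 110% unit growth */

        double asp_change = (asp_2015 - asp_2008) / asp_2008;        /* about -14% */
        double revenue_change = unit_growth * (asp_2015 / asp_2008); /* about 1.8x  */

        printf("ASP change: %.0f%%\n", asp_change * 100.0);
        printf("Implied total Mac revenue vs 2008: %.2fx\n", revenue_change);
        return 0;
    }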
Drakino - Tuesday, December 1, 2015 - link
Apple didn't get caught in the negative aspects of the downward price spiral, while also helping to kickstart it.
The panic in the rest of the industry when Apple launched the iMac was interesting to witness. I was working at one of the larger consumer PC companies at the time.
Apple avoided the negative effects by approaching the issue differently. The iMac was a new product that responded to some consumer frustrations of the time. The products at that price point in most competitors' lineups were cheaper variants of the top-tier workstations they were making tons of money selling to businesses. The corners cut to scale those products down often added to the consumer frustrations of the time.
Timing also helped Apple greatly here. They lucked out on needing a great product to save the company right around the same time consumers were interested in getting on the internet. They delivered solid hardware that worked, and an OS that at the time wasn't that far behind the consumer Windows (9x) in reliability.
Meanwhile the PC industry was shipping "WinModems" that were more software than hardware and were a nightmare to deal with, while being caught up in Microsoft's fight against the open internet. Microsoft came to be a force that didn't allow the computer makers to differentiate themselves well enough to carve out their own markets, while also being a hindrance to lowering costs in the same areas Apple managed to. Microsoft's (later judged illegal) stick to keep the OEMs in line was varying the price of Windows and Office licenses in some nasty ways. This ultimately became a major factor in why my former employer no longer exists. They wanted to differentiate, and to compete with Apple on that exclusive focus on consumers.
nofumble62 - Wednesday, December 2, 2015 - link
Apple has always been great when they launch a revolutionary new product. When they have to make small improvements for each annual refresh, they give their competitors time to catch up. I have yet to see any revolutionary product since the iPod, iPhone and iPad. All have just been yearly refreshes. The iWatch is an utter failure that hardly excites anyone.
I don't see that replacing the Intel chip with the A9X or even an A10X will give the Mac a rebirth. On the other hand, Intel won't stand still, and they would become a formidable competitor instantly upon divorce.
astroboy888 - Monday, November 30, 2015 - link
Comparing A9X die size to Skylake is apples and oranges. Apple has decided to incorporate the GPU and CPU on a single die, whereas Skylake separates the CPU and the GPU onto separate dies but in a single SoC package. So the sizes are not equivalent. For a modern GPU, because of the repetitive nature of the cores, die size is going to be much larger than that of the CPU.
Guest8 - Tuesday, December 1, 2015 - link
Wait, I thought we were comparing Core M to the A9X? The Core M SoC has a GPU and a dual-core CPU. I bought the Cube i7 Stylus with a Broadwell Core M for $350. It's pretty kick-ass running Windows 10; I haven't picked up the iPad as much now that I have a full Windows machine on my lap, with very snappy performance from the Broadwell Core M, which is an 82mm² die. It has the build quality of an Asus T100 but much better performance.
xCyborg - Thursday, December 3, 2015 - link
No L3 cache!! That's the last thing I expected in the A9X; I would have guessed 6MB of L3 and 12MB of L4 :) But it's probably still miles ahead of my aged i5-3210M.
BTW, why don't we call it GTA7850, as opposed to last year's GXA6850? Sounds terrific, heh.
seanr - Wednesday, December 9, 2015 - link
One would think that with these changing configurations of cache and CPU/GPU cores, engineers would be more easily frustrated by supporting previous models. Seems like a good way to promote upgrading devices every one or two years. Imagine a more linear transition allowing five years of use out of a device, just as a PC CPU/mobo combo can survive five years. $1000 portable throw-aways, I guess.