"On the CPU side of things, Apple is using new generation large performance cores as well as new small power efficient cores, but remains in a 2+6 configuration."
As long as the two big cores push more throughput and IPC than the previous gen, who cares whether there are more or fewer of them? For an ultra-low-power device, keeping the core count as low as possible is key to better sustained performance and to keeping the die size as small as possible.
If the A14 was going for efficiency to offset ProMotion, I would easily forgive that and would really want it. But it sounds like we're also not getting ProMotion this year, plus a smaller-than-usual chip upgrade. Unless you really wanted 5G this year (and why? It's slower than 4G in many of the few places it's available right now), it's sounding like a "wait till next year" year.
Don’t forget that the increase in the transistor budget they get every year isn’t always spent on the CPU and GPU. It’s clear that a large portion of that transistor budget has gone to the new 16-core Neural Engine and new ISP for the camera, though not for the iPad Air
This is genuinely befuddling to me. What on Earth sort of "value-add" are the Neural Engines adding here? It's not just Apple, but Samsung, Huawei, Qualcomm, etc.
They're far too early for their time. We're wasting 10 to 20% of the transistor budget for a feature you'll never use over the lifetime of the device. It's like 8 GB of VRAM on a $100 GPU. Why?
Can anyone name even five common apps that rely on high-performance neural engines? What, ever-so-slightly improved autocorrect? Undetectably better Siri? A few milliseconds faster Search?
The only possible use-case for mainstream consumers seems to be computational photography. Meanwhile, literally every application and all games would benefit from faster & larger CPUs and GPUs.
iOS and apps already take advantage of ML. The camera app is of course using ML. Siri. Scribble. The Photos app categorizing photos. ML is hugely beneficial when it works and is just getting started. I'm a developer and I think it will be more useful than adding more cores to the CPU (which is the only growth we see nowadays) because most apps scale poorly with multi-threading.
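A rough way to see why extra cores pay off less than extra per-core speed for typical apps is Amdahl's law; here is a minimal sketch, where the 50% parallel fraction is just an assumed illustrative number, not a measurement of any real app:

```swift
import Foundation

// Amdahl's law: speedup = 1 / ((1 - p) + p / n)
// p = fraction of the work that can run in parallel, n = core count.
func amdahlSpeedup(parallelFraction p: Double, cores n: Double) -> Double {
    1.0 / ((1.0 - p) + p / n)
}

let p = 0.5                                            // assumed: half the app parallelizes
print(amdahlSpeedup(parallelFraction: p, cores: 2))    // ~1.33x
print(amdahlSpeedup(parallelFraction: p, cores: 6))    // ~1.71x, far from 6x despite six cores
```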
Apple until now has had a steady train of impressive single core gains, they're not among the ones that just throw more cores at the problem. Look at this itself, only two big cores still. That's why this update in particular sticks out, with people maybe wondering if that gravy train is going to slow down.
Doesn't Google Photos upload everything to Google and let the server do all the work? Same for voice recognition and everything, Google does *nothing* locally because it's in their financial best interest to slurp as much data as possible.
Apple does it locally because if they do it remotely they have no chance in hell to compete with Google and Amazon (another company that literally hires people to listen to the Alexa recordings in order to properly label data for their ML). So Apple came up with a different strategy of doing as much as possible locally in order to sell *privacy*, since they can't sell Google and Amazon levels of performance in this particular regard.
Google does do voice recognition locally starting on the Pixel 4 (I don't know if that is true for the budget phones). They use local voice to text on the recorder app. The assistant also works without a network connection, obviously if you ask for something that it can't do locally it will need a network connection, but doing things like setting alarms, launching apps, or other basic phone controls are done locally. They also can do song detection, like Shazam, without a network connection. I think the song detection was able to be fit into 500MB which was something they mentioned when they launched the pixel 4 last year. They made a point of talking about local processing so that everything would continue to work even if you have a poor network connection.
@FattyA, when you say "starting on the Pixel 4" do you mean "any Android phone launched after Pixel 4" or literally "on the Pixel 4" which is probably one of the worst selling Android phones so pretty irrelevant in the grand scheme of things? Is it Android which is prioritizing or defaulting to local processing in general or *just* Pixel 4 doing *just* the voice recognition locally while everything else still gets sent to the great Google blackhole in the cloud?
Yes, Google uploads everything. They do that to study the data and to make money off it. There's no reason Apple couldn't do it that way, too. Apple could lease 100% of AWS's capacity and still have $25 billion annual profit left over. In realistic terms, the cost of offloading ML would amount to a rounding error for Apple. They've just decided it's more lucrative to develop faster SOCs and do the ML locally. That's probably down to a combination of Apple being good at designing chips and being able to charge a premium for more privacy and other features that benefit from local ML. It's basically just a different philosophy. Google is an advertising company. They want to profit from selling ads, hence their data obsession. Apple is a hardware company. They want to profit by selling shiny devices.
@ceomrman, Apple could play the same game but they'd still lose against Google or Amazon. Google (or Amazon) has far, far more access to "free" data than Apple does. Google has the upper hand here between being on so many more phones and home assistants all over the world (this aspect is important) and mixing data they get from all of their other sources. Apple's problem isn't the lack of computing power but the lack of a high quality and extensive data set. So Apple could at best be a distant second or third. Or they could just not play a game they'd lose and instead turn it on its head and brand themselves privacy advocates, compete for the market Google simply can't.
Don't forget that Apple is a lifestyle brand. They actually make money selling the devices, unlike Google. Apple is incentivized to maintain a high-quality user experience on their devices, meaning it makes sense to move (or keep) things like voice, handwriting, and face recognition on the device, rather than subject to the whims of connectivity. I know that on my phone, the gboard voice recognition goes south fast if your WiFi/LTE connection are spotty.
A lot of misrepresentation of reality above.
Google led the world in applying ML to consumer products. It couldn't be done locally; the tech did not exist. It was done in the datacenter, using x86 and GPUs with the addition of Tensor Processing Units from 2015.
Apple was following the same path until it made a decision to go for local-only processing (also in 2015) in order to create a USP of "your data doesn't leave your phone" for marketing.
Of course your data is as available to the rest of the world whether it's on your phone or in a datacenter; if a device is connected, it's connected. And of course iOS backs up everything to Apple's DCs anyway. As Apple says, it's only the processing that is done locally. Your data is shared with Apple just as much as an Android user's is shared with Google.
Are you serious? You can google machine learning, and that answer will have been provided by machine learning!
These chips have a specific configuration that is efficient at machine learning, meaning, according to Wikipedia's description of Google's TPU chip: "Compared to a graphics processing unit, it is designed for a high volume of low precision computation (e.g. as little as 8-bit precision) with more input/output operations per joule, and lacks hardware for rasterisation/texture mapping."
It's a customized version of a GPU. Do you ever question whether games actually use GPUs? No, right? Why would you? These are huge companies and this is a major hardware feature they spent millions to develop. Skepticism is healthy, but keep it in perspective, please.
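For anyone wondering what "low precision computation" buys in practice, here is a minimal sketch of 8-bit quantization, the kind of trick NPUs are built around (the scale value and weights are made up purely for illustration):

```swift
// Quantize float weights to Int8, do cheap narrow-integer math, then rescale.
// NPUs trade a little accuracy for many more of these small ops per joule.
let scale: Float = 0.02
let weights: [Float] = [0.31, -0.18, 0.07, 0.49]
let quantized = weights.map { Int8(clamping: Int(($0 / scale).rounded())) }
let restored  = quantized.map { Float($0) * scale }
print(quantized)   // [16, -9, 4, 25], one byte each
print(restored)    // [0.32, -0.18, 0.08, 0.5], close to the originals
```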
You are assuming that all of the current CPU/GPU capacity in something like an A12 is already being used. That is not true. Your phone/ipad is not using 6 cores to do anything. Running ML-type tasks on a CPU/GPU is less efficient than having a dedicated/specialized processor for this.
I mean the fact that you assume, by default, that you know better than Apple, Samsung, Huawei, and Qualcomm tells it all right there
Sigh, a troll comment masquerading as a correction. Faster & larger CPUs/GPUs are not for performance alone, but for more efficient processing (i.e., the race to idle), longer software update cycles, and so on.
Not only that, but larger CPU cores & GPU cores can run at lower frequencies (see the mess of the A12's DVFS curve), further increasing efficiency. Again, see the A13 for a perfect example of the problem.
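To make the "race to idle" point concrete, here's a toy energy comparison with made-up power numbers (not measurements of any real core):

```swift
// A faster core burns more power but finishes sooner and then sleeps.
// Energy = power * time, summed over the same wall-clock window.
let window    = 2.0                     // seconds we observe both cores for
let idlePower = 0.1                     // assumed watts while sleeping

let fast = (power: 5.0, busy: 1.0)      // finishes in 1 s, idles the rest
let slow = (power: 3.0, busy: 2.0)      // lower power, but busy the whole window

let fastEnergy = fast.power * fast.busy + idlePower * (window - fast.busy)
let slowEnergy = slow.power * slow.busy + idlePower * (window - slow.busy)
print(fastEnergy, slowEnergy)           // 5.1 J vs 6.0 J: the faster core is also cheaper here
```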
GPU cores are also highly scalable: there's hardly an issue of "unused cores" in GPU workloads.
Likewise, because iOS devices are updated for nearly half-a-decade while iOS increases in complexity, a significant reserve of CPU / GPU power is much more important.
The fact that fixed-function hardware simply exists is not an argument for its large die reservation, relative to high-end iOS / iPadOS consumer experiences.
There's plenty of AI/ML hype and little real-world delivery besides a few niche applications.
Relying on corporations is a pitiful argument: I'm sure users appreciated that 8K30p ISP from Qualcomm. "How relevant! This is what was missing." /s
Relying on good competition is an excellent argument. That's the core of capitalism, stupid.
You listed more than three companies. If forgoing ML gave a competitive advantage, wouldn't one of them, especially MTK or Samsung, have already done it? Samsung obviously knows it is behind; that's why it finally got rid of SARC and signed on for RDNA. Yet ML is still there.
How stupid are you to think you are so much better?
How stupid are *you* to think that's exactly how capitalism works in practice?
Forgoing ML wouldn't really give a competitive advantage because all of these companies have dedicated vast quantities of marketing to how important ML is for their products. Consumers aren't machines with access to perfect information - most of them don't even read these releases, let alone take any particular note of what's in their phone. The salesperson tells them "new, shiny, faster, better" and they buy.
The difference in die size between an SoC with ML and one without wouldn't make enough of a difference to the BoM to give a significant price advantage, and investing it into other components wouldn't give enough of a performance advantage to change that, either - whereas saying "now your phone has a brain in it!" definitely will.
Backing up an appeal to authority with some wishful thinking about the nature of capitalism and a spot of tone policing really is Ben Shapiro level of crappy argumentation.
I'm in agreement with you about the other comments, but if I could offer a counterpoint to your thoughts on AI/ML - you yourself noted how iOS devices are updated for half a decade. With the way things have been proceeding, it seems likely that within that time frame we *will* have more uses for the "Neural engine" - and, as others have noted, it will probably perform those tasks more efficiently than an up-rated CPU and GPU would. It's pretty much a classic chicken/egg scenario.
The GPU core scalability argument is quite flawed; how well the cores scale depends on the GPU architecture. See Vega, which doesn't scale well from 56 CUs to 64, and beyond that not at all.
Siri, searching and photo categorization aren't 'niche' applications, they are primary use cases for most people. Every voice system needs special hardware to efficiently triage voice recognition samples.
"I mean the fact that you assume, by default, that you know better than Apple, Samsung, Huawei, and Qualcomm tells it all right there"
So they're all right to take away the 3.5mm jack and create phones with curved screens that can't fit a screen protector? Glad to hear that, I thought they were all just copying flashy trends that don't add anything to the user experience...
In seriousness, I'm mocking you because that comment is a naked appeal to authority. It's perfectly possible that they're dedicating silicon area to things that can't be used very well yet - there are, after all, a lot of phones out there with 8 or 10 decidedly mediocre CPU cores. Lots of companies got on board with VR and 3D displays and those aren't anywhere to be seen now.
You are in many ways correct in that modern phones are a triumph of marketing over common sense. Where I think you may be wrong is that Apple has never marketed on absolute performance. They aren't really competing with Android phones and so have for example got by with minimal RAM and flash for years. Given there is no marketing going on for the NPU itself it must be there for some purpose that will increase sales or data harvesting.
As an extension, the obvious area of use is the camera. Phone cameras are heavily dependent on software image synthesis to improve apparent image quality, adding in detail that was missing in the original image using AI.
@BedfordTim - they've never marketed on absolute performance per se, but they do regularly tout performance improvements over their own prior products, along with their general leadership.
I'm not sure you're disagreeing with me here, though - my point here was very much that putative performance advantages in any area are irrelevant to the success of their products! :)
The ML units simply aren't high-profile enough to be just about sales. And Apple in particular doesn't add hardware for no reason - yes, there's AR, but notably that isn't on every device the way that machine learning hardware has been. It's real hardware with real advantages; I'm not sure why you're picking this out.
With a higher transistor budget they can add more and more fixed, or at least less flexible, circuitry for better power efficiency. All of the CPUs today have an immense number of fixed-function units for things like media or imaging, as no one has a better use for all these transistors. Why have a separate sensor hub when you can just put it in the CPU?
Before, it was because the SoCs weren't computationally powerful enough to do some tasks without being brought to their knees (see the Nokia 808 PureView and its imaging DSP compared to the Lumia 1020).
The efficiency is now just used to reduce power draw, with the added benefit that the dedicated circuitry can be added at comparatively very little cost (in space, etc.).
Updating the neural inferencing capabilities at the edge is about reducing data transmitted back to the mothership for the same quality of data harvested. They're doing it for their own enrichment.
Of course it does. All their cloud subscription services run on AI in the cloud. Local AI can't do inferences without a huge dataset; voice and photos can be local to a greater extent, but Netflix-type stuff can't. Take the new Fitness+: picking videos and creating strategies for content is all stats- and AI-based and requires collecting data and analytics on their side, not yours.
They have talked up anonymizing what they collect, but they are the only ones who can see that, so it's entirely on the honor system.
Who is this user? In fact, this has been my thought all these years, starting with the Galaxy S7 with its Exynos, then slowly making its way into mainstream smartphone marketing.
Sometimes I doubt whether these mobile chipsets really have the desktop-grade hardware they claim.
Palm detection when using the Apple Pencil, Siri app recommendations, the photo gallery, the camera (which uses it a lot), on-device dictation, health features (sleep, hand washing), accessibility/sound detection, LiDAR processing. There was an Arc article last month about how it's used almost everywhere in the OS now.
The neural engine is used in all sorts of categories of programs and uses on Apple devices. The neural engine speeds up and/or is what allows live effects on pictures like the various Portrait Lighting modes, Animojis and other live selfie effects, their nighttime picture mode, all of the AR stuff, as well as image detection, focus tracking, body position and analysis, etc. Pixelmator Photo’s new enhancement feature was shown off during the presentation. There is a good chance that an improved neural engine will allow them to apply live effects to video.
The Core ML API is also used for on-device language parsing and dictation. It is even used in low-level operations like making use of the Pencil smoother on the iPad. ML is a very important tool for more and more types of application. Having dedicated hardware for processing, instead of brute-force approaches via the GPU or CPU, makes for both a faster and a more efficient system. Apple will also make a big deal about this in the new Macs. A lot of processing-intensive manipulations could be moved off the GPU and onto the Neural Engine. It could very well lead to photo and video manipulations being possible on a lighter-weight and less expensive computer.
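As a concrete illustration of "dedicated hardware instead of the CPU/GPU": with Core ML the developer mostly just states which compute units a model may use and lets the OS dispatch it. A minimal sketch, where the `FooClassifier` class is hypothetical, standing in for any Xcode-generated Core ML model:

```swift
import CoreML

// Let Core ML pick the best engine (CPU, GPU, or Neural Engine) for this model.
let config = MLModelConfiguration()
config.computeUnits = .all

// Restricting to the CPU is how you'd measure what the dedicated hardware buys you.
let cpuOnly = MLModelConfiguration()
cpuOnly.computeUnits = .cpuOnly

// Hypothetical Xcode-generated model class; any .mlmodel compiled into the app works the same way:
// let model = try FooClassifier(configuration: config)
// let prediction = try model.prediction(input: someInput)
```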
According to Dan Riccio they are doing tons of projects based on ML, so they want to be prepared for everything, but I'm with you in terms of wanting more CPU power now.
There are use cases, but we can argue over whether they should have put all the budget from the bigger die & new process into ML, instead of a smaller part there and a bigger part toward the CPU and GPU. The current A13's ML isn't showing any issues with the AI tasks that Apple implements, and as for Fitness+, it will be available on a lot of older phones, lol. Bumping all the budget toward ML only is stupid, because less than 1% are really pushing it while 100% benefit from a faster CPU and GPU. Play some Civilization VI, for example, and one can clearly see that the A13 struggles - it's laggy and not smooth at all - so yes, that game for example needs a faster SoC, and I can list quite a lot more.
Correct. Performance is more than enough. AI and ISP, along with better sustained performance, is a much better choice. Sadly, manufacturers in the Android space don't get to decide this as long as only ARM provides the CPU cores. Sure, manufacturers do the ISP and AI, but they can't fully customise their core design.
My guess is this:
- The Apple A13 sets the benchmark for minimum performance for the coming generation.
- The Apple A14 maintains that performance, but extends the silicon budget to AI. These beefy ML coprocessors are used for the new OS X Macs that are transitioning from x86. On the iPhone front, the decent efficiency gains are spent on the thirsty external 5G chip.
- On the A15 SoC, they'll focus again on efficiency, this time by shifting the external 5G modem to an efficient internal modem, and the extra efficiency gains are spent on making the 120Hz display have acceptable battery life. On the OS X side, the efficiency is instead sacrificed to increase performance in the MacBook Air and Pro.
Well, next year's iPhone will be huge: a new high-MP camera sensor (48 or 64), hopefully with an option for 8K recording, 120Hz, maybe a notch reduction (we'll see what they decide, because the design decision hasn't been made yet), and probably more things we don't know about yet.
But this year's iPhone won't be bad either if you're coming from a phone that's a few years old; I want the new case redesign and the bezel reduction on the front panel.
High-MP sensors lead to worse photos overall. There are things like physics, you know... binning smaller pixels leads to more issues than fewer MP and bigger pixels; it has been proven again and again. Check the S20 Ultra/Note 20 Ultra and all their issues because of the "big" sensor with insane MP numbers. Marketing, nothing more. A bigger 16MP sensor will take far better photos than a 48/64/108MP one. Not that processing isn't 90% of phone photos nowadays (look at the Pixel 4a; I guarantee you they could have made it take even better photos with the same camera setup if they wanted to).
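A rough back-of-the-envelope for the pixel-size point, assuming an approximately 1/1.33"-class sensor area (the dimensions are assumed, for illustration only):

```swift
import Foundation

// The same sensor area divided into more pixels means a smaller pitch per pixel.
let sensorAreaMM2 = 9.6 * 7.2            // assumed ~1/1.33"-class sensor, in mm^2
func pixelPitchMicrons(_ megapixels: Double) -> Double {
    sqrt(sensorAreaMM2 / (megapixels * 1_000_000)) * 1_000
}
print(pixelPitchMicrons(12))             // ~2.4 microns per pixel
print(pixelPitchMicrons(108))            // ~0.8 microns per pixel, before any binning
```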
This one, most likely, will just be a big camera update this year. That's nice for someone who takes a lot of pictures and doesn't just post them on FB; for everyone else... totally not worth changing your phone.
Intel has been giving 5% performance upgrades each year since forever and people still bought billions of those chips. 16% CPU/8% GPU generation upgrade is nothing to scoff at.
I think Andrei's reason for calling it "meagre" is because with the die shrink we could have expected similar gains without any significant architecture improvements... so this big exciting "40%" reveal feels a bit like they phoned it in when you take it in context of the A13 and the die shrink.
To expand on your comment. TSMC N5 promises a ~15% improvement in performance compared to N7. Like Andrei mentioned, they might have improved their power efficiency at the cost of performance. What is strange is that Apple didn't mention better power efficiency. If A14 uses as much power as A13, then all the improvements come from the better node.
I don't get Andrei's negativity here either. Your and his points are well taken, but I don't think it makes sense to jump the gun and say it is a meagre update. When I watched the presentation I thought to myself that Neural Engine is *massive*! Given Apple's tight integration of software and hardware I reckon Apple knows what it wants to do with it.
It seems pretty slanted on first glance, and it's more than a little questionable given the situation around Intel's slow moving collapse and Apple's new initiative. I can't help but link the two, Intel has been noticeably underhanded and constantly getting caught.
No, Andrei offers us an either/or: either the performance gains relative to the A13 are meagre (which might constitute an engineering failure) or Apple has become more concerned about the power draw of its mobile processors (which would indicate a change in design strategy). Unable to read Apple's mind he sensibly leaves it to us to make of this what we will. In the article what he says is: "Apple might have finally pulled back on their excessive peak power draw at the maximum performance states of the CPUs and GPUs, and thus peak performance wouldn’t have seen such a large jump this generation, but favour more sustainable thermal figures." That sounds like a positive statement to me that expresses support for the "pulling back" and the "more sustainable thermal figures" that it results in. Those Apple performance cores are already very potent (at peak performance). Even if I am wrong to read Andrei's words this way, I still think that would be the right strategy for Apple going forward. Just as ARM must be finding additional performance from its cores Apple needs to avoid overcooking things with its performance cores and thus rendering them unsuitable for use in a mobile device context.
That said, Apple does have the most stunning power management tricks governing the operation of its SoCs. Even as delivered performance of recent A series SoCs (in the smartphone/pocketable device form factor) is tightly regulated and constantly shifting to sustainable levels (per the governing parameters of power draw and thermal envelope), what users of Apple phones notice is a product that offers great/responsive performance and good battery life in a package that seems to be free of compromise. Apple's smartphone SoCs are good enough to create that illusion and, not to take anything away from Apple, you need very good technology to create an illusion like that. Still, illusions only take you so far.
If Apple's performance cores are going to genuinely rival the energy efficiency of licensed ARM cores (Apple doesn't seem to have cores with the incredible energy efficiency of ARM's licensed cores, but conversely ARM doesn't seem to have on-chip power management tech that is in the league of Apple's SoCs), and Apple should really want to do that because energy efficiency directly governs the relevance of a chip for use in the mobile context, then Apple is going to have to beat a path back to more energy-efficient processor design. Such a processor might be a bit more conventional than Apple's recent A series offerings, in that the sustained performance on tap would be more in line with the peak performance on offer from the chip. That doesn't mean that Apple should stop looking for performance gains from its smartphone silicon. It only means that it would make a lot of sense to put energy efficiency first for its forthcoming processors (for the smartphone). And, as there is already plenty of rarely tapped processing power that is nominally available from Apple's SoCs but that remains hidden from view most of the time, only showing up in peak-performance-favouring benchmarks like GB5, pushing hard on an energy-efficiency-first SoC design strategy is, in any case, just one more interesting and potentially more fruitful way to explore how performance can be raised.
We're talking about the mobile space and mobile SoCs; by how fast things usually move here, that's bad. Intel did 5% every year because they didn't have any competition; they milked their buyers and saved a lot of R&D money.
For its size and new process, it's kind of laughable how small the uplift is, but hey! You got a lot faster ML that will be utilised by less than 1% of buyers! Even the current NE/ML in the A13 is not utilised at all. ARM promises big jumps in comparison; if the A14 is really as small an improvement as it's panning out to be, down the drain goes the two-generation advantage, which already isn't really valid vs. the Snapdragon 865+. Let's wait and see, though; I'm really curious about the reveal. In previous years we already got a lot of Geekbench leaks of the new iPhone; this year? Hmmm.
Are we certain that the performance comparisons on the A14 were against the A12 from the previous iPad Air, and not the A13? That was unclear to me. During the presentation, when they started talking about the A14 chip, the speaker switched over to Tim Millet, Apple's VP of Platform Architecture. To me at least, I was assuming he was referring to the previous generation of A-series chips (A13), since he IS the A-series guy, and not just the previous iPad Air generation.
And also, 38% more transistors seems crazy - is Apple hiding something? Is it really a 2+6-core CPU and 4-core GPU? Or maybe a 4+4 CPU and perhaps a 6- or 8-core GPU, with Apple disabling some parts on the iPad/iPhone 12 while enabling the full A14 for the first Apple Silicon Mac later this year?
You are right, it is noted quite clearly in the press release. I must have missed that during the conference! That is a little disappointing then - this would be the smallest performance increase on both the CPU & GPU side of things since Apple started making their own A-series SoC's.
I don't think we should rush to conclusions just yet anyway. Apple's performance figures don't make sense to me, because their A12 performance figures for the regular iPad were also wrong. They claimed 40% performance from the A10 to the A12, whereas in the context of their own figures from the iPhone launches (X and XS), 1.15 x 1.25 = 44%. I believe Apple just added 25 + 15 = 40 for the A12 in the iPad. Similarly, here they just added 20 (A13) + 20 = 40 for the iPad Air. We should expect at least a 20 percent improvement announced over the A13 at the iPhone 12 keynote. We also have to keep in mind that they claimed 15% for the A12, which turned out to be 20% in Geekbench 5 and 25% in SPEC2006. Do you think that maybe that's the reason, @Andreif7?
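In case the compounding-versus-adding point isn't obvious, here's the arithmetic spelled out using the keynote percentages quoted above:

```swift
// Apple's own iPhone-keynote figures: A10 -> A11 was +25%, A11 -> A12 was +15%.
let compounded = 1.25 * 1.15   // 1.4375, i.e. ~44% from A10 to A12 when multiplied out
let justAdded  = 25 + 15       // 40, which is what the iPad slide appears to have used
print(compounded, justAdded)
```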
You don't spend 11 and a half billion transistors on a bleeding-edge foundry trying to save money. The transistors are there; they're just not going toward major CPU and GPU bumps over the A13 this year.
Looking at the performance increase over the last generation, IMO a good portion of these billions of transistors are simply in place as backup, to keep yields at decent levels. This is one of the more popular strategies to get more good dies per wafer.
I think it's too early to draw any conclusions. (a) They may want to make more of a big deal about "our CPU/GPUs are the best" at the Mac release than now. So they say some vague words and that's it.
(b) On the CPU side if we have a big performance increase in SVE/2 and AMX, that won't show until SW is recompiled. So obviously that needs new XCode to be released, but again it may be something they're holding off till the Mac event.
(c) GPU number I think HAS to be a mistake or misunderstanding in the phrasing. I mean, iPhone11 vs iPhone XS GB5 Compute number is already 40%: 6500 vs 4600. They might just conceivably stay flat for the sake of a more realistic sustained performance, but they aren't going to go backwards!
Presumably post announcement we'll start to see GB numbers in the wild soon, at which point we can recalibrate.
One of the things they boast about (but gave no details on) is "Advanced Silicon Packaging". Is it possible they put something like HBM in there, alongside the DRAM, as a tentative first experimental step? And the calibration to automatically steer data between the HBM and the DDR5 is still far from perfect? I agree HBM on this class of devices doesn't necessarily make sense, but maybe it makes sense in terms of power, and while Apple wouldn't go backwards for no reason, they might have effectively gone backwards as a hiccup in rolling out this new tech?
Yes, but we still have no indication as to where they are actually USED. They certainly aren't visible in Xcode. I have seen no indication that you can trigger their use via Accelerate calls. PERHAPS they are used by some Core ML calls?
My point is that when (if?) they're visible to 3rd party SW, that might boost the performance of some benchmarks, and general purpose code. (Or might not. It's hard to say when ALL you know is that they "accelerate matrix computation". Do they give us access to larger registers that could be used in alternate ways? Access to alternate cache use patterns? Access to some sort of permute or table lookup primitives?)
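For reference, this is the sort of plain Accelerate BLAS call that third-party code can already make today; whether any of it gets routed to AMX is exactly the open question, since nothing in the public API says so:

```swift
import Accelerate

// A small single-precision GEMM via Accelerate's BLAS interface:
// C = A(2x2) * B(2x2). Where this actually runs is opaque to the caller.
let a: [Float] = [1, 2,
                  3, 4]
let b: [Float] = [5, 6,
                  7, 8]
var c = [Float](repeating: 0, count: 4)
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            2, 2, 2,        // M, N, K
            1.0, a, 2,      // alpha, A, lda
            b, 2,           // B, ldb
            0.0, &c, 2)     // beta, C, ldc
print(c)                    // [19.0, 22.0, 43.0, 50.0]
```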
That AMX is included among the places to where CoreML dispatches is an ASSUMPTION. That's precisely my point -- I've seen nothing so far to indicate that the assumption is validated.
Apple has in the past shipped hardware (eg h.265) that wasn't activated until a year or so later, so this is not necessarily unprecedented.
This seemed more of a consumer event - new watches, Apple Watch SE, iPad (the two cheaper models), new bands etc. Hence lighter on the nitty gritty of details on the chips etc. There wasn’t anything ‘Pro’ announced today, which is another reason for being light on details. Apple knows reviews will come out soon anyway.
Nice to see Apple put a (tiny) bit more focus on value for money; shame it took a global pandemic to make them do so. They utterly dominate the high end in every sector they compete in, so maybe this is just a long-planned move into the upper middle of the market. (Aided by the shift to Arm. Expect lots of technical detail when their ArmBooks come out.)
Focus on value for money? They put the price of the Air up by $100. Sure they kept the normal iPad the same price with a processor upgrade, but that is still a 2-3 year old processor in a 5 year old form factor
Perhaps I'm too cynical but my first guess is always that the ever-expanding AI transistor budget is primarily driven by the desire for corporations/corporate government to have increasingly sophisticated spyware. (Giving people perks to go along with it — like saving some guy about to fall off a cliff — is the spoon full of sugar for the medicine, of course.)
Agner Fog joked years ago that the main benefit of multiple processor cores is to make the spyware run faster. Well, since we already have plenty of extra cores these days a newer approach was needed, to expand upon the panopticon. There are already so many layers of spyware in current devices that the spies might need the AI to keep track of all of it.
Getting people to surrender to chip-based TIA while paying for the pleasure is a neat trick.
From the sound of it, this may be one of the smallest improvements in their SoC performance to date. The fact that they are using the A12 instead of the A13 as the comparison is a telltale sign. From my memory, they've always compared new vs. last generation to show the performance improvement. It seems like they lost their lead CPU designer and we are starting to see the impact. At this rate, Apple is at risk of losing whatever single-core advantage it has to the generic ARM chips.
”Apple is at risk of losing whatever single core advantage to the generic ARM chips.”
If A14 is 40% faster in single core than A12, it means that the GB5 score for A14 is about 1550.
For reference, the SD 865 gets about 900 in GB5. Yes, 900.
If Apple stops chip development now, then the generic ARM chips will surpass them in four-ish years in single core performance, if they improve 15% every year.
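Rough math behind the "four-ish years" figure, taking the GB5 estimates above at face value:

```swift
import Foundation

let a14Estimate = 1550.0     // assumed GB5 single-core, if "40% over A12" holds
let sd865       = 900.0      // approximate GB5 single-core cited above
let yearlyGain  = 1.15       // assumed 15% improvement per year
let years = log(a14Estimate / sd865) / log(yearlyGain)
print(years)                 // ~3.9 years to catch up, if Apple stood still
```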
lol no, ARM already announced the Cortex-X1, which will be 30% faster than the 865 at 3GHz. Then it's just a 25% difference vs. the A14, and it will be less if Qualcomm goes above 3GHz.
"All of this is just to treat the symptoms, not the cause. Chaslot believes the real focus needs to be on long term solutions. At the moment, users are fighting against supercomputers to try to protect their free will, but it’s a losing battle with the current tools.
That’s why Chaslot is convinced the only way forward is to provide proper transparency and give users real control: 'In the long term, we need people to be in control of the AI, instead of AI controlling the users.'"
This from an article called ‘YouTube recommendations are toxic,’ says dev who worked on the algorithm
I have been trying to figure out how to rid myself of the dreadful banality of the "12-year-old Humiliates Simon Cowell" video, among other monstrosities that incessantly show up in that list (because my mother visited and watched those awful reality-show videos via our WiFi).
Chaslot's statement about supercomputers and our free will seems spot on, if chilling. But, go go Apple. And Nvidia. And everyone else. Cram in as much AI goodness as possible.
From a half node, can we expect more? This 5nm is more like a 6nm in the best case. At this point it's better to stay on 7nm and cut costs, if possible. If 3nm and 2nm turn out the same, the upcoming years will be a little boring. We have to remember that on high-power SKUs these numbers will be even lower.
I think they've reached the stage where they are on TOP of CPU/GPU performance for the mobile platform, and have shifted toward increased efficiency and dedicated circuits for ML, photo, and video processing.
This chip will probably be used in the next iPhones (where they will explain all those specific new components) and also the Apple TV, I imagine, cementing a reference point of performance for iOS apps for the next few years (for Apple Arcade, for example).
We will see soon enough when the iPhones are presented.
Is this the first time that a leading edge SoC has gone into an iPad as they are usually reserved for iPhones? Given how constrained capacity would have been at 5nm, I find it strange it has been used for the Air. Perhaps expected demand for the iPhone is way below what it was when capacity was booked.
Also, surely this chip will not be used in any Mac product? With Tiger Lake coming out, Apple is going to really have to pull one out of the hat to compete.
I doubt the capacity of 5nm is constrained; all of the planned Huawei wafer starts were up in the air just a few months ago, and AMD is not on 5nm yet, so Apple and Qualcomm probably fought it out over those wafer starts and now have more capacity than they planned for at the beginning of the year.
No, Huawei was still reported to be getting its 5nm chips fabricated up until yesterday, and had even tried to get additional capacity when the ban was announced 3 months ago. 5nm would have been very tight.
Despite everyone calling for a big 5G replacement cycle for the iPhone now, 5G just doesn't offer that much to consumers yet in most places above what 4G provides, so that may be reflected in underlying orders from operators having seen poor demand for other 5G phones.
The wafers that have been produced so far (since mid-Q2) were planned at least a year ago. Apple had first call on wafer capacity up to a certain percentage, so they should have been getting what they ordered, and yield has been reported to be better than expected. From now on there are extra wafer starts to divide between Qcom and Apple, so they definitely have the opportunity to have more chips produced this year than they planned for last year.
I wonder if that is the first A series chip tuned for MacOS - could it be that the short period of sustained peak performance was an issue on the Mac side so they focused on fixing that? Pure speculation, but if Apple is going to use its chips across multiple products it seems inevitable that compromises will need to be made.
Surely they need something more powerful than this for the Mac. At 40% improvement over the A12, we are talking 1550 SC, 3700 MC in Geekbench, when an i5 Tiger Lake looks to be 1400 SC, 5100 MC and the i7 1600 SC, 6100 MC. Apple has promised 30% better performance than Intel, haven't they?
Completely agree. I would expect a 1.5x or 2x larger die size for the first MacBook launched (maybe around 150mm² if the iPhone 12 chip is 100mm²). I would expect around 200 or 250mm² for a Mac chip, given they will likely do all they can to attack concerns about ARM processing power head-on. Though I'll admit I am talking completely out of my depth on this.
We already know they make custom tailored chips though. A14X (Z?) might be a perfect fit for the first line of Apple Silicon Macs (ultra light notebooks with similar requirements to an iPad Pro). Who knows - but using the same architecture and ramping the CPU/GPU core #s way up might be what they have planned.
Could limiting the peak power here be a way to put distance between this and the A14X or whatever it will be in the Mac? That way they can chuck some extra cores and extra power/cooling at that and have clear space between this on single and multi thread?
What about protection from Meltdown and Spectre attacks? Those exploit the CPU's branch predictors. Are these new CPUs, which are even richer in machine learning and AI features, safe now?
I'm hoping Apple used the initial production of 5nm chips for a smaller run of iPad Airs with some logic disabled. Otherwise, the A14 is rather disappointing.
I am really shocked to hear that this chip seems to have very limited improvements at face value. Granted, we need to see the chip in person, and having the iPhone version will allow a better comparison, but in the year Apple plans to start taking their chip lineup to the entire product stack, they make little to no improvements beyond a die shrink? Given they have made such major strides in this sector over the last 5 years, I think this is pretty shocking. Perhaps we see a full 30-50%+ improvement in battery life, but I wouldn't bank on it.
Well your math is off on the basis that it's not twice as fast as the A12, their own numbers are 40% better CPU, 30% better GPU, and machine learning was all they said was doubled.
Nvidia makes use of RT cores as ray-tracing compute units and also for convolution network computing. I have a theory: Apple is targeting ray tracing as well...
When there's a difference in clock speeds, the iPad version is usually clocked higher than the iPhone version, as iPads have a larger surface area and volume for heat dissipation than iPhones do.
I am also quite surprised by the performance uplift, if it turns out to be true. Compared to recent years, it's really, really small in both CPU and GPU. Clearly all the budget was put toward the NE, but I still feel that for the majority of users even the current NE is more than enough and rarely used. To put it all into something that will benefit literally less than 1% of the user base instead of the CPU/GPU that would benefit everyone... dunno how to feel about it. Also, it doesn't look insanely more efficient; I would expect them to promote that, plus iPad Air battery life to show it, too.
Given that every year the iPhone is benchmarked as 4 to 5 years ahead of Qualcomm and Samsung, I don't see the issue with losing a year of CPU/GPU performance gain. It still means they are 4 years ahead.
Now you question how ML/NE benefits only 1% of the user base? I'm not sure that's remotely true. Siri uses ML for speech recognition. Are you claiming only 1% of their user base uses Siri?
FaceID and animoji also use ML to recognize your face; are you claiming only 1% of their user base uses FaceID or animoji?
The camera app uses ML in portrait mode to select the foreground and blur the background; are you claiming only 1% of their user base uses portrait mode?
They use ML in their heartbeat monitoring hardware/software to determine if you have an atrial fibrillation. It's possible only 1% of their user base have an Apple Watch and also have an atrial fibrillation, but at 100m watches sold way more than 1% of their users are being monitored for atrial fibrillation; closer to 10% of their user base have an Apple Watch, so at the very least 10% of them use the NE hardware.
Apple definitely uses ML and NE to perform image recognition and tagging on device; are you claiming only 1% of Apple's users search their photos by keywords?
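On the photo-search point specifically, apps can run a similar on-device classification themselves through the Vision framework; here is a minimal sketch of the kind of request involved (error handling kept deliberately minimal):

```swift
import Vision
import CoreGraphics

// Classify a single image entirely on-device; Core ML and, where available,
// the Neural Engine can service this without the image ever leaving the phone.
func tags(for image: CGImage) -> [String] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
    let observations = request.results as? [VNClassificationObservation] ?? []
    return observations.filter { $0.confidence > 0.3 }.map { $0.identifier }
}
```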
I think it is possible to be relatively clear about what Apple is claiming here. Granting that Apple is probably referring to the single-threaded performance of its A14 performance cores and is probably using Geekbench to get its performance numbers (that is normally what Apple does), then Apple is saying that the A14 will offer 40% more GB5 performance than the A12. So (using representative GB5 benchmark data as a guide) single-threaded performance will roughly be: 1112 * 1.4 -> 1556.8
A GB5 score of 1557, if confirmed, would mean that Apple has a storming little chip suited to smartphone use that offers the same (peak) single-threaded performance as Intel's Core i7-1165G7 Tiger Lake processor. Using a bit more supposition it is possible to weigh up whether a future A14X (it is reasonable to expect that Apple will continue to offer such a product, but we will only know for sure once it happens) could actually provide general computing performance in the vicinity of the Core i7-1165G7. To achieve parity with that processor, multi-threaded performance in percentage terms (relative to the A12Z/A12X) would have to improve by:
((GB5 score for Core i7-1165G7 - GB5 score for A12Z/A12X) / GB5 score for A12Z/A12X) * 100
((5832 - 4644) / 4644) * 100 -> 25.6%
That seems to be a readily achievable number when we consider that there are two processor generations and a shrink to the 5nm silicon process between the A12Z/A12X and the A14X.
GPU performance is harder to assess and is best left to when we have some A14X products to compare to Intel's Tiger Lake chips. Still, it isn't so easy to find areas in which the A14X is likely to look like some kind of toy in a direct comparison to the Tiger Lake parts. On the contrary, the limited evidence we have at this point indicates that the A12X will be competitive in performance terms with a relatively high performance part in Intel's 'Mobile', i.e. laptop, processor range while consuming around 1/2 the power. It is hard to dismiss an advantage like that.
A reduction in power consumption and therefore heat that needs to be dissipated through the chassis and skin of a portable device permits the design of thinner and lighter notebooks and tablets and/or longer battery life for such devices. Additionally, it allows the elimination of fans from most of the portable computing devices that consumers will need. Anyone needing a laptop more powerful than that will still need a product that has fans. Irrespective, there still will be a significant Perf/W advantage if the kind of claims that Apple is making about the performance and energy efficiency of its A14 generation cores holds up.
SarahKerrigan - Tuesday, September 15, 2020 - link
"On the CPU side of things, Apple is using new generation large performance cores as well as new small power efficient cores, but remains in a 2+6 configuration."Surely you meant 2+4?
Andrei Frumusanu - Tuesday, September 15, 2020 - link
Yes, typo.Kangal - Wednesday, September 16, 2020 - link
I hope they can make the switch to a 3+5 configuration.What do you think about something like that?
tkSteveFOX - Thursday, September 17, 2020 - link
As long as the 2 cores push more threads and ipc than the previous gen, who cares if there are more or less of them? For an ultra low power device, keeping the number of cores as low as possible is key to better sustained performance and keeping the die size as small as possible.tipoo - Tuesday, September 15, 2020 - link
If the A14 was going for efficiency to offset ProMotion, that I would easily be forgiving of and would really want that. But it sounds like we're also not getting ProMotion this year, plus a smaller than usual chip upgrade. Unless you really wanted 5G this year (and why? It's slower than 4G in many of the few places it is available right now), it's sounding like a wait till next year year.caribbeanblue - Tuesday, September 15, 2020 - link
Don’t forget that the increase in the transistor budget they get every year isn’t always spent on the CPU and GPU. It’s clear that a large portion of that transistor budget has gone to the new 16-core Neural Engine and new ISP for the camera, though not for the iPad Airtipoo - Tuesday, September 15, 2020 - link
Yes this is true. Maybe they see ML as a greater bang for your buck return with the given silicon budget this year.ikjadoon - Tuesday, September 15, 2020 - link
This is genuinely befuddling to me. What on Earth sort of "value-add" are the Neural Engines adding here? It's not just Apple, but Samsung, Huawei, Qualcomm, etc.They're far too early for their time. We're wasting 10 to 20% of the transistor budget for a feature you'll never use over the lifetime of the device. It's like 8 GB of VRAM on a $100 GPU. Why?
Can anyone name even five common apps that rely on high-performance neural engines? What, ever-so-slightly improved autocorrect? Undetectably better Siri? A few milliseconds faster Search?
The only possible use-case for mainstream consumers seems to be computational photography. Meanwhile, literally every application and all games would benefit from faster & larger CPUs and GPUs.
Thermogenic - Tuesday, September 15, 2020 - link
Apple's photo indexing comes to mind as a really cool feature of iPhone since all of that is done locally.futbalguy - Tuesday, September 15, 2020 - link
iOS and apps already take advantage of ML. The camera app is of course using ML. Siri. Scribble. Photos app categorizing photos. ML is hugely beneficial when it works and is just getting started. Im a developer and I think it will be more useful than adding more cores to the CPU (which is the only growth we see nowadays) because most apps scale poorly with multi-threading.tipoo - Tuesday, September 15, 2020 - link
Apple until now has had a steady train of impressive single core gains, they're not among the ones that just throw more cores at the problem. Look at this itself, only two big cores still. That's why this update in particular sticks out, with people maybe wondering if that gravy train is going to slow down.Luminar - Wednesday, September 16, 2020 - link
How does Siri use ML?How does scribble use ML?
Google photos categorizes photos as well.
close - Wednesday, September 16, 2020 - link
Doesn't Google Photos upload everything to Google and let the server do all the work? Same for voice recognition and everything, Google does *nothing* locally because it's in their financial best interest to slurp as much data as possible.Apple does it locally because if they do it remotely they have no chance in hell to compete with Google and Amazon (another company that literally hires people to listen to the Alexa recordings in order to properly label data for their ML). So Apple came up with a different strategy of doing as much as possible locally in order to sell *privacy*, since they can't sell Google and Amazon levels of performance in this particular regard.
FattyA - Wednesday, September 16, 2020 - link
Google does do voice recognition locally starting on the Pixel 4 (I don't know if that is true for the budget phones). They use local voice to text on the recorder app. The assistant also works without a network connection, obviously if you ask for something that it can't do locally it will need a network connection, but doing things like setting alarms, launching apps, or other basic phone controls are done locally. They also can do song detection, like Shazam, without a network connection. I think the song detection was able to be fit into 500MB which was something they mentioned when they launched the pixel 4 last year. They made a point of talking about local processing so that everything would continue to work even if you have a poor network connection.close - Monday, September 21, 2020 - link
@FattyA, when you say "starting on the Pixel 4" do you mean "any Android phone launched after Pixel 4" or literally "on the Pixel 4" which is probably one of the worst selling Android phones so pretty irrelevant in the grand scheme of things? Is it Android which is prioritizing or defaulting to local processing in general or *just* Pixel 4 doing *just* the voice recognition locally while everything else still gets sent to the great Google blackhole in the cloud?ceomrman - Friday, September 18, 2020 - link
Yes, Google uploads everything. They do that to study the data and to make money off it. There's no reason Apple couldn't do it that way, too. Apple could lease 100% of AWS's capacity and still have $25 billion annual profit left over. In realistic terms, the cost of offloading ML would amount to a rounding error for Apple. They've just decided it's more lucrative to develop faster SOCs and do the ML locally. That's probably down to a combination of Apple being good at designing chips and being able to charge a premium for more privacy and other features that benefit from local ML. It's basically just a different philosophy. Google is an advertising company. They want to profit from selling ads, hence their data obsession. Apple is a hardware company. They want to profit by selling shiny devices.close - Monday, September 21, 2020 - link
@ceomrman, Apple could play the same game but they'd still lose against Google or Amazon. Google (or Amazon) has far, far more access to "free" data than Apple does. Google has the upper hand here between being on so many more phones and home assistants all over the world (this aspect is important) and mixing data they get from all of their other sources. Apple's problem isn't the lack of computing power but the lack of a high quality and extensive data set. So Apple could at best be a distant second or third. Or they could just not play a game they'd lose and instead turn it on its head and brand themselves privacy advocates, compete for the market Google simply can't.Daeros - Tuesday, September 22, 2020 - link
Don't forget that Apple is a lifestyle brand. They actually make money selling the devices, unlike Google. Apple is incentivized to maintain a high-quality user experience on their devices, meaning it makes sense to move (or keep) things like voice, handwriting, and face recognition on the device, rather than subject to the whims of connectivity. I know that on my phone, the gboard voice recognition goes south fast if your WiFi/LTE connection are spotty.Meteor2 - Wednesday, October 7, 2020 - link
A lot of misrepresentation of reality above.Google led the world in applying ML to consumer products. It couldn't be done locally -- the tech did not exist. It was done in the datacenter, using x86 and GPU with the addition ofTensor Processing Units from 2015.
Apple, was following the same path until they made a decision to go for local-only processing (also in 2015) in order to create a USP of "your data doesn't leave for phone" for marketing.
Of course your data is as available to the rest of the world whether it's on your phone or in a datacenter; if a device is connected, it's connected. And of course iOS backs up everything to Apple's DCs anyway. As Apple says, it's only the processing that is done locally. Your data is shared with Apple just as much as an Android user's is shared with Google.
nico_mach - Thursday, September 17, 2020 - link
Are you serious? You can google machine learning, and that answer will have been provided by machine learning!These chips have a specific configuration that is efficient at machine learning, meaning, according to the wikipedia's description of google' TPU chip:
"Compared to a graphics processing unit, it is designed for a high volume of low precision computation (e.g. as little as 8-bit precision) with more input/output operations per joule, and lacks hardware for rasterisation/texture mapping.".
It's a customized version of GPU. Do you ever question if games actually use GPUs? No, right? Why would you? These are huge companies and this is a major hardware feature they spent millions to develop. Skepticism is healthy, but keep it in perspective, please.
ANORTECH - Thursday, September 17, 2020 - link
Why are not scaled better?jjjag - Tuesday, September 15, 2020 - link
You are assuming that all of the current CPU/GPU capacity in something like an A12 is already being used. That is not true. Your phone/ipad is not using 6 cores to do anything. Running ML-type tasks on a CPU/GPU is less efficient than having a dedicated/specialized processor for this.I mean the fact that you assume, by default, that you know better than Apple, Samsung, Huawei, and Qualcomm tells it all right there
ikjadoon - Tuesday, September 15, 2020 - link
Sigh, a troll comment masquerading as a correction. Faster & larger CPUs/GPUs are not for performance alone, but for more efficient processing (i.e., the race to idle), longer software update cycles,Not only that, but larger CPU cores & GPU cores can run at lower frequencies (see the mess of A12's DVFS curve), further increasing efficiency. Again, see A13 for a perfect problem.
GPU cores are also purely scalable: there's hardly an issue of "unused cores" during GPU events.
Likewise, because iOS devices are updated for nearly half-a-decade while iOS increases in complexity, a significant reserve of CPU / GPU power is much more important.
The fact that fixed-function hardware simply exists is not an argument for its large die reservation, relative to high-end iOS / iPadOS consumer experiences.
There's plenty of AI/ML hype and little real-world delivery besides a few niche applications.
Relying on corporations is a pitiful argument: I'm sure users appreciated that 8K30p ISP from Qualcomm. "How relevant! This is what was missing." /s
dotjaz - Tuesday, September 15, 2020 - link
You are the troll. You assume you know better than ALL of them, that type of superiority is just delusional.On top of that, you have the audacity to lecture others.
dotjaz - Tuesday, September 15, 2020 - link
Relying on good competition is excellent argument. That's the core of capitalism, stupid.You listed more than 3 companies. If forgoing ML gives a competition advantage, wouldn't one of them, especially MTK and Samsung have already done it? Samsung obviously know they are behind, that's why they finally got rid of SARC and signed on RDNA. Yet ML is still there.
How stupid are you to think you are so much better?
Spunjji - Wednesday, September 16, 2020 - link
How stupid are *you* to think that's exactly how capitalism works in practice?Forgoing ML wouldn't really give a competitive advantage because all of these companies have dedicated vast quantities of marketing to how important ML is for their products. Consumers aren't machines with access to perfect information - most of them don't even read these releases, let alone take any particular note of what's in their phone. The salesperson tells them "new, shiny, faster, better" and they buy.
The difference in die size between an SoC with ML and one without wouldn't make enough of a difference to the BoM to give a significant price advantage, and investing it into other components wouldn't give enough of a performance advantage to change that either - whereas saying "now your phone has a brain in it!" definitely will.
Backing up an appeal to authority with some wishful thinking about the nature of capitalism and a spot of tone policing really is Ben Shapiro level of crappy argumentation.
Spunjji - Wednesday, September 16, 2020 - link
I'm in agreement with you about the other comments, but if I could offer a counterpoint to your thoughts on AI/ML - you yourself noted how iOS devices are updated for half a decade. With the way things have been proceeding, it seems likely that within that time frame we *will* have more uses for the "Neural engine" - and, as others have noted, it will probably perform those tasks more efficiently than an up-rated CPU and GPU would. It's pretty much a classic chicken/egg scenario.
Archer_Legend - Thursday, September 17, 2020 - link
The GPU core scalability argument is quite flawed; the scalability of the cores depends on the GPU arch. See Vega, which after 56 CUs does not scale well to 64, and beyond that not at all.
nico_mach - Thursday, September 17, 2020 - link
Siri, searching and photo categorization aren't 'niche' applications, they are primary use cases for most people. Every voice system needs special hardware to efficiently triage voice recognition samples.
Spunjji - Wednesday, September 16, 2020 - link
"I mean the fact that you assume, by default, that you know better than Apple, Samsung, Huawei, and Qualcomm tells it all right there"So they're all right to take away the 3.5mm jack and create phones with curved screens that can't fit a screen protector? Glad to hear that, I thought they were all just copying flashy trends that don't add anything to the user experience...
In seriousness, I'm mocking you because that comment is a naked appeal to authority. It's perfectly possible that they're dedicating silicon area to things that can't be used very well yet - there are, after all, a lot of phones out there with 8 or 10 decidedly mediocre CPU cores. Lots of companies got on board with VR and 3D displays and those aren't anywhere to be seen now.
BedfordTim - Wednesday, September 16, 2020 - link
You are in many ways correct in that modern phones are a triumph of marketing over common sense. Where I think you may be wrong is that Apple has never marketed on absolute performance. They aren't really competing with Android phones and so have, for example, got by with minimal RAM and flash for years. Given there is no marketing going on for the NPU itself, it must be there for some purpose that will increase sales or data harvesting.
BedfordTim - Wednesday, September 16, 2020 - link
As an extension, the obvious area of use is the camera. Phone cameras are heavily dependent on software image synthesis to improve apparent image quality, adding in detail that was missing in the original image using AI.
Spunjji - Wednesday, September 16, 2020 - link
@BedfordTim - they've never marketed on absolute performance per se, but they do regularly tout performance improvements over their own prior products, along with their general leadership. I'm not sure you're disagreeing with me here, though - my point was very much that putative performance advantages in any area are irrelevant to the success of their products! :)
nico_mach - Thursday, September 17, 2020 - link
The ML chip units simply aren't high profile enough to be simply about sales. And Apple in particular doesn't just add hardware for no reason - yes, AR, but notably that isn't on every device the way that machine learning has been. It's real hardware with real advantages; I'm not sure why you're picking this out.
octavus - Tuesday, September 15, 2020 - link
With a higher transistor budget they can add more and more fixed, or at least less flexible, circuitry for better power efficiency. All of the CPUs today have an immense amount of fixed-function units for things like media or imaging, as no one has a better use for all these transistors; why have a separate sensor hub when you can just put it in the CPU?
Tams80 - Friday, September 18, 2020 - link
Very much this. Before, it was because the SoCs weren't computationally powerful enough to do some tasks without bringing the SoC to its knees (see the Nokia 808 PureView and its imaging DSP compared to the Lumia 1020).
The efficiency is now just used to reduce power draw, with the added benefit that that dedicated circuitry can be added in at comparatively very little cost (in space, etc.).
linuxgeex - Tuesday, September 15, 2020 - link
Updating the neural inferencing capabilities at the edge is about reducing data transmitted back to the mothership for the same quality of data harvested. They're doing it for their own enrichment.
Spunjji - Wednesday, September 16, 2020 - link
I'm not sure this really applies in the case of Apple?
nico_mach - Thursday, September 17, 2020 - link
Of course it does. All their cloud subscription services run on AI in the cloud. Local AI can't do inferences without a huge dataset - voice and photos can be local to a greater extent; Netflix-type stuff can't. Like the new Fitness+: to pick videos and create strategies for content, that's all stats- and AI-based and requires collecting data and analytics on their side, not yours. They have talked up anonymizing what they collect, but they are the only ones who can see that, so it's entirely on the honor system.
dotjaz - Tuesday, September 15, 2020 - link
So you are never using the camera over the lifetime of your phone? Then why not start complaining about the sensor first?
Duke Brobbey - Wednesday, September 16, 2020 - link
Who is this user? In fact, these have been my thoughts all these years, starting with the Galaxy S7 and its Exynos, and then slowly making their way into mainstream smartphone marketing. Sometimes I doubt whether these mobile chipsets really have the desktop-grade hardware they claim.
Hyper72 - Wednesday, September 16, 2020 - link
Palm detection when using the Apple Pencil, Siri app recommendations, the photo gallery, the camera (which uses it a lot), on-device dictation, health features (sleep, hand washing), accessibility/sound detection, Lidar processing. There was an Arc article last month about how it's used almost everywhere in the OS now.
Isaacc7 - Wednesday, September 16, 2020 - link
The neural engine is used in all sorts of categories of programs and uses on Apple devices. The neural engine speeds up and/or is what allows live effects on pictures like the various Portrait Lighting modes, Animojis and other live selfie effects, their nighttime picture mode, all of the AR stuff, as well as image detection, focus tracking, body position and analysis, etc. Pixelmator Photo's new enhancement feature was shown off during the presentation. There is a good chance that an improved neural engine will allow them to apply live effects to video. The Core ML API is also used for on-device language parsing and dictation. It is even used in low-level operations like making the use of the Pencil smoother on the iPad. ML is a very important tool for more and more types of applications. Having dedicated hardware for processing instead of brute-force attacks via the GPU or CPU makes for both a faster and more efficient system. Apple will also make a big deal about this in the new Macs. A lot of processing-intensive manipulations could be moved off the GPU and onto the Neural Engine. It could very well lead to photo and video manipulations being able to be done on a lighter-weight and less expensive computer.
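For a sense of how apps tap that hardware, here is a minimal Swift sketch (not Apple's code; "Classifier.mlmodelc" is a hypothetical compiled model name): developers never target the Neural Engine directly, they hand a model to Core ML and let the framework place the work.

import CoreML
import Foundation

// Hedged sketch: "Classifier.mlmodelc" stands in for any compiled Core ML model in an app bundle.
// computeUnits = .all lets Core ML schedule layers on the CPU, GPU, or Neural Engine as it sees fit;
// .cpuOnly would force everything onto the CPU instead.
let config = MLModelConfiguration()
config.computeUnits = .all

if let url = Bundle.main.url(forResource: "Classifier", withExtension: "mlmodelc"),
   let model = try? MLModel(contentsOf: url, configuration: config) {
    print("Loaded:", model.modelDescription)
}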
ANORTECH - Thursday, September 17, 2020 - link
According to Dan Riccio they are doing tons of projects based on ML, so they want to be prepared for everything, but I'm with you in terms of wanting more CPU power now.
cha0z_ - Monday, September 21, 2020 - link
There are use cases, but we can argue whether they should have put all the budget from the bigger die & new process into ML, instead of a smaller part there and a bigger part towards the CPU and GPU. The current A13 ML is not showing any issues with the AI tasks that Apple implements, for example, and as for Fitness+, it will be available on much older phones lol. Bumping all the budget towards ML only is stupid, because less than 1% really push it while 100% benefit from a faster CPU and GPU. Play some Civilization VI, for example, and one can clearly see that the A13 struggles - it's laggy and not smooth at all - so yes, that game needs a faster SoC, and I can list quite a lot more really.
tkSteveFOX - Thursday, September 17, 2020 - link
Correct. Performance is more than enough. AI and ISP along with better sustained performance is a much better choice. Sadly, we don't get manufacturers to decide this in the Android space as long as only ARM provides the CPU cores. Sure, manufacturers do the ISP and AI, but they can't fully customise their core design.
Kangal - Sunday, September 20, 2020 - link
My guess is this:
- The Apple A13 sets the benchmark for minimum performance for the coming generation.
- The Apple A14 maintains that performance, but extends the silicon budget to AI. These beefy ML coprocessors are used for the new OS X Macs that are transitioning from x86. On the iPhone front, the decent efficiency gains are spent on the thirsty external 5G chip.
- On the A15 SoC, they'll focus again on efficiency, this time by shifting the external 5G modem to an efficient internal modem, and the extra efficiency gains are spent on making the 120Hz display have acceptable battery life. On the OS X side, the efficiency is instead sacrificed to increase performance on the MacBook Air and Pro.
ANORTECH - Thursday, September 17, 2020 - link
Well, next year's iPhone will be huge: a new high-MP camera sensor (48 or 64), hopefully with an option for 8K recording, 120Hz, maybe a notch reduction - we will see what they decide, because the decision has not been made in terms of design yet - and probably more things we don't know yet. But this year's iPhone won't be bad either if you're coming from a phone that's a few years old; I want the new case redesign and the bezel reduction in the front panel.
cha0z_ - Monday, September 21, 2020 - link
High-MP sensors lead to worse photos overall. There are things like physics, you know... binning smaller pixels leads to more issues than fewer MP and bigger pixels; it has been proven again and again. Check the S20 Ultra/Note 20 Ultra and all their issues because of the "big" sensor with insane MP numbers. Marketing, nothing more. A bigger 16MP sensor will take far better photos than a 48/64/108MP one. Not that processing isn't 90% of phone photos nowadays (look at the Pixel 4a - and I guarantee they could have made it take even better photos with the same camera setup if they wanted to).
cha0z_ - Monday, September 21, 2020 - link
This one, most likely, will be just a big camera update this year. That's nice for someone who takes a lot of pictures and doesn't just post them on FB; for everyone else... totally not worth changing your phone.
benedict - Tuesday, September 15, 2020 - link
Intel has been giving 5% performance upgrades each year since forever and people still bought billions of those chips. A 16% CPU / 8% GPU generational upgrade is nothing to scoff at.
linuxgeex - Tuesday, September 15, 2020 - link
I think Andrei's reason for calling it "meagre" is because with the die shrink we could have expected similar gains without any significant architecture improvements... so this big exciting "40%" reveal feels a bit like they phoned it in when you take it in context of the A13 and the die shrink.
Rudde - Wednesday, September 16, 2020 - link
To expand on your comment: TSMC N5 promises a ~15% improvement in performance compared to N7. Like Andrei mentioned, they might have improved their power efficiency at the cost of performance. What is strange is that Apple didn't mention better power efficiency. If the A14 uses as much power as the A13, then all the improvements come from the better node.
OreoCookie - Wednesday, September 16, 2020 - link
I don't get Andrei's negativity here either. Your and his points are well taken, but I don't think it makes sense to jump the gun and say it is a meagre update. When I watched the presentation I thought to myself that the Neural Engine is *massive*! Given Apple's tight integration of software and hardware, I reckon Apple knows what it wants to do with it.
nico_mach - Thursday, September 17, 2020 - link
It seems pretty slanted at first glance, and it's more than a little questionable given the situation around Intel's slow-moving collapse and Apple's new initiative. I can't help but link the two; Intel has been noticeably underhanded and constantly getting caught.
ChrisGX - Tuesday, September 29, 2020 - link
No, Andrei offers us an either/or: either the performance gains relative to the A13 are meagre (which might constitute an engineering failure) or Apple has become more concerned about the power draw of its mobile processors (which would indicate a change in design strategy). Unable to read Apple's mind, he sensibly leaves it to us to make of this what we will. In the article what he says is: "Apple might have finally pulled back on their excessive peak power draw at the maximum performance states of the CPUs and GPUs, and thus peak performance wouldn’t have seen such a large jump this generation, but favour more sustainable thermal figures." That sounds like a positive statement to me that expresses support for the "pulling back" and the "more sustainable thermal figures" that it results in. Those Apple performance cores are already very potent (at peak performance). Even if I am wrong to read Andrei's words this way, I still think that would be the right strategy for Apple going forward. Just as ARM must be finding additional performance from its cores, Apple needs to avoid overcooking things with its performance cores and thus rendering them unsuitable for use in a mobile device context. That said, Apple does have the most stunning power management tricks governing the operation of its SoCs. Even as the delivered performance of recent A series SoCs (in the smartphone/pocketable device form factor) is tightly regulated and constantly shifting to sustainable levels (per the governing parameters of power draw and thermal envelope), what users of Apple phones notice is a product that offers great/responsive performance and good battery life in a package that seems to be free of compromise. Apple's smartphone SoCs are good enough to create that illusion and, not to take anything away from Apple, you need very good technology to create an illusion like that. Still, illusions only take you so far.
If Apple's performance cores are going to genuinely rival the energy efficiency of licensed ARM cores (Apple doesn't seem to have cores with the incredible energy efficiency of ARM's licensed cores, but conversely ARM doesn't seem to have on-chip power management tech that is in the league of Apple's SoCs) - and Apple should really want to do that, because energy efficiency directly governs the relevance of a chip for use in the mobile context - then Apple is going to have to beat a path back to more energy-efficient processor design. Such a processor might be a bit more conventional than Apple's recent A series offerings, in that the sustained performance on tap would be more in line with the peak performance on offer from the chip. That doesn't mean that Apple should stop looking for performance gains from its smartphone silicon. It only means that it would make a lot of sense to put energy efficiency first for its forthcoming processors (for the smartphone). And, as there is already plenty of rarely tapped processing power that is nominally available from Apple's SoCs but that remains hidden from view most of the time, only showing up in peak-performance-favouring benchmarks like GB5, pushing hard on an energy-efficiency-first SoC design strategy is, in any case, just one more interesting way and potentially a more fruitful way to explore how performance can be raised.
Retycint - Tuesday, September 15, 2020 - link
People were scoffing at Intel's performance upgrades too, just so you know.
cha0z_ - Monday, September 21, 2020 - link
We're talking about the mobile space and mobile SoCs; by how fast things move here, it's bad. Intel did 5% every year because they didn't have any competition; they milked their buyers and saved a lot of R&D money. For its size and new process, it's kinda laughable how small the uplift is, but hey! You got much faster ML that will be utilised by less than 1% of buyers! Even the current NE/ML in the A13 is not utilised at all. ARM promises big jumps in comparison; if the A14 is really as small an improvement as it's panning out to be, down the drain goes the 2-generation advantage that is already not really valid vs the Snapdragon 865+. Let's wait and see though; I am really curious about the reveal. Also, in previous years we already got a lot of Geekbench leaks of the new iPhone - this year? Hmmm.
NextGen_Gamer - Tuesday, September 15, 2020 - link
Are we certain that the performance comparisons on the A14 were against the A12 from the previous iPad Air, and not the A13? That was unclear to me. During the presentation, when they started talking about the A14 chip, the speaker switched over to Tim Millet, Apple's VP of Platform Architecture. To me at least, I was assuming he was referring to the previous generation of A-series chips (A13), since he IS the A-series guy, and not just the previous iPad Air generation. And also, 38% more transistors seems crazy - is Apple hiding something? Is it really a 2+6-core CPU and 4-core GPU? Or maybe a 4+4 CPU and perhaps a 6- or 8-core GPU? Apple disabling some parts on the iPad/iPhone 12, while enabling the full A14 for the first Apple Silicon Mac later this year?
Andrei Frumusanu - Tuesday, September 15, 2020 - link
> Are we certain that the performance comparisons on the A14 were against the A12 from the previous iPad Air, and not the A13?Yes, Apple is clear about it, that it compared to the past generation *device*. https://www.apple.com/newsroom/2020/09/apple-unvei...
It was also noted in the event.
As for transistors, bigger caches, double size NPU, probably new LPDDR5 controllers, probably fatter ISPs.
NextGen_Gamer - Tuesday, September 15, 2020 - link
You are right, it is noted quite clearly in the press release. I must have missed that during the conference! That is a little disappointing then - this would be the smallest performance increase on both the CPU & GPU side of things since Apple started making their own A-series SoCs.
Jaianiesh03 - Wednesday, September 16, 2020 - link
I don't think we should rush to conclusions just yet anyway. Apple's performance figures don't make sense to me, because their A12 performance figures for the regular iPad were also wrong. They claimed 40% more performance from the A10 to the A12, whereas in the context of their own figures from the iPhone launches (X and XS), 1.15 x 1.25 = ~44%. I believe Apple just added 25 + 15 = 40 for the A12 in the iPad. Similarly, here they just added 20 (A13) + 20 = 40 for the iPad Air. We should expect at least a 20 percent improvement announced over the A13 in their iPhone 12 keynote. We also have to keep in mind that they claimed 15% for the A12, which turned out to be 20% in Geekbench 5 and 25% in SPEC2006. Do you think that may be the reason @Andreif7?
firewolfsm - Tuesday, September 15, 2020 - link
It may be that they simply wanted to save die area/money this generation.
tipoo - Tuesday, September 15, 2020 - link
You don't spend 11 and a half billion transistors on a bleeding-edge foundry node trying to save money. The transistors are there; they're just not being applied to major CPU and GPU bumps over the A13 this year.
Gondalf - Wednesday, September 16, 2020 - link
Looking at the performance increase over the last generation, IMO a good portion of these billions of transistors are simply in place as backup, to keep yields at decent levels. This is one of the more popular strategies to get more good dies out of a wafer.
name99 - Tuesday, September 15, 2020 - link
I think it's too early to draw any conclusions.
(a) They may want to make more of a big deal about "our CPU/GPUs are the best" at the Mac release than now. So they say some vague words and that's it.
(b) On the CPU side, if we have a big performance increase in SVE/2 and AMX, that won't show until SW is recompiled. So obviously that needs a new Xcode to be released, but again it may be something they're holding off till the Mac event.
(c) The GPU number I think HAS to be a mistake or misunderstanding in the phrasing. I mean, the iPhone 11 vs iPhone XS GB5 Compute number is already 40% apart: 6500 vs 4600.
They might just conceivably stay flat for the sake of a more realistic sustained performance, but they aren't going to go backwards!
Presumably post announcement we'll start to see GB numbers in the wild soon, at which point we can recalibrate.
name99 - Tuesday, September 15, 2020 - link
TRIGGER WARNING: far-out speculation. One of the things they boast about (but gave no details on) is "Advanced Silicon Packaging".
Is it possible they put something like HBM in there, alongside the DRAM, as a tentative first experimental step? And the calibration to automatically steer data between the HBM and the DDR5 is still far from perfect?
I agree HBM on this class of devices doesn't necessarily make sense, but maybe it makes sense in terms of power, and while Apple wouldn't go backwards for no reason, they might have effectively gone backwards as a hiccup in rolling out this new tech?
Andrei Frumusanu - Tuesday, September 15, 2020 - link
Their packaging mention probably refers to InFO, which they have had for some time.
name99 - Tuesday, September 15, 2020 - link
You think they will still be calling that out after so many years? I'm hoping it refers to more than A10 technology!
tipoo - Tuesday, September 15, 2020 - link
AMX extensions were added in the A13 last year, FYI.
name99 - Tuesday, September 15, 2020 - link
Yes, but we still have no indication as to where they are actually USED. They certainly aren't visible in Xcode. I have seen no indication that you can trigger their use via Accelerate calls. PERHAPS they are used by some CoreML calls?
My point is that when (if?) they're visible to 3rd party SW, that might boost the performance of some benchmarks, and general purpose code. (Or might not. It's hard to say when ALL you know is that they "accelerate matrix computation". Do they give us access to larger registers that could be used in alternate ways? Access to alternate cache use patterns? Access to some sort of permute or table lookup primitives?)
tipoo - Tuesday, September 15, 2020 - link
Afaik when you use CoreML, it automatically dispatches appropriate tasks to AMX, the neural engine, GPU, or CPU, depending on algorithm type.
name99 - Wednesday, September 16, 2020 - link
That AMX is included among the places to which CoreML dispatches is an ASSUMPTION. That's precisely my point -- I've seen nothing so far to indicate that the assumption is validated.
Apple has in the past shipped hardware (eg h.265) that wasn't activated until a year or so later, so this is not necessarily unprecedented.
tipoo - Monday, September 21, 2020 - link
It's not an assumption, I'm telling you forthright that AMX instructions are available in EL0, and are used by CoreML and Accelerate.framework.
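For what it's worth, here is a minimal Swift sketch (illustrative, not from Apple's documentation) of the Accelerate path being discussed: a plain single-precision matrix multiply through the framework's BLAS interface. The caller never asks for AMX; whether the math lands on those blocks is up to Apple's libraries and the silicon, which is exactly why the units stay invisible to third-party code.

import Accelerate

// Minimal sketch: a 2x2 SGEMM through Accelerate's BLAS. The caller never targets AMX;
// if the hardware has it, Apple's libraries may route the math there transparently.
let m: Int32 = 2, n: Int32 = 2, k: Int32 = 2
let a: [Float] = [1, 2, 3, 4]            // A, row-major 2x2
let b: [Float] = [5, 6, 7, 8]            // B, row-major 2x2
var c = [Float](repeating: 0, count: 4)  // C = A * B

cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            m, n, k, 1.0, a, k, b, n, 0.0, &c, n)
print(c)   // [19.0, 22.0, 43.0, 50.0]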
Tomatotech - Wednesday, September 16, 2020 - link
This seemed more of a consumer event - new watches, Apple Watch SE, iPads (the two cheaper models), new bands, etc. - hence lighter on the nitty-gritty details of the chips. There wasn't anything 'Pro' announced today, which is another reason for being light on details. Apple knows reviews will come out soon anyway. Nice to see a (tiny) bit more focus from Apple on value for money; shame it took a global pandemic to make them do so. They utterly dominate the high end in every sector they compete in, so maybe this is just a long-planned move into the upper middle of the market. (Aided by the shift to Arm. Expect lots of technical detail when their ArmBooks come out.)
Speedfriend - Wednesday, September 16, 2020 - link
Focus on value for money? They put the price of the Air up by $100. Sure, they kept the normal iPad at the same price with a processor upgrade, but that is still a 2-3 year old processor in a 5-year-old form factor.
name99 - Wednesday, September 16, 2020 - link
It's Apple. They're ALL consumer events! Apple is a consumer company!!!
blackcrayon - Wednesday, September 16, 2020 - link
Kinda, but discussing the intricacies of optimizing your Swift code for Metal 2 isn't exactly consumer-oriented (i.e. certain parts of WWDC).
zaza - Tuesday, September 15, 2020 - link
AI features are probably used a lot for the iPhone's camera and LiDAR. But it seems that they are focusing on efficiency, which is not a bad thing.
nicolaim - Tuesday, September 15, 2020 - link
Typo: "A14 SoC chip." SoC = System on a Chip.
Oxford Guy - Tuesday, September 15, 2020 - link
Perhaps I'm too cynical but my first guess is always that the ever-expanding AI transistor budget is primarily driven by the desire for corporations/corporate government to have increasingly sophisticated spyware. (Giving people perks to go along with it — like saving some guy about to fall off a cliff — is the spoonful of sugar for the medicine, of course.) Agner Fog joked years ago that the main benefit of multiple processor cores is to make the spyware run faster. Well, since we already have plenty of extra cores these days, a newer approach was needed to expand upon the panopticon. There are already so many layers of spyware in current devices that the spies might need the AI to keep track of all of it.
Getting people to surrender to chip-based TIA while paying for the pleasure is a neat trick.
watzupken - Tuesday, September 15, 2020 - link
From the sound of it, this may be one of the smallest improvements in their SoC performance to date. The fact that they are using the A12 instead of the A13 as the comparison is a telltale sign. From my memory, they've always compared new vs last generation to show the performance improvement. Seems like they lost their lead CPU designer and we are starting to see the impact. At this rate, Apple is at risk of losing whatever single-core advantage it has to the generic ARM chips.
Boland - Tuesday, September 15, 2020 - link
They're comparing to the A12 because that's what was in the last iPad. When the phone keynote comes around, you'll get the A13 comparison there.
Zerrohero - Wednesday, September 16, 2020 - link
”Apple is at risk of losing whatever single core advantage to the generic ARM chips.”
If A14 is 40% faster in single core than A12, it means that the GB5 score for A14 is about 1550.
For reference, the SD 865 gets about 900 in GB5. Yes, 900.
If Apple stops chip development now, then the generic ARM chips will surpass them in four-ish years in single core performance, if they improve 15% every year.
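A rough back-of-envelope check of that "four-ish years" figure, using only the numbers quoted in the comment (roughly 1550 vs 900 GB5 single-core, +15% per year); a sketch, not measured data.

import Foundation

// Back-of-envelope: years of 15% annual gains needed for ~900 GB5 SC to reach ~1550.
let a14Score = 1550.0
let sd865Score = 900.0
let annualGain = 1.15
let years = log(a14Score / sd865Score) / log(annualGain)
print(String(format: "≈ %.1f years to parity", years))   // ≈ 3.9 years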
jaj18 - Wednesday, September 16, 2020 - link
lol no, ARM already announced the Cortex-X1, which will be 30% faster than the 865 at 3GHz. Then it's just a 25% difference with the A14. It will be less if Qualcomm goes above 3GHz.
Meteor2 - Wednesday, October 7, 2020 - link
What phone can I buy that in?
Rego78 - Tuesday, September 15, 2020 - link
As far as we know the A12 was the first processor on 7nm. Is their chart wrong?
Rego78 - Tuesday, September 15, 2020 - link
Even saying so in their presser: https://www.apple.com/newsroom/2018/09/iphone-xs-a...
Sychonut - Tuesday, September 15, 2020 - link
This would have performed better on 14+++++. Just saying.
Oxford Guy - Wednesday, September 16, 2020 - link
Looks like I'm not too cynical after all:
"All of this is just to treat the symptoms, not the cause. Chaslot believes the real focus needs to be on long term solutions. At the moment, users are fighting against supercomputers to try to protect their free will, but it’s a losing battle with the current tools.
That’s why Chaslot is convinced the only way forward is to provide proper transparency and give users real control: 'In the long term, we need people to be in control of the AI, instead of AI controlling the users.'"
This is from an article titled "'YouTube recommendations are toxic,' says dev who worked on the algorithm".
I have been trying to figure out how to rid myself of the dreadful banality of the "12-year-old Humiliates Simon Cowell" video, among other monstrosities that incessantly show up in that list (because my mother visited and watched those awful reality show videos via our WiFi).
Chaslot's statement about supercomputers and our free will seems spot on, if chilling. But, go go Apple. And Nvidia. And everyone else. Cram in as much AI goodness as possible.
SydneyBlue120d - Wednesday, September 16, 2020 - link
No AV1 support at all?!?!
Gondalf - Wednesday, September 16, 2020 - link
From a half node, can we expect more? This 5nm is more like a 6nm in the best case. At this point it would be better to stay on 7nm and cut down costs, if possible.
If 3nm and 2nm are the same, the upcoming years will be a little boring. We have to remember that on high-power SKUs these numbers will be even lower.
Jaianiesh03 - Wednesday, September 16, 2020 - link
I don't think we should rush to conclusions just yet anyway. Apple's performance figures don't make sense to me, because their A12 performance figures for the regular iPad were also wrong. They claimed 40% more performance from the A10 to the A12, whereas in the context of their own figures from the iPhone launches (X and XS), 1.15 x 1.25 = ~44%. I believe Apple just added 25 + 15 = 40 for the A12 in the iPad. Similarly, here they just added 20 (A13) + 20 = 40 for the iPad Air. We should expect at least a 20 percent improvement announced over the A13 in their iPhone 12 keynote. We also have to keep in mind that they claimed 15% for the A12, which turned out to be 20% in Geekbench 5 and 25% in SPEC2006. Do you think that may be the reason @Andreif7?
tipoo - Monday, September 21, 2020 - link
Maybe they move to harder internal tests each generation at the same time, like how the battery life claims changed over time.
Torrijos - Wednesday, September 16, 2020 - link
I think they've reached the stage where they are at the TOP of CPU/GPU performance for a mobile platform, and instead increased efficiency and dedicated circuits for ML, photo, and video processing. This chip will probably be used in the next iPhones (where they will explain all those specific new components) and also the Apple TV, I imagine, cementing a reference point for iOS app performance for the next few years (for Apple Arcade, for example).
We will see soon enough when the iPhones are presented.
Speedfriend - Wednesday, September 16, 2020 - link
Is this the first time that a leading-edge SoC has gone into an iPad, as they are usually reserved for iPhones? Given how constrained capacity would have been at 5nm, I find it strange it has been used for the Air. Perhaps expected demand for the iPhone is way below what it was when capacity was booked. Also, surely this chip will not be used in any Mac product? With Tiger Lake coming out, Apple is going to really have to pull one out of the hat to compete.
Zoolook - Wednesday, September 16, 2020 - link
I doubt the capacity of 5nm is constrained. All of the planned Huawei wafer starts were up in the air just a few months ago, and AMD is not on 5nm yet, so Apple and Qualcomm probably fought it out over those wafer starts and now have more capacity than they planned for at the beginning of the year.
Speedfriend - Wednesday, September 16, 2020 - link
No, Huawei was still reported to be getting its 5nm chips fabricated up until yesterday, and had even tried to get additional capacity when the ban was announced 3 months ago. 5nm would have been very tight. Despite everyone calling for a big 5G replacement cycle for the iPhone now, 5G just doesn't offer that much to consumers yet in most places above what 4G provides, so that may be reflected in underlying orders, with operators having seen poor demand for other 5G phones.
Zoolook - Wednesday, September 16, 2020 - link
The wafers that have been produced so far (since mid-Q2) were planned at least a year ago. Apple had first call on wafer capacity up to a certain percentage, so they should have been getting what they ordered, and yield has been reported to be better than expected. From now on there are extra wafer starts to divide between Qcom and Apple, so they definitely have the opportunity to have more chips produced this year than they planned for last year.
huangcjz - Monday, September 21, 2020 - link
The A5 went into the iPad 2 in March 2011 before the iPhone 4S in October 2011, I think. I can't remember if there are any other examples.
playtech1 - Wednesday, September 16, 2020 - link
I wonder if that is the first A-series chip tuned for macOS - could it be that the short period of sustained peak performance was an issue on the Mac side, so they focused on fixing that? Pure speculation, but if Apple is going to use its chips across multiple products it seems inevitable that compromises will need to be made.
Speedfriend - Wednesday, September 16, 2020 - link
Surely they need something more powerful than this for the Mac. At a 40% improvement over the A12, we are talking 1550 SC / 3700 MC in Geekbench, when an i5 Tiger Lake looks to be 1400 SC / 5100 MC and the i7 1600 SC / 6100 MC. Apple has promised 30% better performance than Intel, haven't they?
surt - Wednesday, September 16, 2020 - link
I'd be shocked if they didn't have a much larger chip planned for the Mac. I'd expect at least double the area.
Zoolook - Wednesday, September 16, 2020 - link
It would be strange if there aren't at least four big cores as a minimum, yes.
TouchdownTom9 - Friday, September 18, 2020 - link
Completely agree. I would expect a 1.5-2x larger die size for the first MacBook launched (maybe around 150mm² if the iPhone 12 chip is 100mm²). I would expect around 200 or 250mm² for a Mac chip, given they will likely do all they can to attack concerns about ARM processing power head-on. Though I'll admit I am talking completely out of my depth on this.
blackcrayon - Wednesday, September 16, 2020 - link
We already know they make custom-tailored chips though. An A14X (Z?) might be a perfect fit for the first line of Apple Silicon Macs (ultra-light notebooks with similar requirements to an iPad Pro). Who knows - but using the same architecture and ramping the CPU/GPU core counts way up might be what they have planned.
Showtime - Wednesday, September 16, 2020 - link
WIN/WIN and not just for Apple. 8% with any efficiency gains while using their own chips is great for us consumers.
RedOnlyFan - Wednesday, September 16, 2020 - link
Is TSMC 5nm a full node or a half node ("just a renamed 7nm")?
Archer_Legend - Thursday, September 17, 2020 - link
Possibly both.
tipoo - Monday, September 21, 2020 - link
5nm is named as a mainline full node, 4nm is the half node, 3nm is the next full node.
dontlistentome - Thursday, September 17, 2020 - link
Could limiting the peak power here be a way to put distance between this and the A14X or whatever it will be in the Mac? That way they can chuck some extra cores and extra power/cooling at that and have clear space between them on single and multi-thread?
Kurosaki - Thursday, September 17, 2020 - link
When is the 3080 review coming, AT?! I'm waiting! :D
Ju1iet - Thursday, September 17, 2020 - link
What about protection from Meltdown and Spectre attacks? Those cheat the CPU's branch predictors. Are these new CPUs, which are even richer in machine learning and AI features, safe now?
jeffbui - Friday, September 18, 2020 - link
I'm hoping Apple used the initial production of 5nm chips for a smaller run of iPad Airs with some logic disabled. Otherwise, the A14 is rather disappointing.
TouchdownTom9 - Friday, September 18, 2020 - link
I am really shocked to hear that this chip seems to have very limited improvements at face value. Granted, we need to see the chip in person, and having the iPhone version will allow a better comparison, but the year Apple plans to start taking their chip lineup to the entire product stack, and they make little to no improvements beyond a die shrink? Given they have made such major strides in this sector over the last 5 years, I think this is pretty shocking. Perhaps we see a full 30-50%+ improvement in battery life, but I wouldn't bank on it.
will_meig - Saturday, September 19, 2020 - link
The A14 is about 1.6 times faster than the A13, not 16%. According to Apple, the A14 is twice as fast as the A12. And the A13 is 20% faster than the A12.
A12: 100%
A13: 120%
A14: 200%
120 x 1.66... = 200
tipoo - Monday, September 21, 2020 - link
Well, your math is off on the basis that it's not twice as fast as the A12; their own numbers are 40% better CPU, 30% better GPU, and machine learning was all they said was doubled.
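Working that correction through (a hedged sketch using the A12-relative claims quoted in this thread, and assuming the A13 was +20% over the A12 on both CPU and GPU), the implied gain over the A13 comes out to roughly 16-17% CPU and 8% GPU, in line with the figures discussed above.

import Foundation

// Hedged arithmetic from the keynote-style claims quoted here (assumptions, not new data):
// A14 vs A12: +40% CPU, +30% GPU; A13 vs A12: +20% CPU, +20% GPU.
let a14OverA12 = (cpu: 1.40, gpu: 1.30)
let a13OverA12 = (cpu: 1.20, gpu: 1.20)

let cpuGain = a14OverA12.cpu / a13OverA12.cpu - 1   // ≈ 0.167
let gpuGain = a14OverA12.gpu / a13OverA12.gpu - 1   // ≈ 0.083
print(String(format: "A14 over A13: CPU ≈ %.0f%%, GPU ≈ %.0f%%", cpuGain * 100, gpuGain * 100))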
adonishong - Sunday, September 20, 2020 - link
Nvidia makes use of RT cores as a ray-tracing computing unit and also for convolution network computing. I have a theory: Apple is targeting ray tracing too...
high3r - Monday, September 21, 2020 - link
Could it be that this A14 is clocked lower than the one in the iPhone 12?
huangcjz - Monday, September 21, 2020 - link
When there's a difference in clock speeds, the iPad version is usually clocked higher than the iPhone version, as iPads have a larger surface area and volume for heat dissipation than iPhones do.
cha0z_ - Monday, September 21, 2020 - link
I am also quite surprised by the performance uplift if it turns out to be true. Compared to recent years, it's really, really small in both CPU and GPU. Clearly all the budget was put towards the NE, but I still feel that for most users even the current NE is more than enough and rarely used. To put it all into something that will benefit literally less than 1% of the user base instead of the CPU/GPU that will benefit all... dunno how to feel about it. Also, it doesn't look insanely more efficient; I would expect them to promote that + iPad Air battery life to show it, too.
michael2k - Wednesday, September 23, 2020 - link
Given that every year the iPhone is benchmarked as 4 to 5 years ahead of Qualcomm and Samsung, I don't see the issue with losing a year of CPU/GPU performance gain. It still means they are 4 years ahead. Now you question how ML/NE benefits only 1% of the user base? I'm not sure that's remotely true. Siri uses ML for speech recognition. Are you claiming only 1% of their user base uses Siri?
FaceID and animoji also use ML to recognize your face; are you claiming only 1% of their user base uses FaceID or animoji?
The camera app uses ML in portrait mode to select the foreground and blur the background; are you claiming only 1% of their user base uses portrait mode?
They use ML in their heartbeat monitoring hardware/software to determine if you have an atrial fibrillation. It's possible only 1% of their user base have an Apple Watch and also have an atrial fibrillation, but at 100m watches sold way more than 1% of their users are being monitored for atrial fibrillation; closer to 10% of their user base have an Apple Watch, so at the very least 10% of them use the NE hardware.
Apple definitely uses ML and NE to perform image recognition and tagging on device; are you claiming only 1% of Apple's users search their photos by keywords?
Suraj tiwary - Wednesday, September 23, 2020 - link
Andrei Frumusanu, please do CPU benchmarks with SPEC2017, as SPEC2006 is too old.
Anymoore - Saturday, September 26, 2020 - link
The A14 seems not to have been ready in September, unlike the A11, A12, and A13. So the next iPhone's launch is also delayed.
ChrisGX - Tuesday, September 29, 2020 - link
I think it is possible to be relatively clear about what Apple is claiming here. Granting that Apple is probably referring to the single-threaded performance of its A14 performance cores and is probably using Geekbench to get its performance numbers (that is normally what Apple does), then Apple is saying that the A14 will offer 40% more GB5 performance than the A12. So (using representative GB5 benchmark data as a guide) single-threaded performance will roughly be:
1112 * 1.4 -> 1556.8
A GB5 score of 1557, if confirmed, would mean that Apple has a storming little chip suited to smartphone use that offers the same (peak) single threaded performance as Intel's Core i7-1165G7 Tiger Lake processor. Using a bit more supposition it is possible to weigh up whether a future A14X (it is reasonable to expect that Apple will continue to offer such a product but we will only know for sure once it happens) could actually provide general computing performance in the vicinity of the Core i7-1165G7. To achieve parity with that processor multi-threaded performance in percentage terms (relative to the A12Z/A12X) would have to improve by:
((GB5 score for Core i7-1165G7 - GB5 score for A12Z/A12X) / GB5 score for A12Z/A12X) * 100
((5832 - 4644) / 4644) * 100 -> 25.6%
That seems to be a readily achievable number when we consider that there are two processor generations and a shrink to the 5nm silicon process between the A12Z/A12X and the A14X.
GPU performance is harder to assess and is best left to when we have some A14X products to compare to Intel's Tiger Lake chips. Still, it isn't so easy to find areas in which the A14X is likely to look like some kind of toy in a direct comparison to the Tiger Lake parts. On the contrary, the limited evidence we have at this point indicates that the A12X will be competitive in performance terms with a relatively high performance part in Intel's 'Mobile', i.e. laptop, processor range while consuming around 1/2 the power. It is hard to dismiss an advantage like that.
A reduction in power consumption and therefore heat that needs to be dissipated through the chassis and skin of a portable device permits the design of thinner and lighter notebooks and tablets and/or longer battery life for such devices. Additionally, it allows the elimination of fans from most of the portable computing devices that consumers will need. Anyone needing a laptop more powerful than that will still need a product that has fans. Irrespective, there still will be a significant Perf/W advantage if the kind of claims that Apple is making about the performance and energy efficiency of its A14 generation cores holds up.
ChrisGX - Tuesday, September 29, 2020 - link
There is a typo in the third paragraph. The reference to A12X is wrong. It should read A14X.
six_tymes - Friday, October 2, 2020 - link
Those ugly notches still remain! Be gone already!