Thanks Andrei, using the P6 Pro right now and it is remarkably smooth in terms of general UI regardless of the benchmark scores etc., in comparison to other phones. I suspect the scheduling and UI rendering side of things are contributing here. Very much looking forward to the camera review!
Same experience here on a P6 regular. I went from an S20 (regular) to the P6 Pro; my wife has the S21 regular.
My experience so far:
UI is insanely fast and smooth on the P6. Everything is buttery smooth, no lag ever. It's a joy to use. The S20 is a very fast phone, but it did have the very occasional hiccup when switching between many different apps. So far, this hasn't happened on the Pixel 6 at all.
The S20 had nicer hardware overall, and the hand size was perfect. The S20's screen gets a bit brighter and was a tiny bit sharper, and auto-brightness was basically perfect on the S20; it's a little imperfect on the P6 (occasionally goes up or down in low light for no reason).
All in all, I'm very happy with the Pixel 6. If the Pixel 7 comes in a 6"-6.2" version next year, I may have to switch again, though!
That's because it is running a mostly Stock OS. Google severely limits background tasks when in use, and prioritises touch input... as opposed to say Samsung, which starts off slower and raises frequency in steps, whilst continuing background tasks. This slows the experience, but can boost efficiency, depending on the user.
Now, the Cortex-A76 is actually not bad. It's a great core, as it's quite fast while being efficient, and it requires less die area than the A77 and A78. So Google didn't make a mistake here: by going for the A76, they were able to fit two Cortex-X1 cores. It is a design choice. Another design choice could be 3x X1 and 5x A55, cutting out the middle cores for a simpler design. Or you could potentially have 4x A78 and 4x A55, and have the A78 cores clock higher, for better sustained performance than the X1. These are all different design choices; one can be better than another, but it depends on the circumstances.
I just want to add my viewpoint on the performance and efficiency of this chipset/phone.
AI/ML/NPU/TPU Benchmark: GeekBench ML 0.5
This looks like the most accurate representation. The iPhone 13 has impressive AI performance because Apple's SDK is better fleshed out, their software is coded more natively, and the SoC has pretty impressive specs (cache, CPU, GPU) to help with such tasks. The GS101 wins in the Android ecosystem by a wide margin, followed by the QSD 888, MediaTek Dimensity, then lastly Exynos. We can see the proper AI chart here: https://images.anandtech.com/graphs/graph17032/126...
GPU Benchmark: GFXBench Aztec Ruins High (Sustained)
This looks like the most accurate representation. Again Apple flexes its lead with its optimised software and cutting-edge hardware. Larger phones with better cooling manage to score higher, with preference given to Qualcomm's mature drivers, followed by Huawei's node advantage, then the mediocre attempts by Exynos, which is tied for the last spot with the GS101. We can see the proper GPU chart here: https://images.anandtech.com/graphs/graph17032/101...
CPU Multithread Benchmark: GeekBench 5 (crude Single/Multithread)
The multi-thread test shows how sacrificing the middle cores has affected the total score, even though that choice helps boost the performance of the first 1-2 threads. So at least that design choice is captured. We can see the proper multithread CPU chart here: https://images.anandtech.com/graphs/graph16983/116...
CPU Single-core Benchmark: SPEC 2017 (fp scores)
The SPEC test is more nuanced. We've established that AnandTech has made huge blunders here: instead of reporting the power draw (watts) of the chipset, they try to calculate energy consumed (joules) by estimating it crudely. For that reason we get some very inconsistent and wrong data, such as Apple's efficiency cores using less power than a Cortex-A53 yet producing scores in line with the Cortex-A78.
So we will focus on the fp scores rather than the int scores, since these scale better from chipset to chipset, and on the power-draw figures, to get the proper data. In particular, tests 526, 544, and 511 are quite revealing. We can see the proper CPU chart here: https://images.anandtech.com/doci/16983/SPECfp-pow...
As a summary of the raw data (Chipset-CoreType: Performance Value / Watts Recorded = Efficiency Score):

Apple A14-E: 2.54 / 0.42 = 6.05
Apple A15-E: 3.03 / 0.52 = 5.83
Dim 1200-A55: 0.71 / 0.22 = 3.23
QSD 888-A55: 0.85 / 0.30 = 2.83
Exy 990-A55: 0.84 / 0.50 = 1.68 (? too low! Watts probably not recorded correctly)
Exy 2100-A55: 0.94 / 0.57 = 1.65 (? too low! Watts probably not recorded correctly)
GS 101-A55: 0.88 / 0.59 = 1.49 (? too low! Watts probably not recorded correctly)
Apple A15-P: 10.15 / 4.77 = 2.13
QSD 870-A77: 5.76 / 2.77 = 2.08
Apple A14-P: 8.95 / 4.72 = 1.90
QSD 888-X1: 6.28 / 3.48 = 1.80
GS 101-X1: 6.17 / 3.51 = 1.76
Dim 1200-A78: 4.71 / 2.94 = 1.60
Exy 2100-X1: 6.23 / 3.97 = 1.57
Exy 990-M5: 4.87 / 3.92 = 1.24
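For anyone who wants to re-run the arithmetic, here is a minimal Python sketch, with a few of the values copied from the list above, so it is only as good as those chart readings:

# Efficiency score = estimated SPECfp performance / average watts.
# Values are readings off the AnandTech chart quoted above, not official figures.
data = {
    "Apple A14-E":  (2.54, 0.42),
    "Apple A15-E":  (3.03, 0.52),
    "Dim 1200-A55": (0.71, 0.22),
    "QSD 888-A55":  (0.85, 0.30),
    "GS 101-A55":   (0.88, 0.59),
    "Apple A15-P":  (10.15, 4.77),
    "QSD 888-X1":   (6.28, 3.48),
    "GS 101-X1":    (6.17, 3.51),
}

for name, (score, watts) in sorted(data.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True):
    print(f"{name:13s} {score / watts:.2f} perf/W")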
> We've established that Anandtech has made huge blunders here. Instead of reporting the Power Draw (watts) of the chipset, they instead try to calculate Energy Consumed (joules) by estimating it crudely.
I have no idea what you're referring to. The power draw is reported right there, and the energy isn't estimated, it's measured precisely. The A55 data is correct.
Perf/W is simply the inverse of energy consumed per run; you would see that if you actually plotted your data.
The specific power draw makes sense in the context of these comparisons. For the longest time in this industry, they've always talked about performance per watt. No-one, not even Intel (and they've been known to be quite shady), uses performance per joule.
The total energy consumed in joules is simply irrational. One could just as well make a flawed comparison of how much processing could be done with the energy in a cupcake by reading its nutritional label. Not only that, if you actually look at the data you submitted, it has a lot more variance in joules, whilst watts shows much more consistent results. Your energy consumed is an estimate, not what is specifically used by the cores when running.
For instance, when using joules, it makes Apple's efficiency cores seem to use slightly less power than a Cortex-A55, whilst performing benchmarks slightly faster than a Cortex-A76. If that were true, then no Android phones would be sold above $500, as everyone would simply buy iPhones. It's like comparing a 2011 processor (48nm dual Cortex-A9) to a 2015 processor (16nm octa Cortex-A53): not only using less power, but delivering more than 4x the performance. Ludicrous. You just cannot magically wave away discrepancies that big (x7.43). On the other hand, if you plot it using watts, you get a deeper picture: Apple's efficiency cores use about double the power of a Cortex-A55, but in turn they deliver four times the performance, so the net difference is a much more palatable x2.14 leap in efficiency (at least when comparing at max performance). And I'm comparing the latest Apple (A15) to Android (QSD 888) cores.
If the A55 data is as accurate as you say, why do you have discrepancies there as well? For example, QSD 888 versus Google Silicon-101: they're both using off-the-shelf Cortex-A55 cores. Yet Qualcomm's chipset is apparently drawing only 0.30 watts, compared to 0.59 watts, which is about half. And both post fairly close scores, 0.85 versus 0.88, making their total efficiency difference of x1.90 (2.83 vs 1.49) puzzling. So something is amiss. Going off the joules estimates doesn't fix the issue either, as you still have an unaccounted x1.83 difference in efficiency.
With all your resources, you guys never got curious about such discrepancies? (sorry for being a little obtuse)
You are obviously uneducated, and don't know what "off-the-shelf" means when it comes to chips. Physical implementation varies a lot even on the same process with the same IP. Either you or Cadence are lying, and I'd rather believe a reputable company with decades of experience.
Snapdragon 888: 4x Cortex-A55 @ 1.80GHz, 4x 128KB pL2, with 4MB sL3, on Samsung 5nm (5LPE)
Google Tensor: 4x Cortex-A55 @ 1.80GHz, 4x 128KB pL2, with 4MB sL3, on Samsung 5nm (5LPE)
Both of these SoCs use Cortex-A55 cores licensed from ARM directly. They are off-the-shelf. These are not custom cores, such as Mongoose, early Kryo, Krait, Denver, Carmel, or the 8-or-so different custom designs released by Apple. If you say that I am lying, then you are also saying that both Google and Qualcomm are lying. And note that they are virtually identical in their specifications and build.
I think you entirely misunderstood the point about Cadence. Sure, even on identical chips there is variance, the so-called "silicon lottery". But be realistic: how much of a difference do you think it is? I'll give a hint: the larger the silicon, the more the variance, and the bigger the difference. If you check the latest data from the now-bankrupt siliconlottery.com service, the difference with the 3950X is 4.00GHz (worst) versus 4.15GHz (best). That is a 3-point-something-percent difference at best, so let's call it 5%... and this difference is likely to be smaller on smaller chips. But even if we accept 5%, that is nowhere near a 2x variance.
Also, you would be calling AnandTech liars as well: " the new Cortex-A77 picks up where the Cortex-A76 left off and follows Arm’s projected trajectory of delivering a continued SOLID PERFORMANCE UPLIFT of 20-25% CAGR "...
How is it that we are able to be impressed by a +20% uplift, yet completely disregard a +90% difference? It is not logical, and doesn't pass the sniff test. You can call me uneducated all you like; I'm trying to arrive at the truth, since there are big discrepancies in the provided data that I've pointed out above. I am open to criticism, as everyone should be.
Let's look at some units. Performance is units of calculation work divided by time; for our graph, that is some constant times SPEC runs per second. Performance per watt is then some constant times (SPEC runs) / (seconds * watts). The joules measurement put up there is specifically joules per SPEC run. One joule is one watt-second, so that number is therefore (watts * seconds) / (SPEC runs).
Notice the similarity? Joules is 1/(perf per watt). Hopefully it's clear from this that the "joules" measurement on that graph *is* there to indicate efficiency, just like a perf/watt measurement would be. The only difference is that in the joules graph, high numbers indicate inefficient processors, while in a perf/watt graph, those would be represented by low numbers.
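A tiny numerical sketch of that identity, in Python with made-up example numbers, just to show the reciprocal relationship:

# Made-up example: one SPEC sub-test run.
watts = 3.5              # average power while the test runs
seconds = 200.0          # time to complete one run
runs_per_second = 1.0 / seconds

perf_per_watt = runs_per_second / watts    # (runs) / (seconds * watts)
joules_per_run = watts * seconds           # (watts * seconds) / (runs)

# The two figures are exact reciprocals, so they rank processors identically;
# only the direction flips (high J/run = inefficient, high perf/W = efficient).
assert abs(joules_per_run * perf_per_watt - 1.0) < 1e-12
print(perf_per_watt, joules_per_run)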
Pixel 4A updated last night to 12 and it runs even smoother. The UI tricks they have done do appear to make a visual difference. So far I'm impressed with 12, aside from the gigantic text and bars here and there.
Good in-depth review. I know you are doing the camera review of this, so I have a request: can you look into whether the Pixel 6 cameras are hardware-binned to ~12MP even though the specs say they are 50MP/48MP? There are a lot of mixed views out there, most mistaking this for the pixel binning done on other phones like the Galaxy S21U (software-binned for low light, but with access to the full resolution). If you could confirm this for the review that would be great; looking forward to it.
The SoC is a bust: they tried to do some gimmickry with their zero talent and tried to make it a cheaper deal by going to Samsung for their IP, fabrication expertise, and lithography process. It ended up being a dud in CPU, GPU, and price-to-performance; all that NPU/NN mega-magic boom is a farce. I was asking the same thing: what do these provide to us end users? Nothing. Just that fancy Live Translation and other gimmicks which you use seldom. On top of that we do not even know what the TPU does in the Pixel software; it's closed source. AOSP is open, but the Pixel UI and all the backend are closed.
The hardware is an utter joke: the P6 series has a garbage mmWave system; look at the internals, they crammed in one antenna, lol, while the LG V50 back in 2019 had 2-3 mmWave antennas. This junk saved on cost. The display has banding issues all over the place. The optical fingerprint sensor is slow and a joke versus dedicated physical ones. The stereo speaker system has massive channel imbalance on top. Then you have the low battery SoT for this price point and battery capacity. The DIY aspect is thrown in the gutter: the phone has a massively ham-fisted cooling approach, with graphite pads smeared all over the place as the leaks showed and no proper heatsink system; it's just a small, pathetic aluminium board-reinforcement plate doing the work, and on top of that the display has no metal backplate to reinforce it or dissipate heat. What nonsense. The SD888 itself heats up a lot, so many vendors add VC cooling; the Sony Xperia 1 Mark 3 messed up there and had inferior performance with throttling. This junk is even more pathetic: pay for an S-tier SKU, get the trash sustained performance of a B+ device, and the AP, PocketNow, and other YouTube shill press will hype this to the moon.
We do not even know what this junk has in terms of software blocks: the P2, P3, and P4 had the A/B system, then merged partitions, later a read-only ext4 system. This will have something even worse. To make it roundabout trash, the software is a joke, cheap kiddie-inspired garbage; heck, BBK's OnePlus OxygenOS + Oppo ColorOS mix is way superior to this junk, which comes with a massive loss of information density.
I'd wait for the next SD successor device, hopefully 888's BS power consumption and insane heat can be reduced.
To be clear on the mmWave point, it's 2-3 units on the V50 versus one here. Also I forgot to mention the lack of a charger, SD slot, and 3.5mm jack, and the very poor serviceability: it's almost impossible to get the phone properly cooled if you open it, due to the cheap graphite-pad reliance. It also has the USB port and microphone soldered to the mainboard, which looks like a feeble, trashy unit; check any phone from recent times and see how solidly engineered they are, with dual-sandwich designs, reinforced chassis, and proper heat dissipation.
I agree that the benchmarks do not tell the whole story, but I would still say that even the use of a Snapdragon 870 would have been a better choice. The general performance is similar (maybe a small advantage for Tensor in AI), but the advantages of the Snapdragon 870 are bigger: it runs much cooler and has hugely better battery life. To be honest I am disappointed by the SoC. The only thing that might make it a seller is the software (UI and camera), but the SoC is rather a no-go.
There are other factors though. Early ROM settings, tweaks, bugs, and cooling/hardware. The 870 may have scored lower in a P6 as well. So many factors. - Agreed, the P6 should be a bit more polished though.
The problem goes beyond the slightly worse SoC than the already existing Qualcomm offering. It's that despite being a "Google SoC" they still support it for just 3 years. All the excuses used over the years, all the pointing fingers at SoC manufacturers for the lack of support were just proven to be a load of crap. Same crap, now with a google sticker.
It's about to get worse with the camera review. I can verify Google might have been bluffing about the 50MP/48MP sensors. The sensors are locked at 12MP, so the Pixel 6 Pro essentially has three 12MP cameras. Which means the 2x portrait-mode zoom is a low-resolution 3MP image, and at 10x zoom the image resolution is 2.5MP, 4 times lower than that of the S21 Ultra. What drove Google to make the choice of first hardware-binning the resolution down and then trying to digitally blow the resolution back up? It's baffling. I tried to get an answer from Google support; they just refused to confirm or deny that this is binned at the hardware level.
"I can verify that google might have been bluffing"
dude, lmfao—it's called "binning"; please look it up. they've been upfront about this and it was known before the phone was even launched, let alone after we've seen all of these reviews. The reason Google support "refused to confirm or deny" is because the people doing customer support are unlikely to know what "pixel binning" is (hey, I guess they're in good company there with you), and are not equipped to deal with weirdos of your specific variety.
Looks like you both do not value the concept of understanding the topic before responding, so pay attention this time. I am talking about hardware binning, not software binning like everyone else does for low light. Hardware binning means the sensors NEVER produce anything other than 12MP. Do both of you understand what NEVER means? NEVER means these sensors are never capable of 50MP or 48MP. NEVER means Pixel 3x-zoom shots are 1.3MP low-resolution images (yes, that is all your portrait modes). NEVER means at 10x, Pixel images are down to 2.5MP. Next time, both of you learn to read and listen before responding the way you do.
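For concreteness, this is the crop arithmetic behind figures like those; a rough Python sketch that assumes a fixed ~12MP binned output and hypothetical 1x and 4x native lenses (real output also depends on how the crop is upscaled):

def effective_megapixels(zoom, base_mp=12.0, native_zooms=(1.0, 4.0)):
    """Megapixels left after centre-cropping from the nearest shorter native lens."""
    native = max(z for z in native_zooms if z <= zoom)
    crop = zoom / native
    return base_mp / crop ** 2

print(effective_megapixels(2.0))   # ~3.0 MP  (2x portrait crop of the 12MP main camera)
print(effective_megapixels(3.0))   # ~1.3 MP  (3x crop of the main camera)
print(effective_megapixels(10.0))  # ~1.9 MP  (10x via the 4x telephoto; the comment above quotes ~2.5MP)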
Agreed, this is useful in those situations where it's needed, but those situations probably aren't very common for those of us who don't do a lot of international travel. In local situations with non-native English speakers, typically enough English is still known to "get by."
Not always. My in-law, who knows no English, just moved in, and I barely speak their native language. We've always relied on translation apps to get by. When I got the P6 this weekend, both our lives just got dramatically better. The experience is so much more integrated and way faster: no more spending minutes pausing while we type responses. Live Translate is literally life-changing because of how improved it is. I know others in my situation; it's not that uncommon, and they are very excited for this phone because of this one capability.
Agreed, but, in the context of the review:
- Does this need to run locally? (My guess is yes: non-local is noticeably slower, and requires an internet connection you may not have.)
- Does anyone actually run it locally? (No idea.)
- Is the constraint on running it locally, and well, the amount of inference hardware? Or the model size? Or something else, like the CPU? i.e. does Tensor the chip actually do this better than Qualcomm (or Apple, or, hell, an Intel laptop)?
Wow. You know what, the fact that the phone has a modem to compete with Qualcomm for the first time in the US is good enough for me. The more competition the better; yes, Qualcomm is still collecting royalties for their patents, but who cares.
Surprisingly large for being cheap. Dual X1's with 1 MB cache, 20 Mali cores. So many inefficiencies just to get a language translation block. As if both the engineers and the bean counters fell asleep. To be fair, it's a first-gen phone SoC for Google.
Idk if I regard Samsung + AMD as much better, though. Once upon a time AMD had a low-power graphics department. They sold that off to Qualcomm over a decade ago. So this would probably be AMD's first gen on phones too. And the ARM X1 core remains a problem. It's customizable but the blueprint seems to throw efficiency out the window for performance. You don't want any silicon in a phone that throws efficiency out the window.
You are both wrong. Yes, it's true Apple is not the majority device supplier. Yes, it's also true that Apple is far, far ahead.
But what is wrong is that
1. @TheinsanegamerN: The markets where Apple doesn't dominate are predominantly lower-segment markets where the average smartphone price is far below the cheapest iPhone.
Apple sells more iPhones than Samsung sells Notes (RIP), S series, and Fold/Flip models. Samsung sells more phones overall because they operate in price segments all the way from $200 to $2000, while Apple starts at $400.
You can be sure that SE Asia, S. America, and Africa use Androids, but it's also true that these Androids are far from bleeding-edge flagships.
2. @tuxRoller: Competition is important. If Samsung doesn't team up with AMD, Adreno and Mali will become complacent. If Tensor doesn't outperform in ML tasks, Exynos and Qualcomm won't innovate there. If Apple hadn't started the 64-bit and custom-CPU race, nothing would be the same today.
I am really happy with my Pixel 6 pro so far. Above all, I want my phone to have a really excellent camera and every new mid to high end phone now is plenty fast for just about any use. I bet the Pixel 6 takes much better pictures than whatever "trash" you are using. But you sure have them beat when it comes to the benchmarks I bet.
It's the opposite, the iPhone is massively ahead in performance, but every high end phone takes the same high end photos... you got the same photos but a lot less performance...
I took some time to really test the camera and you are simply wrong. I have been photographing with it heavily for the last couple of days and the camera is incredible. Call it a gimmick or whatever, but the way they do their photo stacking puts this phone in a league of its own. If your main use case for a phone is benchmarking, I guess this is not your device.
Everyone and their grandmother does image stacking. The iPhone is almost as good, if not better, even with a smaller sensor, when compared to the latest Pixel. How's that for "in a league of its own"?
As long as they use a Samsung process they will unfortunately be hopelessly behind Apple's SoCs in efficiency. It would be interesting to see Snapdragon back on a TSMC process for a direct comparison with Apple silicon.
I was enjoying how the speculation about the GS101 was claiming it's "not far behind" the SD888. I was never expecting Google to make another high-end device, let alone one that undercuts most of the competition, as it's just not what the trends would suggest.
I am not impressed. As someone who was rather hopeful that google would take control and bring us android users a true apple chip equivalent some day, this is definitely not the case with google silicon.
Considering how cookie cutter this design is, and how google made some major amateur decisions, I do not see google breaking away from the typical android SOC mold next generation.
Looking back at how long it took Apple to design a near-100%-solo design for the iPhone (the A8X was the first A-series chip to use a completely in-house GPU and related designs, other than the ARM cores), that is a whopping 4 and a half years. Suppose this first Google "designed" chip follows the same trend: an initial "brand name" breakaway still using a lot of help from other designs, then slowly fixing one part at a time until it's all fixed, while also improving what is already good. I could see Google getting there by the Pixel X (10?). But as it stands, unless Google dedicates a lot of time to actually altering ARM's own designs rather than simply having Samsung make it, I don't see Tensor ever surpassing Qualcomm (unless Samsung has some big breakthrough in their own CPU/GPU IP, which may or may not come with AMD's help).
As the chip stands today, it's "passable", but not impressive. Considering Google can get Android to run really well on an SD765G, this isn't at all surprising. The TPU seems like a nice touch, since honestly, focusing on voice is more important than "raw" CPU performance or the like. I have always been frustrated with speech-to-text not being "perfect", constantly having to correct it manually and work around its limitations. As for my own experience with the 6 Pro, it's bloody good.
Now to specifics. The X1 cores do get hot, as does the 5G modem. I switched the device to LTE for now. I do get 5G at home and pretty much most places I go, and it is fast, but it's not something I need right now. I even had a call drop over 5G because I walked around a building's corner. Not fun.
The A76 excuse I have heard floating around is that it takes up less physical die space, by A LOT, and apparently there was simply no room for an A77 or A78 because the TPU and GPU took up so much area. I don't understand this compromise when the GPU performance is this mediocre. Why not simply use the same GPU size as the S21 (Exynos 2100) and give A78s more room? I don't know, but it's an odd choice for sure.
The A55 efficiency issues are noticeable. Try playing Spotify over Bluetooth for an hour and watch the battery drain. I get consistently great standby time, and very good battery life when heavily using my device, but it's these background, screen-off tasks that really chug the battery more than expected.
Overall though, I haven't noticed any serious issues with my unit. The fingerprint scanner works as intended, and is better than my 8T's. The camera does just as well if not better than previous Pixels. And overall... no complaints. But I wonder how much of this UX comes from Google literally brute-forcing their way with two X1 cores and an overkill GPU, and how much of it is them actually trying.
As for recommendations to Google for Tensor V2: they need to not compromise efficiency for performance. This phone isn't designed to game, so cut the GPU down, or heck, partner with AMD (who is working with Samsung) to bring competitive graphics to mobile to compete with Qualcomm's Adreno. Two X1 cores, if necessary, can stay, but at that point you might as well just have four of them, get rid of all the other cores entirely, and simply build a very good kernel to modulate the frequency. Or make it a 2+6 design with A57 cores. As someone who has coded kernels for Pixel and Nexus devices for a long time, trying to optimise the software to really get efficiency out of the big.LITTLE system is near impossible, and in my opinion worthless unless your entire scheduler is "screen on/off" based, which is literally BS. I doubt Google has any idea how to build a good CPU governor or scheduler to truly make this X+X+X system work properly, since I have yet to see Qualcomm or Samsung do it well enough to call commendable.
The rest of the phone is fine. YoY improvements are always welcome, but I think the pixel 6/pro just really show how current mobile chips are so far behind apple that you might as well give up. YoY improvements have imo halted, and honestly no one seems to be having the thought that maybe we should cut power consumption in half WITHOUT increasing performance. I mean...the phones are fast enough.
Who knows. We will see next year.
PS: I'm also curious what Google will do with the Pixel 6a (if they make one at all). Will it use a cut-down GS101 or will it get the whole chip? It would seem overkill to shove this into a $399 phone. I wonder what cut-downs will be made, or if there will be improvements as well.
Good thoughts, but there is one big issue you missed: the Pixel's 50MP/48MP camera sensors are binned to 12MP, yet Google labels them as 50MP/48MP. Every shot outside the native 1x and 4x is just a crop of the 12MP image, including portrait (a 3MP crop) and 10x (a 2.5MP crop).
You said you do Kernels but "As someone who was rather hopeful that google would take control and bring us android users a true apple chip equivalent some day, this is definitely not the case with google silicon."
What is Android lacking that it needs that so-called A-series processor on the platform? I already see that Android modding has drained away a lot by now. It's there on XDA, but less than 1% of the user base uses mods; maybe root, but even that is still niche.
Android has been going downhill for a long time, since Android 9 Pie to be specific. Google started to mimic iOS on a superficial level, starting with OS-level information-density loss; now on 12 it's insane, you get 4 QS toggles. It's the worst. People love it somehow, because a new coat of trash paint is good.
On the HW side, except for OnePlus phones, no phones have a proper mod ecosystem. Pixels had one, but they ruined it with the crappy policies they implemented on the HW side, like the A/B system, the read-only filesystem copied from Huawei, the horrible fusing of filesystems, and then enforcing all of these at the CTS level. They then added the worst of all, Scoped Storage, which took all the use cases of having a pocket computer and reduced it to a silly iOS-like trash device. Now on Android any photo you download goes into that application-specific folder and you cannot change it, due to the Play Store API-level requirement of targeting Android 11, which comes with Scoped Storage by default. Next year the next big thing is coming: all 32-bit applications will be obsoleted because ARM is going to remove the 32-bit IP from its silicon designs. That makes the 888 the last 32-bit-capable CPU.
Again, what do you expect? The Apple A series shines in these AnandTech SPEC scores, but when it comes to real-life application performance, basically application launch speed and the performance of said applications, they do not show the same level of difference. Now Android 12 adds splash-screen BS to all apps globally, making it even worse.
There's nothing Google is going to provide you or anyone else that doesn't already exist. Android needs freedom, and that is being eroded away every year with more and more Apple-inspired crap. The only reason Google did this is to experiment with those billions of dollars and millions in R&D; the Pixel division has been at a loss since 2016, with less than 3% North American market share, and it only went from 2% to 3% due to the A-series budget Pixels. They do not even sell in many overseas markets. In fact they imitate Apple so much that now they want the stupid hardware-exclusive joke processors for their lineup, imitating Apple for no reason. Qualcomm provides all the blobs and baseband packages; if Google could make them deliver support for 6 years they could do it, but they won't, because sales. All that "no charger because environment, no 3.5mm jack because no space, no SD slot" is a big fat LIE.
Their GS101 is a joke, a shame to CPU engineering: trash thermal design, useless A7x cores, and two bloated X1 cores for nothing. Except for the ISP nothing is useful, and even the Pixel camera can be ported to other phones; Magic Eraser, for example, works on old Pixels and will soon work on other phones thanks to the Camera2 API and modding.
Google's vision of Android has been dead since v9 and since the death of the Nexus series. Now it's more of a hollow shell, with people pushing an agenda of yearly consumerism and a social-media tool, rather than the old era of a computer in your pocket. To make it worse, Pixel PR is horrible and is more political screaming than anything else.
Apple silicon shines in part due to being on a superior process and having a much better memory subsystem; Samsung's process is unfortunately far behind TSMC's in terms of efficiency.
I'm still left wondering what the point of Tensor is after all this. It doesn't seem better than what was already on the market, even for Android. I guess the extra security updates are nice, but there are still no extra OS updates, even though the chip is theirs. And the NPU doesn't seem to outperform either, despite them talking about that the most.
And boy, do these charts just make the A15 look even more above and beyond their efforts. But even the A4 started with Cortex cores; maybe in 2-3 spins Google will go more custom.
I wonder if we will now see a similar pattern play out in the laptop space, with Macs moving well beyond the competition in CPU and GPU performance/watt, and landing at similar marketshare (it would be a big deal for the Mac to achieve the same share of the laptop market that the iPhone has of the smartphone market).
Well I'm definitely going to hold my Apple stocks for years and that's one part of the reason. M1 Pro and Max are absolute slam dunks on the industry, and their chipmaking was part of what won me over on their phones.
When did Apple manage that? I can easily recall the M1 pulling notably more power than the 4700U in order to beat it in benchmarks, despite having 5nm to play with. The M1 Max pulls close to 100W at full tilt, and is completely unsustainable.
The average laptop costs $500, and the most expensive laptops are bought by enterprises, where macOS has a limited share. While the MacBooks are great devices, they are hobbled by poor monitor support at the Air end and crazy prices at the MacBook Pro end. For most users, the difference between the performance of a MacBook Pro and a $1000 laptop is unnoticeable except in their wallet!
If anyone wants to know why Nvidia is most interested in purchasing ARM, it's in order to put the inefficient Mali out of its misery, and simultaneously replace it with their own licensable GeForce cores!
Since ARM Corp started throwing in the GPU for free, they've had to cut GPU research (to pay for the increasingly complex CPU cores, all of which come out of the same revenue box!) But Nvidia has the massive Server Revenue to handle this architecture-design mismatch; they will keep the top 50% of the engineers, and cut the other cruft loose!
That may be a side effect. But the reason for purchasing it would be making money and controlling the market. Yes, it's true that Nvidia wants to control all graphics and to turn the GPU into the main programming aim.
If nvidia wanted to do that they could simply license ARM and make their own superior chip. The fact they have fallen flat on their face every time they have tried speaks volumes.
When a place like Rockchip can sell an ARM chip bundled with Mali for peanuts, you can understand why a superior GPU wasn't enough to win phone customers!
You also need integrated modem if you ever want to compete with Qualcomm (not something Nvidia was willing to do).
But that bundling system has been shortchanging ARM's Mali development for years. Qualcomm, Apple, and soon Samsung (via AMD) are all bringing better high-end options into the field; you know your performance/watt must be pathetic when a company like Samsung gets desperate enough to pay the cost of porting an AMD GPU over to the ARM architecture.
I don't see the reason for using the A76 cores being one of time. This is a very new chip. The competitors on the Android side have been out for a while, and they use A78 cores; Samsung uses A78 cores. So time doesn't seem to be the issue here; after all, it does use the X1. So I wonder if the reason isn't the size of the core on this already large, crowded chip, and possibly cost. If the newer cores take up more area they would cost slightly more. These chips are going to be bought in fairly small numbers. Estimates have it that last year Google sold between 4 and 7 million phones, and that they're doubling this year's order. Either would still be small, and give Google no advantage in volume pricing compared to other chip makers.
The second is that you have to wonder if Google is following the Apple road here. Apple, of course, designs many chips, all for their own use. Will Google keep their chips for their own use, assuming they’re as successful in selling phones as Google hopes, or will they, after another generation, or two, when the chip is more of their own IP, offer them to other Android phone makers, and if so, how will Samsung feel about that, assuming their contract allows it?
I think they went for the A76 cores because of cost. Like you said, Tensor is already huge, and the A78 or A77 cores would be more power-efficient, but they are also much bigger than the A76 on a 5nm process. Even if they were to clock an A78 lower, it would just be a waste of money and space on the chip for them. They probably had a specific budget for the chip, which meant a specific die size. This is not Apple, who is willing to throw as much money as they can at getting the best performance per watt.
The display was rumored to be an E5 panel from Samsung Display, which is used in their latest displays, so I don't know why Google is not pushing for higher brightness; it could be because of heat dissipation as well... I highly doubt Samsung gave Google their garbage displays, lol. Also, Google does not utilize the variable refresh rate very well, and it's terrible for battery life. I have also seen a lot of janky scrolling at 120Hz in apps like Twitter; it has hiccups scrolling through the timeline compared to my Pixel 3.
The modem is very interesting, probably more so than Tensor; this is the first competition for Qualcomm, in the US at least. A lot of people have been saying that the modem is integrated into Tensor, but why would Google integrate a modem that does not belong to them into "their" chip? That's like asking Apple to integrate Qualcomm modems into their chip. Also, Samsung pays Qualcomm royalties for 5G, so they probably have a special agreement surrounding the sale and implementation of the modem. It is definitely not as power-efficient as Qualcomm's implementation, but it's a good start. I got 400+ Mbps on T-Mobile 5G UC outdoors and 200 Mbps indoors (I don't know which band). It surprisingly supports the n258 band, like the iPhone.
Apple couldn’t integrate Qualcomm’s modems in their own chips because Qualcomm doesn’t allow that. They only allow the integration of their modems into their own SoC. It’s one reason why Apple wasn’t happy with them, other than the overcharging Qualcomm has been doing to Apple, and everyone else, by forcing the licensing of IP they didn’t use.
Yes, but all that conjecture hasn't been confirmed by any reputable source. And, the statements by Phil Carmack and Monika Gupta indicate Google has been optimising for power (most of all) and performance (to a lesser degree) rather than area. We end up back at the same place, using the A76 cores just doesn't make a lot of sense.
Also, the A78 is perhaps 30% larger than the A76 (on a common silicon process), whereas the X1 is, I think, about twice the size of the A76. I'm not sure what the implications of all that are for wafer economics, but I'm pretty sure the reason Tensor will probably end up suffering some die bloat (compared to upper-echelon ARM SoCs from past years), despite the dense 5nm silicon process, is the design decision to use two of those large X1 cores (a decision that Andrei seems perplexed by).
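A back-of-the-envelope sketch of that area trade-off in Python, using the rough ratios above (assumed numbers in A76-equivalent units, not real die measurements):

# Assumed relative core areas on the same process: A76 = 1.0, A78 ~1.3, X1 ~2.0.
AREA_A76, AREA_A78, AREA_X1 = 1.0, 1.3, 2.0

configs = {
    "Tensor-style big+mid: 2x X1 + 2x A76": 2 * AREA_X1 + 2 * AREA_A76,
    "SD888-style big+mid: 1x X1 + 3x A78":  1 * AREA_X1 + 3 * AREA_A78,
    "Hypothetical: 4x A78":                 4 * AREA_A78,
}
for name, area in configs.items():
    print(f"{name:40s} ~{area:.1f} A76-equivalents")
# With these ratios, Tensor's big+mid cluster (~6.0) is no smaller than a
# 1x X1 + 3x A78 layout (~5.9), so the dual X1 is where the area goes.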
The Google TPU only trades blows with the Qualcomm Hexagon 780, with the exception of MobileBERT. It's not an especially impressive first showing given that this is Google's centerpiece, and it's also unclear what the energy efficiency of this processor is relative to the competition. It's good there's competition, though; at the phone level, software is somewhat differentiated and pricing is competitive.
Even if the performance isn't impressive, the big deal is guaranteed SW updates. Look at the Nvidia Shield: it came out in 2015 and it's still getting the latest Android updates/OS! No other product has been updated for so long, 6 YEARS!
Now that Google owns the SoC, they have full access to the SoC driver source code, so they should be able to support the SoC forever, or at least ~10 years... not reliant on Qualcomm's 3-year support term, etc.
Yeah, except they only guarantee 3 years of software update and 5 years of security updates, which is really a shame if you ask me.
If they could have guaranteed 5 years of OS updates from the start it would have been a very strong selling point. Especially since the difference between each generation becomes smaller every year, I could see people keeping a Pixel 6 for well over 3 years... How cool would it be to keep a $599 phone for 5 years and still run the latest Android version?
They should've just guaranteed 5 years of SW updates. Based off the Pixel 3 being guaranteed for 3 years and then this month being dropped from their security-update list, they're serious about the guarantee being the maximum support they'll provide, which is unfortunate. Maybe they'll revisit it this year, because that seems like a big hole.
Why? What new features do you NEED in your phone? Android stopped evolving with 9, iOS with about version 11. The newest OSes don't do anything spectacular the old ones didn't do.
You're getting 5 years of security updates and don't have apps tied to the OS version like Apple, giving the Pixel a much longer service life than any other phone.
They're saying 3 years of OS updates, a far shot from 10. 5 years of security updates, which is a start, but since they now own their own SoC they should have shot for 5 years of OS updates.
After all the build up on "We're going to have our own chips now so we can support them without interference from Qualcomm", three years of updates is seriously underwhelming.
Apple has six year old phones running the current OS and the eight year old iPhone 5s got another security update a month ago.
All we know now about software updates is that it will get five years of SECURITY updates; nothing about OS updates was stated, as far as I can see. If that's true, then Google may still just offer three years. Even now, Qualcomm allows for four years of OS updates, but not even Google has taken advantage of it. So nothing may change there.
It's very irritating how slow Android SOCs are. I'll just keep on waiting. Won't give up my existing Android phone until actual performance improvements arrive. Hopefully Samsung x AMD will make a difference next year.
Looking at the excellent battery life of the iPhone 13 (which I am currently waiting for as my work phone), does the iPhone still kill or suspend background tasks? When I used to day trade, my iPhone would stop prices updating in the background, which was very annoying when I would flick to the app to check prices and unwittingly see prices that were hours old.
It seems to me, based on the thermals, that the Pixel 6/Pro suffer from thermal throttling, and thus have lower power budgets than they should have given the internal hardware, leading to poor results.
Makes me wonder what one of these chips could do in a better designed chassis.
I'd like to ask a question that's not rooted in any particular company, whether it's x86, Google, or Apple, namely: how different *really* are all these AI acceleration tools, and what sort of timelines can we expect for what?
Here are the kinds of use cases I'm aware of. For vision we have:
- various photo improvement stuff (deblur, bokeh, night vision etc). Works at a level people consider OK, getting better every year. Presumably the next step is similar improvement applied to video.
- recognition. Objects, OCR. I'd say the Apple stuff is "acceptable". The OCR is genuinely useful (eg search for "covid" will bring up a scan of my covid card without me ever having tagged it or whatever), and the object recognition gets better every year. Basics like "cat" or person recognition work well, the newest stuff (like recognizing plant species) seems to be accurate, but the current UI is idiotic and needs to be fixed (irrelevant for our purposes). On the one hand, you can say Google has had this for years. On the other hand my practical experience with Google Lens and recognition is that the app has been through so many rounds of "it's on iOS, no it isn't; it's available in the browser, no it isn't" that I've lost all interest in trying to figure out where it now lives when I want that sort of functionality. So I've no idea whether it's better than Apple along any important dimensions.
For audio we have:
- speech recognition and speech synthesis. Both of these have been moved over the years from Apple servers to Apple HW, and honestly both are now remarkably good. The only time speech recognition serves me poorly is when there is a mic issue (like my watch is covered by something, or I'm using the mic associated with my car head unit, not the iPhone mic). You only realize how impressive this is when you hear voice synth from older platforms; the last time I used Tesla, maybe 3 yrs ago, the voice synth was noticeably more grating and "synthetic" than Apple's. I assume Google is at essentially Apple's level -- less HW and worse mics to throw at the problem, but probably better models.
- maybe there's some AI now powering Shazam? Regardless it always worked well, but gets better and faster every year.
For misc we have:
- various pose/motion recognition stuff. Apple does this for recognizing types of exercise, or handwashing, and it works fine. I don't know if Google does anything similar. It does need a watch. Not clear how much further this can go; you can fantasize about weird gesture UIs, but I'm not sure the world cares.
- AI-powered keyboards. In the case of Apple this seems an utter disaster. They've been at it for years, it seems no better now with 100x the HW than it was five years ago, and I think everyone hates it. Not sure what's going on here. Maybe it's just a bad UI for indicating that the "recognition" is tentative and may be revised as you go further? Maybe the model is (not quite, but almost entirely) single-word based rather than grammar and semantic based? Maybe the model simply does not learn, ever, from how I write? Maybe the model is too much trained by the actual writing of cretins and illiterates, and tries to force my language down to that level? Regardless, it's just terrible.
What's this like in the Google world? No "AI"-powered keyboards? Or they exist and are hated? Or they exist and work really well?
Finally we have language. Translation seems to have crossed into "good enough" territory. I just compared Chinese->English for both Apple and Google and while both were good enough, neither was yet at fluent level. (Honestly I was impressed at the Apple quality which I rate as notably better than Google -- not what I expected!)
I've not yet had occasion to test Apple in translating images; when I tried this with Google, last time maybe 4 yrs ago, it worked but pretty terribly. The translation itself kept changing, like there was no intelligence being applied to use the "persistence" fact that the image was always of the same sign or item in a shop or whatever; and the presentation of the image, trying to overlay the original text and match font/size/style was so hit or miss as to be distracting.
Beyond translation we have semantic tasks (most obviously in the form of asking Siri/Google "knowledge" questions). I'm not interested in "which is a more useful assistant" type comparisons, rather which does a better job of faking semantic knowledge. Anecdotally Google is far ahead here, Alexa somewhat behind, and Apple even worse than Alexa; but I'm not sure those "rate the assistant" tests really get at what I am after. I'm more interested in the sorts of tests where you feed the AI a little story then ask it "common sense" questions, or related tasks like smart text summarization. At this level of language sophistication, everybody seems to be hopeless apart from huge experimental models.
So to recalibrate: Google (and Apple, and QC) are putting lots of AI compute onto their SoCs. Where is it used, and how does it help? Vision and video are, I think clear answers and we know what's happening there. Audio (recognition and synth) are less clear because it's not as clear what's done locally and what's shipped off to a server. But quality has clearly become a lot better, and at least some of that I think happens locally. Translation I'm extremely unclear how much happens locally vs remotely. And semantics/content/language (even at just the basic smart secretary level) seems hopeless, nothing like intelligent summaries of piles of text, or actually useful understanding of my interests. Recommendation systems, for example, seem utterly hopeless, no matter the field or the company.
So, eg, we have Tensor with the ability to run a small BERT-style model at higher performance than anyone else. Do we have ways today in which that is used? Ways in which it will be used in future that aren't gimmicks? (For example there was supposed to be that thing with Google answering the phone and taking orders or whatever it was doing, but that seems to have vanished without a trace.)
As I said, none of this is supposed to be confrontational. I just want a feel for various aspects of the landscape today -- who's good at what? are certain skills limited by lack of inference or by model size? what are surprising successes and failures?
I don't have any data, but the A76 is more efficient than the A78 in the relatively lower-performance region. According to the following DVFS curves, the A77 is out of the question: https://images.anandtech.com/doci/15813/A78-X1-cro...
I think we have come to a point where pushing performance for mobile devices is starting to slow down big time, or, in some cases like Exynos, where we see regressions. The SoC gets refreshed each year, pushing for higher performance. The fabs, however, are slower to catch up, and despite the marketing of 7nm, 5nm, 3nm, etc., they may not be anywhere near what is being marketed. In this case, squeezing in a fat GPU sounds great on paper, but in real life the sustained performance is not going to make a huge difference because of the power and heat. In any case, I feel the push for an annual SoC upgrade should slow down, because I certainly don't see a significant difference in real-life performance. We generally only know that last year's SoCs are slower when running benchmarks. Even in games, last-gen high-end SoCs can still handle challenging titles. Instead, they should focus on making the SoCs more power-efficient.
Not India, China, UK, Russia, most of the EU, Africa. Which is the vast majority of the world's population and the vast majority of the world's phones, a great many of which are still feature phones.
To me, one of the most interesting points about this "meh" first Google custom SoC is that it was created with lots of Lego blocks from Samsung; I guess Google working with Qualcomm was either out of the question or not something either was willing to do. Maybe this was about Google wanting to show QC that they can develop a Pixel smartphone without them, maybe the two compete too closely on ML/AI, or maybe they just don't like each other much right now - who knows? Still, an SD 888-derived SoC with Google TPU would have likely been better on performance and efficiency. This one here is an odd duck. As for the Pixel 6, especially the Pro: camera is supposed to be spectacular, but with the battery life as it is and, of course (Google, after all), no expandable storage and no 3.5 mm headphone connectors, it missed the mark for me. But, the Pixels are sold out, so why would Google change?
If you want a "really excellent camera", sorry to disappoint you but you'll need to be buying an actual camera. The only thing a multipurpose portable computing device can ever be excellent at is being a multipurpose portable computing device.
isn't that pretty much verbatim what Stevie said when he showed the original iPhone? nothing has really changed since. it was, kinda, a big deal when Stevie intoned that the thingee incorporated 3, count em 3!, devices that you had to carry that day!!! internet, phone, and number 3 (whatever that was). is a 2021 smartphone really anything more?? I mean, beyond the capacity of more transistors. thank ASML (and some really smart physicists and engineers) for that not Apple or Samsung or Google or ... last time I checked Apple's 'our own ARM' SoC is just bigger and wider ARM ISA, due to the, so far, increasing transistor budget available at the foundries.
that all begs the fundamental question: if Apple and The Seven Dwarfs have access to the same physical capital (ASML, et al) why the difference? if everybody spends time and money tweaking a function (that they all need, one way or another), in some time (short, I'll assert) The One Best Way emerges. the task, in the final analysis, is just maths. of course, Best is not a point estimate, as many comments make clear; there're trade offs all along the line.
it would be fun to use one of the Damn Gummint's supercomputers (weather or nucular bomb design) to spec a SoC. wonder how different the result would be?
I've had the Pixel 6 Pro for a week now and I have to say it's amazing. I don't care what the synthetic benchmarks say about the chip. It's crazy responsive and I get through a day easily with heavy usage on the battery. At a certain point extra CPU/GPU power doesn't get you anywhere unless you're an extreme phone gamer or trying to edit/render videos, both of which you should really just do on a computer anyway. What I care mostly about is how fast my apps open and how fast the UI is. There's a video comparison on YouTube of the same apps opening on the iPhone 13 Max and the P6 Pro, and you know what, the P6 Pro wins handily at loading up many commonly used apps and even some games. Regarding the battery life, I expect to charge my phone nightly, so I really don't care if another phone can get me a few more hours of usage after an entire day. I can get 6 hours of SOT and 18 hours unplugged on the battery. More than enough.
Well, that would be true if iOS apps were the same as Android apps. In the review of the A15, it was called out how Android AAA games such as Genshin Impact were missing visual effects altogether that were present on iOS. These app-opening tests are pretty obtuse in my opinion, and it checks out as well. For a more meaningful comparison, have a look at this and how badly this so-called Google SoC is spanked by the A15!
This piece has been up for three days, and there are still tons of typos and errors on every page? How is this happening? Why doesn't AnandTech maintain normal standards for publishers? I can't imagine publishing this piece without reading it. And after publishing it, I'd read it again – there's no way I wouldn't catch the typos and errors here. Word would catch many of them, so this is just annoying.
"...however it’s only 21% faster than the Exynos 2100, not exactly what we’d expect from 21% more cores."
The error above is substantive, and undercuts the meaning of the sentence. Readers will immediately know something is wrong, and will have to go back to find the correct figure, assuming anything at AnandTech is correct.
"...would would hope this to be the case."
That's great. How do they not notice an error like that? It's practically flashing at you. This is just so unprofessional and junky. And there are a lot more of these. It was too annoying to keep reading, so I quit.
Has Vulkan performance improved with Android 12? That is a serious question. There has been some strange reporting and punditry about the place that seems intent on strongly promoting the idea that the Tensor Mali GPU is endowed with oodles and oodles of usable GPU compute performance.
In order to make their case these pundits offer construals of reported benchmark scores of Tensor that appear to muddle fact and fiction. A recent update of Geekbench (5.4.3), for instance, in the view of these pundits, corrects a problem with Geekbench that caused it to understate Vulkan scores on Tensor. So far as I can tell, Primate Labs hasn't made any admission about such a basic flaw in their benchmark software, that needed to be (and has been) corrected, however. The changes in Geekbench 5.4.3, on the contrary, seem to be to improve stability.
I am hoping that there is a more sober explanation for the recent jump in Vulkan scores (assuming they aren't fakes) than these odd accounts that seem intent on defending Tensor from all criticism including criticism supported by careful benchmarking.
Of course, if Vulkan performance has indeed improved on ARM SoCs, then that improvement will also show up in benchmarks other than Geekbench. So, this is something that benchmarks can confirm or disprove.
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
108 Comments
Back to Article
jaju123 - Tuesday, November 2, 2021 - link
Thanks Andrei, using the P6 Pro right now and it is remarkably smooth in terms of general UI regardless of the benchmark scores etc., in comparison to other phones. I suspect the scheduling and UI rendering side of things are contributing here. Very much looking forward to the camera review!jiffylube1024 - Wednesday, November 3, 2021 - link
Same experience here on a P6 regular.I went from an S20 (regular) to the P6 pro. Wife has the S21 regular.
My experience so far:
UI is insanely fast and smooth on the P6. Everything is buttery smooth, no lag ever. It's a joy to use. The S20 is a very fast phone, but it did have the very occasional hiccup when switching between many different apps. So far, this hasn't happened on the Pixel 6 at all.
The S20 had nicer hardware overall, and the hand size was perfect. S20 screen gets a bit brighter, was a tiny bit sharper, and auto-adjust brightness was basically perfect on the S20, it's a little bit imperfect on the P6 (occasionally goes up or down in low light for no reason).
All in all, I'm very happy with the Pixel 6. If the Pixel 7 comes in a 6"-6.2" version next year, I may have to switch again, though!
Kangal - Wednesday, November 3, 2021 - link
That's because it is running a mostly Stock OS. Google severely limits background tasks when in use, and prioritises touch input... as opposed to say Samsung, which starts off slower and raises frequency in steps, whilst continuing background tasks. This slows the experience, but can boost efficiency, depending on the user.Now, the Cortex-A76 is actually not bad. It's a great chip, as it's quiet fast while being efficient. It requires less area and density compared to the A77 and A78. So Google didn't make a mistake here. By going for the A76, they were able to upgrade to have two Cortex-X1 cores. It is a design choice. Another design choice could be 3x X1 and 5x A55, cutting out the Middle-cores for a more simpler design. Or you could potentially have 4x A78 and 4x A55, and have the A78 cores clock higher, for better sustained performance than X1. These are all different design choices, one can be better than another, but it depends on the circumstances.
Kangal - Wednesday, November 3, 2021 - link
I just want to add my viewpoint on the performance and efficiency of this chipset/phone.
AI/ML/NPU/TPU Benchmark: GeekBench ML 0.5
This looks like the most accurate representation. The iPhone 13 has an impressive AI performance because their SDK is better fleshed out, their software is coded more natively, and the SoC has pretty impressive specs Cache, CPU, GPU to help with such tasks. The GS101 wins in the Android ecosystem by a wide margin, followed by QSD 888, MediaTek Dimensity, then lastly Exynos. We can see the proper AI chart here: https://images.anandtech.com/graphs/graph17032/126...
GPU Benchmark: GFxBench Aztec Ruins High (Sustained)
This looks like the most accurate representation. Again Apple flexes its lead with its optimised software and cutting-edge hardware. Larger phones with better cooling manage to score higher, and giving preference to Qualcomm's mature drivers, followed by Huawei's node advantage, then the mediocre attempts by Exynos which is tied for the last spot with the GS101. We can see the proper GPU chart here: https://images.anandtech.com/graphs/graph17032/101...
CPU Multithread Benchmark: GeekBench 5 (crude Single/Multithread)
In the multi-thread test, it shows how sacrificing the middle cores has affected the total score, where it helps to boost the performance of the first 1-2 threads. So at least that design choice is captured. We can see the proper Multithread CPU chart here: https://images.anandtech.com/graphs/graph16983/116...
CPU Single-core Benchmark: SPEC 2017 (fp scores)
The SPEC test is more nuanced. We've established that Anandtech has made huge blunders here. Instead of reporting the Power Draw (watts) of the chipset, they instead try to calculate Energy Consumed (joules) by estimating it crudely. It is for that reason, we get some very inconsistent and wrong data. Such as Apple's Efficiency cores using less power than a Cortex-A53, yet producing scores in line with the Cortex-A78.
So we will focus on the fp-scores rather than the int-scores, since these actually scale better from chipset to chipset. And we will focus on the Power Draw figures, to get the proper data. In particular, the tests of 526, 544, and 511 are quite revealing. We can see the proper CPU chart here:
https://images.anandtech.com/doci/16983/SPECfp-pow...
As a summary of the raw data, here:
Chipset-CoreType: Performance Value / Watts Recorded = Efficiency Score
Apple A14-E: 2.54 / 0.42 = 6.05
Apple A15-E: 3.03 / 0.52 = 5.83
Dim 1200-A55: 0.71 / 0.22 = 3.23
QSD 888-A55: 0.85 / 0.30 = 2.83
Exy 990-A55: 0.84 / 0.50 = 1.68 (? too low! Watts probably not recorded correctly)
Exy 2100-A55: 0.94 / 0.57 = 1.65 (? too low! Watts probably not recorded correctly)
GS 101-A55: 0.88 / 0.59 = 1.49 (? too low! Watts probably not recorded correctly)
Apple A15-P: 10.15 / 4.77 = 2.13
QSD 870-A77: 5.76 / 2.77 = 2.08
Apple A14-P: 8.95 / 4.72 = 1.90
QSD 888-X1: 6.28 / 3.48 = 1.80
GS 101-X1: 6.17 / 3.51 = 1.76
Dim 1200-A78: 4.71 / 2.94 = 1.60
Exy 2100-X1: 6.23 / 3.97 = 1.57
Exy 990-M5: 4.87 / 3.92 = 1.24
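For anyone who wants to check the arithmetic behind the list above, here is a minimal sketch in Python (assuming the performance and wattage figures are taken at face value; only a subset of entries is shown):

```python
# Reproduce the "Efficiency Score" column above: SPECfp score divided by watts.
# The numbers are copied from the list posted above.
data = {
    "Apple A14-E":  (2.54, 0.42),
    "Apple A15-E":  (3.03, 0.52),
    "QSD 888-A55":  (0.85, 0.30),
    "GS 101-A55":   (0.88, 0.59),
    "Apple A15-P":  (10.15, 4.77),
    "QSD 888-X1":   (6.28, 3.48),
    "GS 101-X1":    (6.17, 3.51),
}

for name, (perf, watts) in sorted(data.items(),
                                  key=lambda kv: kv[1][0] / kv[1][1],
                                  reverse=True):
    print(f"{name:12s} {perf / watts:.2f}")   # e.g. Apple A14-E -> 6.05
```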
Andrei Frumusanu - Thursday, November 4, 2021 - link
> We've established that Anandtech has made huge blunders here. Instead of reporting the Power Draw (watts) of the chipset, they instead try to calculate Energy Consumed (joules) by estimating it crudely.
I have no idea what you're referring to. The power draw is reported right there, and the energy isn't estimated, it's measured precisely. The A55 data is correct.
Perf/W is the direct inverse of energy consumption; you would see that if you actually plotted your data.
Kangal - Saturday, November 6, 2021 - link
The Specific Power Draw makes sense in the context of these comparisons. For the longest time in this industry, they've always talked about Performance per Watt. No-one, not even Intel (and they've been known to be quite shady) uses Performance per Joule.
The total energy consumed in Joules is simply irrational. One can then make a flawed comparison of how much processing could be done through the consumption of a cupcake if you read its nutritional content. Not only that, if you actually look at the data you guys submitted, it has a lot more variance with Joules, whilst Watts shows a lot more consistent results. Your energy consumed is an estimate, not what is specifically used by the cores when running.
For instance, when using Joules, it makes Apple's Efficiency cores seem to use slightly less power than a Cortex-A55, whilst performing benchmarks slightly faster than a Cortex-A76. If that is true, then no Android phones would be sold above $500 as everyone would simply buy iPhones. It's like comparing a 2011 processor (48nm Dual Cortex-A9) to a 2015 processor (16nm Octa Cortex-A53), so it's not only using less power, but delivering more than x4 times the performance. Ludicrous. You just cannot magically wave away discrepancies that big (x7.43). On the other hand, if you plot it using Watts, you get a deeper picture. Apple's Efficiency cores use about double the energy as a Cortex-A55 but in turn they deliver four times the performance, so the net difference is a much more palatable x2.14 leap in efficiency (at least in max performance comparison). And I'm comparing the latest Apple (A15) to Android (QSD 888) cores.
If the A55 data is as accurate as you say, why do you have discrepancies there as well?
For example, QSD 888 versus Google Silicon-101... they're both using off-the-shelf Cortex-A55. Yet, Qualcomm's chipset is apparently drawing only 0.30 Watts, compared to 0.59 Watts... which is about x2 less. And both post fairly close scores at 0.85 versus 0.88, making their total efficiency difference of x1.90 (2.83 vs 1.49) puzzling. So something is amiss. Going off the Joules estimates doesn't fix the issue either, as you still have an unaccounted x1.83 difference in efficiency.
With all your resources, you guys never got curious about such discrepancies?
(sorry for being a little obtuse)
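A quick cross-check of the two efficiency gaps quoted above, using only the perf/W figures from the earlier list (the joules-based x1.83 figure comes from the article's charts and isn't reproduced here):

```python
# Ratios derived from the perf/W table posted above.
sd888_a55 = 0.85 / 0.30   # ~2.83
gs101_a55 = 0.88 / 0.59   # ~1.49
print(round(sd888_a55 / gs101_a55, 2))   # ~1.90x gap between two "identical" A55 clusters

a15_e = 3.03 / 0.52       # Apple A15 efficiency core, ~5.83
print(round(a15_e / sd888_a55, 2))       # ~2.06x, close to the ~2.14x quoted above
```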
dotjaz - Sunday, November 7, 2021 - link
You are obviously uneducated, and don't know what "off-the-shelf" means in any chips. Physical implementation varies a lot even on the same process with the same IP. Either you or Cadence are lying. I'd rather believe a reputable company with decades of experience.
https://www.anandtech.com/show/16836/cadence-cereb...
Kangal - Sunday, November 7, 2021 - link
Snapdragon 888: 4x Cortex-A55 @ 1.80GHz 4x128KB pL2, with 4MB sL3, on Samsung 5nm (5LPE)
Google Tensor: 4x Cortex-A55 @ 1.80GHz 4x128KB pL2, with 4MB sL3, on Samsung 5nm (5LPE)
Both of these SoCs are using Cortex-A55 cores which were licensed from ARM directly. They are off-the-shelf. These are not custom cores, such as the Mongoose, early Kryo, Krait, Denver, Carmel, or the 8-or-so different custom designs released by Apple. If you say that I am lying, then you are also saying that both Google and Qualcomm are lying. And note that they are virtually identical in their specifications and build.
I think you entirely misunderstood what the point from Cadence is about. Sure, even on the same chips there is variance, the so-called "silicon lottery". But be realistic, how much of a difference do you think it is? I'll give a hint: the larger the silicon, the more the variance, and the bigger the difference. If you check the latest data from the now bankrupt siliconlottery.com service, the difference with the 3950X is (worst) 4.00GHz versus 4.15GHz (best). At best that is a 3-point-something-percent difference, so let's say it is 5%... and this difference is likely to be less on smaller chips. But even if we accept 5%, that is nowhere near a x2 variance.
Also, you would be calling AnandTech liars as well:
" the new Cortex-A77 picks up where the Cortex-A76 left off and follows Arm’s projected trajectory of delivering a continued SOLID PERFORMANCE UPLIFT of 20-25% CAGR "...
How is it that we are able to be impressed by a +20% uplift, yet, we completely disregard a +90% difference? It is not logical, and doesn't pass the sniff test. You can call me uneducated all you like, I'm trying to arrive at the truth, since there are big discrepancies with the data provided that I've pointed out to above. I am open to criticism, as everyone should be.
TellowKrinkle - Tuesday, November 9, 2021 - link
Let's look at some units.
Performance is units of calculation work divided by time. For our graph, some constant times spec runs per second.
Performance per watt is then some constant times (spec runs) / (seconds * watts)
The joules measurement put up there is specifically joules per spec run. One joule is one watt second, so that number would therefore be (watts * seconds) / (spec runs).
Notice the similarity? Joules is 1/(perf per watt).
Hopefully it's clear from this that the "joules" measurement on that graph *is* there to indicate efficiency, just like a perf/watt measurement would be. The only difference is that in the joules graph, high numbers indicate inefficient processors, while in a perf/watt graph, those would be represented by low numbers.
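A small numerical illustration of that equivalence, with made-up numbers for one hypothetical SPEC run:

```python
# Toy example: joules-per-run is the reciprocal of perf-per-watt
# (up to the fixed amount of work in one SPEC run).
seconds_per_run = 100.0   # hypothetical runtime of one run
watts = 3.5               # hypothetical average power while running

perf = 1.0 / seconds_per_run          # runs per second
perf_per_watt = perf / watts          # runs per (second * watt)
joules_per_run = watts * seconds_per_run

print(joules_per_run, 1.0 / perf_per_watt)   # both 350.0: same metric, inverted
```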
The0ne - Thursday, November 4, 2021 - link
Pixel 4A updated last night to 12 and it runs even smoother. The UI tricks they have done do appear to make a visual difference. So far I'm impressed with 12 aside from the gigantic texts and bars here and there.
sharath.naik - Thursday, November 4, 2021 - link
Good in-depth review. I know you are doing the camera review of this. So I have a request: can you look into whether the Pixel 6 cameras are hardware binned to ~12MP even though the specs say they are 50MP/48MP? There are a lot of mixed views out there, most mistaking this for the pixel binning done on other phones like the Galaxy S21U (software binned for low light but with access to full resolution). If you could confirm this for the review that would be great, looking forward to that review.
Silver5urfer - Tuesday, November 2, 2021 - link
Exactly as expected, 1:1.
The SoC is a bust, they tried to do some gimmickry with their zero talent and tried to make it a cheaper deal by going to Samsung for their IP fabrication expertise and lithography process. Ended up being a dud in CPU, GPU and price to performance, all that NPU NN, mega magic boom is all a farce. I was asking the same thing, what do these provide to us end users? Nothing. Just that fancy Livetranslation and other gimmicks which you use seldom. On top we do not even know what TPU does in the Pixel Software, it's closed source. AOSP is open but Pixel UI and all backend are closed.
Hardware is utter joke, the P6 series has garbage mmwave system look at the internals, they crammed one lol. LG V50 back in 2019 had 2-33 mmwave antennas. This junk saved on cost. The display has banding issues all over the place. Optical image sensor for Fingerprint is slow and a joke vs the physical dedicated ones. The stereo speaker system has massive channel imbalance on top. Then you have the low battery SoT for this price point and battery capacity. The DIY aspect is thrown into gutters, the phone has massively hamfisted cooling approach with graphite pads smeared all over the place as leaks showed and no proper HS system it's just a small pathetic AL board reinforcement plate doing it so on top the Display has no metal backplate to reinforce it or dissipate heat. What a nonsense. SD888 itself heats up a lot and so many vendors add VC cooling, Sony Xperia 1 Mark 3 messed up there and had inferior performance with throttling. This junk is even more pathetic, pay for a S tier SKU get trash sustained performance of a B+ device, the AP, Pocket now and other Youtube shill press will hype this to moon.
We do not even know what this junk has in terms of Software blocks; P2, P3, P4 had the A/B system, then merged partitions, later the read-only ext4 system. This will have even worse. To make it a round about trash, the software is a joke cheap kiddo inspired garbage, heck that BBKs OnePlus's Oxygen + Oppo Color OS mix is way superior than this junk with massive information density loss.
I'd wait for the next SD successor device, hopefully 888's BS power consumption and insane heat can be reduced.
Silver5urfer - Tuesday, November 2, 2021 - link
It's a typo for mmwave, it's 2-3 units. Also I forgot to mention the lack of charger, SD slot, no 3.5mm jack, very poor servicing, almost impossible to get the phone properly cooled if you open it due to cheap graphite pad reliance. It also has that USB port and microphone soldered to the mainboard, which looks like a feeble trash unit; check any phone in recent times and look how solidly engineered they are, dual sandwich designs with reinforced chassis and proper heat dissipation.
goatfajitas - Tuesday, November 2, 2021 - link
People get WAY too hung up on benchmarks. LOL, a "dud". A phone is about user experience, not how many "geekmarks" = best.
lionking80 - Tuesday, November 2, 2021 - link
I agree that the benchmarks do not tell the whole story, but I would still say that even the use of a Snapdragon 870 would have been a better choice.
The general performance is similar (maybe a small advantage for Tensor in AI), but the advantages of Snapdragon 870 are bigger: runs much cooler, hugely better battery life.
To be honest I am disappointed by the SOC. The only thing that might make it a seller is the software (ui and camera), but the SOC is rather a no-go.
goatfajitas - Tuesday, November 2, 2021 - link
There are other factors though. Early ROM settings, tweaks, bugs, and cooling/hardware. The 870 may have scored lower in a P6 as well. So many factors. - Agreed, the P6 should be a bit more polished though.
at_clucks - Tuesday, November 2, 2021 - link
The problem goes beyond the slightly worse SoC than the already existing Qualcomm offering. It's that despite being a "Google SoC" they still support it for just 3 years. All the excuses used over the years, all the pointing fingers at SoC manufacturers for the lack of support were just proven to be a load of crap. Same crap, now with a google sticker.
sharath.naik - Tuesday, November 2, 2021 - link
It's about to get worse with the camera review. I can verify Google might have been bluffing about the 50MP/48MP sensors. The sensors are locked at 12MP. So the Pixel Pro has essentially three 12MP cameras. Which means the portrait mode zoom of 2x is a low resolution 3MP image. Also at 10x zoom the image resolution is 2.5MP, 4 times lower than that of the S21 Ultra. What drove Google to make the choice of first hardware pixel binning the resolution down and then trying to digitally blow the resolution back up? It's baffling; I tried to get an answer from Google support, they just refused to confirm or deny this is binned at the hardware level.
hoxha_red - Tuesday, November 2, 2021 - link
"I can verify that google might have been bluffing"dude, lmfao—it's called "binning"; please look it up. they've been upfront about this and it was known before the phone was even launched, let alone after we've seen all of these reviews. The reason Google support "refused to confirm or deny" is because the people doing customer support are unlikely to know what "pixel binning" is (hey, I guess they're in good company there with you), and are not equipped to deal with weirdos of your specific variety.
Maxpower27 - Tuesday, November 2, 2021 - link
You obviously have no familiarity with mobile phone cameras and sensors in particular. Read up about them and then try again.
sharath.naik - Saturday, November 13, 2021 - link
Looks like you both do not value the concept of understanding the topic before responding, so pay attention this time. I am talking about hardware binning... not software binning like everyone else does for low light. Hardware binning means the sensors NEVER produce anything other than 12MP. Do both of you understand what NEVER means? Never means these sensors are NEVER capable of 50MP or 48MP. NEVER means Pixel 3x zoom shots are 1.3MP low resolution images (yes, that is all your portrait modes). NEVER means at 10x Pixel images are down to 2.5MP.
Next time, both of you learn to read, learn to listen before responding like you do.
meacupla - Tuesday, November 2, 2021 - link
IDK about you, but livetranslation is very useful if you have to interact with people who can't speak a language you can speak fluently.
BigDH01 - Tuesday, November 2, 2021 - link
Agreed this is useful in those situations where it's needed but those situations probably aren't very common for those of us that don't do a lot of international travel. In local situations with non-native English speakers typically enough English is still known to "get by."
Justwork - Friday, November 5, 2021 - link
Not always. My in-law just moved in who knows no English and I barely speak their native language. We've always relied on translation apps to get by. When I got the P6 this weekend, both our lives just got dramatically better. The experience is just so much more integrated and way faster. No more spending minutes pausing while we type responses. The live translate is literally life changing because of how improved it is. I know others in my situation, it's not that uncommon and they are very excited for this phone because of this one capability.
name99 - Tuesday, November 2, 2021 - link
Agreed, but, in the context of the review:
- does this need to run locally? (My guess is yes, that non-local is noticeably slower, and requires an internet connection you may not have.)
- does anyone run it locally (no idea)
- is the constraint on running it locally and well the amount of inference HW? Or the model size? or something else like CPU? ie does Tensor the chip actually do this better than QC (or Apple, or, hell, an intel laptop)?
SonOfKratos - Tuesday, November 2, 2021 - link
Wow. You know what, the fact that the phone has a modem to compete with Qualcomm for the first time in the US is good enough for me. The more competition the better, yes Qualcomm is still collecting royalties for their patents but who cares.
Alistair - Tuesday, November 2, 2021 - link
That's a lot of words to basically state the truth, Tensor is a cheap chip, nothing new here. Next. I'm waiting for Samsung + AMD.
Alistair - Tuesday, November 2, 2021 - link
phones are cheap too, but too expensive
Wrs - Tuesday, November 2, 2021 - link
Surprisingly large for being cheap. Dual X1's with 1 MB cache, 20 Mali cores. So many inefficiencies just to get a language translation block. As if both the engineers and the bean counters fell asleep. To be fair, it's a first-gen phone SoC for Google.
Idk if I regard Samsung + AMD as much better, though. Once upon a time AMD had a low-power graphics department. They sold that off to Qualcomm over a decade ago. So this would probably be AMD's first gen on phones too. And the ARM X1 core remains a problem. It's customizable but the blueprint seems to throw efficiency out the window for performance. You don't want any silicon in a phone that throws efficiency out the window.
Alistair - Wednesday, November 3, 2021 - link
Will we still be on X1 next year? I hope not. I'm hoping next year is finally the boost that Android needs for SoCs.
TheinsanegamerN - Tuesday, November 2, 2021 - link
Not cheap, just poorly designed. Too much focus on magic and not enough on the fundamentals of a good chip design.
Alistair - Wednesday, November 3, 2021 - link
cheaply made, i mean, with a CPU with A76 cores... lol...
tuxRoller - Tuesday, November 2, 2021 - link
All companies other than those that Apple need should just admit defeat and give themselves over.
Just needed to be said.
TheinsanegamerN - Tuesday, November 2, 2021 - link
The rest of the world is not america. Without america apple is a tiny minority of devices.
Alistair - Wednesday, November 3, 2021 - link
? seriously? Apple is dominant in Japan, Taiwan, and a gazillion other countries
icedeocampo - Wednesday, November 3, 2021 - link
last time I checked there were just 195 countries
Alistair - Wednesday, November 3, 2021 - link
you know hyperbole right? your response is not an argument...
ss91 - Thursday, November 4, 2021 - link
You both are wrong. Yes, it's true apple is not the majority device supplier. Yes, it's also true that apple is far far ahead.
But what is wrong is that
1. @TheinsanegamerN : Where apple doesn’t dominate, they are predominantly lower segment markets where avg smartphone price is far below the cheapest iPhone.
Apple sells more iPhones than Samsung sells Notes (RIP), S series and fold/flip. Samsung sells overall more phones because they operate in price segments all the way from $200 to $2000 while apple starts $400.
You can be sure that SE Asia, S.America and Africa uses androids, but it’s also true that these androids are far from the bleeding edge flagships.
2. @tuxRoller : Competition is important. If Samsung won't team up with AMD, Adreno and Mali will become complacent. If Tensor doesn't outperform in ML tasks, Exynos and Qualcomm won't innovate there. If Apple hadn't started the 64-bit and custom CPU race, nothing would be the same today.
tuxRoller - Friday, November 5, 2021 - link
Sorry for the bit of trolling, but, to be clear, that was trolling. I wasn't being serious.
aclos3 - Wednesday, November 3, 2021 - link
I am really happy with my Pixel 6 pro so far. Above all, I want my phone to have a really excellent camera and every new mid to high end phone now is plenty fast for just about any use. I bet the Pixel 6 takes much better pictures than whatever "trash" you are using. But you sure have them beat when it comes to the benchmarks I bet.
Alistair - Wednesday, November 3, 2021 - link
It's the opposite, the iPhone is massively ahead in performance, but every high end phone takes the same high end photos... you got the same photos but a lot less performance...
aclos3 - Saturday, November 6, 2021 - link
I took some time to really test the camera and you are simply wrong. I have been photographing with it heavily for the last couple of days and the camera is incredible. Call it a gimmick or whatever, but the way they do their photo stacking puts this phone in a league of its own. If your main use case for a phone is benchmarking, I guess this is not your device.
Lavkesh - Thursday, November 11, 2021 - link
Everyone and their grandmother does image stacking. The iPhone is almost as good if not better even with a smaller sensor when compared to the latest Pixel. How's that for "in a league of its own"?
Amandtec - Wednesday, November 3, 2021 - link
I don't doubt the veracity of your comment but I find the hostile undertone somewhat curious.
damianrobertjones - Wednesday, November 3, 2021 - link
But... but... they said that it's amazing!! Who do I believe? /s
Zoolook - Saturday, November 6, 2021 - link
As long as they use the Samsung process they will be hopelessly behind Apple's SoCs in efficiency unfortunately, would be interesting to see SD back on a TSMC process for a direct comparison with Apple silicon.
Tigran - Tuesday, November 2, 2021 - link
Performance looks very disappointing. Google promised 4.7x GPU performance improvement vs Pixel 5.
singular9 - Tuesday, November 2, 2021 - link
I was enjoying how the speculation about the GS101 was claiming it's "not far behind" the SD888. I was never expecting google to make another high end device, let alone one that undercuts most of the competition, as it's just not what trends would say.
I am not impressed. As someone who was rather hopeful that google would take control and bring us android users a true apple chip equivalent some day, this is definitely not the case with google silicon.
Considering how cookie cutter this design is, and how google made some major amateur decisions, I do not see google breaking away from the typical android SOC mold next generation.
Looking back at how long it took apple to design a near 100% solo design for the iPhone (A8X was the first A chip to use a complete inhouse GPU and etc design, other than ARM cores), that is a whopping 4 and a half years. Suppose this first google "designed" chip is following the same trend, an initial "brand name" break away yet still using a lot of help from other designs, and then slowly fixing one part at a time till its all fixed, while also improving what is already good, I could see google getting there by the Pixel X (10?). But as it stands, unless google dedicates a lot of time to actually altering Arm's own designs and simply having samsung make it, I don't see Tensor every surpassing qualcomm (unless samsung has some big breakthrough in their own CPU/GPU IP which may or may not come with AMD's help).
As the chip stands today, its "passable", but not impressive. Considering Google can get android to run really well on a SD765G, this isn't at all surprising. The TPU seems like a nice touch, since honestly, focusing on voice is more important than on "raw" cpu performance or something. I have always been frustrated with speech to text not being "perfect" and constantly having to correct it manually and "working around" its limitations. As for my own experience with the 6 Pro, its bloody good.
Now to specifics.
The X1 chips do get hot, as does the 5G modem. I switched the device to LTE for now. I do get 5G at home and pretty much most places I go, and it is fast, its not something I need right now. I even had a call drop over 5G because I walked around a buildings corner. Not fun.
The A76 excuse I have heard floating around, is that it takes up less physical die space, by A LOT. And apparently, there was simply no room for an A77 or A78 because the TPU and GPU took up so much room. I don't understand this compromise, when the GPU performance is this mediocre. Why not simply use the same GPU size as the S21 (Ex2100) and give the A78's more room? Don't know, but an odd choice for sure.
The A55 efficiency issues are noticeable. Try playing spotify over bluetooth for an hour, and watch the battery drain. I get consistently great standby time, and very good battery life when heavily using my device, but its these background screen off tasks that really chug the battery more than expected.
Over all though I haven't noticed any serious issues with my unit. The finger print scanner works as intended, and is better than my 8T. The camera does just as well if not better than the previous pixels. And over all...no complaints. But I wonder how much of this UX comes from google literally brute forcing their way with 2 X1 cores and a overkill GPU, and how much of it is them actually trying.
As for recommendations to google for Tensor V2, they need to not compromise efficiency for performance. This phone isn't designed to game, cut the GPU down, or heck, partner with AMD (who is working with samsung) to bring competitive graphics to mobile to compete with Adreno from QComm. 2 X1 cores, if necessary, can stay, but at that point, might as well just have 4 of them and get rid of all the other cores entirely and simply build a very good kernel to modulate the frequency. Or make it a 2+6 design with A57 cores. As someone who codes kernels for pixels and nexus devices for a long time, trying to optimize the software to really get efficiency out of the big.LITTLE system is near impossible, and in my opinion, worthless unless your entire scheduler is "screen on/off" based, which is literally BS. I doubt google has any idea how to build a good CPU governor nor scheduler to truly make this X+X+X system even work properly, since I have yet see qcomm or samsung do it "well" to call commendable.
The rest of the phone is fine. YoY improvements are always welcome, but I think the pixel 6/pro just really show how current mobile chips are so far behind apple that you might as well give up. YoY improvements have imo halted, and honestly no one seems to be having the thought that maybe we should cut power consumption in half WITHOUT increasing performance. I mean...the phones are fast enough.
Who knows. We will see next year.
PS: I also am curious what google will do with the Pixel 6A (if they make one at all). Will it use a cut down GS101 or will it get the whole chip? It would seem overkill to shove this into a 399$ phone. Wonder what cut downs will be made, or if there will be improvements as well.
sharath.naik - Tuesday, November 2, 2021 - link
Good thoughts, there is one big issue you missed. Pixel camera sensors 50MP/48MP being binned to 12MP yet Google labeled them as 50MP/48MP. Every shot outside the native 1x, 4x is just a crop of the 12MP image, including portrait (3MP crop) and 10x (2.5MP crop).
teldar - Thursday, November 4, 2021 - link
You are absolutely a clueless troll and should go back to your cave. Your stupidity is unwanted.
Silver5urfer - Tuesday, November 2, 2021 - link
You said you do Kernels but "As someone who was rather hopeful that google would take control and bring us android users a true apple chip equivalent some day, this is definitely not the case with google silicon."
What is Android lacking from needing that so called A series processor onto the platform ? I already see Android modding has been drained a lot now. It's there on XDA but less than 1% of user base uses mods, maybe root but it's still niche.
Android has been on a downhill since a long time. With Android v9 Pie to be specific. Google started to mimic iOS on superficial level with starting from OS level information density loss now on 12 it's insane, you get 4 QS toggles. It's worst. People love it somehow because new coat of trash paint is good.
On HW side, except for OnePlus phones no phones have proper mod ecosystem. Pixels had but due to the crappy policies they implemented on the HW side like AB system, Read only filesystem copied from Huawei horrible fusing of filesystems and then enforcing all these at CTS level, they added the worst of all - Scoped Storage which ruined all the user use cases of having a pocket computer to a silly iOS like trash device. Now on Android any photo you download goes into that Application specific folder and you cannot change it, due to API level block on Playstore for targeting Android v11 which comes with Scoped Storage by default. Next year big thing is coming, all 32bit applications will be obsoleted because ARM is going to remove the 32bit IP from the Silicon designs. That makes 888 the last 32bit capable CPU.
Again what do you expect ? Apple A series shines in these Anandtech SPEC scores but when It comes to real life Application work done performance, they do not show the same level of difference. Which is basically Application launch speed and performance of the said application now Android 12 adds a splash screen BS to all apps globally. Making it even worse.
There's nothing that Google is going to provide you or anyone to have something that doesn't exist, Android needs freedom and that is being eroded away every year with more and more Apple inspired crap. The only reason Google did this is to experiment on those billions of dollars and millions for their R&D, Pixel division has been in loss since 2016, less than 3% North American marketshare. Only became 3 from 2 due to A series budget Pixels. And they do not even sell overseas on many markets. In fact they imitate Apple so much that now they want the stupid HW exclusive joke processors for their lineup imitating Apple for no reason. Qcomm provides all the blobs and baseband packages, If Google can make them deliver support for 6 years they can do it, but they won't because sales. All that no charger because environment, no 3.5mm jack because no space, no SD slot is all a big fat LIE.
Their GS101 is a joke, a shame to CPU engineering, trash thermal design, useless A7x cores and the bloated X1 x2 cores for nothing, except for their ISP nothing is useful and even the Pixel camera can be ported to other phones, Magic Eraser for eg works on old Pixels, soon other phones due to Camera API2 and Modding.
Google's vision of Android was dead since v9 and since the death of Nexus series. Now it's more of a former shell with trash people running for their agenda of yearly consumerism and a social media tool rather than the old era of computer in your pocket, to make it worse the PR of Pixel is horrible and more political screaming than anything else.
Zoolook - Saturday, November 6, 2021 - link
Apple silicon shines in part due to being on a superior process, and a much better memory subsystem, Samsung process is far behind TSMC in regards to efficiency unfortunately.
Zoolook - Saturday, November 6, 2021 - link
Small nitpick, A8X GPU was a PowerVR licence, A11 had the first Apple inhouse GPU.
iphonebestgamephone - Sunday, November 14, 2021 - link
"cut power consumption in half WITHOUT increasing performance"Make a custom kernel and uc/uv it and there you go. Should be easy for a pro kernel dev like you.
tipoo - Tuesday, November 2, 2021 - link
Thanks for this analysis, it's great.
I'm still left wondering what the point of Tensor is after all this. It doesn't seem better than what was on market even for Android. I guess the extra security updates are nice but still not extra OS updates even though it's theirs. And the NPU doesn't seem to outperform either despite them talking about that the most.
And boy do these charts just make A15 look even more above and beyond their efforts, but even A4 started with Cortex cores, maybe in 2-3 spins Google will go more custom.
Blastdoor - Tuesday, November 2, 2021 - link
I wonder if we will now see a similar pattern play out in the laptop space, with Macs moving well beyond the competition in CPU and GPU performance/watt, and landing at similar marketshare (it would be a big deal for the Mac to achieve the same share of the laptop market that the iPhone has of the smartphone market).
tipoo - Tuesday, November 2, 2021 - link
Well I'm definitely going to hold my Apple stocks for years and that's one part of the reason. M1 Pro and Max are absolute slam dunks on the industry, and their chipmaking was part of what won me over on their phones.
TheinsanegamerN - Tuesday, November 2, 2021 - link
When did apple manage that? I can easily recall the M1 pulling notably more power than the 4700U in order to beat it in benchmarks despite having 5nm to play with. The M1 Max pulls close to 100W at full tilt, and is completely unsustainable.
Spleter - Tuesday, November 2, 2021 - link
I think you are confusing temperature in degrees with power in watts.
Alistair - Wednesday, November 3, 2021 - link
when it is drawing 100 watts it is competing against windows laptops that are drawing 200 watts, i'm not sure what the problem is
Speedfriend - Thursday, November 4, 2021 - link
The average laptop costs $500 and most expensive laptops are bought by enterprises where Mac OS has a limited share. While the MacBooks are great devices, they are hobbled by poor monitor support at the Air end and crazy prices at the MacBook Pro end. For most users the difference between the performance of a MacBook Pro and a $1000 laptop is unnoticeable except in their wallet!
dukhawk - Tuesday, November 2, 2021 - link
The chip is very Exynos design related. Looking through the kernel source, there are a ton of Exynos named files.
dukhawk - Tuesday, November 2, 2021 - link
https://android.googlesource.com/device/google/rav...
defaultluser - Tuesday, November 2, 2021 - link
If anyone wants to know why Nvidia is most interested in purchasing ARM, it's in order to put the inefficient Mali out of its misery - and simultaneously replace it with their own license-able Geforce cores!
Since ARM Corp started throwing in the GPU for free, they've had to cut GPU research (to pay for the increasingly complex CPU cores, all of which come out of the same revenue box!) But Nvidia has the massive Server Revenue to handle this architecture-design mismatch; they will keep the top 50% of the engineers, and cut the other cruft loose!
melgross - Tuesday, November 2, 2021 - link
That may be a side effect. But the reason for purchasing it would be making money, and controlling the market. Yes, it's true that Nvidia wants to control all graphics and to turn the GPU into the main programming aim.
TheinsanegamerN - Tuesday, November 2, 2021 - link
If nvidia wanted to do that they could simply license ARM and make their own superior chip. The fact they have fallen flat on their face every time they have tried speaks volumes.
They want ARM for patents and $$$, nothing more.
defaultluser - Wednesday, November 3, 2021 - link
When a place like Rockchip can sell an Arm chip bundled with Mali for Peanuts, you can understand why superior GPU wasn't enough to win Phone customers!
You also need integrated modem if you ever want to compete with Qualcomm (not something Nvidia was willing to do).
But that bundling system has been shorting ARM Mali development for years. Qualcomm, Apple, and soon Samsung (via AMD) are all bringing better high-end options into the field - you know your performance/watt must be pathetic when a company like Samsung is getting desperate enough to pay the cost of porting an AMD GPU over to the ARM architecture.
Kvaern1 - Sunday, November 7, 2021 - link
"If nvidia wanted to do that they could simply license ARM and make their own superior chip."''simply'
No, no one can simply do that anymore and only two companies can. NVidia just bought one of them.
melgross - Tuesday, November 2, 2021 - link
I'm wondering about several things here.
I don't see the reason for using the A76 cores being one of time. This is a very new chip. The competitors on the Android side have been out for a while. They use A78 cores. Samsung uses A78 cores. So time doesn't seem to be the issue here, after all it does use the X1. So I wonder if it isn't the size of the core on this already large, crowded chip that's a reason, and possibly cost. If the newer cores take up more area they would cost slightly more. These chips are going to be bought in a fairly small number. Estimates have it that last year, Google sold between 4 and 7 million phones, and that they're doubling this year's order. Either would still be small, and give no advantage to Google in volume pricing compared to other chip makers.
The second is that you have to wonder if Google is following the Apple road here. Apple, of course, designs many chips, all for their own use. Will Google keep their chips for their own use, assuming they’re as successful in selling phones as Google hopes, or will they, after another generation, or two, when the chip is more of their own IP, offer them to other Android phone makers, and if so, how will Samsung feel about that, assuming their contract allows it?
SonOfKratos - Tuesday, November 2, 2021 - link
I think they went for the A76 cores because of cost, like you said Tensor is already huge and the A78 or A77 cores would be more power efficient but they are also much bigger than the A76 on 5nm process. Even if they were to clock an A78 lower it would just be a waste of money and space on the chip for them. They probably had a specific budget for the chip which meant a specific die size. This is not Apple who is willing to throw as much money as they can to get the best performance per watt.
The display was rumored to be an E5 display from Samsung display which is in their latest display so I don't know why Google is not pushing for higher brightness but it could be because of heat dissipation as well...I highly doubt Samsung gave Google their garbage displays lol Also Google does not utilize the variable refresh rate very well and it's terrible for battery life. I have also seen a lot of janky scrolling with 120Hz in apps like Twitter..it has hiccups scrolling through the timeline compared to my Pixel 3.
The modem is very interesting, probably more so than Tensor; this is the first competition for Qualcomm in the US at least. A lot of people have been saying that the modem is integrated in Tensor but why would Google integrate a modem that does not belong to them in "their" chip? That's like asking Apple to integrate Qualcomm modems in their chip. Also Samsung pays Qualcomm royalties for 5G so they probably have a special agreement surrounding the sale and implementation of the modem. It is definitely not as power efficient as Qualcomm's implementation but it's a good start. I got 400+ Mbps on T-Mobile 5GUC outdoors and 200 Mbps indoors (I don't know which band). It surprisingly supports the n258 band like the iPhone.
melgross - Wednesday, November 3, 2021 - link
Apple couldn't integrate Qualcomm's modems in their own chips because Qualcomm doesn't allow that. They only allow the integration of their modems into their own SoC. It's one reason why Apple wasn't happy with them, other than the overcharging Qualcomm has been doing to Apple, and everyone else, by forcing the licensing of IP they didn't use.
ChrisGX - Thursday, November 4, 2021 - link
Yes, but all that conjecture hasn't been confirmed by any reputable source. And the statements by Phil Carmack and Monika Gupta indicate Google has been optimising for power (most of all) and performance (to a lesser degree) rather than area. We end up back at the same place: using the A76 cores just doesn't make a lot of sense.
Also, the A78 is perhaps 30% larger than the A76 (on a common silicon process) whereas, I think, the X1 is about twice the size of the A76. I'm not sure what the implications of all that are for wafer economics but I'm pretty sure the reason that Tensor will probably end up suffering some die bloat (compared to upper echelon ARM SoCs from past years) despite the dense 5nm silicon process is the design decision to use two of those large X1 cores (a decision that Andrei seems perplexed by).
Raqia - Tuesday, November 2, 2021 - link
The Google TPU only trades blows with the Qualcomm Hexagon 780, with the exception of MobileBERT. It's not an especially impressive first showing given that this is Google's centerpiece, and it's also unclear what the energy efficiency of this processor is relative to the competition. It's good there's competition though; at the phone level, software is somewhat differentiated and pricing is competitive.
webdoctors - Tuesday, November 2, 2021 - link
Even if the performance isn't impressive, the big deal is guaranteed SW updates. Look at the Nvidia Shield, it came out in 2015 and it's still getting the latest Android updates/OS! No other product has been updated for so long, 6 YEARS!
Now that Google owns the SoC they have full access to the SoC driver source code so they should be able to support the SoC forever, or at least ~10 years... not reliant on Qualcomm's 3 yr support term etc.
BlueScreenJunky - Tuesday, November 2, 2021 - link
Yeah, except they only guarantee 3 years of software updates and 5 years of security updates, which is really a shame if you ask me.
If they could have guaranteed 5 years of OS updates from the start it would have been a very strong selling point. Especially since the difference between each generation becomes smaller every year, I could see people keeping a Pixel 6 for well over 3 years... How cool would that be to keep a $599 phone for 5 years and still run the latest android version?
webdoctors - Tuesday, November 2, 2021 - link
I agree: https://support.google.com/pixelphone/answer/44577...
They should've just guaranteed 5 years for SW updates. Based off the Pixel 3 being guaranteed for 3 yrs and then this month dropping security updates for Pixel 3 from their list, they're serious about the guarantee being the maximum support they'll provide, which is unfortunate. Maybe they'll update it this year cause that seems like a big hole.
TheinsanegamerN - Tuesday, November 2, 2021 - link
Why? What new features do you NEED in your phone? Android stopped evolving with 9, iOS with about version 11. The newest OSes don't do anything spectacular the old ones didn't do.
You're getting 5 years of security updates and don't have apps tied to OS version like apple, giving the pixel a much longer service life than any other phone.
tipoo - Tuesday, November 2, 2021 - link
They're saying 3 years of OS updates, a far shot from 10. 5 years of security updates, which is a start, but owning their supposed own SoC they should have shot for 5 of OS.
BillBear - Wednesday, November 3, 2021 - link
After all the build up on "We're going to have our own chips now so we can support them without interference from Qualcomm", three years of updates is seriously underwhelming.
Apple has six year old phones running the current OS and the eight year old iPhone 5s got another security update a month ago.
Google needs to seriously step up their game.
melgross - Friday, November 5, 2021 - link
All we know now about software updates is that it will get five years of SECURITY updates, nothing about OS updates was stated, as far as I see. If that's true, then Google may still just offer three years. Even now, Qualcomm allows for four years of OS updates, but not even Google has taken advantage of it. So nothing may change there.
Alistair - Tuesday, November 2, 2021 - link
It's very irritating how slow Android SOCs are. I'll just keep on waiting. Won't give up my existing Android phone until actual performance improvements arrive. Hopefully Samsung x AMD will make a difference next year.
Speedfriend - Thursday, November 4, 2021 - link
Looking at the excellent battery life of the iPhone 13 (which I am currently waiting for as my work phone), does the iPhone still kill/suspend background tasks? When I used to day trade, my iPhone would stop prices updating in the background, very annoying when I would flick to the app to check prices and unwittingly see prices hours old.
ksec - Tuesday, November 2, 2021 - link
AV1 hardware decoder having design problems again?
Where have I heard of this before?
Peskarik - Tuesday, November 2, 2021 - link
preplanned obsolescence
tuxRoller - Tuesday, November 2, 2021 - link
I wonder if Google is using the panfrost open source driver for Mali? That might account for some of the performance issues.
TheinsanegamerN - Tuesday, November 2, 2021 - link
Seems to me based on thermals that the Pixel 6/Pro suffer from thermal throttling, and thus have lower power budgets than they should have given the internal hardware, leading to poor results.
Makes me wonder what one of these chips could do in a better designed chassis.
name99 - Tuesday, November 2, 2021 - link
I'd like to ask a question that's not rooted in any particular company, whether it's x86, Google, or Apple, namely: how different *really* are all these AI acceleration tools, and what sort of timelines can we expect for what?
Here are the kinda use cases I'm aware of:
For vision we have
- various photo improvement stuff (deblur, bokeh, night vision etc). Works at a level people consider OK, getting better every year.
Presumably the next step is similar improvement applied to video.
- recognition. Objects, OCR. I'd say the Apple stuff is "acceptable". The OCR is genuinely useful (eg search for "covid" will bring up a scan of my covid card without me ever having tagged it or whatever), and the object recognition gets better every year. Basics like "cat" or person recognition work well, the newest stuff (like recognizing plant species) seems to be accurate, but the current UI is idiotic and needs to be fixed (irrelevant for our purposes).
On the one hand, you can say Google has had this for years. On the other hand my practical experience with Google Lens and recognition is that the app has been through so many rounds of "it's on iOS, no it isn't; it's available in the browser, no it isn't" that I've lost all interest in trying to figure out where it now lives when I want that sort of functionality. So I've no idea whether it's better than Apple along any important dimensions.
For audio we have
- speech recognition, and speech synth. Both of these have been moved over the years from Apple servers to Apple HW, and honestly both are now remarkably good. The only time speech recognition serves me poorly is when there is a mic issue (like my watch is covered by something, or I'm using the mic associated with my car head unit, not the iPhone mic).
You only realize how impressive this is when you hear voice synth from older platforms, like the last time I used Tesla maybe 3 yrs ago the voice synth was noticeably more grating and "synthetic" than Apple. I assume Google is at essentially Apple level -- less HW and worse mics to throw at the problem, but probably better models.
- maybe there's some AI now powering Shazam? Regardless it always worked well, but gets better and faster every year.
For misc we have
- various pose/motion recognition stuff. Apple does this for recognizing types of exercises, or handwashing, and it works fine. I don't know if Google does anything similar. It does need a watch. Not clear how much further this can go. You can fantasize about weird gesture UIs, but I'm not sure the world cares.
- AI-powered keyboards. In the case of Apple this seems an utter disaster. They've been at it for years, it seems no better now with 100x the HW than it was five years ago, and I think everyone hates it. Not sure what's going on here.
Maybe it's just a bad UI for indicating that the "recognition" is tentative and may be revised as you go further?
Maybe the model is (not quite, but almost entirely) single-word based rather than grammar and semantic based?
Maybe the model simply does not learn, ever, from how I write?
Maybe the model is too much trained by the actual writing of cretins and illiterates, and tries to force my language down to that level?
Regardless, it's just terrible.
What's this like in Google world? no "AI"-powered keyboards?, or they exist and are hated? or they exist and work really well?
Finally we have language.
Translation seems to have crossed into "good enough" territory. I just compared Chinese->English for both Apple and Google and while both were good enough, neither was yet at fluent level. (Honestly I was impressed at the Apple quality which I rate as notably better than Google -- not what I expected!)
I've not yet had occasion to test Apple in translating images; when I tried this with Google, last time maybe 4 yrs ago, it worked but pretty terribly. The translation itself kept changing, like there was no intelligence being applied to use the "persistence" fact that the image was always of the same sign or item in a shop or whatever; and the presentation of the image, trying to overlay the original text and match font/size/style was so hit or miss as to be distracting.
Beyond translation we have semantic tasks (most obviously in the form of asking Siri/Google "knowledge" questions). I'm not interested in "which is a more useful assistant" type comparisons, rather which does a better job of faking semantic knowledge. Anecdotally Google is far ahead here, Alexa somewhat behind, and Apple even worse than Alexa; but I'm not sure those "rate the assistant" tests really get at what I am after. I'm more interested in the sorts of tests where you feed the AI a little story then ask it "common sense" questions, or related tasks like smart text summarization. At this level of language sophistication, everybody seems to be hopeless apart from huge experimental models.
So to recalibrate:
Google (and Apple, and QC) are putting lots of AI compute onto their SoCs. Where is it used, and how does it help?
Vision and video are, I think clear answers and we know what's happening there.
Audio (recognition and synth) are less clear because it's not as clear what's done locally and what's shipped off to a server. But quality has clearly become a lot better, and at least some of that I think happens locally.
Translation I'm extremely unclear how much happens locally vs remotely.
And semantics/content/language (even at just the basic smart secretary level) seems hopeless, nothing like intelligent summaries of piles of text, or actually useful understanding of my interests. Recommendation systems, for example, seem utterly hopeless, no matter the field or the company.
So, eg, we have Tensor with the ability to run a small BERT-style model at higher performance than anyone else. Do we have ways today in which that is used? Ways in which it will be used in future that aren't gimmicks? (For example there was supposed to be that thing with Google answering the phone and taking orders or whatever it was doing, but that seems to have vanished without a trace.)
As I said, none of this is supposed to be confrontational. I just want a feel for various aspects of the landscape today -- who's good at what? are certain skills limited by lack of inference or by model size? what are surprising successes and failures?
dotjaz - Tuesday, November 2, 2021 - link
" but I do think it’s likely that at the time of design of the chip, Samsung didn’t have newer IP ready for integration"Come on. Even A77 was ready wayyyy before G78 and X1, how is it even remotely possible to have A76 not by choice?
Andrei Frumusanu - Wednesday, November 3, 2021 - link
Samsung never used A77.
anonym - Sunday, November 7, 2021 - link
Exynos 980 uses Cortex-A77
anonym - Sunday, November 7, 2021 - link
I don't have any data but the A76 is more efficient than the A78 in the relatively lower performance region. According to the following DVFS curves, the A77 is out of the question.
https://images.anandtech.com/doci/15813/A78-X1-cro...
boozed - Tuesday, November 2, 2021 - link
So do we call this design "semi-custom" or "very-slightly-custom"?
watzupken - Wednesday, November 3, 2021 - link
I think we have come to a point where pushing performance for mobile devices is starting to slow down big time, or in some cases like Exynos where we see regressions. The SOC gets refreshed each year, pushing for higher performance. The fabs however are slower to catch up, and despite the marketing of 7nm, 5nm, 3nm, etc, they may not be anywhere near what is being marketed. In this case, squeezing in a fat GPU sounds great on paper, but in real life, the sustained performance is not going to make a huge difference because of the power and heat. In any case, I feel the push for an annual SOC upgrade should slow down because I certainly don't see significant difference in real life performance. We generally only know that last year's SOCs are slower when running benchmarks. Even in games, last gen high end SOCs can still handle challenging titles. Instead, they should focus on making the SOCs more power efficient.
damianrobertjones - Wednesday, November 3, 2021 - link
All I want is for all phones to be able to record the front and rear camera at the same time. VLog fun. Such a simple thing... .
Whiteknight2020 - Wednesday, November 3, 2021 - link
Not India, China, UK, Russia, most of the EU, Africa. Which is the vast majority of the world's population and the vast majority of the world's phones, a great many of which are still feature phones.
eastcoast_pete - Wednesday, November 3, 2021 - link
To me, one of the most interesting points about this "meh" first Google custom SoC is that it was created with lots of Lego blocks from Samsung; I guess Google working with Qualcomm was either out of the question or not something either was willing to do. Maybe this was about Google wanting to show QC that they can develop a Pixel smartphone without them, maybe the two compete too closely on ML/AI, or maybe they just don't like each other much right now - who knows? Still, an SD 888-derived SoC with Google TPU would have likely been better on performance and efficiency. This one here is an odd duck. As for the Pixel 6, especially the Pro: camera is supposed to be spectacular, but with the battery life as it is and, of course (Google, after all), no expandable storage and no 3.5 mm headphone connectors, it missed the mark for me. But, the Pixels are sold out, so why would Google change?
Whiteknight2020 - Wednesday, November 3, 2021 - link
If you want a "really excellent camera", sorry to disappoint you but you'll need to be buying an actual camera. The only thing a multipurpose portable computing device can ever be excellent at is being a multipurpose portable computing device.FunBunny2 - Wednesday, November 3, 2021 - link
"a multipurpose portable computing device."isn't that pretty much verbatim what Stevie said when he showed the original iPhone? nothing has really changed since. it was, kinda, a big deal when Stevie intoned that the thingee incorporated 3, count em 3!, devices that you had to carry that day!!! internet, phone, and number 3 (whatever that was). is a 2021 smartphone really anything more?? I mean, beyond the capacity of more transistors. thank ASML (and some really smart physicists and engineers) for that not Apple or Samsung or Google or ... last time I checked Apple's 'our own ARM' SoC is just bigger and wider ARM ISA, due to the, so far, increasing transistor budget available at the foundries.
that all begs the fundamental question: if Apple and The Seven Dwarfs have access to the same physical capital (ASML, et al) why the difference? if everybody spends time and money tweaking a function (that they all need, one way or another), in some time (short, I'll assert) The One Best Way emerges. the task, in the final analysis, is just maths. of course, Best is not a point estimate, as many comments make clear; there're trade offs all along the line.
it would be fun to use one of the Damn Gummint's supercomputers (weather or nucular bomb design) to spec a SoC. wonder how different the result would be?
NaturalViolence - Wednesday, November 3, 2021 - link
The math for the memory bandwidth doesn't check out. From the article: "4x 16-bit CH @ 3200MHz LPDDR5 / 51.2GB/s"
But 3200MHz x 64-bit is 25.6GB/s, not 51.2GB/s. So which is it?
Wrs - Wednesday, November 3, 2021 - link
It's double data rate.
NaturalViolence - Wednesday, December 1, 2021 - link
So you're saying the data rate is actually 6400MHz? LPDDR5 doesn't support that. Only regular DDR5 does.
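For readers following this sub-thread, here is a quick back-of-the-envelope sketch of where both figures come from, assuming the quoted 3200MHz is the I/O clock and that, as on any double-data-rate interface, two transfers happen per clock. For what it's worth, the JEDEC LPDDR5 standard does define a 6400 MT/s speed grade, which is what the 51.2GB/s figure implies.

```python
# Rough LPDDR5 bandwidth arithmetic (illustrative sketch, not from the article).
channels = 4                    # "4x 16-bit CH"
bits_per_channel = 16
bus_width_bits = channels * bits_per_channel       # 64-bit total bus

io_clock_mhz = 3200             # quoted clock
transfers_per_clock = 2         # double data rate -> 6400 MT/s per pin
data_rate_mtps = io_clock_mhz * transfers_per_clock

# Single-data-rate reading (what yields 25.6 GB/s):
sdr_gbs = io_clock_mhz * 1e6 * bus_width_bits / 8 / 1e9
# Double-data-rate reading (what the spec table reports):
ddr_gbs = data_rate_mtps * 1e6 * bus_width_bits / 8 / 1e9

print(f"{sdr_gbs:.1f} GB/s vs {ddr_gbs:.1f} GB/s")   # 25.6 GB/s vs 51.2 GB/s
```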
Eifel234 - Wednesday, November 3, 2021 - link
I've had the Pixel 6 Pro for a week now and I have to say it's amazing. I don't care what the synthetic benchmarks say about the chip. It's crazy responsive and I get through a day easily with heavy usage on the battery. At a certain point, extra CPU/GPU power doesn't get you anywhere unless you're an extreme phone gamer or trying to edit/render videos, both of which you should really just do on a computer anyway. What I care mostly about is how fast my apps are opening and how fast the UI is. There's a video comparison on YouTube of the same apps opening on the iPhone 13 Max and the P6 Pro, and you know what, the P6 Pro wins handily at loading up many commonly used apps and even some games. Regarding the battery life, I expect to charge my phone nightly, so I really don't care if another phone can get me a few more hours of usage after an entire day. I can get 6 hours of SOT and 18 hours unplugged on the battery. More than enough.
Lavkesh - Thursday, November 11, 2021 - link
Well, that would be true if iOS apps were the same as Android apps. In the review of the A15, it was called out how Android AAA games such as Genshin Impact were missing visual effects altogether that are present on iOS. These app-opening tests are pretty obtuse in my opinion, and that checks out here as well. For a more meaningful comparison, have a look at this and how badly this so-called Google SoC is spanked by the A15!
Here's Exynos 2100 vs Google Pixel 6
https://www.youtube.com/watch?v=iDjzPPtC4kU&t=...
Here's Exynos 2100 vs iPhone
https://www.youtube.com/watch?v=U9A91bnVBU4
Arbie - Friday, November 5, 2021 - link
No earphone jack, no sale.
JoeDuarte - Saturday, November 6, 2021 - link
This piece has been up for three days, and there are still tons of typos and errors on every page? How is this happening? Why doesn't AnandTech maintain normal standards for publishers? I can't imagine publishing this piece without reading it. And after publishing it, I'd read it again – there's no way I wouldn't catch the typos and errors here. Word would catch many of them, so this is just annoying.
"...however it’s only 21% faster than the Exynos 2100, not exactly what we’d expect from 21% more cores."
The error above is substantive, and undercuts the meaning of the sentence. Readers will immediately know something is wrong, and will have to go back to find the correct figure, assuming anything at AnandTech is correct.
"...would would hope this to be the case."
That's great. How do they not notice an error like that? It's practically flashing at you. This is just so unprofessional and junky. And there are a lot more of these. It was too annoying to keep reading, so I quit.
ChrisGX - Monday, November 8, 2021 - link
Has Vulkan performance improved with Android 12? That is a serious question. There has been some strange reporting and punditry about the place that seems intent on strongly promoting the idea that the Tensor Mali GPU is endowed with oodles and oodles of usable GPU compute performance.
In order to make their case, these pundits offer construals of reported benchmark scores of Tensor that appear to muddle fact and fiction. A recent update of Geekbench (5.4.3), for instance, in the view of these pundits, corrects a problem with Geekbench that caused it to understate Vulkan scores on Tensor. So far as I can tell, however, Primate Labs hasn't made any admission about such a basic flaw in their benchmark software that needed to be (and has been) corrected. The changes in Geekbench 5.4.3, on the contrary, seem to be about improving stability.
I am hoping that there is a more sober explanation for the recent jump in Vulkan scores (assuming they aren't fakes) than these odd accounts that seem intent on defending Tensor from all criticism, including criticism supported by careful benchmarking.
Of course, if Vulkan performance has indeed improved on ARM SoCs, then that improvement will also show up in benchmarks other than Geekbench. So, this is something that benchmarks can confirm or disprove.
ChrisGX - Monday, November 8, 2021 - link
The odd accounts that I believe have muddled fact and fiction are linked here:
https://chromeunboxed.com/update-geekbench-pixel-6...
https://mobile.twitter.com/SomeGadgetGuy/status/14...