219 Comments

  • Silver5urfer - Tuesday, November 30, 2021 - link

    It all depends on how it will perform in a phone chassis. The SD888 was an overheating mess; I hope this addresses it. Also, "Desktop Capabilities" gave me a big laugh. The interesting thing is the Adreno Control Panel.

    Still, the games are going to be Android-only, mostly MTX garbage. I miss the era of nice indie platformers, puzzles and innovative games like Edge Ex, Alto, Skyforce Reloaded. Who wants that COD junk on mobile again.

    A15 performance or not, the real world is what I'm after. Gotta see how it's more efficient; the bragging performance metrics are useless when the phone barely gets improved SOT. Qcomm is a much better choice than MediaTek solely because of CAF and OEM support for kernel source and the like. Sadly, Samsung phones in the US won't have any BL unlock due to the American carrier mafia.
  • TriniX10257 - Tuesday, November 30, 2021 - link

    Is it confirmed that there's no hardware AV1 decode support?
  • Andrei Frumusanu - Tuesday, November 30, 2021 - link

    It doesn't have it.
  • Frenetic Pony - Tuesday, November 30, 2021 - link

    Oh yay, your $1k+ flagship phone won't have a key update relevant to a major use case.
  • Silver5urfer - Wednesday, December 1, 2021 - link

    Not even the A15 has it, so there's no need for Qcomm to put more money into it. As simple as that. AV1 this, AV1 that; I see it everywhere hailed as the coming of the next revolution. Upon checking, it's not even that great to decode on old hardware, so no wonder nobody (not Netflix, Amazon, Disney Plus, or Apple) is strictly using it. They use VP9. As for game streaming, not even Ampere has the encoder.

    I doubt AV1 will be widespread before 2025. HEVC is used in Blu-ray 4K discs. Nobody uses this new standard yet, and it took a long time to transition from the old codecs. Many BD rips do not use H.265 either; the world is still on H.264 or similar.
  • vladx - Wednesday, December 1, 2021 - link

    And Apple is actually part of AOM which defined AV1 and the successor AV2, which is even more laughable. Let's face it, HEVC&VVC have already won over AV1&AV2.
  • brucethemoose - Wednesday, December 1, 2021 - link

    I'm not sure what you're on about, as VVC usage is pretty much nonexistent. Big streamers don't want to touch it with a 10 foot pole, and the more "friendly" EVC seems kinda meh.

    If any codec "won," it's h.264.
  • vladx - Thursday, December 2, 2021 - link

    VVC's competitor is the upcoming AV2 codec, not AV1. And while AV2 is still in the drafting stage, hardware supporting VVC decode has already been released.
  • syxbit - Thursday, December 2, 2021 - link

    I'm not sure why you're complaining and criticizing AV1 so much.
    AOM (who define AV1 etc..) has the backing of Amazon, Apple, AMD, ARM, Facebook, Google, Microsoft, Intel, Nvidia, Samsung, Mozilla, Netflix, Youtube etc...

    It's no longer a battle. AOM have won. Netflix, Prime Video, Youtube etc.. will be using AV1. So either support it, or expect lower resolution or software decoded video that chews through your battery.
  • vladx - Thursday, December 2, 2021 - link

    Yeah, they all support AOM with words but not all follow through; just look at Apple. HEVC is the winner in terms of adoption, as hardware support beats claims made by companies.
  • mode_13h - Friday, December 3, 2021 - link

    > HEVC is the winner in terms of adoption, as hardware support beats claims made by companies.

    Talk about an unfair comparison! HEVC was standardized long before AV1 was a thing!
  • name99 - Thursday, December 2, 2021 - link

    That's an exceptionally naive analysis.
    Most of the members of AOM have zero interest in ACTUALLY supporting yet another codec. AV1 (and AV2) exist purely to ensure that the REAL codecs (VVC, and EVC as the royalty-free version) come with reasonable terms so no-one (*cough* MPEG LA *cough*) tries to pull some shit, like kinda-sorta happened with h.264 in the early days of streaming.
  • Zoolook - Wednesday, December 8, 2021 - link

    Qualcomm is one of the companies behind HEVC; they get license money for every HEVC-enabled part that sells, so why would they support the competition?
    They won't unless they really have to, and so far they don't.
  • name99 - Thursday, December 2, 2021 - link

    Of course VVC use is non-existent. It was finalized in 2020 (as opposed to AV1 in 2018).

    Apple had their first h.265 decoder in iPhones a year after 265 was finalized BUT this was not really publicized until a year later, so that when Apple talked about h.265 as their new preferred codec, they could point to a large pre-existing HW base.
    Chances are they will follow the same strategy for 266, suggesting that A15 may already have a decoder as part of the media block, but h.266 will not be discussed as part of Apple's preferred codecs until at least next year.

    I *think* Mediatek have a decoder, so it may be present on their chipsets. But all these things take time...
  • name99 - Thursday, December 2, 2021 - link

    AOM and AV1 have done their job.
    - licensing around future codecs will be more sane, without any backsliding
    - EVC is essentially equivalent to AV1 in performance and cpu cost, but is an ITU/ISO standard AND is royalty-free

    There's just no job left for AV1 to do except persist as a *possibility* if someone owning a patent that's part of VVC behaves stupidly.
  • mode_13h - Friday, December 3, 2021 - link

    > AOM and AV1 have done their job.

    That seems premature. For it to be a credible threat, AV1 needs hardware support. Otherwise, if there's a situation with lots of VVC hardware and no AV1 hardware, then the VVC patent holders might feel they have the leverage to force stricter licensing terms.
  • vladx - Wednesday, December 1, 2021 - link

    HEVC is used by Netflix, Amazon Prime and pretty much everything outside Google's ecosystem. Just because they recently adopted AV1 doesn't mean Netflix has dropped HEVC, which has much wider hardware support.
  • GeoffreyA - Wednesday, December 1, 2021 - link

    vladx, HEVC may be more widespread, but it was a lacklustre codec in many ways. Arguably, it never reached the excellence of its predecessor. AV1 is noticeably better than HEVC, and is in the same class as, but slightly behind, VVC.
  • vladx - Wednesday, December 1, 2021 - link

    What do you mean? It has 50% better compression than H.264, and it's even more efficient than AV1 in terms of bandwidth usage at comparable quality settings; AV1 is only better space-wise, which is less important since more space is cheaper to buy than more bandwidth. Only Google decided to cheap out on paying royalties because of their NIH syndrome, but that doesn't make HEVC lackluster as a codec in any way whatsoever.
  • vladx - Wednesday, December 1, 2021 - link

    And when I say that HEVC is more efficient bandwidth-wise than AV1, I'm referring of course to resolutions of 4K and above. AV1 is better at 1080p and below, but those resolutions are not really a concern in terms of bandwidth consumption.
  • BlueSwordM - Friday, December 3, 2021 - link

    In what world is AV1 less efficient than HEVC at higher resolutions?

    It's actually the opposite: modern encoders perform better as you up the resolution, and AV1's scaling is better than HEVC in that regard.
  • GeoffreyA - Wednesday, December 1, 2021 - link

    It may have 50% better compression on paper, and outperforms H.264 at lower bitrates, but I feel, and am not alone in feeling, that H.264 produces a better picture once it's got enough bits. HEVC, with SAO left on, creates a softer image, with an unsightly effect. Of course, at 4K, HEVC leaves the older codec behind.
  • GeoffreyA - Wednesday, December 1, 2021 - link

    Silver5urfer, AV1's weakness is encoding complexity, but libaom has got a bit faster of late. In terms of quality/compression: H.265 < AV1 < VVC.
  • vladx - Wednesday, December 1, 2021 - link

    AV1 is only more efficient than HEVC at 1080p and below; at 4K and above, which consumes much more bandwidth, HEVC is better.
  • GeoffreyA - Wednesday, December 1, 2021 - link

    I've got no experience with 4K encoding, so can't comment, but at other resolutions, AV1 produced a much better image than HEVC.
  • vladx - Wednesday, December 1, 2021 - link

    It's clear you're basing your conclusions on feeling instead of real data.
  • GeoffreyA - Thursday, December 2, 2021 - link

    Yes, I admit that.
  • GeoffreyA - Thursday, December 2, 2021 - link

    But the idea that H.265 is more efficient than AV1 at 4K sounds odd to me, because AV1 is a newer codec, possessing the same class of features as VVC, such as the bigger 128x128 CTUs, among other things. How is it that H.265 outperforms AV1 in a domain where it ought to fall even further behind (higher resolutions)?
  • vladx - Thursday, December 2, 2021 - link

    Because AV1 was overrated by marketing, there's a reason AV2 was soon announced in order to be competitive with VVC.
  • GeoffreyA - Thursday, December 2, 2021 - link

    I put this to the test this afternoon, and H.265 is out of luck. On a simple 10-second 4K clip, using 2-pass 892 kbps, AV1 had marginally better quality, though softer. 4K narrowed the gap somewhat, but it squares with what I've experienced before.
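
    (For reference, a test like this can be scripted roughly as below. It's only a sketch: the file names and bitrate are placeholders, it assumes an ffmpeg build with libaom-av1 and libx265, and these aren't necessarily the exact settings used above.)

        # Rough 2-pass comparison at a fixed bitrate (clip.mp4 and 892k are placeholders).
        import subprocess

        SRC, BR = "clip.mp4", "892k"

        def run(args):
            subprocess.run(["ffmpeg", "-y", "-i", SRC, *args], check=True)

        # AV1 (libaom), 2-pass; -cpu-used trades encode speed for quality
        run(["-c:v", "libaom-av1", "-b:v", BR, "-cpu-used", "6", "-pass", "1", "-an", "-f", "null", "-"])
        run(["-c:v", "libaom-av1", "-b:v", BR, "-cpu-used", "6", "-pass", "2", "-an", "out_av1.mkv"])

        # HEVC (x265), 2-pass via x265-params; sao=0 is where SAO can be toggled off
        run(["-c:v", "libx265", "-b:v", BR, "-x265-params", "pass=1:sao=0", "-an", "-f", "null", "-"])
        run(["-c:v", "libx265", "-b:v", BR, "-x265-params", "pass=2:sao=0", "-an", "out_hevc.mkv"])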
  • name99 - Thursday, December 2, 2021 - link

    AVC is not competing with 265! Get that through your head.
    The competitors are VVC (if you pay a royalty) or EVC (if you want royalty-free).
    You won't impress anyone by attacking a strawman.
  • GeoffreyA - Friday, December 3, 2021 - link

    I know that VVC is the competitor, but was stunned by the claim that H.265 does better than AV1 in 4K, so had to test that.
  • GeoffreyA - Friday, December 3, 2021 - link

    Also noted various times that VVC was the top codec.
  • mode_13h - Friday, December 3, 2021 - link

    > AVC is not competing with 265!

    I guess you mean "AV1 is not competing...". AVC is actually h.264.
  • BlueSwordM - Friday, December 3, 2021 - link

    Nah, vladx is just wrong in this regard.
  • RSAUser - Sunday, December 5, 2021 - link

    @vladx, wrong. AV1 gives a better image at lower bandwidth than HEVC at all resolutions; the issue was encode/decode overhead, and with SVT-AV1 it seems AV1 has "beaten" HEVC even there.
  • Kamen Rider Blade - Wednesday, December 1, 2021 - link

    EVC > AV1.

    H.266 VVC if you want the best.

    EVC if you want 2nd best that is also free and better than AV1.
  • brucethemoose - Wednesday, December 1, 2021 - link

    I was not impressed with the EVC benchmarks I saw, though it also seems a bit undercooked.
  • Kamen Rider Blade - Wednesday, December 1, 2021 - link

    H.266 VVC is going to take over for the commercial side.

    MPEG-5's EVC is "Royalty Free", w/ an Enhanced EVC that has paid-for features you can go à la carte on.

    EVC will probably dominate among those who want a "Royalty Free" codec that has a better compression ratio than H.265 while maintaining comparable or better image quality.
  • vlad42 - Wednesday, December 1, 2021 - link

    After h.265, I'll believe it when I see it. Everyone seems to have already announced support for AV1 (YouTube, Twitch, Netflix, Amazon Prime, etc.). Given there has been no hype/media coverage of h.266 (I didn't even know it was a thing, as there has been no AnandTech coverage), it seems like the commercial side has already decided on AV1 as the next-gen codec.
  • vladx - Thursday, December 2, 2021 - link

    Hardware supporting VVC decode has already been announced while AV2 still hasn't even been drafted:

    https://www.xda-developers.com/mediateks-pentonic-...

    Just because you're ignorant about the current situation, doesn't mean everyone else is as well. At this rate, consumer hardware supporting encode will be released for VVC before AV1 let alone AV2.
  • vlad42 - Thursday, December 2, 2021 - link

    And yet there has been no coverage. So right now, consumers are far more likely to know about AV1 and demand it than an unheard-of VVC.

    Also, do you really think any company wants to pay VVC's license fees? The problems with licensing and royalties for HEVC are why it has taken so long for it to gain any adoption whatsoever. As of right now, there are at least two different groups selling the licenses needed for the patents - this is the same situation that caused HEVC to take 5 years before it was used by anyone in volume - remember, HEVC was released in 2013. H.264 had only one group you needed to buy a license from, and the total cost of the license was lower than HEVC's. The whole point of AV1 is that it is royalty-free and does not have these problems.

    As for your link, there is no indication of the settings used for AV1 or VVC. However, from what is known about AV1, it is clear they used it in fixed QP mode instead of VBR mode. It is well known that AV1 is optimized for VBR and that fixed QP is inefficient (it loses out to HEVC in all but UHD). However, with VBR it outperforms HEVC in terms of bitrate savings by 20% at UHD. We would need proper, thorough tests to be conducted to know if VVC's fees and performance costs would be worth it compared to AV1 (not to mention that AV1 could be further improved much like HEVC was during its lifetime, and VVC undoubtedly will be).
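
    (For context, the two rate-control modes being contrasted here look roughly like this with ffmpeg's libaom wrapper. A sketch only, with a placeholder input file and placeholder quality/bitrate values, not the settings from the linked test.)

        # Sketch of the two libaom rate-control modes discussed above
        # (input.mp4 and the numbers are placeholders; needs ffmpeg with libaom-av1).
        import subprocess

        def encode(out, rc_args):
            subprocess.run(["ffmpeg", "-y", "-i", "input.mp4", "-c:v", "libaom-av1",
                            "-cpu-used", "6", *rc_args, out], check=True)

        # Constant-quality / fixed-QP style: quality target, bitrate floats
        encode("cq.mkv", ["-crf", "30", "-b:v", "0"])

        # VBR: average-bitrate target, quality floats (the mode AV1 is said to be tuned for)
        encode("vbr.mkv", ["-b:v", "2M"])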

    Just because you're ignorant about the current situation, doesn't mean everyone else is as well. Hardware AV1 encoders were announced back in 2019.

    On 18 April 2019, Allegro DVT announced the AL-E210 with hardware AV1 encoding support for main 0 at 4K30 at 10-bit. In addition the Allegro AL-E195 has hardware AV1 encoding support for main 0 and professional 1 and Chips&Media's WAVE627 has hardware AV1 encoding support for main 0 at up to 4K120.

    Just because you're going to act like a know-it-all jackass does not mean you actually know anything at all.
  • vladx - Thursday, December 2, 2021 - link

    "Also, do you really think any company wants to pay VVC's license fees?"

    Umm yes? The cost savings from lower bandwidth usage provided by VVC's superior compression beats any royalties. No one besides Google and cheapskates like Mozilla minds paying royalties for the best codec around.
  • vlad42 - Friday, December 3, 2021 - link

    If this were the case, then there would have been rapid adoption of HEVC across the board - especially among the likes of Twitch, YouTube, Netflix, Amazon Prime, etc. Instead we find that Twitch relies solely upon h.264, YouTube re-encodes as much as possible (everything?) to VP9 and h.254, and Netflix and Amazon Prime use h.264 for everything except UHD, etc.

    The licensing/royalty fiasco for HEVC is the largest reason why HEVC adoption has been pathetic compared to h.264. So yes, if VVC's licensing fees are anything like HEVC's, then these companies will not care about superior compression; AV1 with VBR will be good enough. It is not just Google and "cheapskates" like Mozilla.
  • vlad42 - Friday, December 3, 2021 - link

    Damn lack of an edit button...that should be VP9 and h.264 not h.254
  • vladx - Friday, December 3, 2021 - link

    Both Netflix and Amazon Prime have been using HEVC for ages, Twitch uses H.264 because they want as many streamers as possible and there are still plenty with pre-2015 PCs who try their hand at streaming on their shitty computers.
  • vlad42 - Monday, December 6, 2021 - link

    And as I said only for 4K content. If it was really worthwhile, they would use it for everything. However they do not.

    Also, if Twitch wants to maximize their user base, then why go straight to AV1 and not go to HEVC? Surely more viewers have hardware decode/encode support for HEVC than AV1? The only logical explanation is that there are licensing problems/the fees are too high.
  • mode_13h - Saturday, December 4, 2021 - link

    > Instead we find that Twitch relies solely upon h.264

    It seems to me that Twitch would use whatever is the lowest common denominator of its users, both in terms of their decoding & encoding capabilities. And when encoding, not only the capability matters but also how much overhead it adds, which could potentially impact gameplay.
  • vlad42 - Monday, December 6, 2021 - link

    If Twitch wants to maximize their user base, then why go straight to AV1 and not go to HEVC? Surely more viewers have hardware decode/encode support for HEVC than AV1? The only logical explanation is that there are licensing problems/the fees are too high.
  • name99 - Thursday, December 2, 2021 - link

    In what sense are these IP blocks "consumer hardware"?
    I'd say a reasonable proxy for consumer hardware is "is present in some phones". Can that be said for AV1?

    You can compare the uptake for AV1 with HEVC. I'd say there's a notable difference...
    Especially important is HW encoders. Decoders are easy, but you need encoders in phones to get a real change in usage.

    https://en.wikipedia.org/wiki/AV1#Adoption
    https://en.wikipedia.org/wiki/High_Efficiency_Vide...
  • vlad42 - Friday, December 3, 2021 - link

    vladx was complaining about the lack of ANY hardware encoder support, which is blatantly false. He did not qualify it as consumer encoders.

    It is also important to note that consumer hardware encoders are of little value until the major players (twitch, YouTube, etc.) are ready to start supporting AV1 in a manner in which client side encoding is actually needed. For YouTube, Facebook, etc., those companies already re-encode videos uploaded to them so they can just change the encode target on their end to be AV1 – they can even use the hardware encoders I listed above! AV1 hardware decode support has been around for a few years already and the software decode performs well on most laptop and desktops.

    There are already software encoders that are reasonably fast on desktops & laptops, so the hardware encoders are really needed for things like cellphones and live streaming/real-time encoding. Cellphone videos typically end up stored in the cloud, where space does not matter to the consumer, or posted to social media, where, as I mentioned above, the company will re-encode the video anyway. For live streaming, Twitch announced years ago that they would start to offer AV1 in 2022 (and Twitch is the most aggressive that I have seen). So, as long as hardware encoders show up next year or software encoders' performance improves enough/CPUs improve enough, then everything is on track.

    As for adoption of HEVC, Apple was very early with support but only for FaceTime (there is no indication if it was hardware encoded on your link but let us assume it is for the sake of argument). Nvidia was also early. If there were others, then I missed them as the link is filled with announcements on codec support and software encoders/decoders. However, considering MacBooks still have 720p webcams, I doubt iPhones and iPads are encoding at a resolution higher than 720p. At these resolutions AV1 and VVC would bring minimal if any bitrate efficiency improvements at reasonable image quality. This same problem of low resolution video conferencing exists for Zoom, Teams, Skype, etc. on other platforms. As for Nvidia, they probably realized that HEVC encoding on Maxwell went largely unused for a long time due to the lack of adoption by the live streaming/video calling services (and anyone who wanted a high quality encode used software not hardware).

    The point is, there has been little motivation to rush the adoption of either AV1 or VVC encoding support on cellphone chips or GPUs due to the lack of a non-niche usage case. I think vendors have simply prioritized the die area that would have gone to hardware encoding support to the other parts of the SoC such as the X1/X2/P-Cores, NPUs, image processors, and GPUs, as those would provide a more tangible benefit to end users.
  • vladx - Friday, December 3, 2021 - link

    > vladx was complaining about the lack of ANY hardware encoder support, which is blatantly false. He did not quantify it as consumer encoders.

    Umm, stop putting words in my mouth. Let me quote myself:

    "At this rate, consumer hardware supporting encode will be released for VVC before AV1 let alone AV2."

    I specifically mentioned "consumer hardware" on which I believe VVC encoding will be supported before AV1, not "ANY hardware encoder support" as you claimed.
  • BlueSwordM - Friday, December 3, 2021 - link

    Nah, you're the one who's wrong here.

    There's already a ton of HW that has HW AV1 decode support: Samsung SOCs, Mediatek SOCs, Amlogic, etc.

    Almost all 2020+ TVs now include an AV1 HW decode, etc.
  • vladx - Friday, December 3, 2021 - link

    Compared to HEVC hardware decode which is supported by all consumer hardware from 2016 and onwards, AV1 support doesn't even come close right now.
  • Zoolook - Wednesday, December 8, 2021 - link

    Netflix started streaming in AV1 two weeks ago.
  • vladx - Wednesday, December 8, 2021 - link

    Sure, but that doesn't mean they dropped HEVC as well.
  • eastcoast_pete - Tuesday, November 30, 2021 - link

    Thanks for the information! Any idea why QC doesn't like AV1? It's free to use/implement AFAIK, so license fees can't be the reason.
  • tuxRoller - Tuesday, November 30, 2021 - link

    They want to reap the licensing fees from VVC, and ignoring AV1 means people will rely less on it, they might say.
    The decode issue isn't much of one given the speed of modern cores & dav1d's efficiency, but we're well past the point where the market is begging for more efficient codecs to satisfy our UltraHD HDR habits. That's not even mentioning the continued JPEG dominance.
  • Adonisds - Tuesday, November 30, 2021 - link

    Why would they get money from VVC? Shouldn't they have to pay to use it instead?
  • ikjadoon - Tuesday, November 30, 2021 - link

    Qualcomm is the #1 single contributor of IP to VVC by a sizeable margin (~12% of VVC's contributions are Qualcomm's, more than any other single vendor).

    https://www.iam-media.com/market-developments/qual... (paywall)

    As a reminder, Qualcomm's earnings pre-tax as of fiscal Q3 2021:

    $1.795 billion from all its hardware (28% margin)
    $1.053 billion from its IP patents / licensing (71% margin)

    Qualcomm always seems to forget to mention their lopsided incentives during the yearly Summits, but it's frequently lurking behind most of their "unexplainable" decisions: reduce SoC hardware costs, increase licensing opportunities.
  • ikjadoon - Tuesday, November 30, 2021 - link

    Source for the earnings pre-tax,

    https://d1io3yog0oux5.cloudfront.net/_fc72c8d24773...
  • garblah - Tuesday, November 30, 2021 - link

    So assuming AV1 really does take off, will my S22 Ultra with QC SoC be able to play 1440p AV1 video on youtube without dropping frames? Probably so right?

    But I guess the battery drain will be considerably worse than if it had decoding via dedicated hardware? Probably bad enough that I would want to direct youtube to use the VP9 or AVC video if that's possible.

    Puts a small damper on the idea of keeping that phone for 3 or 4 years, especially if AV1 is widely used outside of youtube.
  • scarp - Wednesday, December 1, 2021 - link

    Big damper! No AV1 decoding means you practically can't play it. Not worth it at all for such a flagship phone, given the enormous sacrifice if you insist on playing AV1. And even if you try, it won't manage the demanding AV1 videos at all, no matter what you're willing to sacrifice. It simply can't do it in that scenario.
  • linuxgeex - Wednesday, December 1, 2021 - link

    It will do 10-bit 1440p no problem at all. AV1 was designed to be relatively lightweight to decode. In fact on my 2016 iPad Mini I can play 1080p 10-bit AV1 without issues, but it chokes on 10-bit 1080p HEVC. Android SOCs are about 3 years behind Apple for single thread but they're usually only about 1 year behind in multi-thread. So 1440p will be a walk in the park.
  • ksec - Wednesday, December 1, 2021 - link

    >AV1 was designed to be relatively lightweight to decode.

    That is not even remotely true.
  • heffeque - Wednesday, December 1, 2021 - link

    So true. I have an i7-8550U that can't handle 10bit 4K AV1 videos (it's dropping frames on normal scenes, and completely worthless on complex scenes).
  • BlueSwordM - Friday, December 3, 2021 - link

    Actually, it is.

    Utilizing dav1d, AV1 decode is faster than HEVC decode using the fastest currently available decoder ffhevc at the same bitrates.
  • vladx - Friday, December 3, 2021 - link

    A decoder can't be faster once it stops dropping any frames; instead, it can consume less power if it's implemented in hardware rather than software.
  • mode_13h - Saturday, December 4, 2021 - link

    If it takes fewer microseconds to decode the frame and spends more time sitting idle, then it's faster.
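
    (One crude way to put numbers on that: decode to a null sink and time it. A sketch only; it assumes an ffmpeg build with libdav1d and the native hevc decoder, and the clip paths are placeholders for your own same-content test files.)

        # Rough decode-speed comparison via ffmpeg's null muxer (illustrative only).
        import subprocess, time

        def decode_time(decoder, path):
            start = time.perf_counter()
            subprocess.run(["ffmpeg", "-v", "error", "-c:v", decoder, "-i", path,
                            "-f", "null", "-"], check=True)
            return time.perf_counter() - start

        print("dav1d / AV1 :", decode_time("libdav1d", "clip_av1.ivf"), "s")
        print("ffmpeg HEVC :", decode_time("hevc", "clip_hevc.mp4"), "s")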
  • realbabilu - Wednesday, December 1, 2021 - link

    It would need to be checked against the AV1-hardware-decode-capable MediaTek Dimensity 1200 in the Mi 11T. But that is a 1080p device.
  • vlad42 - Wednesday, December 1, 2021 - link

    Given the performance of modern arm cores and the efficiency of dav1d, you will probably be fine. At the very least, it should last you until you are ready to upgrade.
  • bernstein - Wednesday, December 1, 2021 - link

    so what you're saying is that streaming services won't care and will just use a software decoder in their apps. (They optimize for lowest bandwidth at lowest royalty cost at highest quality.) Ultimately this leads to worse battery life on QC SoCs, with QC blaming it on streaming services and them blaming it on QC. In the end it can only hurt QC.
  • linuxgeex - Wednesday, December 1, 2021 - link

    Yes, there will be native AV1 decoders using intrinsics which will easily decode it in real time. It will use a little bit more battery life. Not a huge amount more, like people fear; i.e. 8 hours instead of 10 hours.
  • vladx - Wednesday, December 1, 2021 - link

    Streaming services outside Google's will continue using HEVC which has much wider hardware support.
  • Zoolook - Wednesday, December 8, 2021 - link

    Netflix isn't Google, and they stream AV1 to certain clients, i.e. capable Android devices and the PS4/PS5.
  • vladx - Wednesday, December 8, 2021 - link

    Big emphasis on "certain"; the majority still uses HEVC to stream Netflix content.
  • brucethemoose - Wednesday, December 1, 2021 - link

    AFAIK the big streamers will only use hardware decoders at high resolutions. Something to do with DRM and licensing agreements (aka piracy paranoia) IIRC.
  • ksec - Wednesday, December 1, 2021 - link

    Additional hardware complexity and die size. I don't believe Qualcomm is *against* AV1, so to speak. But it makes much more sense for the VVC and AV1 decoder blocks to land together. So my guess is that they will get an AV1 decoder in their next update along with VVC.

    It is worth noting VVC decoding was supposed to be a target for 2021 flagships, so not having it may suggest their whole video engine block IP was behind schedule (or it's due to other issues).
  • GreenReaper - Thursday, December 2, 2021 - link

    It's not paranoia if they're really out to get you.

    But this works out, people who care (content licensers) can pay, those who don't care can use AV1+.
  • O-o-o-O - Thursday, December 2, 2021 - link

    AV1 might NOT be royalty-free, according to this article:

    https://www.cnet.com/tech/mobile/patent-group-want...

    If so, it's understandable that chip makers are not building AV1 encoders/decoders at this time. They may as well put in VVC.
  • BlueSwordM - Friday, December 3, 2021 - link

    That's just Sisvel doing things their way.

    There hasn't been any legal or financial move, and I'm not aware of AV1 supporters actually paying the fees to it.
  • Adonisds - Tuesday, November 30, 2021 - link

    I was sure they would add it this time. This is a huge disappointment
  • linuxgeex - Wednesday, December 1, 2021 - link

    The disappointing thing is lack of AV1 encode by all vendors in 2022, even on the desktop, even in the top-tier GPUs.
  • dotjaz - Wednesday, December 1, 2021 - link

    Are you stupid? Why are you disappointed by an impossible feature? Nobody expects AV1 HW encoder on a phone, at all. Nvidia will have some in 2022. It took Qualcomm 5 years to implement HEVC encoder. What makes you think 3 years for AV1 is possible?
  • ksec - Wednesday, December 1, 2021 - link

    Because that is what Alliance for Open Media marketing promised (and failed to deliver).
  • ChronoReverse - Wednesday, December 1, 2021 - link

    Can you link the marketing slides with the promised arrival date of hardware encoders?
  • ksec - Wednesday, December 1, 2021 - link

    You should check out their original schedule of AV1 and delivering AV2 by "2020".
  • ChrisGX - Wednesday, December 1, 2021 - link

    Did they really promise AV1 encode capability in a mobile phone SoC? Commercial AV1 encoder ASICs, i.e. merchant silicon parts, are only just becoming available now. So far, even big players encoding content in AV1 have generally used software codecs running on high end CPU servers rather than specialised silicon developed in-house to get the job done.

    The following links point to a couple of products supplied by vendors of AV1 encoder ASICs as well as a broader range of issues around AV1 encoding of content and AV1 codecs.

    https://www.prnewswire.com/news-releases/netint-an...
    https://netint.ca/netint-announces-the-worlds-firs...
    https://netint.ca/
    https://www.design-reuse.com/news/50175/chips-medi...
    https://www.wowza.com/blog/av1-codec-aomedia-video...
    https://www.streamingmedia.com/Articles/Editorial/...
    https://aomedia.googlesource.com/aom/
    https://www.visionular.com/en/products/aurora1-av1...
  • name99 - Thursday, December 2, 2021 - link

    HEVC was finalized in 2013
    Apple had working code on iPhone in 2014. (SW encode/decode, probably with some GPU assist).
    Apple had full HW decoder in iPhone 2015.
    Apple had full HW encoder in iPhone 2016.
    But the HW was not really "visible" until iOS11 in 2017, when Apple made their big transition from "our preferred codecs are JPEG and h.264" to "our preferred codec is HEVC".

    So
    - HW encoder in 3 yrs *was* possible. Of course VVC is more complex than HEVC.
    - chances are Apple will roll out a similar scheme -- experiment with (doubtless GPU-assisted) SW encode with current devices, ship a stealth HW decoder, then a stealth HW encoder, and maybe around 2024 transition their ecosystem to VVC (with two years of HW already in place, and a good enough SW solution for older HW).
  • BlueSwordM - Friday, December 3, 2021 - link

    Actually, there is already a consumer level HW encoder...

    It is called the Nvidia Orin SOC.

    I'm expecting Nvidia's next-gen GPUs to have AV1 HW encoding.
  • vladx - Friday, December 3, 2021 - link

    Where can I buy said "Nvidia Orin SOC" right now?
  • mode_13h - Saturday, December 4, 2021 - link

    > Where can I buy said "Nvidia Orin SOC" right now?

    Developer kit coming in Q1 2022:

    https://developer.nvidia.com/embedded/jetson-agx-o...
  • vladx - Saturday, December 4, 2021 - link

    Thanks for proving my point; it's a developer kit targeted at developers and prosumers, so not "consumer level hardware" as previously mentioned.
  • mode_13h - Sunday, December 5, 2021 - link

    > Thanks for proving my point

    I'm not taking sides. I'm just trying to provide relevant information, where I can.

    I was intrigued to hear that Orin has AV1-encoding (something I haven't verified), given that it's aimed at self-driving cars. Considering how many cameras self-driving cars tend to incorporate, they could quickly rise to the top of the list of compressed video sources, in terms of aggregate GB/day. The number of cameras they each have could mean that auto-manufacturers are that much more susceptible to licensing fees, thereby pushing them more aggressively towards royalty-free solutions.
  • vladx - Sunday, December 5, 2021 - link

    That's why I thanked you, @BlueSwordM is the one who claimed it's "a consumer level HW encoder".
  • LordConrad - Tuesday, November 30, 2021 - link

    One major thing where I disagree with most ARM chip makers: No high-power core should have less than 1MB L2 cache and no low-power core should have less than 512KB L2 cache.
  • mode_13h - Wednesday, December 1, 2021 - link

    The low-power cores don't need so much L2, because they have L3 to fall back on.
  • LordConrad - Sunday, December 5, 2021 - link

    Although I much prefer 512KB, I would accept 256KB of L2 cache for low-power cores. I think the 128KB used by most chipmakers today is unacceptable.
  • caribbeanblue - Tuesday, November 30, 2021 - link

    Wonder what the smartphone camera field will look like overall at the end of 2022. The iPhone 14 Pro is rumored to have three 48MP cameras. Seems like we'll have some exciting camera improvements in 2022
  • Alistair - Tuesday, November 30, 2021 - link

    Nothing I've seen suggests any change or improvement in the market. No DIY market for Windows plus Qualcomm. 2 years behind Apple in CPU performance. GPU is fine, but no 2x or 4x options like with Apple's tablet and M1 options. Was hoping for an announcement there for Qualcomm laptops.
  • brucethemoose - Wednesday, December 1, 2021 - link

    As far as laptops go, I think Qualcomm is betting the farm on those upcoming Nuvia cores.
  • Unashamed_unoriginal_username_x86 - Tuesday, November 30, 2021 - link

    QC pointing out that the ISP can now do all photo/video AI processing on its own seemed strange in the context of your previous statement about Apple using their GPU effectively in computational photography. I'm guessing it allows for better power gating/efficiency?
  • mode_13h - Wednesday, December 1, 2021 - link

    Or, maybe it's just them trying to spin a weakness into a strength.

    I had a similar thought. If they effectively fused all of their GPU compute + AI + signal processing, they might deliver more performance on each, while lowering SoC prices (or at least improving their margin) due to smaller area.

    In truth, not that much separates DSP and GPU cores. For AI, Nvidia showed how you can bolt on some tensor multipliers and still feed them from the same GPU register file.
  • michael2k - Wednesday, December 1, 2021 - link

    I assume there is better efficiency in having dedicated hardware blocks from the ISP in the pipeline rather than GPGPU blocks in the pipeline.

    There may be ISP dedicated RAM/cache. Apple has a 32MB system cache that I imagine is used by the GPU for image processing. Qualcomm only has a 4MB system cache, so it would make sense if the ISP has dedicated memory.

    If that were the case then it also makes sense that shuttling data from and to the 4MB system cache for the GPU to use back to the ISP cache for the ISP to use would be computationally and power-wise expensive. Apple would avoid that kind of inefficiency because they would allow the ISP and GPU to both access the same 32MB system cache.

    If the ISP already has access to the 4MB system cache then I don't see any reason to avoid using the GPU, unless the Adreno GPU is poorly suited for GPGPU. It might also just be that Qualcomm is licensing hardware blocks that don't integrate as well since they don't design them the way Apple claims to, and Apple can have multiple cooks in the kitchen as it were between the ML blocks, the NE blocks, the ISP block, and the GPU blocks all working on the same set of memory during photo and video workflows.
  • name99 - Thursday, December 2, 2021 - link

    Isn't this a rerun of GPU fixed-function blocks (which were killed by programmable shaders)?

    Sure, if you can be ABSOLUTELY CERTAIN that you know everything your camera will want to do, you can freeze that in the ISP. But doing the work on a more programmable block (some combination of GPU and NPU) leaves you able to retrofit a new idea in two years that you haven't even thought of today.

    Ultimately it probably boils down to split responsibility.
    Apple has the camera SW and HW teams working together.
    QC has the problem (for better or worse) that it has no idea what Google will be doing with cameras in two years, and no strong incentive to ensure that the chip they sell today matches Google's requirements in two years.

    A more interesting aspect is that for Apple's scheme (multiple IP blocks all working together) you need a NoC that can tag requests by QoS (for appropriate prioritization) and by stream (to aggregate locality). Apple certainly does this. The academic literature has plenty of discussion as to how this should be done, but I am unaware of the extent to which anyone in industry apart from Apple does this. Is this part of the standard ARM IP model? Is it something each SoC vendor does in their own way but they all do it differently?
  • mode_13h - Friday, December 3, 2021 - link

    > doing the work on a more programmable block (some combination of GPU and NPU)
    > leaves you able to retrofit a new idea in two years that you haven't even thought of today.

    It's (usually) also a boon for code maintenance, if you can implement features in a similar way, across multiple generations of hardware. Programmable engines (call them DSP, ISP, GPU, or what you will) are the way to do this, with the caveats that there are inevitably hardware bugs and other quirks that need to be worked around in a generation-dependent manner, and that this approach poses additional challenges for realtime (i.e. video), due to shifting performance characteristics and amounts of resources.
  • mode_13h - Friday, December 3, 2021 - link

    > you need a NoC that can tag requests by QoS (for appropriate prioritization)
    > and by stream (to aggregate locality).

    Only realtime apps truly need that. And of those, all that come to mind are video processing and AR. I'd say AR might've been the real driver, here. Sure, audio processing is also realtime, but tends to be so much lower-bandwidth that it wouldn't be as dependent on realtime scheduling.
  • name99 - Monday, December 6, 2021 - link

    Apple was tagging content on the SoC by QoS and stream back with the A4. It's not something new.

    And "realtime" is somewhat flexible. Of course video capture is hard realtime, but even screen animation effects are soft realtime. You mock this as unimportant, but iPhones have been distinguished by the smoothness of their animation, and general lack of visual/touch glitching, from day one (remember Project Butter and all that?)
  • mode_13h - Tuesday, December 7, 2021 - link

    > You mock this as unimportant,

    Huh? Where? I respect rock-solid, smooth animations and consistently good responsiveness.

    I'm not convinced that warrants QoS tagging of associated bus transactions, but that's only because you don't want UI animations to be particularly taxing, for the sake of battery longevity. If they're not, then it should be enough for the OS scheduler to prioritize the supporting CPU & GPU threads.
  • name99 - Tuesday, December 7, 2021 - link

    How do you think prioritization is IMPLEMENTED at the point that it hits that hardware?

    Any particular NoC routing point, or the memory controller, or any other shared resource has to decide who gets access in what order.
    OS scheduling doesn't help here! All OS scheduling has done is decide which code is currently running on which CPU (or GPU), and that code won't change for 10ms (or whatever the scheduling granularity is). If you want the hardware to make better choices (do I prioritize the requests coming from a CPU or the requests coming from the ISP?), it needs to know something about the packets flowing through the router and the requests hitting the memory controller -- which are low latency, which can be delayed so much but no more, which are best effort.
    That's what tagging (by QoS and stream) achieve!

    In the absence of such tags, the best you can do is rely on
    - heuristics (which are never perfect, and frequently far from perfect)
    - massive overprovision of HW (which is sub-optimal in a phone, and never works anyway because demands always expand given that the HW can [to the 95th percentile, anyway...] kinda sorta support the new demands.)
  • mode_13h - Wednesday, December 8, 2021 - link

    > How do you think prioritization is IMPLEMENTED at the point that it hits that hardware?

    The OS scheduler? Well, when a context switch occurs due to a timeslice ending, a thread blocking on I/O or blocking on a synchronization object, the OS decides which (if any) thread should next run on that core.

    > Any particular NoC routing point, or the memory controller,
    > or any other shared resources has to decide who gets access in what order.

    If you can put a FIFO there, that works for anything not time-critical. If you need QoS, then priority queues are a simple way to implement it. When a FIFO isn't appropriate, you need an arbiter which probably looks at priority tags to see who wins, and might use round-robin as a tie-breaker.
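
    (A toy sketch of that kind of arbitration, purely illustrative and not modelled on any real NoC: each requester port gets a FIFO, requests carry a QoS tag, the highest tag wins, and ties rotate round-robin.)

        # Toy model of a priority-tag arbiter with round-robin tie-breaking.
        from collections import deque

        class Arbiter:
            def __init__(self, ports):
                self.queues = {p: deque() for p in ports}  # one FIFO per requester port
                self.rr = list(ports)                      # rotation order for ties

            def request(self, port, payload, qos):
                self.queues[port].append((qos, payload))   # each request carries a QoS tag

            def grant(self):
                # pick the highest QoS tag at the head of any queue; earlier ports in the
                # rotation win ties, then the winner moves to the back of the rotation
                best = None
                for port in self.rr:
                    q = self.queues[port]
                    if q and (best is None or q[0][0] > self.queues[best][0][0]):
                        best = port
                if best is None:
                    return None
                self.rr.remove(best)
                self.rr.append(best)
                qos, payload = self.queues[best].popleft()
                return best, qos, payload

        arb = Arbiter(["cpu", "gpu", "isp"])
        arb.request("cpu", "cacheline A", qos=1)
        arb.request("isp", "frame slice", qos=3)  # realtime traffic, higher tag
        print(arb.grant())                        # -> ('isp', 3, 'frame slice')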

    > OS scheduling doesn't help here!

    I didn't say it did. What I said was that your animations should be light-weight. If they are, then their compute & bandwidth requirements should be easily satisfied by getting a fair allocation (e.g. round-robin) of said resources.

    > If you want the hardware to make better choices

    Yeah, I get all of that. Again, I was just talking about UI animations. You shouldn't need system-wide QoS tagging of all your bus & memory transactions, just to get some light-weight animations to run smoothly. For the hard-realtime stuff, especially with more stringent compute or bandwidth demands, that's a different story.

    > - massive overprovision of HW

    And this is effectively what we're talking about, with light-weight UI animations.
  • egiee - Tuesday, November 30, 2021 - link

    Maybe next year will be the first time that MediaTek can beat Qualcomm in high-end SoCs.
  • Xedius - Tuesday, November 30, 2021 - link

    I know this is off-topic, but should I buy the new Moto G200 with the SD 888+ at $480 or wait for the new SD 8 Gen 1 devices? I'm quite fed up with the Exynos 9810 from my Note 9.
  • egiee - Tuesday, November 30, 2021 - link

    you may wait for SM8450 next year
  • eastcoast_pete - Tuesday, November 30, 2021 - link

    At that price, the Moto sounds like good value, if it checks all the important boxes for you. The alternative is to wait for 2022, possibly until Q2 with the ongoing supply mess, and pay at least twice what that G200 apparently goes for now.
  • Kangal - Sunday, December 5, 2021 - link

    There's a big difference in performance and efficiency going from 10nm/A75 to the 7nm/A76. But after that, there's a negligible improvement from the QSD 855 to the 855+/860/865/865+/870/888/888+. And there seems to be a performance regression with next year's QSD 8-gen1.

    So any upgrade is going to feel big, whether it's with next year's chips or ones from a few years ago.

    My recommendation is that you instead get the Samsung Galaxy A52-S. It's got 5G, OLED, 120Hz, and the new QSD 778 chip that is equal to the QSD 860 (ergo same as all the other chips).

    You'll get a proper flagship phone in the A52-S, without big compromises on waterproofing, the headphone jack, microSD, or software updates (3 years from Samsung).

    By the time that gets old, we will get proper ARMv9 chips and devices. This first gen is not impressive at all.
  • geoxile - Tuesday, November 30, 2021 - link

    Pathetic cache configuration.
  • mode_13h - Wednesday, December 1, 2021 - link

    Cache uses both area and power. So, increasing cache costs more $ and battery life. The only time that's not true is where cache saves you having to go out to DRAM, although there's going to be a crossover point where adding cache costs more power than the DRAM accesses it saves.
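
    (A back-of-envelope sketch of that crossover; the numbers below are completely made up for illustration, not measurements.)

        # Extra cache pays off only while the DRAM energy it avoids exceeds its own cost.
        E_DRAM_ACCESS  = 20e-9   # J per DRAM access (assumed)
        E_CACHE_ACCESS = 1e-9    # J per cache access (assumed)
        P_LEAK_EXTRA   = 5e-3    # W of extra leakage for the added SRAM (assumed)
        avoided_per_s  = 1e6     # DRAM accesses avoided per second thanks to the extra cache

        saved_w = avoided_per_s * (E_DRAM_ACCESS - E_CACHE_ACCESS)
        verdict = "net win" if saved_w > P_LEAK_EXTRA else "net loss"
        print(f"{verdict}: saves {saved_w*1e3:.1f} mW vs {P_LEAK_EXTRA*1e3:.1f} mW extra leakage")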
  • michael2k - Wednesday, December 1, 2021 - link

    Well, given how small the cache is at 4MB, and how large the DRAM is at 12GB to 16GB, and how much the CPU and GPU both use the cache, I don't see how it's ever going to avoid pulling from DRAM.

    But if your app + OS can live in 4MB instead of 5MB, you'll definitely see better battery life.
  • eastcoast_pete - Tuesday, November 30, 2021 - link

    I know this is probably the least interesting part of this new SoC for many, but I would love to try out the 720p at 960 fps "infinite recording" capability.
    Being able to do that lets one catch the wanted sequence of frames without worrying about starting too late or running out of the few seconds of buffer. Andrei, when you review a smartphone with one of those, can you try and play with that feature and let us know?
  • realbabilu - Tuesday, November 30, 2021 - link

    Andrei: the Snapdragon 865 in the Black Shark 3 and Mi 10T Pro performs better and more stably in Genshin Impact at 60 Hz compared to the Snapdragon 888 in the Mi 11 Ultra or Black Shark 3. Why? Heat throttling, I suppose. I think the heat problem is still the main issue on the 888 and will carry on to this new Snapdragon chip.
  • serendip - Tuesday, November 30, 2021 - link

    This could be the next SQ3 chip in a Windows on ARM device. I would love to get 2 X2 cores and 2 A710 cores for good all-out performance while still maintaining decent battery life in a fanless design.
  • Alistair - Wednesday, December 1, 2021 - link

    The problem is Apple uses a doubled-up chip for PCs. Shouldn't you want 4x X2 cores, and double the GPU here? Even for a fanless thin-and-light laptop.
  • name99 - Monday, December 6, 2021 - link

    It's really hard to understand QC's (and ARM's) decisions in this respect!
    We all get that they make decisions 4 or 5 yrs before a product ships. But haven't they been playing this game long enough now to appreciate what the state of the art will likely be at the time they ship? And yet they continue to target the competition at the time of design!

    I share your mystification with no 4 X1 laptop-targeted SoCs. And will throw in an additional "WTF is the 510 so weak?"
  • mode_13h - Tuesday, December 7, 2021 - link

    > WTF is the 510 so weak?

    Maybe Cortex-A5xx series is the new A3xx series. A7xx series is the new mid-range, and the X-series is the new high-end tier.
  • name99 - Tuesday, December 7, 2021 - link

    I agree that that would certainly be a better match to use cases (absolutely for laptops, probably so in some form for phones).
    And yet QC is not creating such SoC's. Nor is anyone else...
    Like I said, constantly scrambling to replicate (in five years) what Apple has today, rather than attempting to shoot for what Apple *will have* in five years.
  • mode_13h - Wednesday, December 8, 2021 - link

    > And yet QC is not creating such SoC's. Nor is anyone else...

    Huh? Such SoCs as *what*? Because ARM hasn't announced any A3xx cores for ARMv9, the A5xx are currently the de facto low-end core for ARMv9. Whether or not that's the eventual plan, that's the current reality. ...and so we see SoCs with A510 + A710 + X2 as the low/mid/perf cores.
  • AbRASiON - Tuesday, November 30, 2021 - link

    No AV1 on a processor for 2022.

    Embarrassing.
  • shabby - Wednesday, December 1, 2021 - link

    What do you need it for? Just wondering
  • brucethemoose - Wednesday, December 1, 2021 - link

    YouTube, streaming.

    Even some, er, *DIY* videos are released in AV1 now.
  • ChrisGX - Wednesday, December 1, 2021 - link

    For the record, MediaTek Dimensity 1200 and Google Tensor have AV1 decode now and we know that the MediaTek Dimensity 9000 and Kompanio 1200 and 1300T will support AV1 decode in 2022. The list of processors supporting AV1 decode will probably grow longer before the end of 2022. The Dimensity 9000, unlike the others, will support 8K playback (in addition to 4K).
  • SydneyBlue120d - Wednesday, December 1, 2021 - link

    Is Dolby Vision encoding supported?
    I think it is only decoding.
    No AV1 support is just unjustifiable.
  • scarp - Wednesday, December 1, 2021 - link

    Lack of AV1 decoder is a deal breaker for me. So unimpressed.
  • Nicon0s - Wednesday, December 1, 2021 - link

    So buy an iphone, oh wait...
  • 69369369 - Wednesday, December 1, 2021 - link

    Yawn. Nuvia cores or bust.
  • tkSteveFOX - Wednesday, December 1, 2021 - link

    Big doubt about the numbers Nuvia showed and Qualcomm actually going full custom cores. Definitely not before 2023, if ever.
  • Kangal - Thursday, December 2, 2021 - link

    Nuvia cores seem impressive.
    Whilst ARM cores are still struggling to catch up to the Apple A13, Nuvia cores are better than the latest Apple A15 cores. The caveat being that we will never buy those Nuvia cores on a phone.

    Qualcomm is going all-in by using Nuvia's cores in an Apple M1 type of configuration. So we will see some Windows 11s/Pro tablets and laptops releasing with it in the next 6-12 months.

    We might even get an Nvidia Shield TV competitor, just a Mini Box similar to the Mac Mini but running the Apple M1 equivalent Nuvia-Snapdragon. Possible but I doubt it.
  • michael2k - Thursday, December 2, 2021 - link

    But there are no Nuvia cores. They aren’t supposed to be out until 2023, which would align with Cortex X3 and X4 designs, as well as M2 or M3 parts.
  • Kangal - Thursday, December 2, 2021 - link

    Yeah, you're right.
    I thought we were getting some Qualcomm-Nuvia chips in like June 2022. I didn't see that they pushed it back to 2023. Right now, they're releasing the Snapdragon 8cx Gen3.

    By the time the Q-Nuvia chip releases, we should have the Cortex-X3 processor which is going to be a major design overhaul. So it's likely it will catch up to the Q-Nuvia projected performance anyways... and on top, it would be ARMv9 based whilst the Nuvia was designed per ARMv8 protocol.

    Even more troubling we will see the Apple A16 and then A17 at that period. Both expected to raise the performance envelope, so they'll be somewhat competing from that end. Also there's the Apple M2, Pro, Max chipsets that are likely to be released around the same time... and they would likely destroy the Q-Nuvia devices in efficiency and performance and software support.
  • mode_13h - Friday, December 3, 2021 - link

    > we should have the Cortex-X3 processor which is going to be a major design overhaul.

    According to whom? Or, are you saying that because there's an overhaul expected in the A710-successor, which is the likely basis for the X3?

    > So it's likely it will catch up to the Q-Nuvia projected performance anyways...

    That's like saying X3 will beat Apple, which I think is a stretch.

    > the Nuvia was designed per ARMv8 protocol.

    How do you know? And ARMv9 isn't so different than later ARMv8. If they weren't very far along when they learned of v9, they could've conceivably re-targeted.

    > the Apple ... released around the same time... ... would likely destroy
    > the Q-Nuvia devices in ... software support.

    Granted, Qualcomm's laptop ambitions are largely dependent on the success of Windows-on-ARM. That aside, I don't see you point about software support. Apple exists in its own world. Of course their software support will be strong. I just don't really see what that has to do with Qualcomm.
  • Kangal - Friday, December 3, 2021 - link

    The Cortex-X1 isn't really a performance core from the ground up, it's a "medium core" that's been tweaked/beefed up, so of course it will have disadvantages to something else, like Apple's Firestorm or Nuvia's cores.

    The Cortex-X2 design is mostly recycled from the Cortex-X1/Cortex-A78 (and Cortex-A710). And if you really look at it, it's all part of the same Cortex-A76 family. This is ARM's first attempt to get the ARMv9 protocol out there, that aspect alone takes resources away from development. Their performance and efficiency projections for next year have been a let down. That's the context you need to remember.

    The Cortex-X3 is supposed to be designed by ARM's European team, and they have a stronger track record than their US team. So I expect good things. On top of that, they're supposed to start on a new platform, e.g. Cortex-A730. I believe the difference will be akin to the Cortex-A57 versus Cortex-A72. So it will be worth waiting for.

    With that said, Apple's A15 isn't too far from their A14 and A13 processors. Whilst the X1 and X2 failed to catch up to the Apple A13... I do think the X3 will catch up and potentially surpass it. With the point being that Apple has the new lead with the Apple A16.

    The Nuvia cores have been in development for some time. Obviously they were built with ARMv8 in mind. And they do have silicon pressed and sampled already, i.e. working prototypes. And based on the information we know, the Nuvia cores are better than the Apple A15 in the labs. But now they will have to go back, tweak it, and convert it to ARMv9. Again, that will take development time, and potentially mean a less optimised design in the interim.

    Software Support is the part which Apple wins. It's all-round better. Not only can they support it longer, but businesses and developers trust that ecosystem, which means it will get priority for development and for optimisation. The other point is that Apple's software is very advanced: Swift and Metal are actually pretty awesome, and their SDK is the Gold Standard in the industry. Sure you have certain limitations, but within the boundary the software and hardware meld much closer. It's the result of throwing billions of dollars, thousands of in-house developers, and years of experience.

    A Qualcomm-Nuvia product has no answer for that. If they try AndroidOS, they will have lots of compromises. Whilst Microsoft is usually better, lately they've done a poor job with W-ART, Windows-on-ARM, and Windows 11. So I don't expect a Q-Nuvia laptop doing as well, so they will lose some performance and efficiency there again to the likes of Apple's iPad/Macbook.
  • tkSteveFOX - Wednesday, December 1, 2021 - link

    It's clear ARM designs have hit a brick wall. The improvements in architecture just aren't enough when compared to Apple's. While Android phones have more mature ISPs and modems, Jobs' lot's silicon is at least a generation if not two ahead in performance and efficiency.
    With Android phones now regularly costing as much as or more than iPhones, you as a customer are more likely to choose Apple in the long run.
  • Wilco1 - Wednesday, December 1, 2021 - link

    A 20% yearly improvement is hardly a brick wall. While the L2/L3 caches are still on the small side (especially when compared to Dimensity 9000), this should narrow the gap. You're not going to notice the difference though. Most phones use far older and slower cores (Samsung just announced a phone with 1.2/1.6GHz Cortex-A55!!!), and there a 10-20% difference will be very noticeable. But at the high-end? Absolutely not.

    As for efficiency, my S21 Ultra has the same battery life as iPhone 13 Pro according to AnandTech. And I paid far less for it. As a consumer that matters to me more than whether or not the iPhone is faster on SPEC or Geekbench.
  • high3r - Wednesday, December 1, 2021 - link

    Wow, 3095 mAh vs 5000 mAh. That's something then. :)
  • Wilco1 - Wednesday, December 1, 2021 - link

    Design capacity of S21 Ultra is 4855 mAh, estimated capacity around 4700 mAh according to Accubattery. It lasts 4 to 5 days on a full charge which is exceptionally good.

    iPhones use a huge SoC with lots of cache on the most advanced process which allows for a smaller battery, while Android phones use a much smaller SoC on a less advanced process and a larger battery instead.
  • TheinsanegamerN - Thursday, December 2, 2021 - link

    It will be something in 2 years when the Samsung can still go for more than 4 hours without keeling over. The iPhone, OTOH......
  • mode_13h - Wednesday, December 1, 2021 - link

    > While the L2/L3 caches are still on the small side (especially when compared to Dimensity 9000)

    Isn't the 9000 aimed primarily or exclusively at laptops? Why compare it to a phone SoC, then? Phones have smaller batteries and worse cooling.
  • Wilco1 - Wednesday, December 1, 2021 - link

    Eh what?!? This clearly states smartphone repeatedly - in huge letters, so impossible to miss... https://i.mediatek.com/summit-dimensity9000

    For laptops there is Cortex-A78C and Cortex-X1C.
  • mode_13h - Wednesday, December 1, 2021 - link

    Okay, my bad. I thought I remembered something about it being a Chromebook SoC.
  • name99 - Monday, December 6, 2021 - link

    You'd think so ("you won't notice the difference") but you do.

    I'm happy with my iPhone XS (A12) but when I used a friend's iPhone 13 Pro (A15) the extra speed was noticeable. I couldn't say WHERE the difference lies; my phone never stutters or glitches in animation and apps launch basically instantly. But even so you can feel that the iPhone13 Pro is faster, and not just subtly so.
    Maybe it's in the 120Hz? But even so, you need a SoC that can keep that 120Hz fed while never glitching...
  • Nicon0s - Wednesday, December 1, 2021 - link

    I mean, what exactly makes you think Apple objectively made bigger improvements in CPU architecture in the last few years?
    Apple's CPU approach is quite obvious: very wide CPU cores with huge amounts of cache. Realistically that can be replicated, but the thing is that stock ARM designs try to be efficient from a die-area perspective as well.
    If Samsung's 4nm is solid and SD8G1 Android phones don't have heat problems, you will hardly be able to sense a real-world difference in performance vs an iPhone with the A15.
  • mode_13h - Wednesday, December 1, 2021 - link

    > Apple's CPU approach is quite obvious: very wide CPU cores with huge amounts of cache.

    For starters. To see what else they've done have a look at name99's 350 pages of collected research on it:

    https://drive.google.com/file/d/1WrMYCZMnhsGP4o3H3...

    > stock ARM designs try to be efficient from a die area perspective as well.

    True, ARM's X-series cores are still just 7-series cores on steroids. They weren't truly designed for max performance from the ground-up.

    Their server cores suffer from a similar affliction, being derived from their mobile cores.

    > you will hardly be able to sense a real world difference in performante vs an iphone with A15.

    That's a whole different discussion. As we've seen, Apple's cores were designed with more lofty goals in mind than mere phone UX.
  • michael2k - Thursday, December 2, 2021 - link

    But you’ll see a much bigger difference when you stick the part in a less thermally and battery constrained laptop.

    That’s where Apple is using an 8- or 10-core design instead of a 6-core design.
  • Nicon0s - Friday, December 3, 2021 - link

    The SD8G1 is not a laptop SoC at all, not even a tablet SoC, so I don't see how such a comparison is relevant.
    Qualcomm's current biggest disadvantage is the lower-quality manufacturing process.
  • Kangal - Sunday, December 5, 2021 - link

    With the throttling and performance we have on the QSD 888+, would you rather classify that as a "Phone SoC" or a "Tablet SoC" ...?

    I think the QSD 8-Gen1 is going to behave very similar to it, possibly with its own quirks.
  • AciMars - Wednesday, December 1, 2021 - link

    Hey Andrei, can you confirm the S8 Gen 1 has a 4MB system cache? The SD888 reportedly uses just a 3MB system cache. Need clarification on the table information.
  • Andrei Frumusanu - Wednesday, December 1, 2021 - link

    I asked Qualcomm in our briefing and they said it was "unchanged" at 4MB. So it seems the 3MB prior reporting was wrong.
  • ZolaIII - Wednesday, December 1, 2021 - link

    Really interested in seeing A510 metrics and the impact of the shared L2 cache, and how it compares to older ARM small cores with a shared L2. The old implementation wasn't very good and led to performance degradation when both cores were active and had to wait on it; that's why they abandoned it in the first place.
  • ceisserer - Wednesday, December 1, 2021 - link

    What a boring release - especially keeping in mind the 888 wasn't that overwhelming either.
    No AV1 decode in 2022 (?!?!?!) and just 20% faster cores - guess I'll keep my Snapdragon 835 powered phone for another year, or if it dies, buy a MediaTek Dimensity 9000 based one...
  • defaultluser - Wednesday, December 1, 2021 - link

    The worst part of this: Qualcomm is still stuck on Samsung for the foreseeable future.

    The Dimensity 9000 will be a more efficient TSMC part!
  • Nicon0s - Wednesday, December 1, 2021 - link

    LoL, I guess that 4 times higher single core performance and 3 times better multicore performance than your SD 835 is not enough.
    Thinking about it, my SD 778 has double the CPU performance of your 835.
  • mode_13h - Wednesday, December 1, 2021 - link

    OMG. If that floor plan is at all to scale, I'm amazed by the relatively small proportion devoted to the CPU cores.

    Even if it's not, the degree to which they de-emphasize the CPU portion is intriguing. I expect that'll change once the Nuvia cores take their place.
  • name99 - Monday, December 6, 2021 - link

    Don't take a look at the M1 Max then. Your head might explode :-)
  • Raqia - Wednesday, December 1, 2021 - link

    Congrats on your new job Andrei, I'll miss your reviews. Hopefully you'll still post some nuggets once in a while somewhere...
  • ChrisGX - Wednesday, December 1, 2021 - link

    Ditto
  • mode_13h - Wednesday, December 1, 2021 - link

    Third.

    Wow, the tweet probably got more replies than this article has comments, so far.

    https://twitter.com/andreif7/status/14659755108660...
  • cfenton - Wednesday, December 1, 2021 - link

    Will the improved hardware image processing mean that second-tier OEMs like ASUS, Nokia, and Motorola will finally have decent camera performance? It sounds like a lot of the processing that was done in software is moving to hardware, which should create a higher floor for image processing. Or am I understanding that wrong, and final results will still be highly software dependent?
  • mode_13h - Wednesday, December 1, 2021 - link

    Knowing Qualcomm, they'll probably license each of the photo & video enhancements, so that even phones with the same SoC won't offer all of the features.
  • gijames1225 - Wednesday, December 1, 2021 - link

    Sounds like a decent jump forward. I'm still quite happy with my LG V60 (and I'll miss the headphone jack and SD card whenever I move on), but probably in another year or two the updates to camera and SoCs will be enough that I'll leap forward.
  • Kangal - Wednesday, December 1, 2021 - link

    If it ever breaks, you can try to buy another Used V60. If that's not fine, the Samsung Galaxy A52-S is a worthy upgrade.

    Otherwise, it's going to be financially painful to (import?) a Sony Xperia 1 or 5, with mk ii/iii/iv.

    It's hard to find a high-end phone these days that has a headphone jack and waterproofing.
  • mode_13h - Thursday, December 2, 2021 - link

    I thought I wanted a headphone jack, but I haven't even been using it in my phone of the past 3 years. It turns out that I value noise cancellation much more, and most of those headphones are bluetooth. Even ones that aren't still have an ADC + DAC, rendering it almost pointless to use a cord vs. Bluetooth with AptX HD or LDAC.
  • TheinsanegamerN - Monday, December 6, 2021 - link

    My 30-year-old truck didn't magically develop Bluetooth, and neither did my 15-year-old car, my home stereo setup, my kitchen speakers, etc. Until they do, keep that headphone jack around!
  • mode_13h - Tuesday, December 7, 2021 - link

    Point taken. However, if you one day fancy a phone without a headphone jack, I'm sure you can get Bluetooth receivers for your car and elsewhere. Just sayin'.
  • Oxford Guy - Monday, December 13, 2021 - link

    The BT audio codecs I have encountered have also been clearly audibly inferior to Red Book, which is slightly audibly inferior to 20-bit/48kHz audio (basically the limit of human hearing resolution).

    The ‘loudness war’ and MP3/AAC have done their best to ruin sound quality, but some of us still care. Analog headphone jacks are hardly ideal, but they’re better than BT.
  • mode_13h - Monday, December 13, 2021 - link

    Suit yourself, but I've been satisfied with AptX HD, on my Sennheisers, and LDAC, on my Sony headphones (high-quality setting). Of course, you'll want to be sure your phone has good support for these, but I think Sony provides the LDAC encoder, royalty-free. Qualcomm owns AptX, but I'm guessing they charge extra for HD, because not all phones support that version.

    Unless you're in a perfectly quiet environment, it's hard to appreciate the benefits of a fully-lossless signal path. You might find that anything you give up by using one of the better BT codecs, you make up for by lowering the noise floor via noise-canceling. I've used sealed, corded ear buds, but the cord noise bugs me too much and they're not comfortable to leave in for as long.
  • Oxford Guy - Tuesday, December 14, 2021 - link

    The proliferation of BT audio standards also feeds the point about long-term compatibility.

    Corporations love the subscription model ‘must rent everything’ paradigm but not everyone is yet indoctrinated suitably.
  • mode_13h - Friday, December 17, 2021 - link

    > The proliferation of BT audio standards

    I'm not sure AptX and LDAC actually *are* standards. AFAIK, they're strictly proprietary. That's another whole discussion, in itself.
  • Tams80 - Tuesday, December 14, 2021 - link

    Umm, that's great for you, but no one asked you to try and convince us that we don't want a headphone jack.
  • vladx - Wednesday, December 1, 2021 - link

    "This admission was actually a breath of fresh air and insight into the situation, as it’s been something I’ve especially noted in our Kirin 9000, Snapdragon 888 and Exynos 2100 and Tensor deep-dives in criticizing all the new chips. "

    Why are you lying here, Andrei? Neither you nor any other AnandTech writer has made a deep dive of the Kirin 9000. Writing such useless lies in your last AnandTech article is downright pathetic.
  • casteve - Wednesday, December 1, 2021 - link

    This 18-bit ISP business seems like marketing BS. Consumer cameras with much larger pixels can maybe approach 12-bit dynamic range. I'd expect cell phone cameras to be closer to 10-bit at best. Sure, you can have fun with HDR stacking to average out noise...but 8-bits worth?
  • mode_13h - Wednesday, December 1, 2021 - link

    > Sure, you can have fun with HDR stacking to average out noise...but 8-bits worth?

    They take each image with different exposure values and weight/threshold them when stacking.
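    For the curious, here's a minimal sketch of that kind of exposure-weighted merge (illustrative only, not Qualcomm's actual pipeline; the hdr_merge name, the 12-bit inputs, and the simple brightness-based weighting are my own assumptions):

        #include <stdint.h>
        #include <stddef.h>

        /* Hypothetical exposure-weighted stacking: frames[f][p] holds 12-bit raw
         * samples, exposure[f] is each frame's relative exposure (e.g. 1, 4, 16,
         * sorted shortest first). Output is one linear float value per pixel. */
        static void hdr_merge(const uint16_t *const frames[], const float exposure[],
                              size_t n_frames, size_t n_pixels, float *out)
        {
            for (size_t p = 0; p < n_pixels; ++p) {
                float sum = 0.0f, wsum = 0.0f;
                for (size_t f = 0; f < n_frames; ++f) {
                    uint16_t v = frames[f][p];
                    if (v >= 4095)
                        continue;                  /* clipped: this frame tells us nothing here */
                    float w = 1.0f + (float)v;     /* brighter samples carry less relative noise */
                    sum  += w * ((float)v / exposure[f]);  /* normalise to a common linear scale */
                    wsum += w;
                }
                /* If every frame clipped, fall back to the shortest exposure's ceiling. */
                out[p] = (wsum > 0.0f) ? sum / wsum : 4095.0f / exposure[0];
            }
        }

    If the exposures span, say, a 64:1 ratio, merging 12-bit captures like this covers roughly 12 + log2(64) = 18 bits of linear range, which is presumably the kind of arithmetic behind an "18-bit" figure.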
  • casteve - Wednesday, December 1, 2021 - link

    Great, maybe they can clean up the least significant bits on the 12-bit ADC and have a couple more bits of wiggle room for posterity. 18 bits... doubtful.
  • Kamen Rider Blade - Wednesday, December 1, 2021 - link

    But how many years of OS / Security support will the device that houses these SoC's have?
  • mode_13h - Wednesday, December 1, 2021 - link

    Hey, at least it's ARMv9. That'll at least have more potential for long-term support than ARMv8 devices.
  • Kangal - Wednesday, December 1, 2021 - link

    You're right about that.
    But I actually think performance is going to regress next year. ARM's projections have always been optimistic, and this time the gain was lower than expected, even though it's based on a better node with more cache.

    This is going to be a QSD 801/805 vs QSD 808/810 fiasco again. Especially due to Qualcomm pushing the voltage to the throttling limits.

    The Gen2/2023 will probably use much more advanced and efficient ARM cores. And the node will probably mature then, to bring Samsung/TSMC close to each other. I also think by then, the Android software will get a bit more optimised.

    These small advances will come together, like in 2016, when Android 6 and QSD 821 phones were running circles around Android 5 and QSD 810 phones. Example, ZTE Axon 7 versus HTC One M9.
  • TheinsanegamerN - Wednesday, December 1, 2021 - link

    I'm excited to see the successor to the positively neanderthal (at this point) A55 cores. Hopefully the A510 brings some much needed updates to low power and we can move more daily tasks to the smaller cores. The A710 and X2 will hopefully solve the massive power drain issues the 888 had.
  • Nicon0s - Wednesday, December 1, 2021 - link

    "The A710 and X2 will hopefully solve the massive power drain issues the 888 had."

    SD 888's heat problems were because of the GPU not the CPU. I don't think the CPU was any less efficient than the one in the SD 865.
  • iphonebestgamephone - Thursday, December 2, 2021 - link

    Check the SPEC scores on AnandTech; it was indeed less efficient, for both the X1 and the A78 cores.
  • Kangal - Thursday, December 2, 2021 - link

    That's not true.
    It throttles hard doing CPU-intensive tasks; I found the GPU fairly efficient, tbh. Overall, the QSD 865 was inefficient and a letdown due to throttling, and Qualcomm merely doubled down on that for the QSD 888+.

    The QSD 870 is decent. The best news this year was actually the QSD 778/780 chipsets, which offered performance rivalling the flagships at a fraction of the cost to the OEMs and Consumers.
  • Fulljack - Sunday, December 12, 2021 - link

    How is the SD870 decent yet the SD865 "inefficient and a letdown" when both are the exact same chip, just binned differently?
  • Kangal - Monday, December 13, 2021 - link

    I don't know exactly why, but the QSD 870 doesn't throttle like the QSD 865.
    I know they're the "same" chip, but it's probably a combination of a newer node, better binning, and software tweaks. Mostly the software side: not just better-optimised drivers, but likely deliberate tuning by Qualcomm to not be thermally aggressive. They would have done this so that it doesn't outshine their most expensive product (888).

    In terms of unrealistic synthetic benchmarks, the QSD 888+ leaves the rest in the dust.
    In real-world use, it's on par with the chips listed below*. Actually, it's quite often slower when it throttles early and for extended periods, and when running on OEMs' sloppy bloatware. There's really no legitimate reason for someone to upgrade from a 2019 Android phone to a 2021 one if we're only concerned about the SoC. And I believe the same is true for 2022 Android phones (first-gen ARMv9). The exception is that there has been a decent improvement in the GPU, but there isn't much on Android to take advantage of that extra performance (maybe emulators?).

    *QSD 855, 855+, 778G, 780, 860, 865, 865+, 870, 888, 888+
  • mode_13h - Monday, December 13, 2021 - link

    > There's really no legitimate reason for someone to upgrade
    > from the 2019 Android phone to ... 2022 Android phones (first-gen ARMv9).

    If we don't consider these as phones, but compare the cores on their merits, then you can't ignore SVE2. How much practical benefit that has for the typical phone user remains to be seen, because things like deep learning and many other compute-heavy workloads will be running on the NPU, DSP, ISP, or GPU.
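    To make the SVE2 point concrete, here's a minimal vector-length-agnostic sketch using the Arm C Language Extensions (my own illustrative saxpy_sve example, assuming GCC/Clang with something like -march=armv9-a+sve2, not anything from the article):

        #include <arm_sve.h>
        #include <stdint.h>

        /* out[i] = a * x[i] + y[i], written once, correct for any hardware vector width. */
        void saxpy_sve(float a, const float *x, const float *y, float *out, int64_t n)
        {
            for (int64_t i = 0; i < n; i += (int64_t)svcntw()) {
                svbool_t pg = svwhilelt_b32_s64(i, n);          /* predicate also masks the tail */
                svfloat32_t vx = svld1_f32(pg, &x[i]);
                svfloat32_t vy = svld1_f32(pg, &y[i]);
                svfloat32_t r  = svmla_n_f32_x(pg, vy, vx, a);  /* r = vy + vx * a */
                svst1_f32(pg, &out[i], r);
            }
        }

    The same binary scales to whatever vector width a future core implements, and the predicate removes the need for a scalar tail loop. That's the longevity argument for ARMv9 silicon, even if most of the heavy lifting does end up on the NPU/DSP/GPU.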
  • Zoolook - Wednesday, December 8, 2021 - link

    Don't forget it's produced on a hot process; Samsung is far behind TSMC on efficiency.
  • mode_13h - Wednesday, December 1, 2021 - link

    Based on their initial coverage, the A510 looks relatively disappointing.

    https://www.anandtech.com/show/16693/arm-announces...
  • johnnycanadian - Wednesday, December 1, 2021 - link

    Let's please hope that phone manufacturers disable the always-on camera?

    https://www.theverge.com/22811740/qualcomm-snapdra...
  • fasterquieter - Wednesday, December 1, 2021 - link

    I have often wondered, if chips like these can perform crazy feats like 8K30 HDR encoding, why don't camera manufacturers just buy these instead of making their own chips?
  • vladx - Wednesday, December 1, 2021 - link

    Outside of Panasonic, I'm not sure anyone else in the camera industry is designing its own 100% proprietary chips. And the reason they don't buy this type of SoC is price, plus the need for custom solutions incorporating their own IP.
  • mode_13h - Wednesday, December 1, 2021 - link

    Don't professional cameras typically use some Studio Profile with like 4:4:4 color and maybe no P or B frames? Although you could probably get all that to work on most phone SoC media engines, it probably wasn't implemented by the manufacturer.
  • GeoffreyA - Thursday, December 2, 2021 - link

    Andrei, we appreciate all your material and will miss you. I was always stunned by your terrific, detailed articles, and your mastery over this field. We shake our fists angrily at the company that won you over. All the best in your future endeavours. Take it easy.
  • Speedfriend - Friday, December 3, 2021 - link

    I have to second that; I will miss the deep-dive articles into new SoCs. Yours were the only ones that truly looked at the power/performance trade-off.
  • mode_13h - Friday, December 3, 2021 - link

    They should get him to recruit his replacement, before letting him go!

    BTW, his twitter profile says he's in Luxembourg? I always sorta wondered. Wow.
  • GeoffreyA - Saturday, December 4, 2021 - link

    That's a good idea.
  • mode_13h - Sunday, December 5, 2021 - link

    At the very least, I hope he trains any future reviewer in his test tools & methodologies.
  • GeoffreyA - Monday, December 6, 2021 - link

    I wonder if Anandtech has found anybody as yet. This will be a blow to the site's mobile coverage.
  • mode_13h - Tuesday, December 7, 2021 - link

    Yeah, I noticed the same thing.
  • Arbie - Thursday, December 2, 2021 - link

    Charlie at SemiAccurate has a very different view on this; worth a read and not paywalled.
  • Kangal - Thursday, December 2, 2021 - link

    That was a fun read. I forgot that site existed.
    Even Google Search buries them.
  • mode_13h - Friday, December 3, 2021 - link

    IMO, it's their own fault. I hate the way they tease on the public side of the paywall, especially when the subscriptions are so expensive that only a few investors and industry insiders would bother to pay it.

    The other thing they should do is make the older articles free, especially when Charlie references them in his gloats about being right, which were pretty much the only free articles they had the last time I checked. It hurts his own case if you cannot go back and actually read the article to see what he had said.

    The thing that bugs me most about Charlie is that he's not good at keeping a level of detachment. He lets his opinions color his reporting too much. I don't mind that he has opinions and that he's vocal about them, but he does his readers a disservice when he's not clear about the facts he's gathered vs. his interpretation and projections. And I don't trust him to report facts that run counter to his opinions. It reads almost like a semiconductor industry tabloid, rather than a credible resource.

    Yet, in spite of all that, I'd still probably kick him $10/year to read the stuff. Maybe that's what he needs: a tiered subscription, where the top tier gets articles as they're published, the next tier gets them after 30 days, the bottom tier gets them after 90 days, and they're free after 180 days or a year.
  • yankeeDDL - Saturday, December 4, 2021 - link

    I completely agree on the peak-performance/consumption remarks.
    I think that with Snapdragons being so far behind the Bionic (I'd say easily 1-2 generations in terms of performance), it is no wonder that vendors look to... cheat the benchmarks. Already today it is a sorry sight looking at the 888 vs the A15 and A14, and that is with the 888 in "furnace" mode.

    I don't understand how we can be at a point where a market leader like Qualcomm is simply unable to come anywhere in the neighborhood of what a single team at Apple can do. It's quite amazing actually. So until they get their act together and create an architecture that can at least compete side-by-side with Apple, without going nuclear, this issue won't go away.
  • yeeeeman - Sunday, December 5, 2021 - link

    When do we get a performance preview?
  • mode_13h - Sunday, December 5, 2021 - link

    Probably whenever Qualcomm releases a reference platform. I don't follow phone news to say when that's likely, but you can probably look to prior precedent.

    BTW, if you didn't notice some of the above posts, this is Andrei's last article for the site. I posted the twitter link, if you want more details (but he's not disclosing his new employer). I mention that by way of offering that future mobile SoC reviews are not a foregone conclusion.
  • yeeeeman - Monday, December 6, 2021 - link

    The reference platform has been released... Arun from Mrwhosetheboss on YT already has a video with it. I asked when they get permission to post benchmarks.
  • domboy - Monday, December 6, 2021 - link

    Odd that there has been no mention of the 8cx Gen 3 on AnandTech that I've seen. It was also announced at this same event.

    https://www.xda-developers.com/qualcomms-snapdrago...
  • mode_13h - Tuesday, December 7, 2021 - link

    Considering how much other news this site misses, I'm not surprised. The editors are even retweeting interesting stuff on their twitter feeds without so much as a passing mention of it, on this site. I wonder if there are some financial problems at play.
  • Raqia - Thursday, December 9, 2021 - link

    More details here:

    https://hothardware.com/reviews/snapdragon-8-gen-1...
    https://www.notebookcheck.net/First-benchmarks-Thi...

    Big improvements in GPU (including sustained performance) and AI benchmarks. A respectable single threaded bump in Geekbench, but actually relevant CPU performance (Speedometer 2.0) for browsers is up quite a bit.

    The biggest gains may be in the new ISP, but that can't be benchmarked and will come down to the qualitative improvements in actual implementations.
  • mode_13h - Friday, December 10, 2021 - link

    Thanks!
  • Kuhar - Tuesday, December 14, 2021 - link

    I agree. I don't know what is going on with AnandTech. Latest article from 30.11.2021??
  • Oxford Guy - Monday, December 13, 2021 - link

    8K video in a phone? Hallelujah.
  • zodiacfml - Thursday, December 16, 2021 - link

    Underwhelming. No update on their chip vs M1?
