
  • Iketh - Tuesday, May 26, 2020 - link

    reading that was pulling teeth

    here's your paragraph:

    "The async feature from an energy efficiency perspective is proclaimed to be around 6-13% depending on the workload. This is actually a bit of a more complex figure in my view. The main problem in my view is that to achieve this, the SoC vendor needs to actually go ahead and employ a second voltage rail for the GPU to gain the most benefit of the asynchronous frequencies. The efficiency benefit here is small enough, that it begs the question if it’s not just cheaper to add in a few more extra cores and lock them lower, rather than incurring the cost of the extra PMIC rail, inductors and capacitors. It’s an easy efficiency gain for flagship SoCs, but I’m really wondering what vendors will be deploying in the mid-range and lower."

    here's the same paragraph cleaned up a bit:

    For energy efficiency, the async feature claims to improve 6-13% depending on the workload. This seems difficult to implement in my opinion. The main problem is the SoC vendor needs to employ a second voltage rail for the GPU to see the biggest benefit of asynchronous frequencies. The efficiency benefit is small enough that it begs the question if it’s cheaper to simply add more cores and clock them lower rather than incurring the cost of the extra PMIC rail, inductors, and capacitors. It’s an easy efficiency gain for flagship SoCs, but I wonder what vendors will deploy in the mid and low range.
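    (A back-of-envelope sketch of that "more cores, clocked lower" trade-off, using the classic dynamic-power relation P = C·V²·f; the operating points below are made up for illustration, not taken from the article:)

        # Compare a baseline GPU config against a wider, slower one at equal throughput.
        # Only the P = C_eff * V^2 * f relationship is real; every number is hypothetical.

        def dynamic_power(c_eff, voltage, freq_hz):
            """Classic CMOS dynamic power (arbitrary units)."""
            return c_eff * voltage ** 2 * freq_hz

        # Baseline: 20 cores at 850 MHz, assumed to need 0.80 V on the shared rail.
        p_base = 20 * dynamic_power(1.0, 0.80, 850e6)

        # Wider/slower: 24 cores at the same total core-GHz, assumed to run at 0.75 V
        # thanks to the lower clock.
        p_wide = 24 * dynamic_power(1.0, 0.75, 850e6 * 20 / 24)

        print(f"wider/slower config uses {p_wide / p_base:.0%} of baseline power")  # ~88%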
  • psyclist80 - Tuesday, May 26, 2020 - link

    Thanks Teach, here's your apple! now 'eff off...find someone else to pick on to satisfy your ego
  • Cellar Door - Tuesday, May 26, 2020 - link

    Actually - it is your reading comprehension, that is the issue here.

    Please, refrain from blaming others for it.
  • jjpaq - Tuesday, May 26, 2020 - link

    Are you arguing that it's pointless to ever edit text for concision and interest?

    The extra commas sprinkled throughout your reply seem to make his point perfectly.
  • Spunjji - Thursday, May 28, 2020 - link

    Are you familiar with the concept of a straw man? 🤦‍♂️
  • dotjaz - Tuesday, May 26, 2020 - link

    Nope, I can understand the original article effortlessly but it doesn't mean it's pleasant to read.
  • mkozakewich - Wednesday, May 27, 2020 - link

    No, he seemed noticeably more flustered. Must have been a deadline.
  • Alistair - Tuesday, May 26, 2020 - link

    Nice. It is nice to see simple and direct language.
  • Spunjji - Thursday, May 28, 2020 - link

    I agree that the original paragraph could have been cleaned up, but I actually found yours not a lot better.
  • HardwareDufus - Thursday, May 28, 2020 - link

    Thank you for taking the time to teach the author by example. Concise vocabulary and brief statements are a must in tech journalism as the subject matter at hand is quite complex.

    While your intent was good, you might have refrained from using the phrase 'like pulling teeth'. A phrase like that will immediately put people on the defensive, as is evident from several responses to your post.

    You write very well, and your constructive criticism and suggestions are helpful. Strive to serve it in a manner where the recipient is more likely to accept it. The best way to offer constructive criticism is the 3C rule: Commend, Counsel, Commend.
  • Oldiewan - Saturday, September 11, 2021 - link

    Anal much?
  • Oldiewan - Saturday, September 11, 2021 - link

    Reading that was like pulling teeth.
  • brucethemoose - Tuesday, May 26, 2020 - link

    TBH, does ARM really need to design a top end GPU?

    Apple has definitely-not-PowerVR, and Imagination is hungry for any customer they can get. Samsung's going AMD. Broadcom, Qcom, Nvidia, and HiSilicon are all firmly in-house.

    That leaves Mediatek and small fish, which is not a very demanding market.
  • SarahKerrigan - Tuesday, May 26, 2020 - link

    HiSilicon uses Mali; they aren't in-house.
  • brucethemoose - Tuesday, May 26, 2020 - link

    Ah, yeah, don't know what I was thinking. And HiSilicon is a big fish.

    Still, there's more demand from big customers for licensed CPUs than GPUs.
  • dotjaz - Tuesday, May 26, 2020 - link

    True for now, but there's no guarantee they won't switch to the PowerVR B-series soon, given that the current political environment has cut them off from all other parts of ARM (bar the perpetual architectural licence).
    There's really no economy of scale left once they inevitably switch to an in-house CPU core in the next year or two.
  • s.yu - Sunday, May 31, 2020 - link

    In house based on what? RISC-V? Know that if they're banned, they can't license any further updates to ARMv8 either.
  • Kamen Rider Blade - Tuesday, May 26, 2020 - link

    Apple has kicked PowerVR to the curb and gone in-house for their GPUs.
    Samsung said screw it, let AMD take over the GPU side.
    Qualcomm has their own Adreno brand that they bought from AMD's Imageon line.
    nVIDIA is obviously using their own GPU.
    MediaTek switches between Mali and PowerVR depending on product line / generation.
    HiSilicon has generally stuck with Mali.
    Broadcom has their own VideoCore line of GPUs.
  • regsEx - Tuesday, May 26, 2020 - link

    Samsung still only uses Mali. AMD is just a rumor so far.
  • Deicidium369 - Tuesday, May 26, 2020 - link

    It is a cry for help.
  • dotjaz - Tuesday, May 26, 2020 - link

    What rumor? By rumor you mean both AMD and Samsung already announced last year that RDNA will power Exynos? Samsung will still use Mali exclusively for at most one more generation. That's it. ARM can count on Samsung being a Mali customer anymore.
  • dotjaz - Tuesday, May 26, 2020 - link

    *can't
  • lucam - Tuesday, May 26, 2020 - link

    Apple has always used PowerVR, and they will again. You must have been in hibernation for the last 12 months, as you clearly haven't been reading the news.
  • Spunjji - Friday, May 29, 2020 - link

    I haven't kept entirely up-to-date with this, but my understanding was that Apple went from using scaled-up versions of Imagination GPUs to using their "own design", a design which was miraculously compatible with all the low-level APIs they designed for Imagination tech. As a result of Apple's announcement, IMG's stock plummeted and they had to sell off assets and seek investment from China. Subsequently Apple went back to paying Imagination a licence fee, almost as if the whole thing was an attempt to bully a smaller company who couldn't afford extended litigation for IP theft into giving them better licencing terms.

    In conclusion: Apple pull this nonsense on a regular basis, and they suck.
  • lucam - Tuesday, May 26, 2020 - link

    Apple use PowerVR, update your source
  • Zoolook - Tuesday, June 2, 2020 - link

    PowerVR is a division of Imagination, so technically you are both correct.
  • eastcoast_pete - Tuesday, May 26, 2020 - link

    My first thought was "I smell a Windows-on-ARM laptop push here." If any of ARM's licensees (Samsung, maybe MediaTek?) wants to go down that road and join Qualcomm, they'll need a beefier graphics setup to compete with Qualcomm's 8cx designs. Maybe 24 G78 cores will get Samsung & Co. closer to that goal. I am actually rooting for this to happen; the low-power, always-connected ultraportable space can use some new entries.
  • eastcoast_pete - Tuesday, May 26, 2020 - link

    Also, is it just me, or is ARM basically going the route used by dGPUs in the PC/Mac space (many cores/compute units)? As Andrei pointed out, that comes at the expense of some power-consuming repetition/redundancy, which is okay in a desktop or powerful laptop (big battery), but problematic if you're trying to run it all day and carry it with one hand.
  • lucam - Tuesday, May 26, 2020 - link

    New GPU, same garbage all over again..
  • tkSteveFOX - Wednesday, May 27, 2020 - link

    Apart from MTK and Huawei, most will drop Mali cores, as the architecture doesn't scale well at all.
    Anything over 7-8 cores and you start to lose performance and drive consumption up.
    When Samsung finally unveils its RDNA-powered GPU, even Apple's cores might lose their crown.
    I doubt it will be very power efficient though, just like Apple's.
  • lightningz71 - Wednesday, May 27, 2020 - link

    Haven't the various mobile RISC cores gotten very close to hitting the wall with respect to memory bandwidth? Feeding the G78 in a full-house config with enough data to let it reach its full throughput potential would require fairly massive amounts of RAM bandwidth. All that bandwidth would require either some very wide channels and a lot of memory ICs on the phone motherboard, or some quite power-hungry HBM stacks. At best, we get a couple of channels of low-power DRAM that spends as much time as possible in low-power mode. I just don't see it being very useful on a mobile device.

    At the very best, if it's used in an ARM Windows laptop, and if it gets a solid memory subsystem attached to it, it MAY be competitive with other iGPU solutions available in the market. However, once you go down that road, you have to ask yourself: is it worth putting that many resources into the CPU and its memory subsystem when there are low-power dGPU solutions out there that will still run rings around it in performance and not cost any more per unit to integrate into your solution? Even if it costs a bit more power to do so, in a laptop you have a much larger form factor and much larger power budgets to play with.
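    (For a sense of scale, a rough peak-bandwidth comparison; the memory configurations below are assumptions for illustration, not figures from the article:)

        # Back-of-envelope peak DRAM bandwidth; all configurations are assumed.

        def peak_bandwidth_gbs(channels, bits_per_channel, megatransfers_per_s):
            """Peak bandwidth in GB/s = channels * bus width in bytes * MT/s."""
            return channels * (bits_per_channel / 8) * megatransfers_per_s * 1e6 / 1e9

        # Typical flagship phone: 4 x 16-bit LPDDR5 at 6400 MT/s (assumed).
        print(f"Phone LPDDR5:   {peak_bandwidth_gbs(4, 16, 6400):.1f} GB/s")    # ~51 GB/s

        # Entry-level desktop dGPU for comparison: 128-bit GDDR6 at 14000 MT/s.
        print(f"128-bit GDDR6:  {peak_bandwidth_gbs(1, 128, 14000):.1f} GB/s")  # ~224 GB/s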
  • ballsystemlord - Thursday, May 28, 2020 - link

    Spelling error:

    "The core's cache shave also had they cache maintenance algorithms improved with better dependency tracking,..."
    "the" not "they":
    "The core's cache shave also had the cache maintenance algorithms improved with better dependency tracking,..."
  • Lobstermobster - Saturday, June 6, 2020 - link

    How can we compare this new mobile GPU to others made by Qualcomm, Nvidia and Imagination? How many teraflops do these mobile GPUs have? I know the Switch uses a Tegra chip that can go up to 1 teraflop in docked mode.
  • iphonebestgamephone - Sunday, June 7, 2020 - link

    What's the use of knowing the flops anyway?
  • IUU - Friday, October 2, 2020 - link

    "Whats the use of knowing the flops anyway." I believe it is one of the most important metrics to know. Because a chip will always perform a certain percentage of its theoretical performance, often about 60 to 70% of theoretical. So , if a chip's theoretical performance is say X5 compared to another chip, no-one can fool you with the usual nonsense, "yes but it is real world performance that matters" . Because a x5 theoretical performance wins hands down in real world scenarios, no matter what marketing gimicks would want you to believe.

    That said , just consider , the modern fashion of hiding details about architecture , of a lot of companies, lately even by Intel, and you will see , there is an effort to go by marketing only to hide potential weaknesses.
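    (For anyone who wants to run the theoretical-FLOPS arithmetic themselves, a quick sketch; the per-core FMA width and clock used below are assumptions, not official figures:)

        # Peak FP32 throughput = cores * FMA lanes per core * 2 ops (mul + add) * clock.

        def theoretical_tflops(cores, fma_lanes_per_core, clock_ghz):
            return cores * fma_lanes_per_core * 2 * clock_ghz / 1000.0

        # Hypothetical 24-core G78 configuration, assuming 32 FP32 FMA lanes per core
        # and a ~850 MHz clock.
        peak = theoretical_tflops(24, 32, 0.85)
        print(f"peak:      {peak:.2f} TFLOPS")         # ~1.31 TFLOPS
        print(f"sustained: {0.65 * peak:.2f} TFLOPS")  # applying the 60-70% rule of thumb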
