54 Comments

  • Dragonstongue - Friday, June 5, 2020 - link

    It would be nice if GPUs in general would detect the program and adjust power and performance for overall power use, something that is far from ideal at this moment in time. Imagine how much power worldwide would be saved (as well as wallets) if a high-performance GPU could downclock from, say, 1800MHz to 500MHz (or less) and still deliver the full performance expected. Granted, this would only work with older games/apps.

    Take Diablo 3, for example: no way in heck does today's mid-to-high-end GPU NEED to run full clocks for it. That wastes a bunch of power for nothing (not to mention producing lots of waste heat).

    On that note, it would be excellent if we the users could easily set the low 2D (basically idle), 3D, and high 3D clocks overall, so that anyone who likes to tinker could do such things with a few clicks.

    That SmartShift sounds exactly like what the PS5 will be doing. Odd this has not been done for years now; things like Optimus are great on paper, but the real world has been not exactly great.

    Hope it works out for them overall. Maybe it will lead to even better things down the line for PC, laptop, mobile and so forth, as the many makers seem overall against having a very nice sized battery (and certainly against top-notch cooling, regardless of price). At least in theory this will save quite a bit of power overall by being "intelligent" about allocating the power budget, keeping performance up while keeping actual power used as low as possible, as often as possible. To me, that is what it should be.

    You know, to save them 4-cell batteries and all that, LOL.
  • Phartindust - Friday, June 5, 2020 - link

    Have you tried Radeon Chill? It lets you set a max frame rate, which reduces power usage the way you suggest.
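    The reason a frame cap saves power is simple enough to sketch in a few lines. This is just the general idea of any frame limiter, not how Chill itself is implemented (Chill also lowers the cap further when your input goes quiet):

        import time

        TARGET_FPS = 60
        FRAME_BUDGET = 1.0 / TARGET_FPS   # seconds allowed per frame at the cap

        def capped_loop(render_frame):
            """Render at most TARGET_FPS frames per second."""
            while True:
                start = time.perf_counter()
                render_frame()                    # only one frame's worth of GPU work
                spare = FRAME_BUDGET - (time.perf_counter() - start)
                if spare > 0:
                    # This sleep is the power saving: the GPU sits idle and can drop
                    # to lower clock/voltage states instead of racing ahead to render
                    # frames the display will never show.
                    time.sleep(spare)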
  • yeeeeman - Saturday, June 6, 2020 - link

    That is not the point here. The idea is to maximize frame rates by shifting power to the unit that needs it more. Say a game uses the CPU a lot, but the GPU not so much; because the default split of power favors the GPU over the CPU, you will get bad performance. This tech can detect that scenario and shift the power balance toward the CPU. You get better performance and the power usage stays the same.
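    Conceptually it's just rebalancing one shared budget. Something like this toy sketch (the numbers and the split rule are made up for illustration, not AMD's actual algorithm):

        PLATFORM_BUDGET_W = 45   # shared CPU+GPU power budget, made-up figure
        FLOOR_W = 10             # minimum each side always keeps

        def split_budget(cpu_util, gpu_util):
            """Hand the spare watts to whichever side is busier."""
            spare = PLATFORM_BUDGET_W - 2 * FLOOR_W
            total = (cpu_util + gpu_util) or 1.0   # avoid divide-by-zero when both idle
            cpu_w = FLOOR_W + spare * cpu_util / total
            gpu_w = FLOOR_W + spare * gpu_util / total
            return cpu_w, gpu_w

        # CPU-heavy game, GPU mostly waiting: most of the spare budget goes to the CPU
        print(split_budget(cpu_util=0.9, gpu_util=0.3))   # -> roughly (28.75, 16.25)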
  • yeeeeman - Saturday, June 6, 2020 - link

    But the GPU not so much*
  • deil - Monday, June 8, 2020 - link

    The Radeon Chill feature is a great thing for a gaming laptop. I set 30 fps and get 50°C on both, or I can switch to 45 fps and get a comfortable ~60°C.
    The problem with a laptop is that the 130 fps my laptop can do in the games I play comes with the hefty price of literally burning fingers. (I have an older Dell G5 setup.)

    But I think what Dragonstongue meant is that you can get 60 fps in two ways:

    crank the GPU's horsepower up to max and then only utilize 30% of it,
    or
    stay in the second/third tier out of 5 of the boost table (my Nvidia card has a 5-step boost ladder) and utilize 60-70% of the GPU while using 70% of the power envelope.
    The idea is that staying boosted very high for a long time still wastes power, even when mostly idle.
    And from what I've seen, my GPU just jumps between steps 1/5/1/5/1/5 and usage is like 15W/40W/16W/42W,
    nothing in between.
  • whatthe123 - Saturday, June 6, 2020 - link

    Real-world performance is generally poor for things like this, because what you might consider "acceptable" speed would be considered unacceptable by another user. It's not exactly reasonable to expect a hardware company to tweak boost clock behavior for every single game imaginable at a specific target framerate for every GPU spec. Best case, you have a framerate cap and the software predicts the frequencies required to hit that target (a rough sketch of that idea is below), but even then the power savings will be marginal, as hardware utilization still plays a far bigger role than clockspeed.

    What you're describing isn't what the PS5 is doing either. The PS5 has a limited power envelope (probably 100-150W) and will be shifting the power cap and boost clocks of the CPU and GPU on the fly. They're not planning on significant downclocks to reduce the power envelope. They claim their biggest savings will be from being able to maintain a suspend state at 0.5W.
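    For what it's worth, the "predict the clock for a frame cap" idea is roughly this (a toy sketch that assumes performance scales linearly with clock, which only holds while you're actually GPU-bound):

        def pick_gpu_clock(current_mhz, measured_fps, target_fps,
                           min_mhz=300, max_mhz=1800):
            """Scale the clock by how far we are from the frame rate target."""
            if measured_fps <= 0:
                return max_mhz
            wanted = current_mhz * (target_fps / measured_fps)
            wanted *= 1.10   # ~10% headroom so small scene changes don't miss the cap
            return max(min_mhz, min(max_mhz, wanted))

        # 130 fps at 1800 MHz with a 60 fps cap -> roughly 914 MHz
        print(pick_gpu_clock(1800, measured_fps=130, target_fps=60))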
  • alufan - Sunday, June 7, 2020 - link

    I think most modern GPUs from either side do that. If I play an older game such as UT04, even with everything maxed, the fans on my air-cooled AMD GPUs don't even come on, as the GPU isn't working hard enough to raise the temp above 48 degC. My liquid-cooled PCs don't rise more than a degree or so no matter how long it plays, and I have both brands doing the same. Even an air-cooled living-room PC in an ITX case with a Ryzen 3600 and a 5700 stays silent.
  • DillholeMcRib - Monday, June 8, 2020 - link

    Do you know what punctuation is?
  • zodiacfml - Sunday, June 7, 2020 - link

    You don't know what you're talking about. To help you a bit: GPUs don't use all the graphics power when it's not required, for example when the system reaches a framerate cap or vsync limit. I play Diablo 3 in 4K with a Vega 56 on a 60Hz display, and it shows 70-80 watts. In Witcher 3, the GPU uses all the power it can.
  • alufan - Sunday, June 7, 2020 - link

    hmmm so you just repeated what I said?

    And I don't know what I'm talking about, numpty?
  • rrinker - Monday, June 8, 2020 - link

    I'm sure that was in reply to the OP, since it's not indented under your first message like your reply to him, and my reply to you.
  • extide - Sunday, June 7, 2020 - link

    The PS5 does not use SmartShift. SmartShift is specifically for shifting total platform TDP budget between a CPU/APU and a dGPU, NOT for shifting power between CPU cores and GPU cores inside a single APU. Both AMD and Intel have been doing that already for quite some time.
  • Fataliity - Monday, June 15, 2020 - link

    Sony specifically called it SmartShift in the PS5, although the implementation may be slightly different.
  • yannigr2 - Friday, June 5, 2020 - link

    SmartShift is either a feature that is not ready yet, or OEMs do not want to use it. And of course Renoir laptops are few, and come with worse hardware components compared to the Intel models.

    Someone at AMD mobile is NOT doing, or does NOT know how to do, his/her job.
  • deksman2 - Friday, June 5, 2020 - link

    I don't agree that SmartShift is a feature which isn't ready.
    It's already implemented in the Dell G5 15 SE.

    This is just classic OEM stupidity of NOT wanting to build all-AMD laptops (for whatever reason).

    It's idiotic.

    As for AMD not doing their jobs... excuse me, what the freaking heck is AMD supposed to do?
    I'm sad to say that AMD is NOT the one making the decision on which hardware OEMs decide to use.
    And Intel has a track record of paying OEMs to NOT make all-AMD stuff (or to basically cripple it).
  • rahvin - Friday, June 5, 2020 - link

    Given that Renoir is AMD's first mainstream competitive laptop chip in a number of years, it's not surprising that OEMs would play it safe and not waste design costs on an unproven platform.

    Just like with Epyc, as AMD continues to execute, and if the AMD laptops prove to be good sellers with good margins, the OEMs will begin to take the offerings more seriously and spend more design money on them. The key here is that AMD needs to keep providing competitive laptop chips.

    And honestly, this is never gonna be a feature people shop for. For most people the extra $10 they might save in a year would be totally unnoticeable, and it doesn't help that you can't explain what this does in a single sentence. Most people these days aren't going to deep-dive a feature to understand it, and you need an article about the size of this one to explain how it works and why it's a good thing.
  • Drake H. - Friday, June 5, 2020 - link

    Nah, Picasso is competitive.

    Renoir is not only "competitive", it is a superior and more efficient CPU than anything Intel has at the moment. It's a shame what the OEMs are doing: using inefficient, hot, low-sustained-performance CPUs in premium products.

    www.notebookcheck.net/Intel-will-continue-to-dominate-the-premium-gaming-laptop-market-Frank-Azor-confirms-no-Ryzen-4000-and-RTX-2070-or-RTX-2080-laptops-anytime-soon.462278.0.html https://www.notebookcheck.net/Most-OEMs-treat-AMD-...
  • lmcd - Friday, June 5, 2020 - link

    Picasso isn't even close to competitive. It provided worse performance and worse power consumption simultaneously.
  • sonny73n - Saturday, June 6, 2020 - link

    This proves you’re wrong:

    https://www.anandtech.com/show/15786/honor-magicbo...

    My money will go to a Renoir laptop in the next few weeks. I don't care if it's from a Chinese manufacturer, as long as it beats the crap out of those more expensive Intel ones.

    You and rahvin must be Intel shills.
  • CiccioB - Monday, June 8, 2020 - link

    I can't see what these tests are supposed to prove.
    I see low performance results with not-so-great battery life.
    What exactly should I look at to see that Picasso is a good, efficient piece of silicon?
  • Drake H. - Saturday, June 6, 2020 - link

    https://www.youtube.com/watch?v=xfIfN2kOaT0
    Looks pretty decent to me. Power consumption depends on each manufacturer's optimization; it's remarkable how much variation there is in consumption, performance and battery life depending on the model analyzed. Xiaomi and Huawei have shown this well.
  • Spunjji - Monday, June 8, 2020 - link

    That's still competitive when the cost is accordingly lower.

    I don't understand why some people seem to think something is only competitive if it's the same or better. There's a lot more to it than that.
  • Lord of the Bored - Saturday, June 6, 2020 - link

    "For most people the extra $10 they will save in a year would be totally unnoticeable and it doesn't help that you can't explain what this does simply with a single sentence."

    Lemme try.
    "It makes your games go faster and keeps your laptop cooler."
  • Spunjji - Monday, June 8, 2020 - link

    He seems to think it's about saving power, which means he didn't even read the article (or any of the others on this topic) before commenting.
  • 1_rick - Friday, June 5, 2020 - link

    The idea that [all] Renoir laptops ship with "worst hardware components" is ridiculous, too; the Zephyrus G14 is an obvious counterexample, as, apparently, is the Dell G5 this very article is about.
  • Spunjji - Monday, June 8, 2020 - link

    The Dell G5 proves his point. It's available with a more limited range of customizations than the Intel variants, and more specifically there are fewer high-end display options.

    The Zephyrus G14 is the only exception to the rule.
  • yeeeeman - Saturday, June 6, 2020 - link

    I don't see why you say OEMs do not want to build AMD laptops. If that were the case, then why didn't Asus do it? They were the first to release these AMD laptops and they even have a deal with AMD for the HS parts. So why didn't they take the time to build a laptop with the magical SmartShift feature?
    Everyone on the internet is whining about AMD laptops, but they don't understand how things work. Building a laptop is not like building a PC. You have a lot of requirements to meet, and without the direct involvement and professional support of the manufacturer, the OEM cannot just take some components and strap them into a laptop case. The only blame here is on AMD, because they don't have a great relationship with OEMs, don't have enough field engineers to support many projects with many OEMs, and don't have an optimized flow for new laptop platforms. They just can't compare with how Intel does this. If you had access to the Intel technical library you'd see they give clear and regular instructions and have detailed documents for OEMs. They have a lot of field engineers who support OEMs and take care of OEM requests, be it hardware or software. These things are missing from AMD, and no matter how much you like AMD now and hate Intel (I don't know why), Intel has worked a lot over the last 10 years to improve laptops, and thin-and-light laptops are more or less here thanks to Apple and Intel.
    So OEMs are used to a type of support and professionalism that AMD simply doesn't have yet. That is why OEMs don't have lots of AMD laptops. Also, when the existing laptops were being designed, AMD was on Zen 1, which wasn't the greatest mobile performer.
  • CiccioB - Monday, June 8, 2020 - link

    [blockquote]I don't see why you say OEMs do not want to build AMD laptops.[/blockquote]
    Not correct.
    The right assertion is:
    "I don't see why you say OEMs do not want to build AMD laptops with an AMD dGPU"
    which is the HW configuration where SmartShift works.
  • Alistair - Friday, June 5, 2020 - link

    Nothing to do with not being ready; the market wants Nvidia GPUs, so you won't see it, as it needs an AMD GPU, and AMD's mobile GPU options are not great.
  • Cooe - Friday, June 5, 2020 - link

    The 5700M is a good piece of silicon, but nobody wants to use it. The 5600M isn't bad either (it's basically a cheaper GTX 1660 Ti). The RX 5500M is trash though, no doubt.
  • senttoschool - Saturday, June 6, 2020 - link

    This is correct. I want an AMD APU + an Nvidia 3xxx GPU in my next laptop.

    I want ray tracing + DLSS 2.0 + CUDA cores for machine learning.
  • Spunjji - Monday, June 8, 2020 - link

    They're fine. Not brilliant, but they would definitely add a lot to the competitive marketplace if someone would use the damned things.
  • yeeeeman - Saturday, June 6, 2020 - link

    Aaand the one and only product is made by the most Intel-loving company in the world.
  • linababe - Saturday, June 6, 2020 - link

    Is there any chance of SmartShift appearing in any of the AM4 APUs?
  • Ryan Smith - Saturday, June 6, 2020 - link

    SmartShift is based around the idea of a shared TDP budget and shared cooling system. With the notable exception of open loop liquid cooling, desktops don't have shared CPU/GPU coolers. Nor are they so heavily cooling-constrained like laptops are.
  • IntelUser2000 - Saturday, June 6, 2020 - link

    Laptops have different requirements than desktops, and system builders have different goals than component vendors such as AMD/Intel.

    Even in Intel laptops, power states are not fully implemented or taken full advantage of. Especially since it increases TTM (time-to-market), they often have to sacrifice a few things to get it working.
  • brucethemoose - Saturday, June 6, 2020 - link

    Well, from what I've seen, AMD's Navi laptop GPUs aren't as power efficient as Turing. Which is... surprising, given the large process advantage. So why would OEMs want to use them?

    This feature would make more sense if AMD sold big, underclocked 5700 XTs akin to Nvidia's MaxQ GPUs.
  • Spunjji - Monday, June 8, 2020 - link

    "This feature would make more sense if AMD sold big, underclocked 5700 XTs"
    That's exactly what the 5700M is. Nobody's integrating it, though.

    I guess we'll have to wait for RDNA2, at which point people will probably be saying that no OEM is using it because none of them used RDNA... etc.
  • CiccioB - Monday, June 8, 2020 - link

    That raises power efficiency, no doubt, but then you always have to compare that solution with what the competition provides.
    Nvidia offers the 2080 Max-Q at 80W. What is AMD going to get out of 80W? They would probably need a 1000mm² GPU to be as efficient as the TU104.
  • Rookierookie - Saturday, June 6, 2020 - link

    Now I actually wonder if AMD hiring Azor had something to do with Dell getting ahead in the SmartShift game.
  • zodiacfml - Sunday, June 7, 2020 - link

    Still not clear how SmartShift works. Higher-end laptops somehow manage this by having one or two heatpipes connecting the CPU and GPU; that way, full-system cooling is partly shared between the processors.
  • Spunjji - Monday, June 8, 2020 - link

    In the scenario you just described, SmartShift helps by tweaking the CPU and GPU to give you the maximum possible performance within the limits of the shared cooling and power delivery.
  • eastcoast_pete - Sunday, June 7, 2020 - link

    @Ryan: Thanks for the information! Two questions, appreciate any insight:
    1. How much power does Renoir's integrated graphics pull at maximum GPU load, and how badly does that eat into the thermal budget of a 4800H or 4800U?
    2. Is SmartShift at risk of being mainly a way to "cheap out" on overall cooling capability? I am a bit wary of Dell being the pioneer here, as some of my past (Intel + NVIDIA) Dell laptops looked great and had promising specs, but throttled fast and furious if I really challenged the dGPU. Fool me once and all that...
  • CiccioB - Sunday, June 7, 2020 - link

    It is not surprising that this technology is not supported.
    It is not because it doesn't work or isn't useful.
    It is only because NO OEM wants to install an AMD dGPU in their laptops: they are too power hungry relative to their performance. Any Pascal or Turing GPU does better than AMD's, even though AMD's are on 7nm.
    See what Nvidia provides in their Studio-labeled laptops with GPUs using only 80-100W.
    AMD is still way behind Nvidia in power efficiency, and this is the main reason there is only a modest number of laptops with AMD GPUs, all of them in the cheap class. And it has been this way for a long time, 10 years or more.
    AMD should really create a completely new architecture that can compete in power consumption and perf/transistor. Navi is nowhere near that. Let's see what Big Navi brings with its promise of 50% better perf/watt, hoping it is not only an advantage brought by a new process node and only in some circumstances.
  • alufan - Sunday, June 7, 2020 - link

    Let's see what they both bring to the table later this year; Lisa has a knack for doing what she says she will.
  • CiccioB - Monday, June 8, 2020 - link

    Up to now, in the GPU market, they have failed to keep the promises they announced.
    The current situation, where no AMD dGPU is preferable to an Nvidia one in the mobile market, is quite self-explanatory.
    While in the desktop market AMD can play price against efficiency, even cutting prices before launching a product, in the mobile market they cannot do anything to make their GPUs preferable, even if they gave them away for free.

    Nvidia is already ahead in energy efficiency at 12nm (which is really 16nm+); I can't see a reason why they wouldn't be better than AMD at 7nm, given what AMD has done so far with this process, see Vega VII (what a fail) and Navi.
  • Spunjji - Monday, June 8, 2020 - link

    RDNA kept all of its promises on release. No reason so far to assume RDNA2 will be different.

    Vega VII was a specific product for a specific market and did okay in that regard. As a simple die shrink it got better performance and power than its 14nm predecessor. Not sure how you regard that as a failure.
  • CiccioB - Monday, June 8, 2020 - link

    RDNA did not keep any promises because AMD simply did not say anything about what they were trying to get out of the GCN evolution. Probably the previous generation's failures taught them something.
    I remember the promised 2.5x better efficiency of Polaris versus GCN 1.2, and we ended up with a less efficient GPU that had to be overclocked to match the performance of one 20% smaller, going beyond the PCI specification for power consumption and showing the final product was not what was originally intended. And it got even worse with the further OC needed to show that that sh*t could beat the smaller Nvidia chip, while consuming twice the power.
    I can remember the hype and promises about what Vega should have been, better than the Titan X (yes, AMD marketing said as much), and how that would have changed the market. And I can clearly remember what AMD had to do to sell it: cut the price before launch and give away professional-app-accelerated drivers. That is, sell it in the consumer market at a loss.
    Yes, that was changing the market: making them lose money.
    Should I remind you of Fiji?

    Would you describe a 7nm, 300W, power-hungry board with the compute capacity of Vega VII as a successful product?
    At 300W and 7nm, Nvidia simply rolls over that useless piece of silicon. Ampere just demonstrated it.

    And RDNA is good only because it is cheap. With the same number of transistors as the 1080 Ti and the same power budget, it can't beat it. 7nm vs 16nm, 3 years later, with no new features supported compared to Pascal. It is just Pascal shrunk, and done wrong.
    So why do you think OEMs should prefer RDNA over Turing, which is the proper Pascal evolution made on a node that does not even allow transistor shrinking? What are you trying to defend?
    OEMs know that at 7nm Nvidia is going to push AMD dGPUs out of the mobile market. They already do this at 12nm; at 7nm they will simply blow AMD away. They are simply avoiding wasting time and money on an already dead horse, in a market that is more competitive than ever before.
    AMD dGPUs are fine in Apple devices, which are for the dandy, IT-ignorant user who likes the design more than the actual performance of the device they paid so much for.
    Keep them there, and be happy with that market share.

    Yes, again, let's see the next generation, the one that will make #poorvolta real... even though we are now approaching the second generation beyond Volta...
  • Spunjji - Monday, June 8, 2020 - link

    Pascal does not have better power efficiency than RDNA. RDNA's absolute efficiency is, in fact, about the same as Turing's when run at sensible clocks (see the vanilla 5700 on desktop). Turing is more efficient on an architectural level, but it's the end result (architecture + process) that matters for system design, and they're about even in that regard.

    If power efficiency were the only reason for the lack of high-end AMD GPUs, it wouldn't make sense for integrators to use the low-end AMD GPUs either - they could use an equivalent Nvidia GPU and save weight and money on cooling at a really cost-critical end of the market. That's not the case, so clearly there's a little more to it.
  • CiccioB - Monday, June 8, 2020 - link

    And actually no OEM offers laptops with an AMD dGPU except for a few designs that were probably handed to them by AMD, and only in very cheap devices.
    And that's why no one is going to use SmartShift.

    And it is Pascal that is on par with RDNA even on 16nm; Turing's efficiency is way better. In fact, just downclock Turing to laptop requirements and you get a super power-efficient dGPU that AMD can't rival in any way.
    Run RDNA at its best frequency/voltage-per-watt spot and you just get snails. Sure, they don't eat very much, but they remain snails compared to what Nvidia can provide at its best freq/W spot.
  • ranran - Monday, June 8, 2020 - link

    My feeling is that for AMD to really break into the laptop market, they are going to have to bite the bullet and design/build their *own* line of laptops, much like Apple, to really start generating market penetration... OEMs are still too Intel-centric, and if AMD just waits for OEMs to do it, they might miss out on a huge window of opportunity before Intel catches up and shuts them out again...
  • CiccioB - Monday, June 8, 2020 - link

    This is wrong.
    Intel is of no importance here. The problem is the dGPU offerings, and Intel is not providing those (yet).
    You can see there are many more laptops and netbooks with AMD APUs inside. The problem for AMD is that it ends there. Their dGPUs are so far behind that almost all the new designs using AMD APUs are paired with an Nvidia GPU, where SmartShift does not work.

    So you will see an increasing number of mobile solutions with AMD inside, but not dGPUs. Only APUs.
