
  • shelbystripes - Monday, June 15, 2020 - link

    Man... a $700 upcharge?? On top of the Radeon already in the 16” MBP base model?

    OTOH, that looks like killer performance for a 50W power envelope, and the 8GB HBM2 they’re using isn’t making things cheaper...
  • RadiclDreamer - Monday, June 15, 2020 - link

    Looks to me like this is more of a productivity based card vs a gaming one and the price point reflects that.
  • mrvco - Monday, June 15, 2020 - link

    There are far better choices for a 'gaming' laptop than a $3500+ 16" MacBook Pro :-)
  • shelbystripes - Monday, June 15, 2020 - link

    It’s a Mac, of course it’s a productivity based product, that’s a given.
  • milkywayer - Sunday, June 21, 2020 - link

    Does anyone knowledgeable know how good it'll be for gaming if money were no object? Asking about popular games like the new Call of Duty, League of Legends, or Tekken 7.

    I'm happy to see League of Legends run at 60fps on the i7 model of the 2020 XPS 13, though it struggled on the i5 model.
  • mrmachine - Thursday, July 2, 2020 - link

    This option exists to solve the problem where the 5300M and 5500M consume a minimum of 18W at idle as soon as you connect an external display to the MBP, because AMD's drivers run the memory at full speed by design to avoid glitching with multiple displays.

    Apple won't admit it's a design fault/limitation of the 5300M and 5500M options, but there are over 2000 posts complaining about it on the MacRumors forum.

    With the 5600m, memory running at full speed consumes less power and generates less heat, so you not only get better GPU performance (claimed 75%), you have more power and thermal capacity available for CPU performance (+500MHz sustained) and you have less heat and less fan noise at medium workloads.

    Plus, it's portable, and you have none of the issues you'd get with an eGPU: having to "disconnect" before unplugging (which forces half your apps to quit or relaunch), and general eGPU bugs.
  • Spunjji - Tuesday, June 16, 2020 - link

    Apple have been taking the piss with high-end GPU pricing on the Macbook Pro for years now.

    Definitely interested to see how this performs in practice. AMD's GPU designs have been crying out for wide-and-slow OEM implementations for years now. I guess this is the perfect place to do that, where the consumers are already primed to absorb the higher cost for a given level of performance.
  • PeterCollier - Monday, June 15, 2020 - link

    For $700 extra I would rather rent an Amazon EC2 server instance and run my computations on there rather than locally. I'm an AI scientist and that would be way faster than some thermally constrained GPU in a PoS Apple designed cooling solution.
  • Operandi - Monday, June 15, 2020 - link

    Speaking of intelligence.... maybe.... just maybe AI scientists are not the target market here?

    And is Apple's thermal solution really that much worse than the competition? I highly doubt it.
  • willis936 - Monday, June 15, 2020 - link

    Maybe. But who is? 3D modelers? Anyone else?
  • MarcusMo - Monday, June 15, 2020 - link

    Likely content creators. Or anyone using gpu compute for that matter.
  • PEJUman - Monday, June 15, 2020 - link

    .... and just like that.. we're back to AI and machine learning. I tend to agree with Peter.... it's hard to see who the target market is for this. A thermally constrained chassis and a 50W TDP with this kind of processing power = burst loads instead of steady-state, continuous GPU compute.

    So this rules out video encoding/content creators. AI loads are occasionally transient, but at this price, you can't argue with the Amazon instance. So that leaves the 3D modelers, who I think are still mostly on the Windows ecosystem.

    So Operandi, what would you use this workstation for?
  • Santoval - Tuesday, June 16, 2020 - link

    "So that leave the 3D modelers, which I think still mostly on windows ecosystem."

    That depends on the 3D modeler / CGI artist, their workload, and their personal preferences. Many, in particular those who use Maya, Modo, Cinema 4D, Nuke, Arnold, Mudbox, Houdini and of course Blender, prefer the freedom, stability and flexibility of Linux.

    Some software, like 3ds Max (one of the few Autodesk products without a Linux port, and arguably the most important) and LightWave 3D, does not run on Linux, but it can export files in commonly used formats that can then be imported by other software running on Linux to continue work there. It is a hassle though, since it requires a separate computer (physical or virtual), so most artists who work on Linux tend to avoid it.

    As a rule, the higher the budget of a film (or TV show), the more likely it is that most or all of its CGI was done on Linux. The top-tier VFX houses that work on blockbuster films tend to avoid Windows.
  • PEJUman - Tuesday, June 16, 2020 - link

    How about a TB3 eGPU for this application? Which one would be a better $700 investment?

    I am asking this from the typical usage of a desk + docking station, with the laptop being portable only for meetings/customer pitches. I.e., how often do you need this done on your workstation laptop on-site at a client location?
  • vFunct - Tuesday, June 16, 2020 - link

    People here act like photographers & videographers are a tiny market.

    There are a lot more content creators than there are AI researchers.
  • PEJUman - Tuesday, June 16, 2020 - link

    I am under the following assumptions for the Adobe consumer:
    These GPUs do not scale with CU count under typical Adobe loads; they scale more with TDP limits... which means spending $700 for more CUs at the same TDP makes the 5600M better only in some corner cases (i.e., a brief spike to full load, when the chassis thermal mass + heat rejection capability allows the 5600M to stretch its legs for 1-2 minutes; a toy version of that burst window is sketched below).

    Am I incorrect in this assumption?
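
    As a rough sanity check on that 1-2 minute figure, here's a toy thermal model. Every number below is an illustrative assumption, not a measured MBP 16" value:

    ```python
    # Toy burst-duration model: how long can chassis thermal mass absorb
    # power above what the cooling can dissipate continuously?
    # All values are illustrative assumptions, not measurements.
    burst_power_w = 50.0          # GPU running at full TDP
    sustained_cooling_w = 35.0    # assumed continuous dissipation capacity
    thermal_mass_j_per_c = 150.0  # assumed effective heat capacity, J/degC
    headroom_c = 12.0             # assumed temp rise allowed before throttling

    excess_w = burst_power_w - sustained_cooling_w          # 15 W stored as heat
    burst_s = thermal_mass_j_per_c * headroom_c / excess_w  # ~120 s
    print(f"full-TDP burst lasts ~{burst_s:.0f}s before throttling")
    ```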
  • AlexDaum - Wednesday, June 17, 2020 - link

    @PEJUman, most GPU compute applications scale very well with CU count, so it probably will have better performance. And more CUs at a lower clock speed consume less power, so sustained performance in a thermally constrained package should also be better.
  • Samus - Tuesday, June 16, 2020 - link

    True, a lot of Adobe plugins use GPU resources.
  • skavi - Monday, June 15, 2020 - link

    It is no worse than any computer of comparable size with a similar battery capacity. Better than most of that class, in fact.
  • deil - Monday, June 15, 2020 - link

    Apple thermals are usually scraping the bottom of the barrel, but it seems to be becoming the standard. Everybody loves to fry eggs on a laptop these days. Even if laptops still have some space inside that would fit a heatpipe with a 50g copper rad, being thinnest and red hot is the thing now.
  • brucethemoose - Monday, June 15, 2020 - link

    I don't think that's the target market. You'd have to be nuts to buy an AMD GPU for training right now, much less one in a thin, pricey laptop.

    I don't know about iOS-dev stuff though. Can Core ML even run on OSX?
  • not_anton - Tuesday, June 16, 2020 - link

    Yes it can - that's the point of Core ML compared to CUDA or OpenCL. It runs equally well on the GPUs of any Apple product, right down to their smartwatch.
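
    If you want to poke at it without writing Swift, Apple's coremltools Python package can run predictions directly on macOS. A minimal sketch; the model file and feature names here are hypothetical placeholders:

    ```python
    # Minimal Core ML inference from Python on macOS via coremltools.
    # "MyClassifier.mlmodel" and the feature names are hypothetical.
    import numpy as np
    import coremltools as ct

    model = ct.models.MLModel("MyClassifier.mlmodel")
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    print(model.predict({"image_input": x}))  # prediction runs on-device, macOS only
    ```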
  • Chaitanya - Monday, June 15, 2020 - link

    AMD seems to have found a very good customer in Apple, for whom they are building a lot of custom GPUs.
  • patel21 - Monday, June 15, 2020 - link

    Apple is still not going to pass on even half of what it's charging customers for the upgrade.
  • zepi - Monday, June 15, 2020 - link

    Apple's gross margin has traditionally been about 40%, so depending on how you look at it, on average they keep about 40% for themselves as profit and pass 60% further into the "supply chain" in one form or another.
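
    Applied naively to the $700 upgrade, it splits roughly like this (big assumption: the corporate-average margin applies to this one BTO option):

    ```python
    upgrade_price = 700.0
    gross_margin = 0.40  # Apple's traditional average; assumed to apply to this SKU

    kept = upgrade_price * gross_margin  # ~$280 stays with Apple
    passed_on = upgrade_price - kept     # ~$420 to the supply chain
    print(f"Apple keeps ~${kept:.0f}, ~${passed_on:.0f} goes down the chain")
    ```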
  • Spunjji - Tuesday, June 16, 2020 - link

    At $700 for the "upgrade" (as in, on top of the GPU cost factored into the standard model), even if they only pass 50% of it on to AMD, they'd still be laughing all the way to the bank.
  • brucethemoose - Monday, June 15, 2020 - link

    That's *way* more interesting than the rumored RX 5700M/5600M, which supposedly runs hot, has 36 CUs, and ships with low-end GDDR6.
  • Valantar - Tuesday, June 16, 2020 - link

    Yeah, this thing is really interesting. I know it's wishful thinking, but I would love this in a 75W version on an HHHL AIC. It would be the best SFF GPU by quite a bit.
  • Spunjji - Tuesday, June 16, 2020 - link

    That would kick substantial amounts of ass.
  • brucethemoose - Tuesday, June 16, 2020 - link

    +1

    That would be a fantastic "upgrade your OEM tower" card as well, and they could squeeze it into a Thunderbolt TDP for a self-contained eGPU.
  • Hul8 - Wednesday, June 17, 2020 - link

    Assuming the card takes all its power through the 12V rail in the PCIe slot, wouldn't that then be 66W max?

    From Wikipedia:
    > A full-sized x16 graphics card may draw up to 5.5 A at +12 V (66 W) and 75 W combined after initialization and software configuration as a "high power device".

    At most the auxiliary rails would be used for the fan, or RGB lighting.
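
    The arithmetic behind the 66W figure, using the spec numbers quoted above:

    ```python
    # PCIe "high power device" budget for a full-sized x16 card,
    # per the Wikipedia-quoted numbers above.
    slot_12v_w = 5.5 * 12.0                  # 66 W max on the +12 V rail
    slot_total_w = 75.0                      # combined cap across all slot rails
    aux_rails_w = slot_total_w - slot_12v_w  # ~9 W left for the other rails
    print(slot_12v_w, aux_rails_w)           # 66.0 9.0
    ```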
  • bananaforscale - Monday, June 15, 2020 - link

    "With a CU count of 40, resulting in 2540 stream processors"
    2560 because 40x64.
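
    For reference, the RDNA arithmetic behind both the shader count and the article's 5.3 TFLOPS figure:

    ```python
    cus = 40
    sps = cus * 64                       # 64 stream processors per RDNA CU -> 2560
    boost_ghz = 1.035
    tflops = 2 * sps * boost_ghz / 1000  # 2 FP32 ops (FMA) per SP per clock
    print(sps, round(tflops, 2))         # 2560 5.3
    ```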
  • haukionkannel - Monday, June 15, 2020 - link

    Really cool product, though definitely not for me. But if you need the best laptop money can buy with macOS... go for this!
    It is expensive as hell, but it is Apple and it is HBM memory, neither known for cheap prices! So for that it is not bad at all! If you have the money and the need, you most probably cannot find a better laptop!
  • DigitalFreak - Monday, June 15, 2020 - link

    !
  • yeeeeman - Monday, June 15, 2020 - link

    1035MHz boost? That's GTX 1650/1660-class performance at that frequency, so no wonder it has a 50W TDP.
  • SaturnusDK - Monday, June 15, 2020 - link

    Look at the TFLOPS. It's 5.3 vs 4.55 for a 2060 Max-Q, which has an 80W TDP, so about 15% more performance at roughly 37% less power.
  • Spunjji - Tuesday, June 16, 2020 - link

    And it's Navi, so we know those TFLOPS actually mean something.

    I've been yelling at people for ages now that Navi's more than just competitive with Turing at low clock speeds. Will be interesting to see if the benchmarks prove me right!

    Nvidia have a great design that stays efficient right up to the high end, but the Max-Q performance and heat issues show how difficult it is for them to scale all the way down to the bottom of the TDP range on 12nm.
  • drexnx - Monday, June 15, 2020 - link

    wide and slow, keeps the power low
  • smartthanyou - Monday, June 15, 2020 - link

    Other World Computing has an eGPU box/5700XT bundle for $750. Obviously, that isn't very mobile but if you don't absolutely need all that GPU power on the road, the eGPU w/5700XT seems like it would be a better solution.
  • voodoobunny - Monday, June 15, 2020 - link

    5.3 TFLOPS in 50W ... *oof*, that's nice. Theoretically, that means 7.5 TFLOPS inside the PCIe slot power envelope.
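
    Naive linear scaling would actually give a bit more than that; discounting to ~7.5 is reasonable since perf/W drops as clocks rise:

    ```python
    tflops_at_50w = 5.3
    linear_at_75w = tflops_at_50w * 75 / 50  # ~7.95 TFLOPS if perf scaled linearly
    print(round(linear_at_75w, 2))
    # Real scaling is sublinear (voltage rises with frequency), so ~7.5
    # bakes a sensible discount into the 7.95 linear estimate.
    ```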
  • Spunjji - Tuesday, June 16, 2020 - link

    Unfortunately it almost certainly requires the HBM2 to pull off that TDP, so they'd have the same problem Vega did - either clock it reasonably and have it be expensive for the product class, or try to make the performance match the price and blow the TDP six ways to hell.

    They could maybe hit 6 - 6.5 TFLOPS at the 75W limit with GDDR6, though, which would still be a nice little card for SFF systems.

    Apple might just be the only OEM who have the stones to integrate a product like this and charge accordingly.
  • AlexDaum - Wednesday, June 17, 2020 - link

    There could be a market for expensive premium GPUs with low power consumption and still high speed, but I think most people would rather buy the cheaper, hotter GPU, since power is rarely a limiting factor in desktop PCs.
  • SaturnusDK - Monday, June 15, 2020 - link

    15% more performance (in TFLOPS) than a 2060 Max-Q (5.3 vs. 4.55) using roughly 37% less power (50W vs. 80W TDP). Gotta say that's mighty impressive.
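
    In perf-per-watt terms, taking both vendors' TDP figures at face value:

    ```python
    gpus = {"Pro 5600M": (5.3, 50), "RTX 2060 Max-Q": (4.55, 80)}
    for name, (tflops, tdp_w) in gpus.items():
        print(f"{name}: {tflops / tdp_w:.3f} TFLOPS/W")
    # Pro 5600M: 0.106 vs Max-Q: 0.057 -> roughly 1.9x the perf per watt
    ```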
  • senttoschool - Monday, June 15, 2020 - link

    Is it really though? The Pro 5600M is using expensive HBM and it's on 7nm.
  • SaturnusDK - Tuesday, June 16, 2020 - link

    Not sure what your point is? It doesn't really matter how the efficiency is achieved. If you have a laptop form factor that demands a certain maximum power usage with the highest possible performance and price of no concern then you use whatever technology you have access to.
  • web2dot0 - Tuesday, June 16, 2020 - link

    Haters are gonna hate. Now you hate Apple for being too power efficient 🤣
  • eastcoast_pete - Monday, June 15, 2020 - link

    Could be of interest for video editing on the go/in the field, except that AMD cards often don't give the best results with some of the key video editing suites like Adobe Premiere or Blackmagic's DaVinci Resolve. I guess it works well with Apple's own software, and that AMD will make sure their macOS drivers are cooperative.
  • not_anton - Tuesday, June 16, 2020 - link

    A MacBook with this card and 8TB of storage will be the best machine ever for editing raw video on location. HBM2 gives crazy bandwidth, and video processing needs less GPU computation than video games or deep learning.
  • SaturnusDK - Tuesday, June 16, 2020 - link

    It's just a shame it's paired with an Intel CPU or it would have been a home run.
  • Spunjji - Tuesday, June 16, 2020 - link

    And how! 8 Zen 2 cores and a competent iGPU for when you're on battery power would have made this setup literally unbeatable in the Windows world.
  • brucethemoose - Tuesday, June 16, 2020 - link

    Yeah, that seems like a glaring omission...
  • Spunjji - Wednesday, June 17, 2020 - link

    Sadly I don't think AMD can produce the volume Apple demand, and either way, Apple certainly aren't liable to switch CPU designer mid-refresh.
  • Demiurge - Wednesday, June 17, 2020 - link

    Based on what?
  • SaturnusDK - Sunday, June 21, 2020 - link

    If AMD can meet demands from both Sony and Microsoft for their upcoming console launches, they can certainly keep up with the comparably laughable demands from Apple.
  • Smell This - Thursday, June 18, 2020 - link


    I'm not the guy for this but my understanding is AMD GPU compute under Final Cut Pro / OpenCL / OSX is quite stout.

    Sony Vegas on the Win side, too -- I just got a $60 upgrade key for Studio Platinum 17 (from version 9!) for some H.265 HEVC / 4K XAVC editing and 'coding fun. They used to be good about moving keys around, too, as long as you do not abuse the privilege (we'll see how it goes with 'Magix').
  • jim bone - Monday, June 15, 2020 - link

    The table shows the memory clock in units of Gbps; clock frequencies are measured in Hertz.
  • Ryan Smith - Tuesday, June 16, 2020 - link

    The GPU manufacturers have requested that we report it in Gbps/pin, since double (and quad) pumping makes a mess of traditional Hz metrics.
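
    It also makes bandwidth math straightforward. For example, with HBM2 at 1.54Gbps/pin on a 2048-bit bus (the Pro 5600M's configuration, if I have the figures right):

    ```python
    # Bandwidth = per-pin data rate x bus width / 8 bits per byte.
    data_rate_gbps = 1.54  # per pin; as a DDR "clock" this would read 770 MHz
    bus_width_bits = 2048  # two 1024-bit HBM2 stacks
    print(data_rate_gbps * bus_width_bits / 8, "GB/s")  # 394.24 GB/s
    ```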
  • jim bone - Tuesday, June 16, 2020 - link

    Interesting, thanks for the clarification.

    May I suggest using the "data rate per pin" as the table label in the future? Or some other clarifying name? I would have eventually figured out it was a half/quarter rate clock even if unstated, but the Hz vs Gbps with the current labeling just stands out as a technical error.
  • Quantumz0d - Monday, June 15, 2020 - link

    $3600 in the US, what a fucking rip off; that money buys just a simple 8-core 9th-gen processor with this GPU and 16GB DDR4. Anyone with a little common sense will go buy a much better laptop for that price, especially since the Area-51m R2 and the Clevo X170 series are both options with a Z490 motherboard; the Alienware is crippled by its crappy AWCC software and Dell's price markup over freedom, but the X170 is a beast. Both have a full 2080 Super option plus removable SSDs and RAM.

    Horrible soldered POS product with POS cooling at an insane price.
  • web2dot0 - Tuesday, June 16, 2020 - link

    In a 50W TDP? 🤣

    Haters are gonna hate. Don’t like it don’t buy it. Totally cool. But don’t say there’s something better in that power envelope.
  • brucethemoose - Wednesday, June 17, 2020 - link

    Something like a Ryzen/2060 Max-Q Asus TUF is pretty close... and it's $1000.

    Sure, the screen and battery might be worse, but spending that remaining $2k could remedy that.
  • javadesigner - Wednesday, June 17, 2020 - link

    "Anyone with a little commonsense will go and buy a much better laptop for that price"

    Just curious: where can "anyone" buy a "better laptop for that price" that runs Mac OS X? (which is the entire reason to buy an Apple product - OS X)
  • Oxford Guy - Monday, June 15, 2020 - link

    Does it come with insurance for the inevitable tinnitus?

    Remember, folks... thin is in.
  • Santoval - Tuesday, June 16, 2020 - link

    "A key difference here lies in the clocks as this mobile variant only clocks up to a maximum of 1035MHz, resulting in a theoretical maximum throughput of 5.3TFLOPs, quite a bit less than its
    desktop counterpart which lands in at 9.75TFLOPs."

    The Radeon Pro 5600M appears to be identical to the RX 5700 XT (memory type aside), so the former should also have 64 ROPs. So the 5600M has 46% lower performance at a 78% lower TDP than the 5700 XT. Since the 5600M is a laptop GPU, the trade-off is fully worth it. Apple should be happy.
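
    Checking those percentages (assuming the 5700 XT's published 9.75 TFLOPS and 225W board power):

    ```python
    laptop = {"tflops": 5.3, "tdp_w": 50}     # Radeon Pro 5600M
    desktop = {"tflops": 9.75, "tdp_w": 225}  # RX 5700 XT

    perf_drop = 1 - laptop["tflops"] / desktop["tflops"]  # ~46%
    power_drop = 1 - laptop["tdp_w"] / desktop["tdp_w"]   # ~78%
    print(f"{perf_drop:.0%} lower throughput for {power_drop:.0%} lower power")
    ```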
  • ABR - Tuesday, June 16, 2020 - link

    Error in table: the 5500M can have up to 8GB VRAM, not 4GB. Apple has this config available for the MBP 16.
  • darkos - Friday, June 26, 2020 - link

    Hmmmm. The row labelled 'Memory Clock' has values given in 'Mbps', which reads as megabits per second (a *data rate* rather than a clock speed like MHz). Isn't that odd?
