
  • willis936 - Wednesday, January 31, 2018 - link

    I keep seeing “AMD's GPUs are more suited to mining”. That is old news. The 400 series was the king for a long time and is arguably still second. The 500 series is a sidegrade, and a downgrade once cost is factored in. Vega is not suited to mining at all in terms of efficiency or upfront cost. The NVIDIA 1060 and 1070, however, are the current kings of crypto mining and have been for over half a year. Ramping up Vega production for mining just doesn't make sense, except maybe to satisfy gamers who can't afford the popular GPUs whose prices have been inflated by the current mining craze. The 1080 isn't a go-to pick in terms of efficiency, but its price is higher than normal because gamer demand has been funneled into it.
  • Stuka87 - Wednesday, January 31, 2018 - link

    At this exact point in time, the model of the card doesn't matter. Heck, people are buying up mass quantities of 1080 Tis for mining. There was a photo on the forums last week of a couple that bought something like 100 of them. Sure, it's dumb to be buying those, but that's not stopping people.
  • 0102030405 - Thursday, February 1, 2018 - link

    It's absolutely not dumb to be mining with 1080s. They get the best hash rate of any card when mining Equihash.

    There's more than one algorithm in crypto.
  • bpepz - Wednesday, January 31, 2018 - link

    Actually, the Vega GPUs are insanely good at mining, the best at this time in fact, at any price. The reason is that certain algorithms, like Monero's, need only memory bandwidth for mining. Because of this, a Vega can be underclocked to the point that it draws only 135 W at full load; combine that with HBCC and the underclocked core and it can do 2100 H/s. Compare that to a 1080 Ti at $699 and 250 W that only does 800 H/s.
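    A quick sanity check on that efficiency claim, as a minimal Python sketch; the hash rates and power draws are simply the figures quoted in this comment, not independently measured values:

        # Perf-per-watt for CryptoNight (Monero), using only the figures
        # quoted in the comment above; real results vary with tuning.
        cards = {
            "Vega, undervolted": (2100, 135),   # (H/s, watts)
            "GTX 1080 Ti":       (800, 250),
        }

        for name, (hashrate, watts) in cards.items():
            print(f"{name}: {hashrate / watts:.1f} H/s per watt")

        # At these numbers: ~15.6 H/s per watt for Vega vs ~3.2 for the 1080 Ti.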
  • Samus - Wednesday, January 31, 2018 - link

    Yeah, I don't get where he is getting his data that Vega is bad for mining. There are very clear reasons you can't find them in stock, and I don't believe it is simply production related. Demand is extraordinarily high because of these stupid mining rigs people build. At the end of the day you are better off with Antminers than with a non-Titan NVIDIA GPU, because of the nerfed floating-point precision.
  • Dragonstongue - Wednesday, January 31, 2018 - link

    It completely depends on the mining in question. Radeons have more or less ALWAYS been superior at crunching vs. their NV counterparts. Yes, the newest NV cards are excellent at SOME types of mining because of the power usage mentioned, but Radeons are still very much KING when it comes to raw crunching power, because AMD, at the very least, did not strip everything out to tune performance for specific use cases.

    For years and years NV cards were quite power demanding, which resulted in temperature issues (especially when the VRMs, capacitors, etc. they used had a lower heat tolerance than those on competing Radeons, e.g. 105°C vs. 125°C). Generation after generation they slowly tweaked the design to get power use down while keeping "some" of the grunt. That also allowed higher clock speeds, which makes them "appear" much faster; technically they are not directly faster, but indirectly, through the fancy hardware/software tricks they use, they end up being quite fast.

    As mentioned, however, that is not true in every case. Some mining or hashing programs are vastly faster on Radeon hardware, especially when the software was written to take advantage of the way Radeons are designed. Code cracking, for example, uses Radeon rather than NV hardware 9 times out of 10, and mining is just another form of this. By all means, I'm sure some forms of cracking or mining are tweaked to capitalize on the GeForce design instead.

    Radeons are (at least for the moment) more like a sledgehammer, more robust, whereas NV has transitioned to more of a surgeon's knife approach.
  • SniperWulf - Wednesday, January 31, 2018 - link

    "I keep seeing “AMD's GPUs are more suited to mining”. That is old news."

    This is where you are mistaken. The current architectures on both the Red and Green teams excel at certain algorithms and are meh at others. NV can't be touched in Equihash, Lyra, and a few others, while AMD can't be touched in CryptoNight, Ethash, and a few others. It's all about picking the right tool for the job.

    If you're planning to mine Ethereum, why buy a 1070 Ti @ $449 (MSRP) to get 30 MH/s when an RX 480 @ $239 (MSRP) or an RX 580 @ $229 (MSRP) can do the same job at 135 W?

    It wouldn't make sense to buy 1080 Tis ($779 MSRP) to mine Monero @ 800 H/s when a Vega 56 ($399 MSRP) can get 1900 H/s at the same 135 W.

    On the flip side, I wouldn't buy any Radeons if Zencash, Zclassic, or Verge were my coin/algo of choice.

    Granted those prices mean dick in today's market; it's not about one company vs. another, it's all about picking the right tool for the job.
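    To put the "right tool for the job" point in numbers, here is a minimal Python sketch of hash rate per MSRP dollar, using only the rates and prices quoted above (street prices in today's market will differ wildly):

        # Hash rate per MSRP dollar for the two workloads quoted above.
        ethereum = {"GTX 1070 Ti": (30, 449),    # (MH/s, MSRP $)
                    "RX 580":      (30, 229)}
        monero   = {"GTX 1080 Ti": (800, 779),   # (H/s, MSRP $)
                    "Vega 56":     (1900, 399)}

        for title, cards in (("Ethereum, MH/s per $", ethereum),
                             ("Monero, H/s per $", monero)):
            print(title)
            for name, (rate, price) in cards.items():
                print(f"  {name}: {rate / price:.3f}")

        # On these MSRPs, the Radeons deliver roughly twice the hash rate
        # per dollar in both workloads the comment cites.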
  • Samus - Wednesday, January 31, 2018 - link

    Oh that’s interesting. Thanks for the post. Didn’t realize different architectures excelled so much in different algorithms.
  • mspamed - Monday, February 5, 2018 - link

    As an owner of two GTX 1060s and one RX 570 4GB (the RX 570 was $20 more expensive): the GTX 1060s made $3 a day during their height last month while the RX 570 made $5, and now that mining profits are down, the GTXs are making me $2.20 a day while the RX 570 is making $3.25. The GTXs are undervolted but still draw 110 W, while the RX 570 takes only 90 W. NVIDIA, or at least the GTX 1060, is definitely not better than the RX 570 for mining, even at current rates.
  • Torrijos - Wednesday, January 31, 2018 - link

    The question then is: shouldn't they (the GPU card manufacturers) build special units with the right amount of memory for crypto (I imagine a couple of GiB are enough) instead of letting cards with 8-16 GiB go to waste?
  • Flunk - Wednesday, January 31, 2018 - link

    Ethereum is RAM limited, and the amount of RAM required to crunch hashes increases over time. Right now it's approaching the point where 3 GB cards won't be enough.
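    For context on why the requirement grows: the Ethash dataset (the DAG) starts at roughly 1 GiB and grows by about 8 MiB per 30,000-block epoch. A rough Python sketch of that trend, ignoring the prime-rounding the real spec applies:

        # Approximate Ethash DAG size; the real spec rounds the item count
        # down to a prime, which this rough trend ignores.
        DATASET_BYTES_INIT = 2**30     # ~1 GiB at epoch 0
        DATASET_BYTES_GROWTH = 2**23   # ~8 MiB per epoch
        EPOCH_LENGTH = 30000           # blocks per epoch

        def approx_dag_gib(block_number):
            epoch = block_number // EPOCH_LENGTH
            return (DATASET_BYTES_INIT + DATASET_BYTES_GROWTH * epoch) / 2**30

        # Around block 5,000,000 (early 2018) the DAG is ~2.3 GiB,
        # closing in on the limit of 3 GB cards.
        print(f"{approx_dag_gib(5_000_000):.2f} GiB")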
  • edzieba - Wednesday, January 31, 2018 - link

    The development of 'ASIC resistant' algorithms is mainly based on choosing algorithms that are RAM-bandwidth dependent. Reducing the number of RAM dies on board would make the cards unsuitable for that market, as it is the demand for RAM in the first place that is driving demand for the cards.
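    As a toy illustration of what makes an algorithm "memory-hard": each step below does trivial arithmetic but an unpredictable read into a large buffer, so throughput is bounded by memory bandwidth rather than compute. A minimal Python sketch, with a deliberately small buffer (real datasets are multi-GiB):

        import random

        # Toy memory-hard mixing loop: trivial math per step, but each step
        # makes a cache-defeating random read into a big buffer, so real
        # versions (multi-GiB datasets) are bound by memory bandwidth.
        SIZE = 1 << 20   # small here; Ethash's DAG is multi-GiB
        dataset = [random.getrandbits(64) for _ in range(SIZE)]

        def mix(seed, rounds=64):
            acc = seed
            for _ in range(rounds):
                acc ^= dataset[acc % SIZE]                 # unpredictable read
                acc = (acc * 0x100000001B3) & (2**64 - 1)  # cheap FNV-style step
            return acc

        print(hex(mix(12345)))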
  • ZolaIII - Wednesday, January 31, 2018 - link

    So who you gonna call? Ghostbusters? I'm afraid not. You're gonna call Samsung, which ramped up production, and naturally they will cash in the same way they did last year on the NAND shortage.
  • Duckeenie - Wednesday, January 31, 2018 - link

    Looking at how the question was structured, I'm surprised the reply was anything more than a blank stare!
  • mode_13h - Thursday, February 1, 2018 - link

    I don't quite see what you're on about. It starts with an observation, and then comes the question:

    "And is there any sort of acute shortages here, I mean can your foundry partners do they have the capacity to support you with a ramp of GPUs at the moment and is there enough HBM2 DRAM to source as well?"

    A pretty straightforward question about ramping production of GPUs and the availability of HBM2, really.
  • Dribble - Wednesday, January 31, 2018 - link

    The moment miners decide all those Radeons are no longer much good for mining, whether due to some mining disaster, a new range of ASICs appearing that blows away GPUs, or NVIDIA's new Volta GPUs being much better miners, they all go on eBay. Then you can't sell a thing, because users are buying used cards or new Volta GPUs.

    I would have thought that if you wanted to ramp up, you would have already done it; now is not the time to start. Ramping up production too late was what caught AMD out last time.
  • lmcd - Wednesday, January 31, 2018 - link

    I'm dying of laughter at "mining disaster," you've made my day, good sir.
  • boozed - Wednesday, January 31, 2018 - link

    If there's a chance it's been used to mine tulips, I wouldn't touch a used GPU with a ten-foot pole.
  • Wiliam - Wednesday, January 31, 2018 - link

    Sometimes I wonder why AMD doesn't try harder to put an APU more powerful than the 2400G on their roadmap, one that comes close to a Vega 56 or 64 by itself. Then we would at least have some hope of a way out of the dark circle of cryptomining.
    If AMD could make a more powerful APU, we would only have to struggle to find good DRAM; maybe this is an opportunity for AMD.
  • T1beriu - Wednesday, January 31, 2018 - link

    You can't just clap your hands and have a monster APU (700 mm²) magically appear.
  • Wiliam - Wednesday, January 31, 2018 - link

    LOL, I'm just wondering...
    At least if AMD were to build a more powerful desktop APU, we would have the opportunity to get "good" graphics from it, rather than just suffering while cryptominers hijack the best GPUs...
    Because AMD only makes the 2400G and 2200G on 14nm fabrication...
    If they can maximize production on 12nm manufacturing, maybe they can still maximize the opportunity for their APUs, although they have a "problem": they have to hold back the power of their APUs or it would break the deal with Intel, you know what I mean...
  • mode_13h - Thursday, February 1, 2018 - link

    First, APUs can only be so powerful before getting memory-bottlenecked. This is why consoles mate big APUs with GDDR5.

    Second, if APUs were big and had lots of memory bandwidth, then cryptominers would buy them also, and you'd still be left with empty store shelves and having to pay through the nose for them on ebay.
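    To put rough numbers on that bottleneck, a minimal sketch of theoretical peak bandwidth (bus width times transfer rate); the DDR4 configuration is an assumed typical dual-channel desktop build, the other figures are published specs:

        # Theoretical peak bandwidth = (bus width in bits / 8) * transfer rate.
        def gb_per_s(bus_bits, mega_transfers):
            return bus_bits / 8 * mega_transfers / 1000

        print(f"Dual-channel DDR4-2933: {gb_per_s(128, 2933):5.0f} GB/s")   # ~47
        print(f"PS4 GDDR5, 256-bit:     {gb_per_s(256, 5500):5.0f} GB/s")   # ~176
        print(f"Vega 56 HBM2, 2048-bit: {gb_per_s(2048, 1600):5.0f} GB/s")  # ~410

    A desktop APU on dual-channel DDR4 has roughly a tenth of the bandwidth feeding a discrete Vega, which is the bottleneck being described.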
  • msroadkill612 - Thursday, February 8, 2018 - link

    "APUs can only be so powerful before getting memory-bottlenecked" - hmm? A bottleneck is a bottleneck at any power level, no?

    Actually, depending on the app, all GPUs must access system memory at times, though less so with dedicated GPU memory as a cache.

    The real killer is not memory speed, but that cacheless APUs have historically had to perform ALL I/O via the PCIe bus, which is much slower than RAM; nearly ANY RAM can saturate the PCIe links.

    All that changes with Raven Ridge.

    The CPU, GPU, and memory controller are connected via the faster, smarter Infinity Fabric over a tiny distance on a single die.

    Considering this, plus improved memory and the very doable option of liquid-cooling both processors with a single cooling block, we need to be wary of using the past to judge present APUs.
  • msroadkill612 - Thursday, February 8, 2018 - link

    My guess is that APU power is restricted by TDP considerations, and we now have the new paradigm of liquid cooling them.

    IMO, you could extrapolate from the differences in TDP between known iterations of Vega to figure out how many CUs are doable on a liquid-cooled Raven Ridge APU.

    There remains a lot of wiggle room between the current 65 W Raven Ridge desktop TDP rating and maybe 200-300 W?
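    A rough Python sketch of that extrapolation, using the published Vega 56 (210 W, 56 CUs) and Vega 64 (295 W, 64 CUs) board powers; note it naively attributes the whole TDP delta to the extra CUs and ignores clock differences and the CPU's share of the APU budget, so treat the output as a loose guess:

        # Marginal watts per CU from Vega 56 vs Vega 64 board power figures.
        w56, cu56 = 210, 56
        w64, cu64 = 295, 64
        watts_per_cu = (w64 - w56) / (cu64 - cu56)   # ~10.6 W per extra CU

        base_tdp, base_cus = 65, 11   # Ryzen 5 2400G: 65 W TDP, 11 Vega CUs
        for budget in (150, 200, 300):
            extra = (budget - base_tdp) / watts_per_cu
            print(f"{budget} W budget: roughly {base_cus + extra:.0f} CUs")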
  • jjj - Wednesday, January 31, 2018 - link

    Don't be such a noob; they are not saying that production is constrained by DRAM supply, just that prices are high and that's uncomfortable. You need to edit this, as it is fake news; your interpretation is PhoneArena-worthy, LOL.

    The real issue is planning: it's hard to plan for cryptocurrency price spikes, and of course inventory in the channel has its own costs, so they can't overdo it.
  • jjj - Wednesday, January 31, 2018 - link

    Pseudo-edit
    If you don't believe me, contact AMD's PR/IR and ask them to clarify.
  • zodiacfml - Thursday, February 1, 2018 - link

    AMD: we wanted to, but the RAM shortage is as bad as the GPU shortage.

    I see that RX 560s are available at less than half the price of an RX 570.
  • ABR - Thursday, February 1, 2018 - link

    I remain completely mystified as to why AMD doesn't simply raise wholesale prices to match demand. Afraid of offending gamers? Puhleeze... they're in business to make money. Unable to violate some price-fixing arrangement with Nvidia?
  • Intel999 - Thursday, February 1, 2018 - link

    There are reports that AMD stopped providing Vega to the AIB channel in Q1. Instead, all Vega volume not going to Apple and Instinct is being sold directly as the Frontier Edition. AMD sells this to resellers at over $600 per unit, compared to closer to $250 when it sells to the AIB manufacturers. The problem remains that after they meet their Apple commitments and the Instinct requirements of Baidu and others, they only have so much left over for Frontier cards.
