
  • Fataliity - Tuesday, January 7, 2020 - link

    You two have a similar face but different nose. Related? =p
  • goatfajitas - Tuesday, January 7, 2020 - link

    I feel like there are so many pics of Ian on this site these days. LOL.
  • FreckledTrout - Tuesday, January 7, 2020 - link

    IanTech, where we put your chips to the test to see which one tastes the best.
  • Threska - Tuesday, January 7, 2020 - link

    Always thought he was older.
  • Ian Cutress - Monday, January 13, 2020 - link

    I'm 34, that's two years older than when Anand 'retired'. I look 24. :)
  • Smell This - Tuesday, January 7, 2020 - link

    Related in that they have that 'Doc of Letters' hand shake?

    We need a follow-up on how MS improved 'firmware' on AMD mobile __ past, and going forward. I got an update on a 3500U that makes me giggle ... ;-)
  • scineram - Wednesday, January 8, 2020 - link

    wat mean?

    Your BIOS update gave a visible improvement? Windows update?
  • Smell This - Wednesday, January 8, 2020 - link

    AMD Ryzen 5 3500U w/ Radeon Vega 8

    I got a firmware update from Lenovo (v1.22?)
    Win10 v1903 (KB4524570 & -4530684) ... and dotNET updates
    Radeon 19.12.2 (from 18.41.4 ?)

    I don't have any fancy monitoring tools. I use HWMonitor (v1.93?) and the CPU 'package' idled @ 1.65W or so. With deep sleep/hibernation it reported 0W at 1.04A after waking.

    3DMark improved 5 to 20% (not sure why Ice Storm went from 49xxx to 58xxx!). Overall, it was 7-8%.

    CB15 was 644 with OpenGL 50+ a bit. CPU load was 2.93GHz'ish (no sign of throttling ...). I ran it off an old HP USB3 stick ... :-0

    Not bad for a $375 lappy with a 256GB SSD and FHD screen!
  • The_Assimilator - Wednesday, January 8, 2020 - link

    Nobody cares about your synthetic benchmark scores.
  • Carmen00 - Thursday, January 9, 2020 - link

    ...except for the person who asked. Did you really need to log in and be unnecessarily rude to a stranger? Just something for you to think about, going forward.
  • Spunjji - Thursday, January 9, 2020 - link

    How else are they supposed to characterise an improvement with repeatable apples-to-apples testing? If you can't be nice to people who weren't even talking to you, maybe think about logging off for a while.
  • Smell This - Thursday, January 9, 2020 - link

    No worries __ the higher the chimp climbs the tree, the more people can see his Ice (Lake Fail).

    I'm simply comparing an off-the-shelf $400 Lenovo AMD laptop, that I purchased, to the articles on AT regarding the AMD Surface 3 and Ice Lake. You may draw your own conclusions, and I'm happy to post screen-caps of my results.

    I think Dr Su 'sand-bagged' Brett and Ian (and Intel and MS), and look forward to future articles from AT on 2020 AMD 4K mobility products.

    That's all.
  • RollingCamel - Tuesday, January 7, 2020 - link

    Any news about Ryzen Pro and Threadripper Workstations? OEMs are a bit slow pushing out Ryzen Pro workstations. Also, any plans to offer Ryzen Pro on retail?
  • Drazick - Tuesday, January 7, 2020 - link

    They need to invest in software frameworks.
    Something equivalent to Intel MKL, IPP, etc...

    Otherwise, it will be hard to extract the best performance from the CPU.

    They have a few efforts in that department, but it seems to be too little.
  • eachus - Wednesday, January 8, 2020 - link

    https://developer.amd.com/amd-aocl/amd-math-librar... AMD had IPP replacements/look-alikes a decade ago, but AFAIK these are all deprecated in favor of using Intel's IPP, which is hardware agnostic. Personally, I used to follow this sort of thing, but since I retired I've used GNAT, the Ada front-end for GCC, almost exclusively. GNAT supports all the standard IEEE floating-point formats including 80-bit IEEE extended. GNAT uses the GCC back-end, and will generate SIMD instructions where appropriate. Ada also provides a generic math library which can be instantiated for any floating-point type. Can you use it to roll your own 16-bit float type? Sure. (Note that by "your own" here I mean a different size exponent and mantissa than the standard FP16.)
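For the curious, the "roll your own float" idea the comment describes can be sketched in a few lines. This is Python rather than Ada, and a simplified rounding scheme rather than bit-exact IEEE behaviour; the function name and defaults are illustrative only:

```python
import math

def quantize(x, e_bits=5, m_bits=10):
    """Round x to the nearest value representable with e_bits of exponent
    and m_bits of mantissa (normals only: no inf/nan/denormal handling)."""
    if x == 0.0:
        return 0.0
    bias = (1 << (e_bits - 1)) - 1        # e.g. 15 for a 5-bit exponent
    m, e = math.frexp(abs(x))             # abs(x) == m * 2**e, 0.5 <= m < 1
    e = max(min(e, bias + 1), 2 - bias)   # clamp to the representable range
    scale = 1 << m_bits
    m = round(m * scale) / scale          # keep m_bits of mantissa
    return math.copysign(math.ldexp(m, e), x)
```

For example, `quantize(1/3)` returns a value within about 2e-4 of 1/3, roughly what a 16-bit format can resolve; changing `e_bits`/`m_bits` trades range for precision, which is the whole point of a custom format.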
  • mrdude - Tuesday, January 7, 2020 - link

    Such a stark difference between the Bulldozer era and now. Although AMD is still holding cards close to their chest, there isn't a sense of overselling and sneakily misrepresenting their products, roadmaps, and performance. Their confidence is well deserved, and I certainly hope they continue.

    I do, however, have reservations regarding their GPU/Radeon side. Whereas Intel was willing to rest on its laurels, nVidia has shown significant generational improvements to performance and perf-per-watt without relying on node shrinks, dating way back to 28nm Kepler. I agree that the performance jump with Vega for the mobile sector is a great sign, but Navi (in comparison to 12nm Turing) is still built on a more expensive node with limited volume and still shows a perf-per-watt detriment.

    ATi desperately needs its own Zen moment. I would argue that it's likely going to be even more difficult to achieve.
  • Threska - Tuesday, January 7, 2020 - link

    "ATi desperately needs its own Zen moment. I would argue that it's likely going to be even more difficult to achieve. "

    Isn't that what RDNA is for?
  • BenSkywalker - Tuesday, January 7, 2020 - link

    How do you think the 780Ti and the 1080Ti stack up as head to head competitors? Would you say it's close?

    Absurdly stupid question, obviously. That is the difference a full node process shrink makes. Right now AMD has a full node advantage over nVidia; the 1060 was better than the 780Ti, and markedly so. How do you think the 5600 is going to do against the 2080Ti?

    Marketing isn't going to help AMD on the GPU side, nVidia is closing in on launching their 3 series GPU line with a full node drop from their prior generation. NVidia could both fail horribly on an engineering basis *and* overprice absurdly and still beat AMD badly.

    Silly marketing tactics are what Intel did when they were beating the brakes off of AMD; only solid engineering is going to save them now. Same thing with AMD: every generation we get a new buzzword that's going to change the game - async, HBM, RDNA - yet we don't see the kind of quality engineering they absolutely have to have to remain viable in the desktop space.
  • Spunjji - Thursday, January 9, 2020 - link

    It is a stupid question, but not for the reasons you're trying to imply.

    For a start, you're skipping Maxwell in your comparison. Nvidia made a dramatic improvement from the 780Ti to the 980Ti on the same process node - anywhere from a 25% to 100% performance improvement depending on the game and settings. The 1080Ti isn't just a die-shrunk and expanded 980Ti either - they did a bunch more architectural work for Pascal.

    My point here is that you're talking about a full node shrink as if it's the sole key to unlocking Even More Performance, but it's really not that straightforward - which is why comparing the 1060 to the 780Ti (2 technology generations and a node shrink) and then comparing the 5600 to the 2080Ti (same technology generation, different nodes) makes no sense whatsoever.

    Expecting a 251mm^2 GPU to perform anything like a much larger GPU from the same technology generation (it would still be at least ~390mm^2 on 7nm) is not reasonable - even less so because the 5600 isn't the fully enabled variant of that GPU.

    And yes, I do realise I'm implying AMD is way behind Nvidia in releasing this "generation" of GPUs. I don't think that's contentious. Vega was the same generation as Pascal, Navi is the same generation as Turing, and everybody knows AMD was late to the table with both.
  • Spunjji - Thursday, January 9, 2020 - link

    Correction on the die size comparison there: Vega 10XT (14nm) to Vega 20 (7nm) went from 486mm^2 to 331mm^2 with no changes but the node. Assuming similar scaling, the 2080Ti on TSMC 7nm with no other changes would be around 500mm^2 - not 390mm^2 as I guessed before.

    Even accounting for the RT hardware, you'd still expect a chip to perform better than a competitor from the same generation when it's literally twice the size on a comparable process.
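The die-size estimate above is simple linear scaling; as a back-of-envelope sketch (using the commenter's Vega figures plus TU102's published 754mm^2 die size):

```python
# Back-of-envelope die-size scaling using the figures from the comment,
# treating Vega 20 as a pure node shrink of Vega 10 (as the comment does).
vega10_14nm = 486.0   # mm^2, Vega 10 on GloFo 14nm
vega20_7nm = 331.0    # mm^2, Vega 20 on TSMC 7nm
shrink_factor = vega20_7nm / vega10_14nm   # ~0.68x area

tu102_12nm = 754.0    # mm^2, TU102 (RTX 2080 Ti) on TSMC 12nm
tu102_7nm_est = tu102_12nm * shrink_factor
print(round(tu102_7nm_est))   # ~514 mm^2, i.e. "around 500mm^2"
```

The linear-scaling assumption is rough (I/O and analog blocks shrink worse than logic), but it supports the ~500mm^2 figure in the comment.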
  • BenSkywalker - Thursday, January 9, 2020 - link

    Please explain how 'architectural work' is different than engineering.

    My point was AMD's engineering is far behind nVidia and is only being masked by a full node process advantage. Your 'rebuttal' is to point out that nVidia has better engineering?

    I compared the 1060 to the 780Ti as a general example of what a full node drop with genuine architectural improvements looks like. We have not seen that from AMD.

    Last point, if you want to compare die size straight across let's turn on ray tracing where the 2080Ti is several orders of magnitude faster than the RDNA counterparts. No, I'm not suggesting that's valid but if we are discussing die size it's rather disingenuous to ignore one has a lot of transistors dedicated to something the other can't do.
  • Threska - Thursday, January 9, 2020 - link

    Stupid indeed.

    https://wccftech.com/amd-radeon-rx-navi-high-end-u...
  • TheinsanegamerN - Wednesday, January 8, 2020 - link

    Only if AMD were willing to release a big chip to fight the 2080 Ti, which they are very reluctant to do. And they've horribly overpriced both the 5500 and 5600 chips. Oh, and there is the whole problem of nVidia actually COMPETING, something Intel doesn't do, and nVidia having not yet reaped the benefits of 7nm. RDNA on 7nm isn't as efficient as 12nm nVidia, so unlike Zen, RDNA is still a full generation behind.
  • flyingpants265 - Wednesday, January 8, 2020 - link

    I know absolutely nothing about building gpus. But how hard can it possibly be? I know for example, that a Snapdragon 845 is $25, while a Snapdragon 660 is $10, and the difference in phone cost is up to $500. Could AMD and partners lower the price of those cards if they wanted to? What's the true cost to produce a 5500xt, including recouping development costs, and at what margin?

    Maybe they are trying to get rid of old RX 580 stock? Because in my mind, $149 is a good price for these cards.
  • Carmen00 - Thursday, January 9, 2020 - link

    How hard can it possibly be to build a GPU? Well. There's a reason that the people doing these things often have PhDs, and millions of dollars behind them, and billions of dollars of fabrication equipment, and it still takes them years to get it right (and they still sometimes get it wrong!).

    Mobile GPUs are aiming for a completely different market where power constraints are much more significant. You cannot take a GPU built for a 4W envelope and expect it to compete in a market where 100W+ is the norm.
  • sarafino - Thursday, January 9, 2020 - link

    If designing GPUs were easy, Intel would be a lot more competitive in performance than they currently are, especially considering they've been designing them for 20 years, albeit mostly in integrated applications.
  • mrdude - Wednesday, January 8, 2020 - link

    If RDNA is their Zen moment, then I hope they significantly improve the second generation.

    Intel barely nudged their architecture forward and showed paltry devotion to pushing the envelope. They opted to spend most of their money on stock buy-backs rather than invest in R&D. nVidia hasn't made those mistakes. Intel's generational improvements amounted to single- or barely-double-digit gains, whereas nV doubled perf-per-watt. That's a stark difference.

    As it stands, RDNA is more expensive to make than competing nV products: it's on 7nm (a smaller node with tighter supply) and less power efficient (costlier circuitry). In contrast, Zen compares far more favorably to Intel's latest and greatest on both counts.

    RDNA isn't their Zen moment. Certainly not in its current iteration.
  • haukionkannel - Wednesday, January 8, 2020 - link

    Zen 1 was good, but there were a lot of improvements in Zen+. We know that Zen 2 was really good and Zen 3 seems very promising. So yeah... RDNA 2 and RDNA 3 will improve on first-gen RDNA.
  • BenSkywalker - Wednesday, January 8, 2020 - link

    The problem is minor tweaks to RDNA are competing against nVidia engineers given a full node die shrink. This is the company that drives the most powerful computers in the world *and* some of the most efficient.

    With a half-node drop you can normally choose roughly 30% more performance or roughly 30% better efficiency. With a full node drop, nVidia could be looking at roughly doubling their performance per watt (100 FPS at 100 watts versus 130 FPS at 70 watts).

    Our only hope for competition's sake is that nVidia gets absurdly greedy and targets 70% margins(or higher).
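The "roughly doubling" figure above is just the ratio of the commenter's two hypothetical operating points:

```python
# Perf-per-watt check: 100 FPS @ 100 W (old node) vs 130 FPS @ 70 W (new node).
# Both operating points are the commenter's hypotheticals, not measured data.
baseline = 100 / 100        # 1.00 FPS per watt
full_node = 130 / 70        # ~1.86 FPS per watt
improvement = full_node / baseline
print(f"{improvement:.2f}x")  # prints 1.86x, i.e. "roughly doubling"
```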
  • Spunjji - Thursday, January 9, 2020 - link

    If AMD can have a "Maxwell moment" (a bit like Intel's Haswell moment, only more impressive) between RDNA and RDNA 2 then things might be more competitive than the manufacturing technology might otherwise suggest.

    I'd be surprised, though. RDNA was already an impressive improvement on Vega, and I'd be amazed to see them take a similarly large step twice in a row.
  • Cooe - Friday, April 23, 2021 - link

    I love just how freaking stupid the future made you look. xD
  • Nicon0s - Thursday, January 9, 2020 - link

    It's obvious that AMD had, and still has, a long way to go to match Nvidia in gaming, but the first-generation RDNA cards show very positive improvements, similar to the first GCN cards.
    I suspect AMD can squeeze more performance out of the 7nm node and RDNA, and the fact that they went with 7nm early will help a lot.
    At least half of Nvidia's GPUs sold in 2020 should still be on 12nm, and their high-end 7nm cards will be very expensive, so AMD should have room to compete on price/performance.
  • cwolf78 - Thursday, January 9, 2020 - link

    RDNA is to AMD's GPUs what Zen is to their CPUs. It's a very scalable architecture they will be improving on with a regular cadence, like what they've done with Zen. Lisa Su has already pointed out that the power gating and efficiency learnings they've gained with the development of the Ryzen 4000 APUs will be applied to their Navi and other GPU products as well. In addition, RDNA 2 is shaping up to be *much* more power efficient compared to the first gen. I don't know if it's going to beat Nvidia or anything, but it should give products featuring "Big Navi" a much more reasonable TDP than they'd have otherwise.
  • Threska - Friday, January 10, 2020 - link

    Someone who understood my statement.
  • FreckledTrout - Tuesday, January 7, 2020 - link

    Funny, Lisa knew you, Ian, were not about to drop the Zen 3 in 2020 question :) Well, to be honest, she should have been more clear. I do wonder, with AMD being so hush-hush, if they are trying to get Zen 3 out sooner, like end of Q2 or very early Q3? AMD is up to something here with not sharing roadmaps, and I tend to think it's so Intel can't plan around their Zen 3 timing.

    Good questions all around.
  • Hul8 - Tuesday, January 7, 2020 - link

    "You'll hear more [about Zen 3] in 2020" does not equate to "Zen 3 will be announced in 2020", never mind "Zen 3 will be released in 2020". (It could, though.)

    The obliqueness is deliberate. Many people would be willing to wait and see if they heard that the next generation was releasing in 6 - 9 months' time. The time frame between the announcement - even with just an approximate release date - and the release itself would see sales of current products plummet, unless there were sizable price cuts. AMD wants you to concentrate on their current portfolio, and be interested in it, for as long as possible, without the distraction of an "even better" something on the horizon.
  • Hul8 - Tuesday, January 7, 2020 - link

    *announcement (instead of announced)
  • extide - Wednesday, January 8, 2020 - link

    Zen3 has already been "announced"

    It's on public roadmaps

    It's done (according to public roadmaps)
  • ordray - Tuesday, January 7, 2020 - link

    But she did say that we would see Zen 3 in 2020, so unless she's talking about a simple reveal in 2020 and a release in 2021 - which I doubt, with her being that coy about it - I'm expecting that we will see a release.
    I think you are correct, though, that she doesn't want to eat into their Zen 2 sales, since it's still a relatively new product.
  • WaltC - Wednesday, January 8, 2020 - link

    I was surprised to see that nobody asked the obligatory question about Zen 3, to be released likely Q4 2020, imo. Which is: "Will Zen3 be AM4?" I think it will be, but have no definite confirmation of that, of course.
  • Threska - Tuesday, January 7, 2020 - link

    The "Osborne effect".
  • WaltC - Wednesday, January 8, 2020 - link

    I didn't see anything opaque or oblique in the following quote:

    "Rather than ask me the question three times Ian [laughs], let me clear: you will see Zen 3 in 2020!"

    Last sentence--I think some people were skimming instead of reading...;)
  • lightsout565 - Tuesday, January 7, 2020 - link

    Great read!

    Small typo --
    PC World: With Threadripper now at 64 cores, is there a limit to how far you can push the core count for consumers?

    LS: Is there a limit? [launchs]
  • Threska - Tuesday, January 7, 2020 - link

    Memory bandwidth per core might become an issue.
  • extide - Wednesday, January 8, 2020 - link

    woooosh
  • ianisiam - Wednesday, January 8, 2020 - link

    That one made me laugh because I read it as "launches" and imagined her suddenly blasting off like a rocket.
  • MikeSmith007 - Tuesday, January 7, 2020 - link

    Dear Ian,

    The next time you get to meet our wonderful and beloved AMD folks, Dr. Su and Mr. Papermaster, maybe you could ask them a simple question: why doesn't AMD release their Ryzen PRO chips to the Enthusiast-Consumer (EC) market? Let's all face the truth - manufacturers like Lenovo and HP are pairing AMD's best silicon with some of the most sub-par components on the market. Just look at the teardown videos of their desktops; you will see how cheap the components are on their motherboards. Pairing an AMD Ryzen PRO chip with a Lenovo/HP motherboard is like putting a Ferrari engine into a Hyundai frame. What is the point?
    Being both a pro user and a business owner, the quality and security features of my hardware choices are not just important, they are paramount. Personally, I would choose an AMD Ryzen PRO chip over a regular Ryzen anytime, because I run a single workstation with several VMs on it for both personal and business use. So in my true and honest opinion, AMD highly underestimates the value of offering their PRO line chips to the consumer enthusiast market, and in doing so, is also cutting their own profits. Hopefully in 2020 AMD gets to "Ryzen" to the enlightening idea of offering their PRO line chips to a wider audience.

    Sincerely
  • MikeSmith007 - Tuesday, January 7, 2020 - link

    "To be truly successful, one must also be open-minded"
  • FreckledTrout - Tuesday, January 7, 2020 - link

    They asked this in about the only way possible. Obviously AMD knows, and from the way Lisa responded, they fully intend to make sure this improves.
  • Topweasel - Tuesday, January 7, 2020 - link

    Ryzen Pro is just their more OEM-focused SKUs (65W and lower) with software and BIOS support packages for enterprise features (like encrypted memory, and remote access like vPro). Not really any better. Things like Asus with the 35W H parts and the Surface Laptop Ryzens are the only chips with truly better silicon, and when a company is paying for their own SKU, it's not going into a bargain-basement product.
  • Hul8 - Tuesday, January 7, 2020 - link

    There are also ECC UDIMMs being fully and officially supported (by both AMD and the OEM). That's the extra feature that a home enthusiast might be interested in (for home server use).
  • Hul8 - Tuesday, January 7, 2020 - link

    That's only true for the CPUs, though - not APUs (which don't support ECC at all).
  • Irata - Tuesday, January 7, 2020 - link

    Great Q&A sessions and thanks for being persistent. Seems like Lisa Su cannot dodge your questions as well as others' :)
  • webdoctors - Tuesday, January 7, 2020 - link

    I wonder what SW developments will drive a market for more core counts. It seems to have plateaued over the last 5 years.
  • AshlayW - Tuesday, January 7, 2020 - link

    Probably because Intel stagnated the mainstream market at 4 cores for ~7 years. Give it time.
  • TheinsanegamerN - Wednesday, January 8, 2020 - link

    The professional market has had access to 8-core CPUs for a very long time, and game consoles have had 8 cores for 7 years now.

    Quit using Intel's quad cores as a scapegoat. Developers have been entirely too slow to embrace a multi-core future.
  • scineram - Wednesday, January 8, 2020 - link

    Because it is nonsense.
  • Korguz - Wednesday, January 8, 2020 - link

    TheinsanegamerN/scineram: then would you care to explain why, until Zen 1 came out, Intel stuck the mainstream desktop CPU market at 4 cores for so many years?? Professional/HEDT and mainstream markets are 2 different markets...
    "Developers have been entirely too slow on embracing a multi core future." Why would they embrace the mainstream market when it's been stuck at quad core??
  • Korguz - Wednesday, January 8, 2020 - link

    that reply should have been farther up... grrrr
  • Korguz - Wednesday, January 8, 2020 - link

    * sigh * never mind.. its where it should be....
  • Spunjji - Thursday, January 9, 2020 - link

    The professional market had access to dual-CPU systems for decades before they became mainstream. The point is that *until they became mainstream*, there was no reason for software developers to write for them.

    Similarly, the 8-core mobile chips in current consoles are more than matched by 4 solid cores + HT/SMT. Given that the mainstream CPU offerings stagnated on that paradigm for so long, it's no surprise that video games designers and engine coders didn't embark on a journey to write code for hardware that did not yet exist.
  • sarafino - Friday, January 10, 2020 - link

    "The professional market has had accessto 8 core CPUs for a very long time..."

    They sure did and Intel was charging a fortune for those processors, so they were extremely niche cases.
  • evernessince - Tuesday, January 7, 2020 - link

    Entirely incorrect. There are many games on the market that can use 6 - 8 cores now.
  • Threska - Tuesday, January 7, 2020 - link

    Cheap VFX software for starters.
  • Carmen00 - Thursday, January 9, 2020 - link

    My prediction: the rise of functional programming. Share-nothing trivially-parallelized workflows, available in most modern languages? Sign me up.
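A minimal sketch of the share-nothing pattern the comment points at: each task is a pure function of its own input, no mutable state is shared, and the parallel map gives the same answer as the sequential one. (Names here are made up for illustration; any pure function slots in.)

```python
from concurrent.futures import ProcessPoolExecutor

def work(n):
    # Pure function: output depends only on the input, no shared state.
    return n * n

def run_parallel(inputs):
    # Each worker process gets its own copy of the input; results come
    # back in input order, exactly like the sequential list comprehension.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(work, inputs))

if __name__ == "__main__":
    assert run_parallel(range(5)) == [x * x for x in range(5)]
    print("parallel map matches sequential map")
```

Because nothing is shared, scaling this to more cores is just a matter of feeding the pool more inputs, which is why functional-style workloads are a plausible driver for high core counts.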
  • ksec - Tuesday, January 7, 2020 - link

    The biggest answer coming from this interview is we can stop the TSMC capacity nonsense.

    AMD was conservative, or the market was too excited (I am betting on the former), and their forecasts have not been keeping up with reality. And TSMC doesn't have extra capacity just sitting around waiting.

    I hope they do much better with Ryzen Mobile. It is a much bigger pie in terms of unit volume.
  • Yojimbo - Thursday, January 9, 2020 - link

    I don't understand what you mean. What is the TSMC capacity nonsense?

    Why aren't OEMs going to AMD to get around the Intel supply shortages? Because validation for large scale products takes time and demands assurances, and AMD cannot promise the necessary volume to make such a product roll-out worthwhile. Why not? Because TSMC wafer supply is tight. Why is TSMC wafer supply tight when they were expecting a fast AI boom as well as Bitmain and NVIDIA to be producing on the node considering that now AI ASICs are still very low volume, the crypto boom burst and NVIDIA decided to sit out, presumably after considering the status of the node? My guess is it is because TSMC's 7 nm efforts have issues. Who is taking the capacity of the node that you seem to imply AMD could have had if they staked a claim to it before? I'm guessing a lot of space is going towards ramping up the rejiggered processes such as 7+, 7NP, and 6, and the yield of the 7 process is not that good, as reflected by AMD's margins.

    It's this volume situation that the people claiming Intel is failing at fabrication and TSMC is taking over aren't taking into consideration. The product, its performance, and its volume need to be considered together from a fabrication standpoint (as well as from the standpoint of an OEM offering support for a product line).
  • Spunjji - Thursday, January 9, 2020 - link

    "Who is taking the capacity of the node that you seem to imply AMD could have had if they staked a claim to it before"

    There's the clincher. Based on what we know, it looks like AMD have taken everything they can get.
  • andykins - Thursday, January 9, 2020 - link

    Apple is, of course! Although they'll be much less reliant on 7nm later this year when they transition to 5nm chips for their latest mobile devices.
  • sarafino - Friday, January 10, 2020 - link

    AMD was conservative on their wafer orders because they can't afford to spend all their money on excess silicon that they won't be able to sell out of before the next-gen stuff arrives, especially when it's 7nm stuff that already has a higher cost than older nodes. They were trying to buy only as much silicon as they thought they could sell, and as it turns out, early demand for Zen 2 appears to have exceeded their expectations.
  • quadibloc - Wednesday, January 8, 2020 - link

    AMD has definitely pulled ahead in laptops with their 4000-series APUs. Since that's where Intel was ahead, it will tilt the balance significantly in AMD's favor. But even with more 7nm capacity coming, when Apple moves to 5nm, will AMD have enough capacity available to it to be able to fully benefit from this? And there's Intel's recent announcements; their neural net assist instructions are the one thing that impressed me.
  • jospoortvliet - Wednesday, January 8, 2020 - link

    > AMD has definitely pulled ahead in laptops with their 4000-series APUs.

    We haven't seen independent benchmarks yet; let's wait for those before we say so ;-) The 3k series is OK but behind Intel's greatest, esp on battery life. Maybe 4k catches up, maybe not... it will matter a lot how they really compare.
  • Nicon0s - Thursday, January 9, 2020 - link

    "Maybe 4k catches up, maybe not..."

    I mean, AMD can do a 15W 8-core mobile CPU on 7nm while Intel is stuck on 14nm for such high-core-count parts. AMD's power efficiency goes up to 2x vs Intel's 14nm CPUs.
    If you compare Intel's 10nm CPUs with AMD's 7nm ones, Intel might pull ahead in idle efficiency and that's about it.
    Anyway, OEMs seem very interested in AMD's mobile Ryzen, as interested as they were in Ryzen 3000 CPUs, so AMD should gain laptop market share much faster this year.
  • TheinsanegamerN - Wednesday, January 8, 2020 - link

    Hold your horses there bud. We only have AMD's word, which is infamously unreliable, that the 4000 series hoses intel in mobile. Let's see the benchmarks first.
  • Korguz - Wednesday, January 8, 2020 - link

    TheinsanegamerN and intels word has been more reliable the last few years ?? come on...
  • WaltC - Wednesday, January 8, 2020 - link

    Are you joking? For the last few *years*, AMD's "word" has been golden--their execution rate is 100%. Much better than Intel promising 10nm 3-4 years ago without getting their yet...;) It's Intel that is "famously unreliable" of late, and you can take that to the bank...;)
  • WaltC - Wednesday, January 8, 2020 - link

    I meant "there"--one of these days AT will surprise me and put in an edit function out here...;)
  • The_Assimilator - Wednesday, January 8, 2020 - link

    No, their word has not been golden. The marketing slides put out by companies are never truthful, and never will be, and it doesn't matter which company it is. AMD's have become more truthful since the Bulldozer days, but if you are willing to take their word at face value you should also be willing to buy a bridge from me. Don't buy into the hype, buy into actual benchmarks by impartial third parties.
  • Korguz - Wednesday, January 8, 2020 - link

    maybe not golden.. but still worth noting.. where intels.. is a joke..
  • Spunjji - Thursday, January 9, 2020 - link

    Don't move the goalposts. You said "We only have AMD's word, which is infamously unreliable".
    In current context (2016 onward, CPU-related) that's a lie.

    I'm tired of repeating this: yes, we should wait for product reviews before declaring winners and losers. No, we don't have to pretend that we know nothing about what those products will be like until they're released.
  • sarafino - Friday, January 10, 2020 - link

    Bulldozer-era AMD was a very different company run by different people, with different approaches to how products were designed, manufactured, and marketed. If we look at what AMD has stated pertaining to everything Zen-related, they have delivered on the vast majority of their claims. When things haven't worked out the way they said they would, they acknowledged the issue, explained what's going on, and offered up solutions. That is as much as you can ask of a big tech giant.
  • PierreJG - Wednesday, January 8, 2020 - link

    Well, the mentality was: Intel's the best, AMD's crap. I know a company here that used to sell just Intel. I saw the architecture of the first Ryzen CPU before it was launched. I told them that soon they would have more AMDs than Intels. They still laughed and said that would never happen. Now they are eating their words. They don't even want to speak to me :D
  • remosito - Wednesday, January 8, 2020 - link

    Can't believe nobody asked when we are finally gonna see HDMI 2.1 outputs....
  • mdriftmeyer - Friday, January 10, 2020 - link

    That, and the fact she bypassed TB3 and no one followed up and asked her to directly address the lack of Thunderbolt.
  • sheepdestroyer - Wednesday, January 8, 2020 - link

    Ian uses the Dr title for himself in front of his questions, but not for Lisa's answer?
  • Ian Cutress - Monday, January 13, 2020 - link

    I use it in the first instance. In this case, look at the title of the article.
  • quadibloc - Wednesday, January 8, 2020 - link

    Given how important the ability to use Windows applications programs is to many people, focusing on x86 is indeed "the right thing to do"; being able to make x86 processors, instead of being locked out of that market, is almost a license to print money. As both the Bulldozer years and the fate of Cyrix proved, though, you still have to be competitive, but AMD is meeting that challenge.
  • sarafino - Tuesday, January 14, 2020 - link

    Surprisingly, even Centaur announced a decent (for what it is) new x86-64 processor.

    https://fuse.wikichip.org/news/3099/centaur-unveil...
  • Rudde - Friday, January 10, 2020 - link

    "You should expect that we will have a high-end Navi"
    "You should expect that our discrete graphics as we go through 2020 will also have ray tracing."
    A high-end Navi with ray tracing in 2020 is almost certain at this point.
  • realbabilu - Friday, January 10, 2020 - link

    I wish AMD also invested in software to optimize their products - like optimized BLAS/BLIS libraries to counter Intel MKL, which is superior on Intel and makes AMD chips look bad when they run MKL.
    AMD's ACML is not good; better to support OpenBLAS and BLIS to get every bit of performance out of AMD's massive core counts and AVX-512.
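For context on why tuned BLAS libraries like MKL, OpenBLAS, and BLIS matter: even before SIMD and threading, they restructure loops for cache locality. A toy pure-Python sketch of that one idea (illustrative only; real BLAS kernels are hand-tuned per microarchitecture):

```python
def matmul_naive(A, B):
    # Textbook triple loop: strides through B column-wise, which thrashes
    # the cache once the matrices outgrow it.
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

def matmul_blocked(A, B, bs=32):
    # Same result, but walks the matrices tile by tile so each bs x bs
    # block stays cache-resident -- the kind of restructuring BLAS
    # libraries do (on top of SIMD and threading).
    n, m, p = len(A), len(B), len(B[0])
    C = [[0.0] * p for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, m, bs):
            for jj in range(0, p, bs):
                for i in range(ii, min(ii + bs, n)):
                    Ai, Ci = A[i], C[i]
                    for k in range(kk, min(kk + bs, m)):
                        a, Bk = Ai[k], B[k]
                        for j in range(jj, min(jj + bs, p)):
                            Ci[j] += a * Bk[j]
    return C
```

Both functions return identical results; the blocked version is the one that keeps wide, many-core chips fed, which is why a vendor-tuned BLAS is worth real performance on AMD hardware.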
  • duvjones - Monday, January 13, 2020 - link

    Dr. Su is hedging quite the bet against ARM and RISC-V, but who am I to argue when things seem to be working. My question is with the semi-custom unit, because as much as Sony and Microsoft enjoy their chips... it seems rather limiting to be an x86/x64 shop in an ARM-based world. But we'll see how that pans out and which company becomes a customer of the semi-custom unit.
  • zamroni - Monday, January 13, 2020 - link

    AMD needs to add more PCIe lanes in Ryzen Mobile so OEMs can add a Thunderbolt chip.
  • urbanman2004 - Thursday, January 16, 2020 - link

    Lisa Su, easy does it like a boss
