
  • prophet001 - Thursday, June 17, 2021 - link

    Didn't read it all yet, but the part about how we should stop arguing about opcodes was pretty nice to hear. (Looks at Apple fanboys)
  • name99 - Thursday, June 17, 2021 - link

    Better read the whole thing then. Because his comments (especially about the importance of abstraction layers) don't mean what you think they mean...
  • mode_13h - Friday, June 18, 2021 - link

    He never actually says that. His stance on ISAs is pretty clear, if you read to the end of that section. Ian tried pretty hard to nail it down.

    > if I want to build a computer really fast today, and I want it to go fast, RISC-V
    > is the easiest one to choose. It’s the simplest one, it has got all the right features,
    > it has got the right top eight instructions that you actually need to optimize for,
    > and it doesn't have too much junk.

    > As you go along, every new feature added gets harder to do,
    > because the interaction for that feature, and everything else, gets terrible.

    > The marketing guys, and the old customers, will say ‘don't delete anything’,
    > but in the meantime they are all playing with the new fresh thing that only
    > does 70% of what the old one does, but it does it way better because it
    > doesn't have all these problems. I've talked about diminishing return curves,
    > and there's a bunch of reasons for diminishing returns, but one of them is
    > the complexity of the interactions of things. They slow you down to the point
    > where something simpler that did less would actually be faster.
  • GeoffreyA - Friday, June 18, 2021 - link

    I was glad to hear his statement about RISC-V. I just hope that if/when x86 goes down, AMD and Intel choose RISC-V. Windows is another problem, though; there isn't any RISC-V version as far as I know; and if they do release one, it'll take some time before x86-64 emulation is up and running.
  • mode_13h - Friday, June 18, 2021 - link

    We should recall that Tenstorrent is using some RISC-V cores from SiFive in an upcoming chip. That decision is kind of putting his money where his mouth is, although it could just mean that RISC-V was simply better in a single respect (TTM, PPA, PPW, or licensing cost) and "just good enough" in the others. He also cited SiFive's Chris Lattner, "one of the best compiler guys on the planet".

    TTM = time-to-market
    PPA = performance-per-area
    PPW = performance-per-Watt

    BTW, Intel is rumored to be trying to acquire SiFive for $2B. That could put an interesting wrinkle in Tenstorrent's long-term plans with them. But, by the time follow-on chips are being built, maybe Tenstorrent will have been acquired as well.

    I should add that one advantage of ARM over RISC-V is how strict ARM is in avoiding vendor-specific ISA extensions. That makes ARM code very portable by comparison with RISC-V. This isn't a problem for embedded uses of RISC-V, but it plays against its potential for success in general-purpose (or should I say "total purpose" ;-) computing.
  • Ian Cutress - Friday, June 18, 2021 - link

    The RISC-V cores are a simple strip on one side of the chip to do mid-cycle compute. It's the Tensix cores that do most of the heavy lifting.
  • mode_13h - Friday, June 18, 2021 - link

    Yes, definitely worth noting. However, Jim was keen to point out the 512-bit vector unit of the SiFive X280 core they'll be using. So, the fact that they aren't as fundamental to the chip as the Tensix cores doesn't mean they're not plenty important.

    For anyone interested in further details, check out Part 1 of the interview.
  • mode_13h - Friday, June 18, 2021 - link

    https://www.anandtech.com/show/16709/an-interview-...
  • mode_13h - Friday, June 18, 2021 - link

    ^ Part 1
  • GeoffreyA - Saturday, June 19, 2021 - link

    mode_13h, it would be interesting to see a showdown between RISC-V and ARM, though no doubt the picture would be muddied by ARM having better, more mature implementations. Anyway, I'm sure if AMD built one, they'd cover the ground quickly.
  • mode_13h - Sunday, June 20, 2021 - link

    You could get your wish, especially if Intel buys SiFive. It'd probably take several years for RISC-V implementations to catch up with the best-of-class big ARM cores.
  • FunBunny2 - Saturday, June 19, 2021 - link

    "Windows is another problem, though; there isn't any RISC-V version as far as I know"

    given the ubiquity of C cross-compilers, and that x86 (and everything else) is RISC on the metal, 'how hard can it be?' :) Last time I looked it up, the average Win app is on the order of 80% system calls anyway.
  • GeoffreyA - Saturday, June 19, 2021 - link

    It shouldn't be too hard for Microsoft to put together a RISC-V build. NT was designed to be portable. But will they? And if they don't, Intel and AMD might not be willing to invest in a RISC-V CPU.
  • mode_13h - Sunday, June 20, 2021 - link

    One would hope that most of what MS did to support native ARM builds would make it that much easier to port to RISC-V.

    As for the portability of Windows NT, it was actually supported on MIPS and DEC Alpha in the '90s. For a while, they maintained support for Itanium, too.
  • mode_13h - Sunday, June 20, 2021 - link

    I forgot: Win NT also supported PowerPC!
  • GeoffreyA - Sunday, June 20, 2021 - link

    I remember portability was one of NT's chief design goals, and it helped the OS to be better structured. I'm getting rusty, but I think mainly the hardware abstraction layer and drivers need to be rewritten or worked on. As for the kernel, I can't remember whether it sits above or beneath the HAL; generally, everything beneath needs work.
  • GeoffreyA - Sunday, June 20, 2021 - link

    Looks like work needs to be done on the HAL, the kernel, and the executive.

    "All known HAL implementations depend in some measure on the kernel, or even the Executive. In practice, this means that kernel and HAL variants come in matching sets that are specifically constructed to work together."

    "Abstracting the instruction set, when necessary (such as for handling the several revisions to the x86 instruction set, or emulating a missing math coprocessor), is performed by the kernel, or via hardware virtualisation."

    https://en.m.wikipedia.org/wiki/Architecture_of_Wi...
  • Silma - Monday, June 21, 2021 - link

    Windows NT, on which modern Windows is based, was from the start conceived to be processor-neutral. It worked on Alpha and MIPS, whose design RISC-V closely resembles.
    Pretty sure Microsoft could bake a RISC-V version of Windows quite quickly if they wanted to.
  • mannyvel - Thursday, June 24, 2021 - link

    What he really says about ISAs is "over time, it doesn't matter." Crufty stuff starts happening eventually, and instruction decode time is marginal from a system perspective. What seems to be more important is architecture and layering, because it allows for improvements over time... much like the NeXTSTEP architecture made it easier for Apple to move basically the same OS to multiple platforms.
  • Adonisds - Thursday, June 17, 2021 - link

    This is really great. I wish you could've asked him to recommend as many books as he could.
  • mode_13h - Friday, June 18, 2021 - link

    Yes, although:

    > It's hard to say, ‘read these four books, it'll change your life’.
    > Sometimes a [single] book will change your life.
    > But reading 1000 books will [certainly] change your life, that's for damn sure.

    Still, let's piece together what we can:

    > You'd be better off reading Carl Jung than Good to Great if you want to
    > understand management.

    > I actually contacted Venkat (Venkatesh) Rao, who's famous for the Ribbonfarm blog

    ...which, it turns out, is still going. Moreover, it has a "Now Reading" section, as well as a short list of "Books by contributors and editors of ribbonfarm".

    * https://www.ribbonfarm.com/now-reading/
    * https://www.ribbonfarm.com/our-books/

    Later, he recommends:

    > Read Shakespeare, Young (Carl Jung?), a couple of books, Machiavelli,
    > you know, you can learn a lot from that.

    Second only to that, I think his best advice is really:

    > then you go on Amazon and find the best ones

    The positive reviews are somewhat useful for figuring out whether the people who liked it had goals and preconditions similar to yours. However, the negative reviews are possibly even more useful in this regard.

    You can also find a bunch of curated lists on GitHub and elsewhere, to give you ideas about books to check out.
  • GeoffreyA - Saturday, June 19, 2021 - link

    "Read Shakespeare"

    I'd say, if a person had time to read only one book in life, go for this man's work. The great master who knew everything. Best of all, it doesn't just possess wisdom but is beautiful too. The language never reached a higher point, despite all the fancy attempts of the moderns.
  • jamesindevon - Thursday, June 17, 2021 - link

    "Floating-point cache"? Did Jim mis-speak, or is there a mistake in transcription, or is there really a floating-point cache in Zen? Where does that fit in the cache hierarchy?
  • Fataliity - Thursday, June 17, 2021 - link

    If I had to guess, I would say he's talking about the registers that store what is being calculated in the floating-point units.
  • GeoffreyA - Friday, June 18, 2021 - link

    Yep, the floating-point register file. Could be.
  • mode_13h - Friday, June 18, 2021 - link

    Really? Look at the other things he's calling out in that section, and ask yourself if the FPU registers are on the same level of architectural significance.

    Perhaps it was more like "floating point; register file", as in two separate things.
  • mode_13h - Friday, June 18, 2021 - link

    Oops, should be: "floating point; cache"
  • eachus - Friday, June 18, 2021 - link

    I think "floating-point cache" is what was intended. If you load or store a floating-point value, it is not required to be in a single cache line. If your code is generated by a compiler, the compiler authors try very hard to ensure that FP values don't cross cache line boundaries, but it is not always possible. So what is a CPU designer to do? A very small level zero cache that can marshall and unmarshall the bytes. (If you are writing a compiler or trying to write fast assembly language, look up PREFETCHNTA. It won't read a full cache line but will pull the bytes of the FP value into this "invisible" cache.)
  • mode_13h - Friday, June 18, 2021 - link

    > If you load or store a floating-point value, it is not required to be in a single cache line.

    But it's not only floating-point values that x86 allows to be unaligned. So, that still doesn't explain it.

    > look up PREFETCHNTA. It won't read a full cache line but will pull the bytes of the FP
    > value into this "invisible" cache

    Um, no. You should read up on the non-temporal stuff Intel added (first, in SSE). In my experiments (long ago), it doesn't completely circumvent the cache hierarchy, but rather is limited to evicting just one set. The "non-temporal" part is just a hint, rather than a guarantee it won't affect the caches.

    For more, see: https://stackoverflow.com/questions/53270421/diffe...
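
    If anyone wants to poke at this themselves, here's a minimal sketch (mine, not from the article) of how the SSE non-temporal hints get used in practice. The function name, alignment, and size assumptions are purely illustrative:

    #include <immintrin.h>
    #include <stddef.h>

    /* Toy streaming copy using the non-temporal hints discussed above.
       Assumes dst is 16-byte aligned and n is a multiple of 4;
       remainder handling is omitted for brevity. */
    void copy_stream(float *dst, const float *src, size_t n)
    {
        for (size_t i = 0; i < n; i += 4) {
            /* NTA is only a hint to limit cache pollution; the line is
               still fetched whole. Prefetching a bit past the end is
               harmless, since prefetch instructions don't fault. */
            _mm_prefetch((const char *)(src + i + 16), _MM_HINT_NTA);
            __m128 v = _mm_loadu_ps(src + i); /* unaligned loads are legal on x86 */
            _mm_stream_ps(dst + i, v);        /* non-temporal store around the cache */
        }
        _mm_sfence(); /* order the streamed stores before later accesses */
    }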
  • twotwotwo - Friday, June 18, 2021 - link

    Mistranscription of "floating point and cache" or something like that?
  • mode_13h - Friday, June 18, 2021 - link

    Yes, that's what I'm thinking.
  • mode_13h - Friday, June 18, 2021 - link

    So far, my favorite transcription error is "total purpose computing". That ought to be a thing...
    : )
  • elforeign - Thursday, June 17, 2021 - link

    Really a fantastic read and look into Jim's mind, his reflections, and outlook. Many thanks for his time and frankness. Also, great job, Ian, with such excellent questions.
  • mode_13h - Friday, June 18, 2021 - link

    +1
  • mode_13h - Thursday, June 17, 2021 - link

    Perfect timing! Earlier today, I just finished watching his 2019 talk: "Moore’s Law is Not Dead"

    https://www.youtube.com/watch?v=oIG9ztQw2Gc
  • mode_13h - Friday, June 18, 2021 - link

    > When I was in Intel, ... like everybody thought Moore's Law was dead,
    > and I thought ‘holy crap, it's the Moore's Law company!’

    LOL, nice!

    However, I'd be really interested in seeing a plot of transistors per core vs. core IPC, over the past few decades. Maybe with and without SMT. I'm guessing it wouldn't look pretty.

    I'm not expecting single-thread performance to increase more than an order of magnitude in the next couple decades, if ever.
  • GeoffreyA - Saturday, June 19, 2021 - link

    Yah, that Moore's Law company joke was a good one. I liked that.
  • GeoffreyA - Saturday, June 19, 2021 - link

    "I'm not expecting single-thread performance to increase more than an order of magnitude in the next couple decades, if ever"

    While new discoveries will take things further, I think there's a limit to how far computation can advance. It's conceivable that if time could be manipulated, they could get more juice out of the tank, but even there, the unit of computation will be the same.
  • mode_13h - Sunday, June 20, 2021 - link

    In Jim's 2019 talk, he said his team at Intel figured out a plausible path to at least 50x more transistors per mm^2. So, that's somewhat reassuring. Though, if he said what the baseline was, I missed it.

    My point is that IPC is fundamentally limited, in most software, and I think it's probably taking close to exponentially more transistors for each additional increment of IPC. At some point, it just becomes too wasteful of transistors. So, even 50x as many transistors probably won't net us more than 2x IPC, if that. Anything else would have to come from clock speed, which we know is wasteful of power, even with further voltage improvements. That's why I'm saying an order of magnitude is probably a reasonable upper bound.
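
    To put rough numbers on that (purely illustrative, and using Pollack's old rule of thumb that single-thread performance scales with roughly the square root of transistor count): sqrt(50) ≈ 7x, and that rule was calibrated in eras with far more low-hanging fruit. So 2x IPC from 50x the transistors is, if anything, the generous end of the estimate.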
  • GeoffreyA - Sunday, June 20, 2021 - link

    In agreement. Only so much IPC can be extracted from software, and clock speeds are struggling already. I'd also like to think there's some sort of ultimate limit to computation. How far away we are from that remains to be seen, and I'm not saying give up and call it a day, but it's an interesting thought. For example, it's clear that clock speeds cannot go beyond the resolution/granularity of time (and certainly, we'll never come within 1,000 miles of that regime in our lifetime).
  • ballsystemlord - Thursday, June 17, 2021 - link

    You didn't end up asking my Q, Ian, but it looks like a great and truly informative interview based on the questions!!! I'll read it through soon!
  • abufrejoval - Thursday, June 17, 2021 - link

    A constant theme in the interview is the Dunbar numbers (5/15/50/150/500), which seem to pop up everywhere. So I wonder if Robin Dunbar's books have simply been part of his library (in which case a bit of credit would not hurt), or if it's just keen observation of these numbers, which have been far more often observed than explained.

    Ah and yes, I remember the stories about ECL VAX machines: 50MHz on ECL when 5MHz was the best NMOS could do before burning up. And then of course came CMOS and IBM's mainframes jumped ship, while Fujitsu/Amdahl didn't...

    Today it's kind of cool to run VMS on a VAX emulator on a Raspberry Pi 4 and see it beat the ECL original into a pulp.

    I guess the biggest challenge with future architectures is to find abstractions that remain useful across 5-10 hardware generations. Any ML-optimized architecture otherwise risks becoming another Thinking Machines Connection Machine.
  • mode_13h - Saturday, June 19, 2021 - link

    > ML-optimized architecture otherwise risks becoming another Thinking Machines Connection Machine.

    You mean people will just play Tetris on its front panel LEDs?

    What he said definitely reflects what the bigger AI chip makers seem to be converging around:

    > you find you want to build this optimal size block that’s not too small,
    > like a thread in a GPU, but it's not too big, like covering the whole chip
    > with one matrix multiplier. That would be a really dumb idea from a
    > power perspective. So then you get this array of medium size processors,
    > where medium is something like four TOPS.

    I think that's got some legs. Not necessarily 5-10 generations. But, if most people are using these via deep learning frameworks rather than programming them directly, then it's easier for the hardware to evolve. Much like GPUs did, in fact, thanks to 3D APIs and portable shader languages.
  • TanjB - Saturday, June 19, 2021 - link

    CMOS was just starting to be practical when the mid-range VAXen were coming out, and there were projections (based on Dennard scaling) that CMOS would eventually beat bipolar. DEC did produce a MicroVAX around 1990. It was not until the mid-'90s that CMOS parts like Alpha started to have clock rates better than bipolar mainframes. The writing had been on the wall for some time, but it took serious hard work, breakthroughs in fabrication, and many iterations to make it real.
  • hd-2 - Friday, June 18, 2021 - link

    It was a great read, and I can just hear his mannerisms in the text; he has something useful to say about a lot of stuff in tech and manages to come across as an everyman. Really interesting to hear his thoughts on engineers and managers, and on the technical side his views into where the technology is going.
  • twotwotwo - Friday, June 18, 2021 - link

    Nothing substantial to say, but fantastic that AT ran this and that Keller talked so much about the bigger picture--life outside work, people finding what they want to do, breaking big problems up, etc. More rambling interviews about whatever with folks who are fun to talk to!
  • mode_13h - Saturday, June 19, 2021 - link

    > Nothing substantial to say

    A lot of good career & engineering advice in there, and some insight into several of his previous big employers that you're not going to find elsewhere.
  • easp - Friday, June 25, 2021 - link

    I believe that twotwotwo was deprecating his own comment, not Keller's.
  • mode_13h - Friday, June 25, 2021 - link

    Makes sense. Thanks for that.
  • GeoffreyA - Friday, June 18, 2021 - link

    Thanks, Ian. Excellent interview, and better than part one. Good questions, too. Again, more information on Zen's development: one team did this in Austin, another did that in Sunnyvale, etc. I take off my hat to Jim and wish him all the best in life. Writing this comment on a Zen computer, so thank you, sir, for the hand you had in it. Even my old Athlon 64's somewhere in my room!
  • mode_13h - Saturday, June 19, 2021 - link

    I can see myself referring back to this, in the future!
  • vol.2 - Friday, June 18, 2021 - link

    I'd like to know how Jim feels about the dangers of self-driving cars. What does he make of the two men mysteriously killed in the Tesla accident?
  • Spunjji - Friday, June 18, 2021 - link

    Based on his comments about ethics and the balance of harm and good, I suspect he'd probably feel that unless the self-driving cars are proportionally killing more people than ones driven by humans, it's not a direct concern in a conceptual sense. Of course, if people are dying because of design oversights or bad corporate practices, I suspect he'd have a different opinion...
  • GeoffreyA - Saturday, June 19, 2021 - link

    Spot on.
  • vol.2 - Sunday, June 20, 2021 - link

    It could be argued that anyone killed by a self-driving vehicle is the result of a "design oversight."
  • vol.2 - Sunday, June 20, 2021 - link

    Just FYI, I'm not anti self-driving vehicle. These are things that I think about, and not necessarily reflective of my opinion.
  • Oxford Guy - Monday, July 26, 2021 - link

    ‘unless the self-driving cars are proportionally killing more people than ones driven by humans, it's not a direct concern in a conceptual sense’

    There is more to it than that. Being killed in busy Chicago traffic or on that highway in Florida where people really speed is one thing. Having a robotic car run someone down in a comparatively quiet upscale neighborhood is going to lead to more pushback.

    There is also the potential for control factors, in terms of people being more able to avoid dangerous human drivers by avoiding the places bad drivers tend to drive and the times they tend to be on the road.

    The importance of human life has always been largely based on things like net worth. Robots don’t fundamentally come with that bias, although they can certainly be programmed to.

    People also get some level of satisfaction by putting such drivers behind bars, taking away licenses, extracting apologies, etc. How sincere will the robot car’s apology be?
  • PaulTheRope - Friday, June 18, 2021 - link

    Absolutely fascinating read, Dr Cutress; really insightful, and by far the most interesting interview I've read in a long while.
  • Spunjji - Friday, June 18, 2021 - link

    That's easily amongst the most fascinating interviews with somebody in the technology industry that I've read. Thanks for putting that together. I left it all with the impression that his approach to being good at tech is to be good at people, too.
  • Spunjji - Friday, June 18, 2021 - link

    Should have been specific about this, but I wasn't, so: Thank you, Ian, for doing a great job of interviewing. I feel like you found a solid compromise between pushing him towards topics people wanted answers on and letting him conceptualise things in a way that makes sense to him.
  • mode_13h - Saturday, June 19, 2021 - link

    +1
  • Dehjomz - Friday, June 18, 2021 - link

    X-ray lithography for transistors. Now that is interesting.
  • biigD - Friday, June 18, 2021 - link

    This is a phenomenal interview, Ian. Thank you!
  • Antony Newman - Friday, June 18, 2021 - link

    Fascinating. Thankyou IC & JK.
  • ABR - Saturday, June 19, 2021 - link

    This interview and part 1 really knocked it out of the park. Really insightful questions and thoughtful answers. It would have been nice to hear a bit more about what he thought about Elon Musk, after some of his unprompted comments in the first interview, but everything asked was interesting.
  • Makste - Saturday, June 19, 2021 - link

    He seems to have enjoyed his time at AMD a lot. I'm happy for him.
  • poppy20 - Saturday, June 19, 2021 - link

    This is the best article I've read at AnandTech. So insightful over so many areas. More of this, please.
  • Zoide - Monday, June 21, 2021 - link

    Is the full video of the interview available anywhere, podcast style?

    Thanks
  • n0x1ous - Monday, June 21, 2021 - link

    Fantastic. This is the kind of content that makes AnandTech AnandTech. Now if only we could get this on the GPU side of things again.
  • efm01 - Tuesday, June 22, 2021 - link

    Excellent read, great questions (and insightful answers).
    An audio format would be a great addition.
  • jm0ris0n - Wednesday, June 23, 2021 - link

    Phenomenal article. Never taken so many notes from an interview in my life.

    Grew up on the K7 architecture; I'll never forget how proud I was to get an Athlon 850 up to 1 GHz stable on air.

    Would be very interested in a Jim Keller book club. I love and hate reading all at the same time ... if that makes any sense. Hearing him recommend a topic to research would spur on the love/hate relationship!
  • mode_13h - Wednesday, June 23, 2021 - link

    > I love and hate reading all at the same time ... if that makes any sense.

    What's the part you hate? I tend to take lots of notes, which turns it into more of a chore. That could be why I don't do much reading, lately.
  • mannyvel - Thursday, June 24, 2021 - link

    Great article. It's not often that interviews have so much good, interesting stuff. It's also great reading about how an engineering manager thinks, because that's what engineering is really about: translating thoughts into reality.

    And it's important for people, especially engineers, to realize that it's not magic. Just like the Pixar guys, he's turned what people thought of as "bad teams" into "great teams". That's the power of good management. Engineers tend to believe managers are useless, but they're only useless when they suck...which is a lot of them. When things don't happen people blame management. When things happen they credit "the team." But behind that engineering team is a management team front-running for them in ways that are not obvious.

    The management book part is amusing, because apparently the same statistic applies to many engineers as well.
  • mode_13h - Friday, June 25, 2021 - link

    Agreed.

    What's weird about management is that it seems like the only job people routinely get into by being good at something else. However, I've heard numerous horror stories about MBAs being hired as engineering managers and making a hash of it.

    My ideal engineering manager is someone with an engineering background with good organizational and people skills and a genuine interest in helping people and teams be more successful. Unfortunately, what you usually get is someone who's maybe just an average engineer, wants more money, and isn't afraid to play politics in order to climb the ladder. The worst sort are the ones craving power and status.
  • Oxford Guy - Monday, July 26, 2021 - link

    ‘I've heard numerous horror stories about MBAs being hired as engineering managers and making a hash of it.’

    Read the comments from Wozniak about the Apple III — ‘the first Apple computer designed by a committee’ rather than by him. It almost sank the company.
  • NoGyan - Tuesday, June 29, 2021 - link

    Didn't read it all. Too much hype around him. There are so many good engineers behind the success of those chips. I don't know why the industry hypes him so much.
  • mode_13h - Wednesday, June 30, 2021 - link

    I had a similar suspicion, until I read these two interviews. I agree there are a lot of good engineers out there, and Jim seems to acknowledge that fact. However, it sure sounds like he spearheaded the organizational change that made Zen possible. And to hear what he said about re-energizing Intel makes it sound like that success at AMD wasn't just a fluke.

    What I think makes him rare isn't his engineering aptitude, but its combination with his drive, his ability to communicate with engineers and management alike, his vision, his ability to debug teams, and whatever way he has of kicking people's ass that they often seem to mistake for mentoring.

    What I found most surprising were his insights into management, teams, and organizations. But, there are lots of interesting insights sprinkled throughout. I found it well worth the read, but I respect those who might not. Will it change your life? Surely not as much as reading 1000 books -- "that's for damn sure."
  • Carstenpxi - Monday, July 5, 2021 - link

    Second to none, the best article about computers, design and management I have read in decades. I don’t know if you have to be a retired engineer to really appreciate the depth and wisdom embedded in this fantastic interview.

    Carsten Thomsen
  • CoachAub - Tuesday, July 6, 2021 - link

    Ian, this was an incredible interview! Thank you. It's the most interesting interview I've read in a long time.
