Comments Locked

66 Comments

  • Machinus - Tuesday, January 5, 2021 - link

    Do we know how much he contributed to Zen3, and if so, can we get a comment on that accomplishment now that it's out?
  • Ian Cutress - Tuesday, January 5, 2021 - link

    It's understood that while he was the lead of both teams, K12 and Zen, and built the teams, he actually spent most of his time dealing with K12. I would like to nail this down once and for all; he might still be under NDA. Isn't it usually 3-5 years after you leave a company?
  • anonomouse - Tuesday, January 5, 2021 - link

    NDAs don't have to have a time limit, and it depends on the kind of information. Confidential/proprietary/"trade secret" information can be protected indefinitely.
  • Deicidium369 - Tuesday, January 5, 2021 - link

    Most enforceable NDAs do have a shelf life.
  • Hifihedgehog - Thursday, January 7, 2021 - link

    You're forgetting government classified, top-secret work. Though not necessarily an NDA in the classical public-company sense, you are held to strict silence for life, or until such things are fully declassified.
  • dotjaz - Thursday, January 7, 2021 - link

    That's not *an* NDA. So what's your point?
  • InfinityzeN - Sunday, January 10, 2021 - link

    As someone who spent the last 8 years of his military career both handling the read-on/off and informing people of the legal requirements, I can tell you for a fact that it is not for life. Top Secret is the longest at 50 years; lower classification levels have significantly shorter time restrictions.
  • Gigaplex - Wednesday, January 6, 2021 - link

    "Which product did you work on" isn't a trade secret worthy of protecting indefinitely.
  • ikjadoon - Wednesday, January 6, 2021 - link

    Is K12 any closer to a single press release?

    Soon, it’ll be significantly slower than its competing ARM CPUs. Unless AMD has kept iterating K12, I don’t know if most of Keller’s efforts actually will ever ship.

    Apple released A4/A5 right on schedule: they had to. Is AMD *really* going to sit on a high-perf ARM architecture until it’s already obsolete?

    My hopes are dim for K12, some half-decade later.
  • mode_13h - Wednesday, January 6, 2021 - link

    Now that AMD would have to pay license fees to you-know-who, they might skip ARM and go straight for RISC-V.
  • linuxgeex - Wednesday, January 6, 2021 - link

    Don't count your acquisitions before they are cooked. There are still hurdles to be cleared, and the official timeline, barring surprises, is to close the deal in March 2022. NVidia isn't going to be extorting implementors; that would land them in antitrust court.
  • whatthe123 - Wednesday, January 6, 2021 - link

    I think he means they don't want to indirectly help nvidia by giving them business. It would be suicide for nvidia to deny or upcharge AMD.
  • mode_13h - Friday, January 8, 2021 - link

    It was mostly an idle comment, but I think AMD should be more strategically focused on two things:

    1. Where they have the most competitive edge. Here, it seems like ARM has established a formidable challenge for AMD. By the time AMD could launch an ARM-based CPU, it would be going up against competitors with V1 and N2 cores, if not even newer iterations. Even achieving *parity* with such CPUs would not be a foregone conclusion. As a relative latecomer to that market, AMD can't afford to enter with a weaker offering.

    2. What type of server ecosystem they want to help foster. Lending more credibility to the ARM server movement helps Nvidia, while damaging the x86 server market position. And it's that credibility that's a lot more valuable to Nvidia than any short-term ISA licensing royalties.

    Also, we were already seeing a movement towards RISC-V by Europeans, Chinese, and others who were skeptical about ARM's long-term openness and availability. With it now sitting in US hands - and Nvidia's, in particular - there's going to be a baseline of demand for non-ARM CPUs, without regard for any potential performance or efficiency differentials.

    So, AMD needs to ask itself some serious strategic questions. However, it makes sense for them to keep pushing their x86 CPUs until they come under serious threat from Intel or ARM-based CPUs. Shifting too early could shake confidence among customers about AMD's continued commitment to x86.
  • edzieba - Friday, January 8, 2021 - link

    AMD use small ARM cores for various other tasks (e.g. the PSP within all Zen chips), so will still be using and paying for that ARM architectural license regardless of whether K12 ever ships.
  • mode_13h - Saturday, January 9, 2021 - link

    Aren't those just off-the-shelf 32-bit ARM-designed cores? Why would AMD need an architectural license for that? They would be paying royalties on them, but I'm sure those are a lot cheaper than their current performance-oriented cores.
  • arashi - Wednesday, January 6, 2021 - link

    AMD has an architectural license like Apple, and it's a one off payment per arch if memory serves.
  • wumpus - Wednesday, January 6, 2021 - link

    The existence of a high performance commodity ARM server chip would significantly undermine AMD's most valuable asset, their AMD64 (sometimes called x86) ISA.

    I wouldn't be surprised if it mostly existed as a threat to keep Intel from extinguishing AMD during those dark days before Ryzen. As EPYC is breaking into the server room, there is even less reason to allow it to sneak out of the lab now.
  • Yojimbo - Wednesday, January 6, 2021 - link

    Wasn't the idea with K12 and Zen that they basically shared a common architecture but targeted two different ISAs?
  • Ian Cutress - Thursday, January 7, 2021 - link

    A common platform, not a common architecture. So the interfaces on the motherboards were identical, for IO, power, and DDR.
  • ViRGE - Tuesday, January 5, 2021 - link

    Jawbridge? Grayskull?

    Has anyone checked to see if Keller is on top of the building, wielding a large sword and yelling "I have the power!"?
  • SydneyBlue120d - Wednesday, January 6, 2021 - link

    LOOOL, I was thinking the same 😂
  • Arbie - Wednesday, January 6, 2021 - link

    Better than Homelander, at least.
  • Droekath - Tuesday, January 5, 2021 - link

    I would say that I'm somewhat disappointed. I had hoped that after Intel, we would see Jim in another consumer-facing company, doing something that I could personally get my hands on.
    But all the best to him, I'm sure he'll do amazingly well wherever he is.
  • Sychonut - Wednesday, January 6, 2021 - link

    Big companies are really no fun to work at. The pay is generally better, but the bureaucracy and politics are soul-crushing. I am almost certain this was a big factor in his decision to move from a big company to a small one.
  • Kjella - Wednesday, January 6, 2021 - link

    Rank-and-file employees might feel that way, but if you're someone like Jim Keller I think your experience is quite different. For one, a paycheck with F-U money, and he could probably get half a dozen job offers in a week if headhunters didn't chase him down first. An R&D group with the budget to actually do groundbreaking, cool stuff. Life's usually pretty good at the top.
  • whatthe123 - Wednesday, January 6, 2021 - link

    I don't know how credible the rumors are, but "rumor" was that Keller could not convince Intel's management that they were too far behind in fabrication and needed to outsource some chips if they wanted to stay on track, so he left because his work wouldn't have made a difference without the process to back it. Soon after he left Intel, they finally admitted they needed to outsource their GPUs since 7nm is behind schedule as usual, so it seems possible that even at this level he couldn't break through the bureaucracy.
  • Oberoth - Wednesday, January 6, 2021 - link

    Intel had the money and resources to create an entire department just for Jim that's untouched by the normal dysfunctional Intel bureaucracy, they should have given Jim anything he wanted as he would have given back 10x that.
    I just really hope he managed to put together some decent chips in his short time at Intel, because they desperately needed his genius. My understanding is he helped with Alder Lake, but 'the Jim chip' is the one after that; it was originally called Ocean Cove, but I believe that has been changed now.
  • FunBunny2 - Wednesday, January 6, 2021 - link

    "Intel had the money and resources to create an entire department just for Jim that's untouched by the normal dysfunctional Intel bureaucracy, they should have given Jim anything he wanted as he would have given back 10x that."

    as the then CEO of Intel said back in the late 70s, early 80s, I'd rather have my chip in every Ford than in every PC. IOW, Intel has been chasing volume hardware. doesn't sound like this new gig is anything like that.
  • webdoctors - Friday, January 8, 2021 - link

    I used to think like this, but now these chips/projects need huge teams to do both the chip design and the SW support for programming them. A team of 100 engineers would barely be enough to get a basic chip taped out and running on off-the-shelf HW. Include the SW layer on top and you'd need several hundred employees.

    Gone are the days of doing something in your garage.
  • FunBunny2 - Saturday, January 9, 2021 - link

    "huge teams to do both chip design"

    when was the last time a chip designer drew a transistor, resistor, capacitor, inductor? decades? there are myriad CAD tools that build a chip like a Lincoln Log cabin, or Lego X-Man if you prefer. these tools have widgets for the discrete elements of current architectural understanding. there are some number of engineers who build these tools, just as there are coders who write compilers for software languages. I'd wager that the labor component of chip (virtual) tape out today is a fraction of what it was back when 'tape out' was literal. I'd wager even more if that metric is scaled by the complexity (# of transistors, etc., or discrete elements) of the chip. we know for a fact that node size drops have moved most of what used to be peripheral functions on other chips onto the SoC. I wonder why they call it that?
  • mode_13h - Saturday, January 9, 2021 - link

    Exactly what scale of project are you proposing that fewer people could do? If you just want to slap together something that works, it's probably easier than ever.

    But, if you want it to be competitive, then the ever-increasing complexity of higher-level structures and all the nuances involved in architecturally balancing everything optimally and minimizing power usage is a monumental undertaking. It requires chip simulations and analysis of different software on your architecture, in order to find bottlenecks and unused capacity.

    And it used to be that a lot of time was spent hand-designing low-level building blocks, which automated tools simply didn't do as well as human designers. I don't know if that's still true.
  • Oberoth - Wednesday, January 6, 2021 - link

    I totally agree, the work he did at Apple set in motion the amazing chips we have today, the work he did at AMD has given us the world changing Zen chips and we can only cross our fingers that he had enough time at Intel to complete some designs so he can reshape the future again.

    I think we all know the real reason Jim was forced to leave Intel, and that guy was kicked out a few months later anyway, so I was really hoping Intel would apologise to Jim and rehire him to finish his work.
  • ljwobker - Tuesday, January 5, 2021 - link

    looks like a six month non-compete may have just run out... will be interesting to see what gets cranked out here...
  • SydneyBlue120d - Wednesday, January 6, 2021 - link

    I had expected him to land in VW group, really curious to know how he will change history once again.
  • Desierz - Wednesday, January 6, 2021 - link

    Never heard of them before, but I initially wondered why Jim Keller would work for a P2P company.
  • Anne - Wednesday, January 6, 2021 - link

    Yes, you are right indeed.
  • k_sze - Wednesday, January 6, 2021 - link

    There's a slight problem in the article:

    > The next generation chip, known as Wormhole, is more focused on training than acceleration, and also bundles in a 16x100G Ethernet port switch. The move from training to acceleration necessitates a faster memory interface, and so there are six channels of GDDR6, rather than 8 channels of LPDDR4.

    These two sentences seem to contradict each other: is Wormhole actually focused on training or on acceleration?
  • Rudde - Wednesday, January 6, 2021 - link

    The current gen chip is an inference accelerator and the next gen moves from acceleration to training. Wormhole is focused on training.
  • romrunning - Wednesday, January 6, 2021 - link

    So the sentence saying "The move from training to acceleration necessitates..." in connection with the new hardware on the Wormhole product should really be written as "The move from acceleration to training necessitates..."
  • mode_13h - Wednesday, January 6, 2021 - link

    I'm sure this notion will fly like a lead balloon, but I wonder how much they hired him for his name vs. his actual capabilities. No doubt he's a smart guy, but I find all the hype around him a bit hard to swallow.

    Even if he was truly instrumental in some of his more notable prior accomplishments, can he really have kept abreast of all the developments in CPU design well enough that he's *still* that far above and beyond most others?
  • Holliday75 - Wednesday, January 6, 2021 - link

    How much input does he have on the specifics anymore? At that level he is building teams, culture, and philosophical approaches to design, and it's up to the engineering teams to implement that. From what I've read about him, his input now is at a much higher level, more about choosing which road to take than how to design that road.
  • GeoffreyA - Wednesday, January 6, 2021 - link

    Sadly, this is a world of hero-worship.
  • mode_13h - Friday, January 8, 2021 - link

    I'm guessing it's a matter that fans latch onto the few names that get reported. Then, the tech media sees the level of interest and does more reporting on those individuals, setting up a sort of feedback loop.
  • ottonis - Tuesday, May 11, 2021 - link

    When you look at his CV and previous positions, it becomes obvious that he was always there before a company made significant advancements: AMD, before the original Athlon (K7/K8) took off, and again at AMD before the Zen architecture became one of the most successful "resurrections" in chip history.

    DEC Alpha (Risc architecture), Apple A4 chips etc....

    While you are completely right that only very few people know exactly what his contributions to those architectures were, he seems to be linked with subsequent success, so he is a name in the industry, and deservedly so.

    I am totally curious to see what will emerge from the project he was purportedly involved with at Intel (Ocean Cove), but I wouldn't be surprised if this becomes a success story as well.
  • PVG - Wednesday, January 6, 2021 - link

    Excuse the off-topic, but where are the in-depths of Ampere and RDNA2?
    It's been months now, and I really miss the usual Anandtech pieces on new architectures.
  • romrunning - Wednesday, January 6, 2021 - link

    It will be posted when those products start shipping in volume. So maybe March/April... :)
  • Yojimbo - Wednesday, January 6, 2021 - link

    "It is high praise when someone like Jim Keller says that your company ‘has made impressive progress, and has the most promising architecture out there’. That praise means twice as much if Keller actually joins the company."

    I disagree. It means something that he'd join the company, I suppose. But the statement would mean more if he didn't join the company along with it.
  • Holliday75 - Wednesday, January 6, 2021 - link

    A guy like Jim Keller is a household name in the industry. He's as much a name brand as the companies he works for. He interviews them as much as they interview him. He is not going to put his name on something that will make him look bad and possibly damage his brand. This can go both ways.
  • Yojimbo - Thursday, January 7, 2021 - link

    I can't see what that has to do with what I said.
  • FullmetalBlackWolf - Friday, January 8, 2021 - link

    Do you mean to say:
    "It is high praise when someone like Jim Keller says that your company ‘has made impressive progress, and has the most promising architecture out there’. That praise means twice as much since Keller didn't join the company."
    Since the company made the impressive progress that Keller describes, how could the praise be justified by Keller not joining the company? How could the praise mean twice as much if he didn't join? Shouldn't he give the company a helping hand to improve the architecture? Correct me if I am wrong. Did I misunderstand?
  • WaltC - Wednesday, January 6, 2021 - link

    Just what we need--another Rambus! Keller's got a cushy new job...what do people *expect* him to say about the company shelling out the dough?...;) Will he last longer than two years here, I wonder?...IIRC, he wasn't able to accomplish much at Intel, although he took the job, and the pay, they offered. Talk is always cheap--show me the products and processors.
  • alumine - Wednesday, January 6, 2021 - link

    I think he ruffled too many feathers there (either management but more likely board members / investors) and didn't really get a chance to do much.
    You've completely dismissed his previous achievements though - DEC, AMD, Tesla, and Apple.
  • FunBunny2 - Friday, January 8, 2021 - link

    "I think he ruffled too many feathers there "

    it wasn't all that many years ago that Intel, and its zealots, based their notions of superiority on Intel's prowess at fab. those days are gone forever, so what's left? architecture? x86 is 50 years old, Itanium didn't do anything, and i960 not much. really, how different is today's cpu core functionality from ENIAC's?
  • Yojimbo - Wednesday, January 6, 2021 - link

    All of these designs are putting large pools of SRAM on the dies. SRAM densities are not expected to scale much over the next couple of process shrinks at TSMC, however. If a 256 Mbit SRAM array is 5.4 mm^2 on the 5nm node, and they triple their SRAM when going from 12nm to 5nm so they have 360 MB, that's over 60 mm^2 of die area spent on the SRAM.

    But that's for an inference chip. The Graphcore design, a training and inference chip like Tenstorrent want to create, already has 900 MB of SRAM on their 7 nm chip. If others are similar and they just scale the architecture up that might mean about 1300 MB on a 5 nm chip, which is over 200 mm^2 of die area for the SRAM pool. That's a sizable chunk of a 650 mm^2 chip. I don't know if it would work that way but it's something which has crossed my mind as these companies are putting bigger and bigger SRAM pools on these chips. Maybe the resulting power savings are worth it, but it seems like they'll be getting diminishing returns on the strategy as the process shrinks. Maybe they can get more efficient usage of the cache capacity as they refine their designs.
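    The arithmetic in the two paragraphs above can be sketched in a few lines; the 5.4 mm^2 per 256 Mbit figure and the capacities are the ballpark numbers from this comment, not official TSMC or vendor specs:

```python
# Rough SRAM die-area estimate on a 5nm-class node, using the ~5.4 mm^2
# per 256 Mbit figure quoted above (a ballpark number, not an official spec).
MM2_PER_256MBIT = 5.4

def sram_area_mm2(capacity_mbyte: float) -> float:
    """Die area consumed by an SRAM pool, assuming linear scaling with capacity."""
    mbits = capacity_mbyte * 8          # megabytes -> megabits
    return mbits / 256 * MM2_PER_256MBIT

print(round(sram_area_mm2(360), 1))    # inference chip, 360 MB  -> 60.8 mm^2
print(round(sram_area_mm2(1300), 1))   # training chip, 1300 MB -> 219.4 mm^2
```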
  • name99 - Wednesday, January 6, 2021 - link

    "SRAM densities are not expected to scale much over the next couple of process shrinks at TSMC"

    This seems a very strong statement built upon extraordinarily flimsy evidence (basically taking a single data point and insisting it represents the future).

    Look at the graph in this article, which suggests a much more balanced future:
    https://semiwiki.com/eda/synopsys/294205-what-migh...
    There are a few years for which SRAM density grows less rapidly than logic (but still grows) then a big jump once we move to CFETs.

    Another direction from which it's unclear this is a catastrophe is MRAM. If MRAM scaling continues, then at some point it may make sense to move many of the large pools of slowish SRAM (i.e. your L3/system-level caches) to MRAM. As far as I can tell, the jury remains out on whether such a crossover point is in our near future.
  • Yojimbo - Wednesday, January 6, 2021 - link

    What? TSMC has said so themselves. Why don't you ask me for evidence first before declaring there is "very flimsy evidence"? Strange.

    https://i1.wp.com/semianalysis.com/wp-content/uplo...
    https://www.anandtech.com/show/16024/tsmc-details-...

    So:
    7 -> 5: 1.8 times logic density, 1.35 times SRAM density
    5 -> 3: 1.7 times logic density, 1.25 times SRAM density.

    Now, maybe their modified processes for high powered chips will have better SRAM scaling than their low power processes, but I wouldn't count on it. SRAM is also a large percentage of SoCs.
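    Those per-node factors compound over the two shrinks; a quick sketch using only the numbers quoted above:

```python
# Compound the per-node scaling factors quoted above (7->5 and 5->3).
logic = {"7->5": 1.8, "5->3": 1.7}
sram  = {"7->5": 1.35, "5->3": 1.25}

logic_7_to_3 = logic["7->5"] * logic["5->3"]   # 3.06x logic density
sram_7_to_3  = sram["7->5"] * sram["5->3"]     # ~1.69x SRAM density

# How much faster logic densifies than SRAM across the two shrinks.
print(round(logic_7_to_3 / sram_7_to_3, 2))    # ~1.81
```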

    And, CFET? I am talking about real products based on a real product roadmap for the next 5 years. Not on "what might the 1 nm node look like?" starting in 2028-2030, at the earliest. Notice the word "might" and the years 2028-2030.

    And then you introduce some other unproven technology, MRAM, something else that "might" make a difference.

    It's sort of ironic that you accuse me of making "very strong statements" on "extraordinarily flimsy evidence" and then start talking about CFET and MRAM. And nothing you said addresses my "strong statement" on THE NEXT COUPLE OF process shrinks. I hate to use all caps, but you seem to have completely overlooked it the first time.

    Please, don't use such language if you are going to be extraordinarily wrong about what you say.
  • Yojimbo - Wednesday, January 6, 2021 - link

    sorry, i should have written "extraordinarily flimsy evidence" instead of "very flimsy evidence" in quotes.
  • mattbe - Wednesday, January 6, 2021 - link

    SRAM is actually only getting a 1.2x improvement going from 5nm to 3nm. It's right in the article you linked. Things will probably get worse at 2nm, judging by the trend for SRAM.
  • Yojimbo - Thursday, January 7, 2021 - link

    The article said a 20% improvement, but it also said 0.8x scaling, as in the area is 0.8 times for the same cell. Since that is the way TSMC seemed to talk about SRAM scaling in their own slide, I figured the 0.8 was probably the actual figure. And I may have actually seen that in another source. If so, then the density scaling is 1/0.8 = 1.25 times (you have the same number of bits in 0.8 times the area; density = bits/area).
  • Yojimbo - Thursday, January 7, 2021 - link

    That being said it could indeed be 1.2x, it's unclear to me. I'm just explaining why I put 1.25x.
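    For what it's worth, the 1.25x vs. 1.2x disagreement in this sub-thread comes down to whether the quoted 0.8x is an area factor or a density figure; a quick check:

```python
# Reading 0.8x as an area factor: the same SRAM cell occupies 0.8x the area.
area_scale = 0.8
print(round(1 / area_scale, 2))    # implied density gain -> 1.25x

# Reading "20% improvement" directly as a density gain instead:
density_gain = 1.2
print(round(1 / density_gain, 3))  # implied area factor -> ~0.833x
```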
  • mattbe - Wednesday, January 6, 2021 - link

    This is cringy to read. You accused him of "flimsy evidence" despite official information from the foundry itself saying otherwise. You should apologize to that poster.
  • name99 - Monday, January 11, 2021 - link

    Everything depends on how many you consider "the next couple" process shrinks.
    Does that cover two (3nm, 2nm)?
    Does it cover to 1nm and CFET?

    I am trying to point out that EVEN THOUGH there are issues out to maybe 2026, they don't last forever.

    As for flimsy evidence, my point (which, admittedly, you're not psychic, so you wouldn't necessarily know) was that most of the chatter I have seen around reduced scaling (both of SRAM and 5nm generally) is based on the single data point of A14 density, which I don't consider a useful data point: the particularities of Apple's rollout of the M1 meant that the highest priorities by far were to get a functioning chip, with the Mac-specific stuff working, rather than to ensure that area (or even power) was optimized, so as much as possible was ported from 7 to 5nm as easily/reliably as possible.

    There is a second chip that could in principle be tested for 5nm density, namely the Kirin 9000. But we don't know the transistor density for that. The transistor count is supposed to be 15.3B, 30% higher than A14, but no-one seems interested in de-lidding one to get an area.
  • Rakulko - Thursday, January 7, 2021 - link

    Guys, please get your facts straight. Jim Keller was NOT the lead architect of the K8 architecture. 🤦🏽‍♂️

    https://www.anandtech.com/show/1655

    https://www.zdnet.com/article/amd-sets-date-for-so...
  • ajohntee - Tuesday, January 12, 2021 - link

    I've seen it mentioned elsewhere too that Keller was in charge of the bus, not the K7/K8 chip itself.