11 Comments

  • Wereweeb - Wednesday, October 6, 2021 - link

    Didn't Ian say that the 28nm node still had the lowest cost per transistor? I wonder if this node aims at being cheaper than 28nm in every way except initial costs.

    Could they reduce the processing steps/masks required by relaxing the design rules, while keeping it IP compatible? Or is most of the added complexity/costs relative to 28nm inherent to finFETs?
  • Kangal - Thursday, October 7, 2021 - link

    That was from a lecture by someone else. Can't remember who or where, but it was from someone knowledgeable in the field (Computer Scientist?). But it is true.

    However, whilst the 28nm-class of silicon has had the lowest cost/transistor, and will continue to for a while, I remember seeing the projections for 16nm coming close to it. That obviously isn't the case today due to the inflated market, but when things blow over in 2022/2023, that is how it will settle.

    I say this because that extra cost to leap to the 16nm-class is actually worth it from a battery-life or performance point of view for many products. And that's why SMIC had been on the fast track to hit those targets by 2020-2022. It's just financially lucrative. SMIC and others also have plans to develop an 8nm-grade silicon sometime in the near future, which would help ease tensions over China's (and its military's) access to capabilities currently controlled by Taiwan and the USA.
  • Wereweeb - Thursday, October 7, 2021 - link

    I am aware 16nm/14nm is closing in on it in price. But if Samsung is creating an intermediary node distinct from their 14nm, it must be either because

    1) They can price-match or undercut 28nm for the customers that can afford re-designing their chips and would benefit from a node shrink, or
    2) They can reuse older equipment to fab more modern silicon (and it's unlikely they have old equipment lying around in a silicon shortage).
  • Kangal - Friday, October 8, 2021 - link

    I think it's a bit of both, plus a third option: to show they are continuously innovating across all sectors.

    It makes no sense for a big company which only requires a moderate node like 28nm (e.g. for a flash module) to go to the effort of re-designing their product for this weird 17nm hybrid step, only to gain a slight improvement in cost, efficiency, and performance. If you're at that point, you would have bitten the bullet and re-designed using the full 16nm node, paid a slight premium, and known that your product will be competitive for a long time.

    So I'm not too impressed with this showing. Had they introduced this back in 2015-ish then it might have been useful and gained traction, as these companies/products were transitioning from the 45/40nm-class of transistors. But by 2018, most of them had already transitioned their low-margin/high-volume products onto TSMC's 28nm, Samsung's 32nm, and GlobalFoundries' 28nm processes.
  • Wereweeb - Monday, October 11, 2021 - link

    It's probably just a 14nm with relaxed specs, so they should be design-compatible. And there are applications where cost overrides any other concerns (MCUs, for instance).

    For a consumer it makes sense to just get the slightly better thing and not worry about it, but for volume-based electronics companies, every cent matters.
  • Roland00Address - Thursday, October 14, 2021 - link

    Sophie Mary Wilson is the likely source. She was one of the main designers of the original ARM processor in 1983, and now works for Broadcom.

    Ian did a video on this on YouTube on his TechTechPotato account, "The True Cost of Processor Manufacturing: TSMC 7nm", some months ago, and he cited Sophie Wilson. Note it is not just the cost per transistor, but the cost per transistor that is **useable**, i.e. actually switched on at any given instant. At 90nm you can have about 5/6ths of the gates active at the same time, but at 7nm you can only have about 6/10ths of the gates active at the same time for power and heat reasons; some of the silicon has to be "temporarily" turned off, and this is called dark silicon. The higher cost per wafer at smaller feature sizes (even though you also get more features per chip), combined with those power limits, is what made 28nm the cheapest cost per usable transistor as of 2016. Of course that is 5 years ago, so the price per wafer may be out of date even though much of the technical detail around density and other characteristics will not have changed.
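    As a rough illustration of that "cost per usable transistor" point, here is a minimal Python sketch. The active fractions (5/6 at 90nm, 6/10 at 7nm) are the ones quoted above; the wafer costs, transistor counts per wafer, and the 28nm active fraction are made-up placeholder numbers for illustration only, not real foundry figures.

        # Cost per *usable* transistor, accounting for dark silicon.
        # Wafer costs, transistor counts per wafer, and the 28nm active
        # fraction are illustrative placeholders, not real foundry data.
        nodes = {
            # node: (wafer_cost_usd, transistors_per_wafer, active_fraction)
            "90nm": (2_000, 5e9, 5 / 6),
            "28nm": (3_000, 60e9, 0.75),   # assumed fraction, not from the comment above
            "7nm":  (9_000, 600e9, 6 / 10),
        }

        for node, (wafer_cost, transistors, active_fraction) in nodes.items():
            cost_per_transistor = wafer_cost / transistors
            # Only the active fraction is switched on at any instant, so the
            # effective cost per usable transistor is correspondingly higher.
            cost_per_usable = cost_per_transistor / active_fraction
            print(f"{node}: {cost_per_usable:.2e} USD per usable transistor")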
  • eastcoast_pete - Wednesday, October 6, 2021 - link

    They will have some customers for sure, but the need to redesign almost from scratch really hurts in a segment that is usually very cost-sensitive. But, right now, people are desperate for ICs, so they'll get their customers.
    What I'd like to know is whether going to 17nm FinFET will increase the number of chips Samsung can get per wafer.
  • MrCommunistGen - Wednesday, October 6, 2021 - link

    It does state that there's an expected decrease in die area over traditional 28nm.
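    For anyone wondering how a die-area reduction translates into chips per wafer (as asked above), here is a minimal sketch using the standard first-order gross-dies-per-wafer approximation. The die sizes and wafer diameter are illustrative assumptions, not figures from Samsung or the article.

        import math

        def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300.0):
            # Gross dies per wafer: pi*(d/2)^2 / A, minus an edge-loss term pi*d / sqrt(2*A).
            d = wafer_diameter_mm
            a = die_area_mm2
            return int(math.pi * (d / 2) ** 2 / a - math.pi * d / math.sqrt(2 * a))

        # Hypothetical design: 50 mm^2 on 28nm shrinking to ~35 mm^2 on 17LPV.
        print(dies_per_wafer(50))   # ~1319 gross dies per 300mm wafer
        print(dies_per_wafer(35))   # ~1906 gross dies per 300mm wafer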
  • Oxford Guy - Thursday, October 7, 2021 - link

    But how does it compare to 14nm?

    Who cares if it's better than 28nm if there is better value to be found in a more recent process? Now, I understand we're meant to assume this is a better value than the 16/14/12nm alternatives. However, only providing a comparison with 28nm is inadequate.

    How much more die area? How much lower performance? How much worse power efficiency? How much better performance-per-dollar?
  • MrCommunistGen - Wednesday, October 6, 2021 - link

    Oof. Meant to put this in my first reply:

    I imagine the intent is for 17LPV to be for new designs, not die shrinks of existing designs.
  • Wereweeb - Wednesday, October 6, 2021 - link

    They also said they're bringing MRAM to the 14nm/17nm nodes, which could be very nice for ultra-low-power MCUs (especially battery-powered and/or energy-harvesting ones).
