43 Comments

  • firewrath9 - Sunday, November 17, 2019 - link

    "Volume Ramp 2H 2020"
    2H 2020 for server gear, so hopefully 1H 2020 for consumer gear?
    Normally (Skylake-EP, Broadwell-EP, Haswell-EP) server gear comes out 1-2 years after consumer gear, but it's 2H 2019 and there's still no consumer 10nm (excluding the i7s + Iris Plus).
  • shabby - Sunday, November 17, 2019 - link

    It's different now: first it's low-clocked mobile chips, then pricey server chips; consumer chips are dead last.
  • Kevin G - Monday, November 18, 2019 - link

    The way things are going, we won't even see 10 nm desktop chips from Intel. Desktop Cannon Lake was formally cancelled years ago and desktop Ice Lake has not been emphasized in recent road maps. Intel will likely release a HEDT part based on 10 nm which will cover them for the 10 nm marketing claim, but I wouldn't bet on a mainstream desktop consumer 10 nm part.
  • Santoval - Monday, November 18, 2019 - link

    Desktop Ice Lake (i.e. Ice Lake-S/H) was never "emphasized", neither recently nor earlier, because Intel simply has no plans to release it. Why? Because their 10nm+ node has poor yields at the high clocks required for Ice Lake-S and -H parts. In contrast, Ice Lake-U/Y parts have lower clocks because they are low power, while Ice Lake Xeons have lower clocks due to their high number of cores - and yields are less of an issue for the latter due to their obscene price.
    Unless a miracle happens and Intel manages to fix their low yields at high clocks, they are going to replace Ice Lake-S/H with Comet Lake-S/H. I don't believe in miracles.
  • JayNor - Tuesday, November 19, 2019 - link

    Intel is shipping 10nm Ice Lake in volume. These are highly integrated, with Wi-Fi 6, Thunderbolt 3, AVX-512, Optane support, and Gen11 graphics. They are sampling 10nm Agilex FPGAs, 10nm Lakefield 3D chips, and 10nm NNP-I chips. They are scheduled to deliver 10nm Snow Ridge 5G networking chips in Q1 2020, Tiger Lake 10nm laptop chips with integrated Xe graphics in Q1 2020, and 10nm Ice Lake server chips in Q3 2020.

    Intel has only two 10nm fabs currently, but is ramping up a third fab in Arizona in 2020.
    They don't currently have the fab capacity to build all products on 10nm. They probably build on the order of 10x more chips than AMD.
  • meacupla - Sunday, November 17, 2019 - link

    I'm rather surprised Sapphire Technology hasn't sued Intel for trademark infringement.

    I sure as hell was confused by a "Sapphire" CPU.
  • nathanddrews - Monday, November 18, 2019 - link

    Code names based on real life locations are probably difficult to sue over.
  • meacupla - Monday, November 18, 2019 - link

    "Google Maps can't find sapphire rapids"
  • Khato - Monday, November 18, 2019 - link

    Most prominent result would be Sapphire Rapids in the Grand Canyon. Wouldn't be surprised if there are others as well.
  • Dragonstongue - Monday, November 18, 2019 - link

    Pretty sure there's a key technicality in the naming:
    Sapphire Technology (Sapphire Technology Limited) is the name of the company that makes AMD GPUs (mostly... I'm not sure about their other products, I could very well be wrong),

    whereas Intel always refers to this as
    Sapphire Rapids, in this case the Sapphire Rapids CPU,
    which seems like distinct naming.

    So quite likely they are "safe". Heck, even if they are not, they have enough coin to buy out Sapphire outright and then use the name however they see fit (or just "pay" whatever lawsuit comes their way... if they decide to pay at all).

    Would not be the first time in their history they've pulled douche moves like that.
  • outsideloop - Sunday, November 17, 2019 - link

    "Leadership Performance" I love good comedic content.
  • IntelUser2000 - Monday, November 18, 2019 - link

    "Third" Gen Optane DIMM, not Second.

    By the way, if you assume the whole setup is 40 MW, and you further assume the 2400 nodes use 90% of the power, you end up with a single node consisting of 2x Sapphire Rapids and 6x Ponte Vecchio using 15 kW. So maybe 80% of that is the GPUs, so each Ponte Vecchio is using 2 kW.
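
    (A minimal sketch of that back-of-envelope math; every input below is an assumption from this post, not a published figure:)

        # Back-of-envelope Aurora power split; all inputs are assumptions
        total_power_w = 40e6        # assumed ~40 MW for the whole setup
        node_share    = 0.90        # assume the 2400 nodes draw 90% of it
        nodes         = 2400
        gpus_per_node = 6
        gpu_share     = 0.80        # assume ~80% of node power goes to the GPUs

        node_power_w = total_power_w * node_share / nodes        # 15000.0 -> ~15 kW per node
        gpu_power_w  = node_power_w * gpu_share / gpus_per_node  # 2000.0  -> ~2 kW per Ponte Vecchio
        print(node_power_w, gpu_power_w)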
  • IntelUser2000 - Monday, November 18, 2019 - link

    Based on that, the improvement over the current GPU is 10x, because supercomputers care about FP64, not FP32. 6.7x more power for 10x more performance equals 50% more efficiency. That makes more sense.
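
    (Continuing the same rough sketch, assuming the ~2 kW figure from the previous post and a ~300 W current-generation GPU as the baseline:)

        perf_ratio  = 10.0            # assumed 10x FP64 performance uplift
        power_ratio = 2000.0 / 300.0  # ~6.7x the power of a ~300 W card
        perf_per_watt_gain = perf_ratio / power_ratio  # ~1.5, i.e. ~50% better perf/W
        print(round(power_ratio, 1), round(perf_per_watt_gain, 2))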
  • MrSpadge - Monday, November 18, 2019 - link

    Or maybe your assumption of 40 MW is wrong?
  • Santoval - Monday, November 18, 2019 - link

    I strongly doubt a single Ponte Vecchio will draw 2 kW. Even my boiler does not require 2 kW. As for the assumed 15 kW for a single 2U node, I am pretty sure such a power density would rival that of nuclear reactor cores.
  • Yojimbo - Monday, November 18, 2019 - link

    In terms of HPC, exascale means FP64 performance, not FP32 performance.

    The Summit supercomputer has over 27,000 Tesla V100s and uses about 13 MW. The Aurora supercomputer will be a ~30 MW machine. My guess is that either there will be 50,000+ GPUs or their GPUs are going to be made very large using chiplet technology.
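
    (For a rough sense of where a 50,000+ guess could come from, assuming Aurora's per-GPU system power were similar to Summit's - a big assumption:)

        summit_power_w = 13e6     # "about 13 MW"
        summit_gpus    = 27000    # "over 27,000" Tesla V100s
        aurora_power_w = 30e6     # "~30 MW machine"

        w_per_gpu = summit_power_w / summit_gpus             # ~480 W of system power per GPU
        aurora_gpus_estimate = aurora_power_w / w_per_gpu     # ~62,000 GPUs at Summit-like power per GPU
        print(round(w_per_gpu), round(aurora_gpus_estimate))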
  • IntelUser2000 - Monday, November 18, 2019 - link

    Yes, it looks like we're going from 300 W GPUs that look like a video card to a much larger one using 1-2 kW but with much more performance. Intel's presentation about the card using 8 MCM GPU dies is evidence of that.
  • Yojimbo - Monday, November 18, 2019 - link

    Many of the NVIDIA HPC GPUs already don't look like a video card; they use a mezzanine connection. Making the GPUs draw so much power (more than 1 kW) creates a rather specialized solution, because I am not sure hyperscalers and cloud builders are set up to cool and power such beasts. Of course, with the chiplet approach it will be less work to scale it down to something more manageable, but I don't think Intel has that concern in mind with their first generation of compute GPUs; they can't expect much commercial uptake anyway. I would think they would want some sort of lower-power developer boards available, however, to start to go after the commercial market.
  • Kevin G - Monday, November 18, 2019 - link

    This reads like an architecture with which Intel wants to win the performance crown. At a high level, the technology is there with the potential to do it, as they are throwing a lot of silicon at the problem.

    I am not surprised about the large power budget; it is something I've expected once the move to chiplets began. There are three effective caps on power consumption: chip size, voltage and frequency. One of those caps has been removed, and thus there's a spike in how much energy can be put toward a product. Thermodynamics still reigns supreme, but all that heat being generated is spread out over a larger area, something not possible with monolithic designs.

    The good news with a chiplet design is that creating a low-power developer board just means scaling back the number of chiplets. I'd imagine two compute dies, a Rambo cache die and some HBM would easily fit into a PCIe card's power budget.
  • Yojimbo - Monday, November 18, 2019 - link

    Oh, a minor correction. The original Aurora machine built around the Knights Hill Xeon Phi was supposed to be delivered in 2018, not 2020.
  • mdriftmeyer - Monday, November 18, 2019 - link

    Intel pushing the goal posts back is nothing new. The fact that AnandTech isn't critical of these "claims" versus reality is nothing new either.
  • Yojimbo - Monday, November 18, 2019 - link

    Regarding the Aurora supercomputer being the first customer of Ponte Vecchio: my opinion is that the answer was dodged that way because the real answer is a resounding yes. There is so much new technology going into this that it cannot possibly be profitable for commercial usage: a new 7 nm node, EMIB, chip stacking, CXL, and last but not least, Intel's first attempt at a compute-focused GPU built on a mostly untested GPU architecture. Then one must consider that the software ecosystem won't exist for commercial GPU compute yet. Aurora will be programmed with compiler directives, but commercial users will want to be able to develop their own codes based on lower-level APIs. There will be a lot of work to do to enable that to be done smoothly and productively. I think that the first users, and perhaps the only users with much volume, of these Ponte Vecchio GPUs will be HPC shops, and no HPC shop is going to get these GPUs before Argonne.
  • Kevin G - Monday, November 18, 2019 - link

    This makes me wonder how big the chiplet dies are. Intel was hit hard by 10 nm issues, with trouble making even the first Cannon Lake, which was only ~71 mm^2. Eight of those is a 'mere' 568 mm^2, which is possible today on current nodes. EUV is supposed to reduce the maximum area they can manufacture, so that may not be possible anymore at 7 nm (and EUV is going to be another first here for Intel too).

    However, I would argue that Intel will have the packaging side of things squared away in time. This isn't their first EMIB product, nor will it be their first Foveros product. CXL will ship first in Sapphire Rapids, mainly because it has to, but the first Intel peripheral to use it will likely be a 400G Ethernet controller. Intel has already licensed out CXL, so even the expected Intel CXL-enabled NIC may not be the first.

    The real gamble is going to be the oneAPI libraries. Intel has a tradition of being horrible when it comes to the software side of things. Larrabee died as a GPU because they simply couldn't get the software to work as desired. Intel never figured out the compiler magic to get Itanium to be competitive.

    Intel is doing a lot, but they have done much of it before at a smaller scale.
  • JayNor - Tuesday, November 19, 2019 - link

    Intel plans to offer PCIe 5.0 and CXL in an Agilex I-series part, announced back in April. It would make sense for Intel to use these for emulation, and perhaps they will also appear ahead of the Aurora project. Intel has been sampling the F-series since August. I've seen reports that the I-series features were demoed in the lab.

    https://www.intel.com/content/www/us/en/products/p...
  • JayNor - Tuesday, November 19, 2019 - link

    "In addition to the DOE's Aurora supercomputer, Lenovo and Atos also plan to build HPC platforms using Intel's Xeon processors, Xe GPUs and oneAPI unified programming layer, according to Intel."

    https://www.crn.com/news/components-peripherals/in...
  • del42sa - Monday, November 18, 2019 - link

    10nm +++ :-D
  • yeeeeman - Monday, November 18, 2019 - link

    What is the problem with improving what you already have?
    If you compare first-gen 14nm products (aka Broadwell) with the latest (Comet Lake), you will see a huge improvement in both frequency and power.
  • Spunjji - Monday, November 18, 2019 - link

    The joke is that they always did it, but only recently did they feel the need to start adding the +++ for marketing purposes.
  • name99 - Monday, November 18, 2019 - link

    It depends on your business model.
    For TSMC it makes sense because they support a wide range of customers operating on a range of lithographies.
    For Intel it's more of a problem, to the extent that optimizing 14nm does not get you much (or any) insight into how to improve 10nm or 7nm. Intel's current business does not really operate on providing a wide range of processes (e.g. still shipping stuff on 14nm+++ at the same time as 7nm). Sure, they can try to wiggle around it with chiplets, but it's unclear that the demand numbers really balance.

    In other words it's a problem in that for TSMC each dollar invested in that way (optimizing non-leading edge process) results in a long stream of revenue. For Intel each such dollar only results in a short burst of revenue.
    This doesn't matter as long as Intel has a monopoly. But that monopoly appears to be crumbling... (No, no, our recent halving of Xeon prices had absolutely nothing to do with AMD, not at all. We just wanted to show our customers how much we appreciate them.)
  • JayNor - Tuesday, November 19, 2019 - link

    In the most recent quarterly conference call, Intel stated that their 14nm profitability will remain high because the manufacturing equipment is almost completely depreciated. Here's the quote:

    "The third thing is just George flagged this I just simplified. There's no transition and for us no transition next year is going to be 14-nanometer, we'll be a little better in terms of its profitability. Yields won't be dramatically different because we're extremely mature. But depreciation levels will be lower, because a lot of these tools have been fully depreciated, there because we've been on that node for so long. So, for the node transition 14 will be a little bit better."

    https://seekingalpha.com/article/4298931-intel-cor...
  • visualzero - Monday, November 18, 2019 - link

    Intel - innovation by PowerPoint. What a joke.
  • Spunjji - Monday, November 18, 2019 - link

    Jam tomorrow!
  • mdriftmeyer - Monday, November 18, 2019 - link

    That's all they've got. Intel has hit a wall and all the innovation around them is passing them by.
  • Duncan Macdonald - Monday, November 18, 2019 - link

    Given the problems that Intel has had reaching 10nm - what is the chance that they will get 7nm working well enough for this project to come in on time? (Unless of course they contract TSMC to make the 7nm chips!!!)
  • jabbadap - Monday, November 18, 2019 - link

    Yeah, those TFLOPS don't really compute. I must admit I don't know anything about supercomputers, but doesn't Aurora use Cray Shasta cabinets? The only thing I could find about those is the NERSC-9 supercomputer, and if I'm not mistaken each cabinet can take 64 2U or 32 4U nodes.
  • Silma - Monday, November 18, 2019 - link

    Intel's plans seem totally unrealistic.
    It was 5 years late with 10 nm, but we must believe it'll be producing 7 nm in a little less than a year?
    Plus a new interconnect, plus a new language, plus a new packaging technology.

    Also, there's a reason why NVIDIA was so successful in HPC: unlike Intel, it doesn't kill off its technology (Xeon Phi).

    What are supercomputer vendors like Atos, which promised exascale Phi-based machines, to do?
    Their customers will surely be happy to know that the money they invested in next-gen was actually lost on no-gen.
  • haukionkannel - Monday, November 18, 2019 - link

    Intel develops several generations at the same time. 7nm has been under development for many years! So 7nm very soon is more than likely, especially if their 10nm is as bad as it has been so far; they will drop it as soon as possible.
  • mdriftmeyer - Monday, November 18, 2019 - link

    My friends say otherwise and they should know seeing as their jobs revolve around it.
  • Phynaz - Monday, November 18, 2019 - link

    Good source.
    Dumbass.
  • Korguz - Monday, November 18, 2019 - link

    Oh?? lets see you do better...
    oh wait.. you CAN'T
  • Kevin G - Monday, November 18, 2019 - link

    7 nm is to use EUV, which simplifies the manufacturing greatly (no quad patterning). The main delay has been waiting for EUV tools to become available.
  • RSAUser - Wednesday, November 20, 2019 - link

    As Kevin said, the main problem was tools, namely EUV tools.
    It's the same reason TSMC took so long to get below 14/12nm, is now progressing so rapidly to 5nm (Q2 2020), and is busy building a 3nm plant, while Samsung is at 7nm now, going to 6nm.
    Intel will also probably manage to hit 7nm stably soon.
