188 Comments

  • colinisation - Thursday, October 29, 2020 - link

    Why are they bringing AVX-512 to the mainstream desktop? Is it confirmed to be full-fat 512-bit vector processing, or will it be split into two 256-bit chunks, the way Zen 1 handled 256-bit AVX? Not a software dev, but I think full-fat 512-bit is a waste on the home desktop.

    Does this core support TSX, and does TSX have a future in Intel CPUs?
  • SarahKerrigan - Thursday, October 29, 2020 - link

    On laptops, which use the same core, it's full-on AVX-512.
  • IntelUser2000 - Thursday, October 29, 2020 - link

    Nope.

    On Ice Lake and Tiger Lake, 512-bit is split into two.
  • SarahKerrigan - Thursday, October 29, 2020 - link

    Is it? I thought that was only true of load/store ops. Definitely willing to learn something, though - my only AVX512 exposure has been on SKL-SP (where the app got solid performance wins over the previous AVX2 implementation.)
  • saratoga4 - Thursday, October 29, 2020 - link

    Ice Lake has full 512-bit-wide vector units and data paths, so it's not split in two. He's probably confusing Ice Lake (1x512 wide) with Skylake-SP (2x512 wide). So yes, it is half (512-bit) of the previous Skylake-SP (1024-bit).
  • Elstar - Friday, October 30, 2020 - link

    It's more complicated than that. For example, see this diagram:

    https://en.wikichip.org/wiki/intel/microarchitectu...

    Basically, Skylake-SP has three execution units. Two are 256-bit wide and one is 512-bit wide; and the smaller 256-bit units can be synchronized to execute AVX512 instructions.

    The net effect is -- roughly speaking -- that one can either start executing three AVX2 instructions in a given cycle or two AVX512 instructions (because two of the units need to be ganged together).

    What consumers are getting in Ice Lake and later is the ability for two 256-bit execution units to be lashed together to execute AVX512 instructions. I believe the dedicated 512-bit execution unit is still server/workstation only.

    Also, ganging together execution units is quite normal. When 128-bit SSE came out, those instructions were emulated via 64-bit execution units. And for a while, 256-bit AVX instructions were emulated via 128-bit execution units. In fact, it was only with Zen 2 that AMD started natively having 256-bit wide execution units, thus allowing some serious performance gains when executing AVX code.
  • dotjaz - Saturday, October 31, 2020 - link

    ICL doesn't split, rather fuses 2x256 to 1x512. Not exactly the same but makes no real difference anyway.
  • Kevin G - Thursday, October 29, 2020 - link

    It didn't split, kinda. Skylake-SP has an extra true 512-bit-wide unit. The consumer cores never got it.
  • dotjaz - Saturday, October 31, 2020 - link

    ICL uses 2x256-bit FMA/vector units to execute one 512-bit op. Intel didn't explicitly say it, but it can only execute 1x512-bit and not 1x512 + 1x256, so what else can it be?
  • Revdarian - Thursday, October 29, 2020 - link

    The real problem with AVX-512 is "which actual feature set of AVX-512 is going to be present?" Unlike the other types of advanced vector extensions, there is no one-size-fits-all standard but instead multiple different extensions, and you aren't required, nor expected, to support them all in order to claim AVX-512 support, creating bigger fragmentation than just "it has AVX-512 or not". This was a mistake on Intel's part IMHO and is what soured Linus Torvalds and other developers on the instructions.
  • Luminar - Thursday, October 29, 2020 - link

    FMA is superior.
  • Klimax - Friday, October 30, 2020 - link

    Not true. There is AVX512F (aka Foundation), which supports all the original AVX-512 instructions and masking at 512-bit width, and it is always present. All other extensions can be checked for at runtime, and you still need a fallback anyway for other CPUs.
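
    A minimal sketch of that runtime check, assuming GCC or Clang (which provide the __builtin_cpu_supports builtin); the three dispatch targets are illustrative, not any particular library's:

        #include <stdio.h>

        int main(void)
        {
            /* AVX512F is the baseline: if a CPU has any AVX-512 at all, it has this. */
            if (__builtin_cpu_supports("avx512f"))
                puts("dispatch: AVX-512 kernel");
            else if (__builtin_cpu_supports("avx2"))
                puts("dispatch: AVX2 kernel");
            else
                puts("dispatch: scalar fallback");
            return 0;
        }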
  • brucethemoose - Thursday, October 29, 2020 - link

    It's not a waste, but it would certainly be nice if the x86 players (or at the very least, Intel) standardized x86 SIMD like ARM is doing with SVE.
  • whatthe123 - Thursday, October 29, 2020 - link

    I think the problem is that AVX-512 doesn't make sense anymore if they're shipping Xe GPUs.

    Why put a fat, clockspeed-draining instruction set on the CPU when you have a GPU that is likely much faster and more efficient? Intel really needs to admit there's no market for AVX-512 in consumer devices and dump it, preferably using the die space to improve CPU IPC.
  • KurtL - Friday, October 30, 2020 - link

    If you've ever done programming for GPUs you would know that there are cases where it makes perfect sense. There are cases that AVX-512 can accelerate and a GPU cannot, because with a GPU you have a transfer of control between the CPU and GPU, which costs time, and you need to move data between two memory spaces. It is also a myth that AVX-512 is by itself power hungry; that is not true at all. In fact, it is easier to execute CPU vector instructions than NVIDIA-style GPU instructions, as the compiler does more of the work in AVX. It is the implementation in Skylake that was mediocre at best. This may or may not be corrected in the new core.
  • whatthe123 - Saturday, October 31, 2020 - link

    It's not a myth. By default, AVX-512 implementations come with clock offsets because of the power draw, and all you need to do is run mixed ops to see the frequency falloff and stalls caused by AVX-512.
  • YB1064 - Thursday, October 29, 2020 - link

    Intel's naming scheme and product stack give the USB consortium a serious run for their money. If you sift through all the bullsh!@#, this is just the same old wine in a new bottle.
  • lmcd - Thursday, October 29, 2020 - link

    Incredibly idiotic that you're reposting the same comment year after year longer than Intel has been releasing the same product year after year. Did you not read the part where this is a new core? New graphics architecture? Improved memory controller? New video decoder/encoder?

    Not every release changes the world, but Rocket Lake looks to be a solid release and Intel's best since Skylake.
  • Flunk - Thursday, October 29, 2020 - link

    AVX-512 instructions target a very limited set of applications; unless you're using them, the logic is just taking up space on the die. Probably not that much space, though; after all, Intel is also shipping an iGPU on all CPUs, taking up a huge amount of space even if you don't need one.
  • dullard - Thursday, October 29, 2020 - link

    To make it easier on developers. Few developers want to have many versions of their software out there, one for CPUs with AVX-512 and one without. Remember all the hassle of installing 32-bit or 64-bit versions, and making sure you have the right one? With AVX-512 trickling down to almost all CPUs, developers are encouraged to finally incorporate it into software. Until then, it was just sporadically included in specific software for specific use cases.
  • brucethemoose - Thursday, October 29, 2020 - link

    One can compile support for multiple SIMD targets into a single executable. Automating this in a standard library is even a goal of some programming languages atm.

    It's just a PITA. I can't even imagine supporting all these x86 architectures, each with their own subset of AVX1, 2, or 512, as an assembly programmer.
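
    As a hedged illustration of that single-binary approach, GCC (and recent Clang) can do the per-function multi-versioning automatically with the target_clones attribute; the function here is just a placeholder:

        #include <stddef.h>

        /* The compiler emits one clone of this function per listed target and
           picks one at load time, so a single binary carries SSE2/AVX2/AVX-512 paths. */
        __attribute__((target_clones("default", "avx2", "avx512f")))
        void scale(float *x, float s, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                x[i] *= s;   /* auto-vectorized differently in each clone */
        }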
  • Buck Turgidson - Thursday, October 29, 2020 - link

    As someone who deals with both x86-64 and ARM64 assembler (as I’m the poor sod who writes the hyper-optimized routines), it’s not as big an issue as you’d think. Nobody (outside of those in the compiler/interpreter space) is writing whole modules or significant libraries in assembler. In most cases, it’s only used sparingly in specific scenarios. Generally, it’s where we discover that either the compiler is grossly inefficient or the intrinsics implementation is poor (sadly, compiler intrinsics are still hit or miss). In that situation, we’ll generally look at the disassembled output, come up with something better, and move the function to an external asm file and link it. Generally I keep it symmetrical between ARM and x86, not least because the compilers (especially GCC) seem very good at generating the same inefficiencies across architectures (the load/store nature of ARM seems to make it worse; I’ve seen some bizarre compiler misfires on ARM around load/store).

    So how does this relate to AVX versions? I can count on one hand the number of manually coded AVX functions I’ve written. And when deciding what AVX rev to use, it came down to whether there were instructions in the later AVX set of great value in the situation I was faced with; if not, you go with the lowest common denominator. Keep in mind (despite my negative comments regarding intrinsics) almost everything we’re doing in AVX (and other SIMD implementations) is being done with intrinsics. So for someone on the team to note a performance anomaly serious enough for me to do some hand-coding is super rare in that space (mainly because the whole point of those instructions is that they reduce iterative complexity).

    Most of the time, when I’m having to rework something in assembler, it’s almost always far more mundane stuff (e.g. optimized fallback implementation that’s needed for a CPU that doesn’t support some level of AVX).
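
    As a rough illustration of the intrinsics point (not code from any real project), an AVX-512 fused multiply-add loop and the plain fallback might look like this; the masked tail is the sort of thing a scalar remainder loop would otherwise handle:

        #include <immintrin.h>
        #include <stddef.h>

        /* a[i] = a[i]*s + b[i], 16 floats per iteration; build with -mavx512f. */
        void axpy_avx512(float *a, const float *b, float s, size_t n)
        {
            size_t i = 0;
            __m512 vs = _mm512_set1_ps(s);
            for (; i + 16 <= n; i += 16) {
                __m512 va = _mm512_loadu_ps(a + i);
                __m512 vb = _mm512_loadu_ps(b + i);
                _mm512_storeu_ps(a + i, _mm512_fmadd_ps(va, vs, vb));
            }
            if (i < n) {   /* remainder handled with a mask register instead of a scalar tail */
                __mmask16 m = (__mmask16)((1u << (n - i)) - 1u);
                __m512 va = _mm512_maskz_loadu_ps(m, a + i);
                __m512 vb = _mm512_maskz_loadu_ps(m, b + i);
                _mm512_mask_storeu_ps(a + i, m, _mm512_fmadd_ps(va, vs, vb));
            }
        }

        /* The mundane fallback for CPUs without AVX-512. */
        void axpy_scalar(float *a, const float *b, float s, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                a[i] = a[i] * s + b[i];
        }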
  • Kangal - Thursday, October 29, 2020 - link

    What about emulators?
    Useless on old ones like the Nintendo Wii and lower, but I've heard it is handy for the heavier ones such as the PS3, Wii U, and Switch.

    Or how about emulating a different ISA, such as Mac/Windows emulating older 32-bit x86 programs on a newer ARM CPU?
  • Buck Turgidson - Saturday, October 31, 2020 - link

    I dunno, I’ve never thought about trying to write an emulator for the PowerPC ISA. Both the 360 and the PS3 are, in some fashion, PowerPC ISA. The PS3’s Cell CPU thingie is a PPC core with a bunch of special vector units, while the 360 has a triple-core, SMT enabled PPC CPU. While I can’t cite specific use-case scenarios for mapping PPC ISA translation to AVX (mainly because I don’t know the PPC ISA), I’m sure there are a few, probably some that are specific to emulation scenarios (beyond just the obvious AltiVec mappings). I’d imagine, in the case of the Cell’s vector units, AVX would be a good use-case, but that’s just a supposition because I assume the Cell vector units have functionality overlap with AVX, but how those Cell vector units work (context and state, types and alignment) I have no idea. That said, to borrow from Hunter S. Thompson, people who write emulators are high powered mutants, never even considered for mass production.
  • sing_electric - Sunday, November 1, 2020 - link

    I'm wondering how the vector execution on the cell was used in practice; it wouldn't surprise me if it was lightly used on many titles given developer unfamiliarity with it. I think IBM released a very low-speed emulator for testing out code for its Cell-based chips. I'm also wondering whether GPU emulation might be a route to try.
  • abufrejoval - Friday, October 30, 2020 - link

    Thanks for your insights!

    It's comments like this, which make Anandtech so attractive.
  • RSAUser - Thursday, October 29, 2020 - link

    AVX-512 is only useful for a very limited subset of tasks, and it usually requires a lot of effort on the part of the developer for little gain. The fact that more CPUs have it doesn't really change anything.
  • Klimax - Friday, October 30, 2020 - link

    Inaccurate. AVX-512 won't show a benefit if the scenario is massively memory-bandwidth bound and uses no transcendental functions. Otherwise it can accelerate code incredibly.

    And it actually contains functions like sin or log, which never had direct support in any of the previous SSE/AVX extensions (the closest one could get was through SVML). And they are FAST (even SVML loses significantly).
  • iAPX - Thursday, October 29, 2020 - link

    I would recommend taking a look at the x264 source code: https://code.videolan.org/videolan/x264/-/tree/mas...

    Yes, there is some ASM code in there today, using AVX, AVX2 and AVX-512 where needed.
  • eastcoast_pete - Thursday, October 29, 2020 - link

    Desktop use and support by software is a chicken-and-egg problem; if you develop for laptop or desktop x86/x64, adding AVX-512 use may not make much sense if most of your customers don't have CPUs that support it anyway. For Intel, AVX-512 is one of their last real differentiators versus AMD, so Intel pushing it hard makes sense.
  • RSAUser - Thursday, October 29, 2020 - link

    Not sure it's a chicken or egg problem as most of those tasks are better handled by the GPU, or they're not something a consumer would really run.
  • eastcoast_pete - Thursday, October 29, 2020 - link

    You're correct for some things, not so much for others. The virtual backgrounds in several videoconferencing applications currently use AVX/AVX2, but might be better with 512-bit AVX. On the other hand, NVIDIA's new cards do an awesome job with background removal/replacement, but not everyone has one of those.
    As for general computing on the GPU, that is and has been a big disappointment for me; still so few programs make use of the enormous compute power even in iGPUs. Now, I am not a programmer, but maybe some people who are could explain why that hasn't materialized to the extent promised about 10 years ago. In the meantime, I take all the long/wide extensions on the CPU I can get my hands on.
  • eastcoast_pete - Thursday, October 29, 2020 - link

    And, like most these days, I work on a company-issued laptop, no dGPU in sight or even allowed. So there, the CPU is "it".
  • quorm - Thursday, October 29, 2020 - link

    Most code runs better, or well enough, on the CPU. Running things on the GPU adds overhead and makes the code more complex. It's only really worth it for things that lend themselves well to parallelization (such as matrix math) and also consume a lot of computation time. The video/image processing you mention is a good example.
  • Gigaplex - Friday, October 30, 2020 - link

    The biggest problem with AVX is that it's segmented to the point that software developers can't rely on it being present. Also, maybe some software developers want to be able to write and test their code on mainstream laptops even if the production code will eventually target other environments.
  • quadibloc - Friday, October 30, 2020 - link

    I've been waiting for Intel to do this. Having wider floating-point was a major Intel advantage over Bulldozer, and over the first generation of Ryzen. So I expect this chip to be the one that takes the lead in single-core desktop performance back for Intel, marking the occasion of Intel being back in the game. Of course, since it's taken so long, AMD may well have the resources to keep up.
  • quadibloc - Friday, October 30, 2020 - link

    Except it's still 14nm and not 10nm. So we may have to wait a little longer before Intel gets back in the game.
  • DannyH246 - Friday, October 30, 2020 - link

    Because they don't have anything else. They want to be able to say they have something that AMD doesn't.
  • brucethemoose - Thursday, October 29, 2020 - link

    Could y'all confirm that the platform supports AV1 encoding, and not just decoding?

    As far as I know that would make Rocket Lake the first platform with hardware AV1 encoding.
  • brucethemoose - Thursday, October 29, 2020 - link

    Yep, that appears to be a mistake in the slide: https://reddit.com/r/intel/comments/jkapv5/fresh_n...

    Y'all might want to reach out to Intel and post a correction, so readers don't think Rocket Lake has extra encoding support.
  • eastcoast_pete - Thursday, October 29, 2020 - link

    Was wondering the same. Is there even an AV1-encoding ASIC out there that doesn't cost thousands of dollars?
  • brucethemoose - Thursday, October 29, 2020 - link

    I don't think any ASICs even exist. There seems to be some FPGA IP.
  • GentleSnow - Thursday, October 29, 2020 - link

    Intel slide: "Double Digit IPC Instructions Per Clock (IPC) improvement"
    When a company this sized doesn't even have someone technical proofread its promotional materials, something is really wrong inside.
  • GentleSnow - Thursday, October 29, 2020 - link

    And of course I'd make a typo in this post :P But again, forum comment vs official corporate promo materials
  • yeeeeman - Friday, October 30, 2020 - link

    Yeah, their problem is called Bob Swan; he's an idiot.
  • Xajel - Thursday, October 29, 2020 - link

    AV1 is new on the hardware side, it's normal to start with decoding only. RTX 3000 & RX 6000 Series both have decoding only atm.
  • brucethemoose - Thursday, October 29, 2020 - link

    Yeah, decoding is what I assumed since Tiger Lake has it.

    The slide specifically says *encoding* though.
  • Luminar - Friday, October 30, 2020 - link

    More interesting would be ray tracing on igpus
  • yeeeeman - Friday, October 30, 2020 - link

    Wouldn't make sense cause they won't have sufficient performance anyway
  • Makaveli - Thursday, October 29, 2020 - link

    What I want to know is what the IPC gain will be vs Comet Lake. I don't want to hear anything about IPC vs Skylake!!!
  • SarahKerrigan - Thursday, October 29, 2020 - link

    Comet Lake is essentially the same core as Skylake, so IPC will be approximately identical.
  • Otritus - Thursday, October 29, 2020 - link

    Comet Lake IPC = Skylake IPC + 0%, so IPC vs Skylake is appropriate.
  • Flunk - Thursday, October 29, 2020 - link

    No, actual new core design this time. Same manufacturing node, new design. I wouldn't have big expectations, but there could be IPC improvement.
  • MetaCube - Saturday, October 31, 2020 - link

    He never said it wasn't a new core ?
  • goatfajitas - Thursday, October 29, 2020 - link

    I am more concerned about heat. Specifically, the real numbers, not Intel's bullshittery numbers.
  • dullard - Thursday, October 29, 2020 - link

    It will use the turbo power when in turbo, and the TDP the rest of the time. Why is that so hard to understand?
  • goatfajitas - Thursday, October 29, 2020 - link

    It's not. That is where the "bullshittery" comes into play. The implication is that Intel is full of crap with its figures.
  • dullard - Thursday, October 29, 2020 - link

    Which specific figures are you concerned with being incorrect? The last slide says the chips will take up to 250 W for up to 56 s, then drop down to 125 W. Over time, it will use up to 250 W when there is thermal headroom to do so. Otherwise, it will use 125 W. Are you disputing the 250 W, the 56 s, or the 125 W?
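
    Roughly speaking (a sketch of the published turbo behaviour, not Intel's exact firmware algorithm): the chip may draw up to PL2 as long as an exponentially weighted moving average of package power stays under PL1, with tau as the averaging time constant:

        \frac{1}{\tau}\int_{0}^{t} P(s)\, e^{-(t-s)/\tau}\, ds \;\le\; \mathrm{PL1}, \qquad \mathrm{PL1}=125\ \mathrm{W},\ \mathrm{PL2}=250\ \mathrm{W},\ \tau=56\ \mathrm{s}

    so a sustained 250 W burst exhausts the budget on the order of tau, after which average power is held at 125 W.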
  • goatfajitas - Thursday, October 29, 2020 - link

    "the chips will take up to 250 W for up to 56 s, then drop down to 125 W. Over time, it will use up to 250 W when there is thermal headroom to do so"

    = Bullshittery
  • dullard - Thursday, October 29, 2020 - link

    I guess I just don't understand your point. That is what Intel claims and that is what the chips do. Doing what you say is not the normal definition of bullshit. You might not like what Intel is doing. But there are more correct terms for that.
  • yeeeeman - Friday, October 30, 2020 - link

    What is bullshit man? The fact that you don't understand how it works? The chips actually do respect this rule, even the 10900k. It goes to 250w if it has thermal headroom and only for 56 sec. If the motherboard allows it, it will boost indefinitely to 250w.
  • yeeeeman - Friday, October 30, 2020 - link

    Stop watching YouTube videos to collect information. Come and read some articles on Anand.
  • 0ldman79 - Thursday, October 29, 2020 - link

    Not like there's much difference in IPC between Skylake and Coffee Lake. Just some cache timing tweaking.
  • yeeeeman - Friday, October 30, 2020 - link

    It will be between Ice Lake and Skylake, the gap being 20%-ish. So if they say double digits, we can be pessimistic and say 10% better IPC vs Skylake. I expect more like 15%.
  • Duncan Macdonald - Thursday, October 29, 2020 - link

    Intel trying (not very successfully) to divert attention from AMD's recent announcements.
    Wake me up when Intel has a desktop processor to beat the Ryzen 5950X.
  • goatfajitas - Thursday, October 29, 2020 - link

    I am sure they will pick and choose benchmarks and ignore heat and power to present it that way.
  • quorm - Thursday, October 29, 2020 - link

    Excited for the upcoming demo using a 1kW water chiller to cool an 8-core cpu.
  • Arbie - Friday, October 30, 2020 - link

    Aw, c'mon guys - give the struggling little underdog a break.
  • Everett F Sargent - Thursday, October 29, 2020 - link

    So a late March announcement with wide availability in Q2'21 based on previous reporting.

    Eight cores maximum? Why am I not surprised that AMD did not announce an eight core 5700X or some such.
  • Targon - Thursday, October 29, 2020 - link

    AMD didn't announce a 5700 or 5700X because not enough silicon fails QA for the 5800X. If you have three different 8-core/16-thread processors, you need to divide up the chips you produce. It would really just create an artificial chip shortage at launch, because chips perfectly good for the high-end product would be diverted to those lower-performing parts just so there are enough of them at launch.

    I expect AMD will release 5700 and 5700X chips in January/February, once things calm down from the Zen3 launch.
  • smilingcrow - Thursday, October 29, 2020 - link

    That's not the reason to not release the lower tier chips at launch.
    They rightly want to maximise their profits by just releasing the top tier chips at launch which are more expensive.
    Many chips released as lower tier SKUs aren't lower binned to a significant degree but it's more a matter of segmentation based purely on price.
    With the demand for Zen 3 likely to be very high, AMD have decided to maximise profits so have delayed the cheaper SKUs.
    All good business sense.
  • yeeeeman - Friday, October 30, 2020 - link

    Exactly. Amd is entering Intel mode with zen 3.
  • Spunjji - Friday, October 30, 2020 - link

    "Amd is entering Intel mode with zen 3"

    Bit of a jump to go from "aren't shooting themselves in the foot by dropping prices while supply constrained" to "milking the market for everything it's worth and knifing competitors in the back" 😅
  • robbro9 - Thursday, October 29, 2020 - link

    So this is nearly a Tiger Lake on the latest 14 nm process? I thought the super fin or whatever advance going from 10 to 10 + (or is that 10+ to 10++) was a huge breakthrough/advantage? If so will these be severely handicapped compared to the Tiger Lakes now showing up in laptops?
  • AdrianBc - Thursday, October 29, 2020 - link

    Actually it is an Ice Lake made in the 14 nm process, but the Ice Lake and Tiger Lake cores are very similar. The main reason why Tiger Lake is much faster is because of a much higher clock frequency.

    Rocket Lake will have a higher single-core turbo frequency, at least 5.0 GHz, maybe even up to 5.3 GHz, or even higher, but that is unlikely.

    So it will have higher single-thread performance than Tiger Lake. If the turbo frequency reaches 5.3 GHz, like Comet Lake's, then the single-thread performance would slightly exceed that of Zen 3, probably taking back 1st place in gaming from AMD.

    Nevertheless, in multi-threaded applications the efficiency of the 14-nm cores will be unavoidably low, so despite the 125/250 W power consumption Rocket Lake will be much slower than AMD Zen 3 for most professional applications.
  • Unashamed_unoriginal_username_x86 - Thursday, October 29, 2020 - link

    Good analysis, much appreciated!
  • brucethemoose - Thursday, October 29, 2020 - link

    I imagine it won't matter much since desktop chips can run at insane per-core TDPs.

    If you're interested in sweet-spot power efficiency, you should buy Tiger Lake. Or better yet, Renoir.
  • powerarmour - Thursday, October 29, 2020 - link

    Why are there so many Intel articles on this site these days?
  • Ryan Smith - Thursday, October 29, 2020 - link

    Intel's been fairly busy as of late with the Tiger Lake launch and their business transactions.

    Though taking a quick survey, this is the first Intel article we've published in a few days. Most of this week has been AMD due to their various transactions, keynotes, and product announcements.
  • Hulk - Thursday, October 29, 2020 - link

    Okay I'm confused. Can someone help me out?

    If Cypress Cove (Rocket Lake) is a back port of Sunny Cove (Ice Lake) with the new AVX-512 instructions, which are already included in Willow Cove (Tiger Lake), then wouldn't Cypress Cove be more aptly called a back port of Willow Cove without the larger L1/L2 cache?

    Intel is killing us not so much with their naming schemes, which are convoluted, but even more so with a lack of clarity in explaining them.

    Does Cypress Cove include the larger L1/L2 cache of Willow Cove? If so, then it is essentially Willow Cove, right? Or at least architecturally closer to Willow than Sunny?
  • GeoffreyA - Thursday, October 29, 2020 - link

    It appears to be Sunny Cove ported to 14 nm, not Willow Cove. At any rate, Intel needs to reset their entire nomenclature. Come on, how long are they going to go on with these Lakes and Coves. Lack of imagination on Marketing's part?
  • sing_electric - Thursday, October 29, 2020 - link

    This. 100%. And they need to just use model numbers that make sense. I've always had a pet peeve for 0's that aren't used (why is there a Radeon 6800XT when there's not a 6801XT or a 6810XT? Why not just Radeon 68?), but the kicker is that Intel actually could use them in a way that could provide people with useful information.

    As it stands you need a decoder ring to figure out if a part has features you want or not.
  • Spunjji - Thursday, October 29, 2020 - link

    Radeon 69XT

    Nice
  • Targon - Thursday, October 29, 2020 - link

    You have to figure that a lot has to do with history where the product branding leaves room in between the models for different SKUs. Before things went into the thousands, you would see a 100 series with various parts within it, then the following generation went to the 200 range. Sure, numbers would be skipped, but looking at the first digit was a good way to know about the difference in generation. In the jump from 900 to 1000 series, that trend of first digit saying generation ended up sticking around. So, 1000 to 1100...same generation, or different if all you look at is the first digit, the lazy people would find it confusing. So, we saw the jump from 1000 to 2000, to 3000. AMD had been going through its own issues, but then, marketing comes into play. They see a 3000 model from NVIDIA, and that MUST be faster than a 1000 series from AMD, right? At least, the stupid people might think that way.

    It would have been better to choose a better way to differentiate products, but I am not a marketing person, I just see some of the reasoning behind it.

    Intel is a lot worse when it comes to part numbers. Laptop i7 chips with 2 cores, or 4? Oh, quad-core i5 is faster than dual-core i7 in laptops...yea, you need to do a search to tell which is faster, because going from a 3.5GHz i7 to a 2.1GHz i7 a few generations later makes a lot of sense...
  • sing_electric - Sunday, November 1, 2020 - link

    Right, and despite using a TON of numbers, Intel's nomenclature tells you NOTHING about the chip: Even within 10xxxx or 11xxxx chips, there's a mix of both 14nm and 10nm parts (depending on whether you're talking laptop or desktop). The number doesn't tell you the # of cores, threads, speeds, or TDP and they need to add letters at the end to tell you stuff like whether its unlocked.

    AMD's been guilty of this, too, until recently, on the CPU side, by having their mobile parts a generation "behind" what you'd think based on the product number, and AMD's consistently made its motherboard chipset #s a digit higher than Intel (which is sort of funny since basically everyone decides on CPU vendor first, then chooses a motherboard to fit).

    I'm frankly not sure why they bother with model numbers that don't contain real information - the vast majority of buyers frankly don't care AT ALL - they don't even know what i7 vs. i5 or i3 means, let alone product generation, etc., and the majority of buyers who DO care know enough to do a quick search before buying to get the information they want.
  • tygrus - Thursday, October 29, 2020 - link

    Next, users will be confusing CPU names with GPU names.
    A 5800X with a 5700XT. Or next generation's Ryzen 6800XT with this year's Radeon 6800 XT.
    The goldfish in the marketing department need to be replaced with people who have longer memories, people who can do something more interesting and long-lasting than counting by 1 and adding more zeros.
  • Qasar - Friday, October 30, 2020 - link

    have you seen the way intel names its cpus ????
  • sing_electric - Sunday, November 1, 2020 - link

    RADEON 6000 FOREVER!!!! https://www.youtube.com/watch?v=JPsKuCHJ7D4
  • GeoffreyA - Friday, October 30, 2020 - link

    You know, I miss the simplicity of the old days, when it used to be Pentium II 450 MHz, for example, though I understand that's not possible any more.

    As for microarchitecture names, how hard is it to open a map and look for a few minutes, picking out a nice one? "Welcome, people, to Intel's new Casablanca microarchitecture, successor to our venerable Santa Clara design."

    Intel had some pretty good names before: Coppermine, Northwood, Sandy Bridge, etc. Admittedly, Willow Cove sounds all right to me, even Sky, Tiger, and Rocket; but the thing is, Lake and Cove need to be retired.
  • Spunjji - Friday, October 30, 2020 - link

    The Lake/Cove fixation has coincided with their inability to iterate on their designs. I've always viewed it as a deliberate attempt to obfuscate the underlying architecture, much like the way seeing a "10th gen" mobile CPU gives you no indication as to what tech generation the CPU belongs to.
  • GeoffreyA - Friday, October 30, 2020 - link

    Agreed, and it's doing its job of confusion beautifully.

    Allowing for simplification, there are just three architectures: Skylake, Sunny Cove, and Willow Cove (if we even entertain the latter for its cache). But in the hazy realms of language, where 2 + 2 can equal 5, Intel can have as many as they want.
  • sing_electric - Sunday, November 1, 2020 - link

    FWIW, I think one example of model numbers "done right" is Dell's monitor nomenclature: its consistent, and though it starts off looking random, once you get the hang of it you can even "predict" model numbers that don't exist: I don't know if they'll release a UP3221Q, but I do know that if they did, it'd be:

    U: Ultrasharp (brand for higher-end monitors with small bezels)
    P: Premier color (typically 10 bit, professionally calibrated from the factory)
    32: 32" size class
    21: Released in/for 2021 (they sometimes jump the gun a bit, but at least you know that a U2412 is an ~8-year-old design)
    Q: 4k display.
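
    Just for fun, a toy decode of that scheme (the field meanings are taken from the list above; UP3221Q is the hypothetical example, not a confirmed product):

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *model = "UP3221Q";
            char line[8] = {0}, size[3] = {0}, year[3] = {0};
            size_t i = 0;
            while (model[i] && (model[i] < '0' || model[i] > '9'))
                i++;                              /* leading letters = product line */
            memcpy(line, model, i);
            memcpy(size, model + i, 2);           /* next two digits = size class   */
            memcpy(year, model + i + 2, 2);       /* next two digits = model year   */
            printf("line=%s size=%s\" year=20%s panel=%s\n", line, size, year, model + i + 4);
            return 0;
        }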
  • GeoffreyA - Monday, November 2, 2020 - link

    That's an excellent scheme. Packed to the brim with useful information. I think from now on, when I chance to come across a Dell monitor in the advertisements, my mind will be trying to decode its model number :)
  • Spunjji - Monday, November 2, 2020 - link

    Indeed - and there's no reason Intel and AMD couldn't do something similar, although it does make for unwieldy names when used as part of marketing. The example I came up with was "Ryzen 5 (5th gen) U8 (Ultra-low-power, 8 cores) 48 (4.8Ghz boost) G8 (Graphics, 8EUs). U848G8 is quite a mouthful, though, and AMD have already used the "Ryzen 9/7/5/3" nomenclature as a performance class thing rather than the generation of product.

    Still, at least they've been fairly consistent with Ryzen 3 being 4 cores, Ryzen 5 being 6 cores, 7 being 8 cores, and 9 being "whatever else we have going".

    Intel's i3/5/7/9 nomenclature started off terrible (2-core and 6-core i7s in the same generation back with Sandy Bridge!) and has only gotten worse since then.
  • GeoffreyA - Tuesday, November 3, 2020 - link

    Here's what I came up with, borrowing a bit from yours:

    [Zen generation] - [cores] [SMT] [base clock] [boost] - [TDP] [graphics + EUs]

    Ryzen 5 5600X = Z3-6T3746-D

    Ryzen 7 3700X = Z2-8T3644-D

    Ryzen 3 3200G = ZP-4N3740-DG8

    My system's a mess.
  • GeoffreyA - Tuesday, November 3, 2020 - link

    * 36 for 3200G, not 37

    SMT = T
    No SMT = N
  • anonomouse - Thursday, October 29, 2020 - link

    Sunny Cove had the bigger L1, Willow Cove had the bigger L2. If they're saying it's based on Sunny Cove/Ice Lake, then i'd expect no bigger L2 (which makes sense, considering 14nm means these cores will be fat).
    The other distinction of Willow Cove is some ISA support, so again, same story.
  • Hulk - Thursday, October 29, 2020 - link

    FYI, Sunny Cove also has a 50% larger L1D.
  • yeeeeman - Friday, October 30, 2020 - link

    Willow Cove has a different cache layout vs Sunny Cove. So if Rocket Lake uses the cache subsystem from Ice Lake, then that's what it is. You cannot say it is derived from Willow Cove.
  • drajitshnew - Thursday, October 29, 2020 - link

    Dear @iancuttress @ryansmithAT @anandtech
    I think most readers are annoyed by the placement and the title "Rocket Lake DETAILED", which suggests that this is significant news and (questionably) a significant breakthrough from Intel. I most certainly am. My perception is that over the years AT has started to draw a more professional crowd than the enthusiasts it had originally. Both of these will be annoyed. Please note that "enthusiasts" as used by me does not equal gamers.
    On the other hand a gamer that misinterprets this as a breakthrough, and holds off his purchase for a potentially late and disappointing launch, will not trust AT again.
  • boozed - Thursday, October 29, 2020 - link

    Welcome to Anandtech powerarmour. Anandtech is an online magazine that publishes news and reviews of consumer computing hardware. Hence, you will find articles about Intel here.
  • Spunjji - Friday, October 30, 2020 - link

    Total bunkum
  • yeeeeman - Friday, October 30, 2020 - link

    Cause you're a hater. There are more AMD articles, but you are bothered about the Intel ones. hmmm
  • Spunjji - Friday, October 30, 2020 - link

    Are there, though? I'm feeling like no particular "team" is getting more attention than any other
  • Qasar - Friday, October 30, 2020 - link

    Ever consider that there seem to be more articles on AMD because, well, they have had 2 big releases in the last month? Seems like when Intel has product releases, the same is said about Intel articles.
  • Teckk - Thursday, October 29, 2020 - link

    How will this compare to TigerLake? Both seem to be fairly similar except this is on a derivative of 14nm instead of the 10nm SuperFin.
  • drothgery - Thursday, October 29, 2020 - link

    28W or less Tiger Lake (aka the ones that have been announced) have 4 cores (or 2 in some Pentium and Celeron parts, and maybe one i3) and iGPUs with more compute units.

    We should see an 8-core/~45W Tiger Lake H at some point (mostly in high-end larger laptops like the Dell XPS 15), and in a lot of ways it'll be a better chip than Rocket Lake, but given the power envelope probably won't hit the same maximum clock speeds.
  • Teckk - Thursday, October 29, 2020 - link

    Thank you! Then there’s Alder Lake coming out too in 2021? All of them on different sockets?
  • drothgery - Friday, October 30, 2020 - link

    Comet Lake desktop and Rocket Lake share sockets, but Alder Lake needs a new one.
  • UNCjigga - Thursday, October 29, 2020 - link

    <Obi Wan>These are not the cores you are looking for...</Obi Wan>
  • Smell This - Thursday, October 29, 2020 - link


    LOL
    It truly has to be exhausting working for Chipzillah these days
  • eastcoast_pete - Thursday, October 29, 2020 - link

    More on the other end of the price range, any idea if and when you'll get the i3-10000something with 4 cores/8threads for under $90 in for testing? That CPU could be really interesting for a plain potato home office/entry level gaming rig.
  • flgt - Thursday, October 29, 2020 - link

    I'm surprised AMD still won't include an integrated graphics solution in their latest Ryzen 5000 designs. There have to be a lot of vanilla corporate PCs out there. Their APUs aren't bad, but they shouldn't let Intel compete against their previous generation.
  • Targon - Thursday, October 29, 2020 - link

    It is far easier to make a pure CPU than it is to then need to add graphics, which will also limit the CPU performance due to power and heat.
  • Spunjji - Friday, October 30, 2020 - link

    I believe that's going to be happening with Zen 4 "Raphael" in 2022.

    I think the biggest problem preventing that is that their APUs have had to be monolithic processors for performance reasons, which is in conflict with the chiplet strategy on the pure-CPU side of things. My best guess would be that Raphael resolves this by transitioning the IO die to a more dense process (7nm?) and using some of that space to integrate a small RDNA 2 GPU.
  • JayNor - Thursday, October 29, 2020 - link

    Ice Lake's Sunny Cove was advertised as 18% IPC improvement over Skylake.

    https://www.tomshardware.com/news/intel-10th-gener...

    There should be additional performance gains (relative to Comet Lake) from new features of PCIE4 SSD, PCIE4 external gpu io and DDR4 3200 memory support.

    Intel provides avx512 support and simd support for Xe graphics in their oneDNN.

    https://www.phoronix.com/scan.php?page=news_item&a...
  • shabby - Thursday, October 29, 2020 - link

    Finally 10nm... oh wait nevermind 😂
  • SanX - Friday, October 30, 2020 - link

    :))))
  • Jon Tseng - Thursday, October 29, 2020 - link

    What I never understand is why they don't tape out an enthusiast-class CPU with no on-die graphics (obviously the Ks just have it disabled), which would give them die space to squeeze in as many cores as you want. I mean, if you think about how much die space in a 10900K is effectively dark silicon because it's always gonna be tied to a GTX/RTX-class GPU, it's a crime. Don't know if the issue is tape-out cost vs. low enthusiast volumes, or worries it will overshadow the non-K APUs. But it seems to be a really obvious solution that is staring them in the face.
  • brucethemoose - Thursday, October 29, 2020 - link

    A significant fraction of the dies will end up in desktops with no dGPU.

    Still, this *is* a special case, as Rocket Lake clearly isn't going to end up in laptops, NUCs or anything low power. And now Intel has a dGPU to sell.
  • sing_electric - Thursday, October 29, 2020 - link

    I think energy/heat's got to be a factor: You're already facing significant tradeoffs between thermals, clock and maxing out the cores without the graphics on, and adding more cores would just add more heat to the equation. Better to have "dead space" with disabled graphics than to add cores that couldn't run at full tilt anyways. Intel's apparently already going down to a max of 8 cores from 10, even though there's likely room on the die for all 10 for this exact reason.
  • movax2 - Thursday, October 29, 2020 - link

    "back-ported" to 14 nm.

    Though "double digit" uplift was for 10 nm.
  • Everett F Sargent - Thursday, October 29, 2020 - link

    Read the fine print ... their numbers are all in binary ... so that 00-to-11 is a 3% improvement (but in Intel language that would be a double digit improvement) and 000-to-111 is a 7% improvement (but in Intel language that would be a triple digit improvement). ;)
  • movax2 - Thursday, October 29, 2020 - link

    there are 10 kinds of people: those who understand binary and those who don't
  • SanX - Friday, October 30, 2020 - link

    :)))
  • Spunjji - Thursday, October 29, 2020 - link

    I'll be interested to see the die size numbers for this - especially compared with 8 core Tiger Lake. We'll finally get a marginally better idea of the relative densities of their processes.

    Looks like a solid upgrade path for anyone who bought into Z490, which is a nice change. Definitely good for gamers already running a solid water-cooling loop. That turbo TDP, though... Ouch!
  • drothgery - Thursday, October 29, 2020 - link

    Though less than perfect, because 8 core Tiger Lake undoubtedly will have at least the iGPU from 4-core Tiger Lake, which is relatively a lot bigger than what's going in Rocket Lake.
  • Spunjji - Friday, October 30, 2020 - link

    Last I'd heard, TGL 8-core also uses a 32 EU Xe GPU - presumably because it's headed for larger devices that can benefit from a dGPU. That may have just been a rumour, though.

    Either way, TGL has larger caches than the cores in Rocket Lake - so it'll never be a perfect comparison. I'm betting it will reveal fun things about 10nm++ / SF density, though.
  • phoenix_rizzen - Thursday, October 29, 2020 - link

    So they're continuing with the 2 separate CPU architectures per generation?

    10th gen == Comet Lake (14 nm) or Ice Lake (10 nm)
    11th gen == Rocket Lake (14 nm) or Tiger Lake (10 nm)

    Because that's not going to be confusing. It was bad enough when it was limited to laptops, but now it's going to be on desktops too. Would be nice if they used different names for each, instead of Lakes for both 14 and 10 nm variants.

    On the bright side (?) they'll at least both have the same iGPU architecture, so you won't have to choose between better CPU (Comet) or GPU (Ice), you'll just have to choose the number of CPU cores you want: up to 4 (Tiger) or up to 8 (Rocket).

    The "Intel Decoder Ring" is really starting to get unwieldy... Wonder just how bad the model numbering is going to be for these. At least with Comet vs Ice you could look for the "G#" at the end to know you're getting Ice Lake with upgraded graphics. How are they going to differentiate things now?
  • mooninite - Thursday, October 29, 2020 - link

    I completely agree with you. It's impossible to figure out what an Intel CPU is anymore without looking it up in the Ark.
  • Spunjji - Friday, October 30, 2020 - link

    Tiger's going up to 8 cores, too, but I doubt it'll show up on desktop - so with any luck the 11th gen brand will be *slightly* less of a lumpy, unsightly mess than the 10th was in terms of architectures.

    The product names, though, are just getting worse 😆
  • drothgery - Friday, October 30, 2020 - link

    It looks like they're not going to be doing mobile Rocket Lake or desktop Tiger Lake, so barring NUCs and the like there shouldn't be much overlap in 11th Gen, really.
  • movax2 - Thursday, October 29, 2020 - link

    So 14 nm in 2021.

    Wow, Intel, just wow.

    Impressive.
  • Tom Sunday - Thursday, October 29, 2020 - link

    With Intel's "Alder Lake" 10nm desktop CPUs reportedly launching in late 2021, who would even consider upgrading their system prior to that? Almost all major associated hardware components will change with the 12th Gen Core introduction. For starters, having to retire one's DDR4 memory, moving to a brand new mobo (layout) configuration, present AIOs not fitting the newer LGA 1700 socket, etc. Meanwhile AMD will be keeping the $$$ heat on Intel, so a late-2021 12th Gen Core may be in the offing.
  • JayNor - Thursday, October 29, 2020 - link

    what's Intel doing with those extra 500 pins?
  • Zizy - Friday, October 30, 2020 - link

    Power delivery, what else :D
  • Spunjji - Friday, October 30, 2020 - link

    It's a good question that I'm interested to find out the answer to.
  • Spunjji - Friday, October 30, 2020 - link

    Intel are primarily concerned about systems integrators - you're thinking from the perspective of an enthusiast. The vast majority of end users never upgrade their system beyond maybe extra RAM or an HDD, and even then, not often.

    From that perspective, having *anything* that can compete is a priority, regardless of longevity.
  • Rudde - Thursday, October 29, 2020 - link

    This announcement seems very rushed on multiple fronts. It certainly does not lend confidence to a launch in Q1 of 2021.
  • defaultluser - Thursday, October 29, 2020 - link

    The Ice Lake backport is likely to run into a clock speed reduction compared to Skylake+++++. If they can't hit 5.0 GHz all-core turbo on the Core i9 parts, then the real gains are nonexistent.
  • Spunjji - Friday, October 30, 2020 - link

    The rumours - to which I currently credit near-zero veracity - reckon it'll hit 5.5Ghz on 14nm++++.

    Why a larger core design would clock faster on a process for which it hasn't been optimised, vs an architecture that has had 4 rounds of optimisation for the process since release, is beyond me.

    I reckon it'll push 5.2Ghz at that 250W outside-edge and thus barely, *BARELY* scrape back the single-thread crown, at the cost of hilarious power use and a bloated die. The profit margins will surely be depressing.
  • MDD1963 - Thursday, October 29, 2020 - link

    "providing an upgrade path even for those with a 10900K" Oh, yes, I'm sure 10900K users are quite relieved at this news, as I'm sure most are already feeling quite hindered with current performance levels... :)
  • Spunjji - Friday, October 30, 2020 - link

    I mean, it's not a bad proposition for said owner 1-3 years from now when Rocket Lake is in the bargain bin and they can get a substantial performance boost to go with a new GPU.
  • MetaCube - Saturday, October 31, 2020 - link

    The 11900K will never hit the bargain bin though.
  • Spunjji - Monday, November 2, 2020 - link

    Not brand-new, for sure. Intel don't do price cuts! But system pulls are a great way to pick up an upgrade.
  • Santoval - Thursday, October 29, 2020 - link

    The most noteworthy part is the AV1 encoder, though I doubt the very fat TDP, owing to the (in computing terms) paleolithic, 7-year-old (in 2021) process node, is worth the trouble. It's also a first-gen hardware encoder, which almost certainly means it will generate crappy-quality videos (it's always better to do encoding in software, while decoding can be done in either software or hardware, it doesn't really matter).

    Do the 5000 series Ryzen CPUs only support AV1 decoding or not even that?
  • Zizy - Friday, October 30, 2020 - link

    CPUs obviously support any encoder or decoder you want, it just isn't fast or efficient. 6xxx series GPUs support AV1 to some degree, I can't be bothered to look what exactly.
  • GeoffreyA - Friday, October 30, 2020 - link

    Not sure about Zen 3. RDNA2 appears to have some form of decoding at least.

    For an improvement in speed over libaom, give Intel's SVT-AV1 a spin. FFmpeg includes it. But I think they've traded quality for speed, compared to libaom. At any rate, I'm waiting for VVC, to see what it can do :)
  • Santoval - Friday, October 30, 2020 - link

    I (somehow) forgot that non-APU Zen 3 processors will continue to lack a GPU block. Media / video engines are tightly paired with GPUs, so since non-APU Zen 3 CPUs will lack a GPU they will also lack a media block and thus any decoding and encoding acceleration.
    AMD's Navi 2 cards support AV1 decode acceleration (but not encode), and I expect the same will apply to AMD's APUs - probably only the Van Gogh ones (Zen 2 + Navi 2). Intel's SVT-AV1 is fast but generates crappy quality videos.
  • Santoval - Friday, October 30, 2020 - link

    edit : Above I meant "I expect the same will apply to AMD's *next-gen* APUs". Since AMD's Zen 3 based Cezanne APUs will retain a Vega GPU block it is doubtful they will support AV1 decode acceleration or any new decode or encode acceleration at all. Unless of course AMD realize their folly and scrap their bizarre plans for distinct (and both unequal) Cezanne and Van Gogh APUs and simply pair Zen 3 with a Navi 2 GPU block. That's very doubtful.
  • GeoffreyA - Saturday, October 31, 2020 - link

    Agreed, SVT-AV1's quality is deplorable. Rav1e is also available but I haven't tried it and can't comment. Same with others, like Cisco's.

    You know, when I first heard about AV1, I grew quite excited. Felt the beating of my heart: a new royalty-free video codec, battling it out with HEVC. Marvellous! Good riddance to HEVC! But I was quickly disappointed. AV1 only has marginal gains over HEVC, and owing to its speed, isn't practical to use. From anecdotal testing one afternoon, using libaom in FFmpeg, I found that its picture quality looked lovely, superior to HEVC, but noticeably soft (the source, Mulholland Drive, had a slight bit of grain). I admit, there are likely settings in AV1 that can mend that. Subjective, yes, but softness bugs me.

    Hardware support should make AV1 easy to encode, but then again, such video might end up being beaten by x265.

    Despite the patent rubbish, I am looking forward to VVC/x266 and hope it delivers. Truth is, I dislike HEVC's picture quality and still prefer AVC's (though at lower bitrates the latter loses of course).
  • Fergy - Saturday, October 31, 2020 - link

    Benchmarks claim that AV1 is 20% better than H265. On youtube you can't see the difference between H264,VP9 and AV1. Decoding of AV1 on youtube takes max 1.25 zen cores for 60fps full hd.
    The only reason I see to use H265 is if the end device does not have support for VP9 and it HAS to be smaller than H264. The rest of the time VP9 and later AV1 will be the better choice. Every GPU developed after 2019H2 can decode AV1.

    Your comments about quality and speed are time limited. It is a matter of time before AV1 wins at every metric. There are a lot of billion dollar companies wanting a free to use video codec.
  • GeoffreyA - Saturday, October 31, 2020 - link

    I hope that AV1 wins, and you are right, its picture does look noticeably better than H.265/HEVC, though a little soft. But the thing is, when x266 comes out, VVC is another variable that will have to be taken into account. And already, if the studies are correct, it shows better compression than both AV1 and HEVC.

    I hope I didn't sound as if I were defending HEVC in the comment from earlier today. In fact, I can't stand its picture and prefer H.264 any day.
  • GeoffreyA - Saturday, October 31, 2020 - link

    I agree about the APUs and was myself wishing there'd be a Zen 3 + RDNA2 one for AM4 people.

    As for the video units, it appears that AMD is able to change those independently of the GPU. Take a look at the die shot of Raven Ridge, and you'll see the multimedia block is separate; and, according to Wikipedia, Renoir uses VCN 2, whereas Raven Ridge was using VCN 1. In short, it's possible that Cezanne, while carrying Vega, might use the newer video block, with AV1 decoding.
  • Spunjji - Monday, November 2, 2020 - link

    Perfectly possible, and quite likely.
  • Slash3 - Friday, October 30, 2020 - link

    It was an error on their slide. It does not support accelerated AV1 encoding.
  • mdriftmeyer - Friday, October 30, 2020 - link

    Correct. Intel sure as hell won't have AV1 encode on their GPU before Nvidia or AMD

    https://www.amd.com/en/products/specifications/com...
  • Santoval - Friday, October 30, 2020 - link

    I see. That makes more sense actually. So no AV1 encoding acceleration from Intel until at least Alder Lake and no such support either from AMD until at least Zen 4 based APUs and Navi 3 graphics cards. Swell...
  • yeeeeman - Friday, October 30, 2020 - link

    While this ain't a bad thing, what Intel should have done is cancel Rocket Lake entirely and bring Alder Lake in faster.
  • Spunjji - Friday, October 30, 2020 - link

    It's not really clear that these things are strongly related. In fact, I'd argue that them keeping this live suggests they *couldn't* bring Alder Lake faster.

    Bear in mind that they still haven't actually released any products based on their first attempt at big/little - Lakefield - or released anything larger than a 4-core mobile CPU on 10nm.
  • drothgery - Friday, October 30, 2020 - link

    Erm... that's not true. They released two CPUs this summer. https://ark.intel.com/content/www/us/en/ark/produc...

    And you can buy things with them in it now; here's just the first thing that came up on a search by the CPU name + laptop for me:
    https://www.bestbuy.com/site/samsung-galaxy-book-s...
  • Santoval - Friday, October 30, 2020 - link

    That's Lakefield, which is a very special case; almost an Intel experiment really. It is technically a 5-core SoC but the 5th "big" Sunny Cove core is barely ever employed due to strict thermal limitations. AnandTech covered it and found that unlike what one might expect the "big" core cannot be used to accelerate single threaded code. It is only used (when thermals allow it) for "fast responsiveness".
    In other words the 95+% of the work is done by the 4 small Tremont cores. As a result, while the power efficiency is rather high, the performance is *quite* poor. For more read this article :
    https://www.anandtech.com/show/15877/intel-hybrid-...
  • Spunjji - Monday, November 2, 2020 - link

    I knew they'd *launched* Lakefield, but until you sent that link I hadn't seen any evidence for availability of a single product that actually *used* it. I'm based in the UK, for the record.

    Just checked and, yeah, searching "core i5-l16g7 laptop" doesn't return an actual product for sale until you get to the 7th result, which turns out the be the same results as the 8th and 9th - that one Samsung product available from BestBuy and quoted in USD, for which there are no reviews available.

    Some release! :D
  • Spunjji - Monday, November 2, 2020 - link

    FYI, "just the first thing" only works as a rhetorical device when there are *other things*. If not, then it's not the "first thing" - it's the *only* thing. After 5 pages into Google and only seeing the Galaxy Book, I stand by my initial assessment of it basically being a paper launch.
  • Santoval - Friday, October 30, 2020 - link

    Of course they couldn't. Their 10nm yields at large die sizes and/or high clocks are still poor, and an S-series desktop requires *both* a rather large die size and high clocks, so it's the worst of both worlds (Ice Lake Xeon CPUs have even larger dies but their clocks are quite lower; besides, Intel can well afford the lower yields due to the obscene prices they will charge for them; on the other hand we still have not seen any Ice Lake Xeons released either..).

    Intel hope to fix their yield and clock issues with the third revision of their 10nm process node with which they will fab Alder Lake, their "first top to bottom" 10nm product. That node variant is clearly not ready yet, otherwise Intel would employ it to fab Alder Lake and actually compete with AMD in both performance and efficiency. With Rocket Lake they are (once again) ditching efficiency and will only compete in performance, probably barely..
  • Santoval - Friday, October 30, 2020 - link

    Their 10nm++ / SuperFin Gen 2 (formerly 10nm+++) process node is still not mature enough to deliver the yields they want at the clocks they want. If Intel did not release Rocket Lake they would lack any option to compete with Zen 3 for 6 to 8 months, since Comet Lake's ancient Skylake μarch can no longer cut it.
    Rocket Lake is a stop-gap solution until they are ready to release their actual competitive product, Alder Lake. It is targeted at the idiots who pray to and swear by Intel's gods and who, as long as they have an "Intel Inside" CPU, wouldn't mind burning 2 to 3 times the power for roughly (within a range of +/-5% at best) the same performance.
  • Spunjji - Monday, November 2, 2020 - link

    I have indeed seen plenty of these idiots posting to various forums proclaiming how they were a genius for buying the utter rubbish that is Comet Lake, because they can "upgrade" to Rocket and get "the best gaming performance" again. All at the low, low price of 250W!
  • beginner99 - Friday, October 30, 2020 - link

    Why is this non-info post not just a Pipeline story? Very suspicious, to say the least... Either AT gets paid by Intel or their tactic with this useless non-info fully worked.
  • Spunjji - Friday, October 30, 2020 - link

    Is this some kind of weird reverse-psychology making-the-AMD-fanboys-look-bad thing, or are there really several commenters who are deluded enough to think that Anandtech take Intel bungs?
  • hanselltc - Friday, October 30, 2020 - link

    AV1 Encode for reals or nah?
  • Spunjji - Friday, October 30, 2020 - link

    That's a nope
  • AntonErtl - Friday, October 30, 2020 - link

    Given how much time they needed to port Sunny Cove to 14nm, it's interesting that they managed to do the very recent Xe graphics in 14nm. This means that for Xe graphics they either use a more automatic approach for the last process-specific design steps than they did for Sunny Cove, or they did those steps almost in parallel for 10nm and 14nm. My guess is that it's the former.

    Why Sunny Cove and not Willow Cove, i.e. why a smaller L2 cache? My guess is area. Why only 8 cores? My guess is area; power would just lower the base clock, which is not that important to most buyers. Why AVX-512? It's part of Sunny Cove and a USP for Intel; they already had it in Cannon Lake and Skylake-SP; I am actually surprised that AMD has not implemented it yet (even though it's not used in much software).
  • AdrianBc - Friday, October 30, 2020 - link

    It is not clear whether they have back-ported Xe graphics to 14 nm.

    All the previous rumors about Rocket Lake said that it will be composed of 2 chips, inside the package, a 14-nm back-ported CPU and a 10-nm Xe GPU.
  • Spunjji - Friday, October 30, 2020 - link

    That would just leave even more questions, like, how does that 10nm GPU access memory?

    Honestly it's way more plausible that they did an automated hack-job on 32EU Xe than they figured out MCM for GPU connected to a CPU with an on-board memory controller and... just decided not to tell anyone.
  • AdrianBc - Friday, October 30, 2020 - link

    Intel has already done CPU + GPU combinations in a single package, the Kaby Lake G processors, which included an AMD Radeon GPU.
    That had more constraints, because they had to use PCIe links, as only that was compatible with the AMD GPU. With their own GPU, they would have been free to implement some custom communication protocol, maybe more suitable for the task.
  • Spunjji - Monday, November 2, 2020 - link

    Kaby Lake G didn't share a memory controller, and it's not a relevant comparison. The better one would have been Arrandale, Intel's first go at on-package Northbridge and graphics. It worked because the GPU was weak and sat next to the memory controller, which was already decoupled from the CPU because they hadn't integrated it into the CPU core design yet.

    Seriously, the GPU on Rocket Lake is integrated at 14nm. No need to take my word for it though, you'll find out in about 4 months when it actually becomes relevant.
  • Santoval - Friday, October 30, 2020 - link

    Your guesses appear to be correct. Smaller die size means higher yields and thus lower costs. Intel do not normally have a yield issue with their hyper-mature 14nm process node, but I have no idea whether backporting a new CPU and GPU μarch affected their yields at 14nm. Maybe they just wanted to play it safe since this is the first time they tried that.
  • DannyH246 - Friday, October 30, 2020 - link

    Oh WOW!!!! Another "look what we'll have in the future" article from Intel.

    Definitely definitely do not buy anything until this amazing architecture is released.
  • Toadster - Friday, October 30, 2020 - link

    interesting that the CML test setup used the 760p SSD and RKL used 660p https://ark.intel.com/content/www/us/en/ark/compar...
  • Golgatha777 - Friday, October 30, 2020 - link

    4 more PCIe lanes will help with a single M.2 SSD, but is Intel going to up the bandwidth of the DMI connection past 3.93GB/s in case users might want more than one M.2 SSD in the system?

    https://en.wikipedia.org/wiki/Direct_Media_Interfa...
  • abufrejoval - Friday, October 30, 2020 - link

    For me the most exciting features of Rocket and Tiger lake as well as Zen 3 seem to be the control flow extensions (CFI) with shadow stacks etc. as well as the multi-key per-VM memory encryption facilities.

    Unfortunately, there seems to be zero information yet on these topics, nothing on if this is compatible across the two or the level of support across server and client platforms.

    I sincerely hope that AMD will extend the per-VM memory encryption also to all client platforms, because the ability to run secured enclaves isn't limited to server use cases. From the hints on Tiger-Lake, Intel seems to regard all these security enhancing features as more global than ECC or AVX-n: Let's hope AMD does, too.

    Unfortunately, one issue I have gets bigger and bigger as the two continue to diverge in feature sets: live migration of VMs, which is keeping me from mixing AMD and Intel in my infrastructure, something I used to do 10 years ago.

    I'd love to get some [Ry]Zen into my lab, but it would partition the infrastructure in that regard: Too high a cost at a moment when Ryzen is more attractive than anything Intel can offer.
  • TelstarTOS - Friday, October 30, 2020 - link

    only 8 cores? Pass.
  • dustwalker13 - Sunday, November 1, 2020 - link

    With this thing out sometime in 2021 and the slides being several months old this very much looks like another case of:

    "Uh-oh, no one is talking about us while the competition is showing off one great product after another. Quick let us do a blog post ... oh no we did that two weeks ago to unsuccessfully try and get attention away from Zen3 ... let us just post this old internal PowerPoint on our next cpu ... who cares this will launch in half a year right?"
  • bill44 - Sunday, November 1, 2020 - link

    Wait. Will there be a 45W 8 core Tiger Lake and an 8 core 45W Rocket Lake available at the same time? Performance wise, what will be the difference?
    Also, if I understand this correctly, both Tiger Lake H and Rocket Lake H will have a 32EU XeLP vs 96EU on Tiger Lake U.
  • Spunjji - Monday, November 2, 2020 - link

    I don't think 45W Rocket Lake is happening anymore.

    You're right about the 32EU GPU, though.
  • fogifds - Monday, November 2, 2020 - link

    So Z590 is one gen only? I suppose Z390 was as well.

    Or are we expecting another delay with Alder Lake? Seems strange Intel will have two CPU releases in the same year. Although, Alder lake won't be available in volume until 2022 anyway, I'm sure.
  • Dr_b_ - Monday, November 9, 2020 - link

    Rocket Lake will beat Zen3 IPC, and on an older node
